<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Neural Networks on Simia Cryptus Software</title>
    <link>http://blog.simiacryptus.com/tags/neural-networks/</link>
    <description>Recent content in Neural Networks on Simia Cryptus Software</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Sun, 26 Apr 2020 00:00:00 +0000</lastBuildDate>
    
	<atom:link href="http://blog.simiacryptus.com/tags/neural-networks/index.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>DeepArtist.org</title>
      <link>http://blog.simiacryptus.com/projects/deepartist/</link>
      <pubDate>Sun, 26 Apr 2020 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/projects/deepartist/</guid>
      <description>&lt;p&gt;DeepArtist.org is intended to be the umbrella for artistic-themed applications built using MindsEye. This site currently consists of &lt;a href=&#34;http://examples.deepartist.org/&#34;&gt;examples.deepartist.org&lt;/a&gt; which displays example notebooks provided for this project.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/SimiaCryptus/deepartist.org/tree/master&#34;&gt;github://deepartist.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/SimiaCryptus/examples.deepartist.org/tree/master&#34;&gt;github://examples.deepartist.org&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&#34;http://code.simiacrypt.us/release/2.0.0/deepartist/index.html&#34;&gt;Release 2.0.0&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;</description>
    </item>
    
    <item>
      <title>MindsEye 2.0</title>
      <link>http://blog.simiacryptus.com/projects/mindseye/</link>
      <pubDate>Sun, 26 Apr 2020 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/projects/mindseye/</guid>
      <description>&lt;p&gt;MindsEye is an AI framework built using Java. It uses reference counting for efficient resource use, and uses libraries such as CuDNN (CUDA) and Aparapi (OpenCL) to do numerical heavy lifting. It provides a highly customizable optimization library, and a wide variety of pre-coded layers.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/SimiaCryptus/all-projects/tree/master/mindseye&#34;&gt;&lt;strong&gt;GitHub URLs&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&#34;http://code.simiacrypt.us/release/2.0.0/mindseye/index.html&#34;&gt;Release 2.0.0&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;</description>
    </item>
    
    <item>
      <title>DeepArtist.org Release 1.0</title>
      <link>http://blog.simiacryptus.com/posts/deepartistorg_release_10/</link>
      <pubDate>Sun, 01 Sep 2019 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/deepartistorg_release_10/</guid>
      <description>I&amp;rsquo;m pleased today to announce the release of the Simiacryptus data tools v1.8.0, including the first version of a new image art publishing application named and located at DeepArtist.org - Notably using the subdomain examples.deepartist.org.
What is it? DeepArtist.org is an image processing platform using convolutional neural networks to perform state-of-the-art image processing techniques. This software is targeted at hobbyists and digital artists, and as such this documentation is focused on the practical tools provided to produce pretty pictures.</description>
    </item>
    
    <item>
      <title>The 2D Convolution: A Layer Development Story</title>
      <link>http://blog.simiacryptus.com/posts/the_2d_convolution_a_layer_development_story/</link>
      <pubDate>Thu, 22 Feb 2018 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/the_2d_convolution_a_layer_development_story/</guid>
      <description>Hello! Today we will discuss many aspects of developing differentiable network layers in MindsEye as we explore the 2D convolution layer and its various implementations. First, for background, see my previous post about Test Driven Development with neural networks. Given these test facilities, along with some more elemental layers, we need to construct a convolution layer that will work in large modern networks with large images as input.
Our first goal is to code a reference implementation, generally in pure Java.</description>
    </item>
    
    <item>
      <title>Optimization Research</title>
      <link>http://blog.simiacryptus.com/posts/optimization_research/</link>
      <pubDate>Sat, 23 Dec 2017 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/optimization_research/</guid>
      <description>Now that I’ve cleaned up the testing and documentation of MindsEye, I have been able to re-focus on why I started writing it: Optimization Algorithm Research. In the course of playing with this code I have tried countless ideas, most of which taught me through failure instead of success&amp;hellip; However, I do have two ideas, fully implemented and demonstrated in MindsEye, that I’d like to introduce today: Recursive Subspace Optimization allows deep networks to be trained effectively, and Quadratic Quasi-Newton enhances L-BFGS with a quadratic term on the line-search path.</description>
    </item>
    
    <item>
      <title>Test Driven Development for Neural Networks, Part II - AB Testing</title>
      <link>http://blog.simiacryptus.com/posts/test_driven_development_for_neural_networks_part_ii__ab_testing/</link>
      <pubDate>Wed, 13 Dec 2017 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/test_driven_development_for_neural_networks_part_ii__ab_testing/</guid>
      <description>In the last article, we covered a common testing framework for individual components, but we didn’t cover how these networks are actually trained. More specifically, how should we design a test suite to cover something as broad as optimization? A big problem here is that the components are heavily dependent on each other and also vary greatly in function and contract, so there are few opportunities for generic testing and validation logic.</description>
    </item>
    
    <item>
      <title>Test Driven Development for Neural Networks, Part I - Unit Testing</title>
      <link>http://blog.simiacryptus.com/posts/test_driven_development_for_neural_networks_part_i__unit_testing/</link>
      <pubDate>Mon, 04 Dec 2017 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/test_driven_development_for_neural_networks_part_i__unit_testing/</guid>
      <description>A critical part of any good software is test code. It is an understatement that tests improve quality; they improve the scalability of the entire software development process. Tests let you write more code, faster code, better code. One of the leading testing methodologies is unit testing: the philosophy of breaking down software into individual components and testing each separately. It turns out that a great case study in unit test design also happens to be one of today’s hot tech topics - artificial neural networks.</description>
    </item>
    
    <item>
      <title>GPU-accelerated neural networks with CuDNN</title>
      <link>http://blog.simiacryptus.com/posts/gpuaccelerated_neural_networks_with_cudnn/</link>
      <pubDate>Wed, 02 Aug 2017 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/gpuaccelerated_neural_networks_with_cudnn/</guid>
      <description>A recent project with huge implications for the field of AI is NVidia’s CuDNN library and related CUDA-based libraries. Beyond simply being very useful and enabling hardware-accelerated AI with cutting-edge performance, it establishes a common layer of high-performance mathematical primitives that, while using the hardware to its best extent, provides a common API for writing software. With my recent addition of CuDNN-based layers, MindsEye should perform comparably to any other state-of-the-art deep learning library.</description>
    </item>
    
    <item>
      <title>What is the value of a human life?</title>
      <link>http://blog.simiacryptus.com/posts/what_is_the_value_of_a_human_life/</link>
      <pubDate>Sun, 21 May 2017 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/what_is_the_value_of_a_human_life/</guid>
      <description>Recent developments in MindsEye have yielded greatly increased speed and scalability of network training. Major improvements to the OpenCL kernels have increased speed in some tests by 50x or more, and data-parallel training has been tested with a Spark cluster. This combination of GPU and cluster computing support should bring MindsEye much closer to the performance and scale of other frameworks, if not into the competitive range! The componentization of the optimization code that I wrote about previously enabled Spark support to be implemented in only about 100 lines in one self-contained class, a nice result of careful design.</description>
    </item>
    
    <item>
      <title>Autoencoders and Interactive Research Notebooks</title>
      <link>http://blog.simiacryptus.com/posts/autoencoders_and_interactive_research_notebooks/</link>
      <pubDate>Mon, 15 May 2017 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/autoencoders_and_interactive_research_notebooks/</guid>
      <description>Further research and development with MindsEye has produced two new features I would like to discuss today. The first is a working demonstration of a stacked sparse denoising image autoencoder, which is a fundamental tool in any deep learning toolkit. Second, I will introduce a useful tool for producing both static and interactive scientific reports, which I use to produce many of my demonstrations and conduct much of my research.</description>
    </item>
    
    <item>
      <title>A Unified Design Pattern for Continuous Parameter Optimization</title>
      <link>http://blog.simiacryptus.com/posts/a_unified_design_pattern_for_continuous_parameter_optimization/</link>
      <pubDate>Tue, 09 May 2017 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/a_unified_design_pattern_for_continuous_parameter_optimization/</guid>
      <description>Almost two years ago I developed a neural network library called MindsEye, which has largely sat idle since the release of TensorFlow. Recently, however, I’ve wanted to follow up on research involving neural networks, and I wanted a “pure” Java option I could use for research. And so I decided it was time to revive my old project.
In this release, I have reviewed all of the code and made many improvements.</description>
    </item>
    
    <item>
      <title>Deblurring with TensorFlow</title>
      <link>http://blog.simiacryptus.com/posts/deblurring_with_tensorflow/</link>
      <pubDate>Sat, 02 Jan 2016 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/deblurring_with_tensorflow/</guid>
      <description>[Figure: blurred vs. deblurred image comparison]
Recently, Google open-sourced a toolkit called TensorFlow which provides a platform for neural networks. It provides a native core written in C and many examples written in Python. Although the architecture is extensible and will hopefully be usable from Java/Scala application code in the future, I took some time recently to evaluate it using Python to perform deconvolutions (a.k.a. deblurring), the same task I recently wrote about using my own NN library.</description>
    </item>
    
    <item>
      <title>RE: The anatomy of my pet brain</title>
      <link>http://blog.simiacryptus.com/posts/re_the_anatomy_of_my_pet_brain/</link>
      <pubDate>Sun, 18 Oct 2015 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/re_the_anatomy_of_my_pet_brain/</guid>
      <description>In my last post, I talked about a new project I was working on to explore convolutional neural networks (CNNs). I&amp;rsquo;ve spent much of the time since playing with and iterating on this library, and I wanted to take a moment to share what has been built so far. I&amp;rsquo;ve ended up with a library of 30 network layer types which can be wired in an arbitrary directed acyclic (non-recurrent) graph/network and perform gradient descent training and optimization.</description>
    </item>
    
    <item>
      <title>Fun with Deconvolutions and Convolutional Neural Networks in Java</title>
      <link>http://blog.simiacryptus.com/posts/fun_with_deconvolutions_and_convolutional_neural_networks_in_java/</link>
      <pubDate>Tue, 07 Jul 2015 00:00:00 +0000</pubDate>
      
      <guid>http://blog.simiacryptus.com/posts/fun_with_deconvolutions_and_convolutional_neural_networks_in_java/</guid>
      <description>I&amp;rsquo;ve gotten to an interesting point in my latest project, inspired by Google&amp;rsquo;s fascinating recent work with convolutional neural networks. The project can now apply inverse convolution operations using multiple fitness functions.
I wanted to explore the technology of image processing neural networks from the ground-up, so I started by building the fundamentals of a backpropagation neural network library. Building the basic components and solving the initial problems has been interesting, and surprisingly complex.</description>
    </item>
    
  </channel>
</rss>