Cybernet developed methods for determining the reliability of parallel computing architectures (including artificial intelligence, complex modeling, and simulation systems) used in critical applications. The methodology comprised three steps: defining precisely what constitutes correct system behavior, based on expert truth datasets; developing algorithms to statistically test system outputs against that definition; and training users in the proper application of the method. The methodology generalizes to any end-product testing. For AI systems, the truth datasets would support performance testing of both rapid prototypes and end products to quantify system reliability. For neural networks, the same datasets would supply the stimulus-response pairs required by the learning algorithms.
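The core idea of statistically testing outputs against an expert truth dataset might be sketched as follows. This is a minimal illustration, not Cybernet's actual method: the `assess_reliability` function, the choice of a Wilson score interval, and the 0.95 reliability threshold are all assumptions made for the example.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin, center + margin)

def assess_reliability(truth, outputs, threshold=0.95):
    """Compare system outputs against an expert truth dataset.

    The system is judged reliable only if the *lower* confidence bound
    on its agreement rate meets the threshold, so a small sample with a
    high raw score does not pass by luck.
    """
    matches = sum(t == o for t, o in zip(truth, outputs))
    lo, hi = wilson_interval(matches, len(truth))
    return {
        "agreement": matches / len(truth),
        "ci_95": (lo, hi),
        "reliable": lo >= threshold,
    }
```

For example, 100 outputs that all match the truth dataset yield a lower 95% bound of about 0.96, which passes a 0.95 threshold, while 80% agreement on the same sample size clearly fails it.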


