SUGGESTED READING: “Neural Networks”

Connect, but be very careful

For the budding Data Scientist, Engineer, or Manager


The following is part of a series on academic resources of interest to those of you who, much like myself, are exploring the fields of data science, artificial intelligence, and machine learning.


Carse, B., & Oreland, J. (2000). Evolution and learning in neural networks: Dynamic correlation, relearning and thresholding. Adaptive Behavior, 8(3–4), 297–311. https://doi.org/10.1177/105971230000800305

Carse and Oreland (2000) examine how a population of neural networks evolves toward the solution of a specific task, and how that solution can be improved by lifetime learning. Their review of other scholarly work in this area shows how biological evolution and artificial neural network development are similar. Using a neural network simulation, they find that fitness is not only a function of evolution but also a reflection of the "offspring" produced in the virtual simulation and their longevity. They identify that learning, applied to the neural network evolution exercise, improves predictiveness; learning and evolution combined have a greater impact on a neural network's development than evolution alone (Carse & Oreland, 2000).
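To make the interplay of evolution and lifetime learning concrete, here is a minimal, self-contained sketch. This is not the authors' actual simulation: the task, population sizes, and the hill-climbing "learning" step are all invented for illustration. A population of tiny linear models evolves toward a target function, each individual is allowed a little learning before its fitness is scored, but offspring inherit only the unlearned genotype (the Baldwin-style setup the paper's literature builds on).

```python
import random

random.seed(0)

# Hypothetical toy setup: each "network" is a weight pair for y = w0*x + w1,
# evolved toward a fixed target function y = 2x - 1.
TARGET = (2.0, -1.0)

def error(w):
    """Squared error of weights w against the target over a small input grid."""
    xs = [i / 10 for i in range(-10, 11)]
    return sum((w[0] * x + w[1] - (TARGET[0] * x + TARGET[1])) ** 2 for x in xs)

def lifetime_learning(w, steps=5, scale=0.05):
    """A few crude hill-climbing steps standing in for lifetime learning."""
    best = list(w)
    for _ in range(steps):
        trial = [v + random.gauss(0, scale) for v in best]
        if error(trial) < error(best):
            best = trial
    return best

def evolve(generations=30, pop_size=20, learn=True):
    """Evolve a population; fitness is scored after optional learning,
    but offspring inherit the unlearned genotype."""
    pop = [[random.uniform(-3, 3), random.uniform(-3, 3)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda w: error(lifetime_learning(w) if learn else w))
        parents = scored[: pop_size // 2]  # top half reproduce with mutation
        pop = [[v + random.gauss(0, 0.1) for v in random.choice(parents)]
               for _ in range(pop_size)]
    return min(error(w) for w in pop)

print(evolve(learn=True))  # residual error: small compared with a random start
```

Running `evolve` with `learn=False` gives evolution alone, so the two modes can be compared across repeated runs, which is the spirit of the paper's experiment.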

The intended audience is academic students and professionals. The study identifies the relationships within a neural network instantiation and shows how lifetime learning is vital to virtual (as well as biological) development (Carse & Oreland, 2000).

What makes the work unique is that it identifies the virtual and biological implications of how lifetime learning may increase a population's survivability and duration.



Gupta, D., & Rani, R. (2018). A study of big data evolution and research challenges. Journal of Information Science, 1–19. https://doi.org/10.1177/0165551518789880

Gupta and Rani (2018) describe how the growing availability of data brings an associated growth in both opportunities and challenges in the field of Big Data. They note, for example, that with over 7 billion searches on Google alone, the demand for access to information is enormous, and businesses that fail to harness this information are likely to fail. At the core of their paper (Gupta & Rani, 2018) are the 3Vs of Big Data: volume, velocity, and variety. Volume refers to the magnitude of data currently available to both the public and private sectors; velocity concerns the ability to accomplish real-time analytics; and variety addresses the heterogeneity of present-day data types, including text, images, tweets, and more. They conclude by recognizing that big data technologies and tools are still evolving and require greater effort and study by both academia and industry.

The intended audience ranges from the general reader to the advanced practitioner trying to understand the broader challenges of Big Data and its likely evolution across academia, private industry, and the general public.

Gupta and Rani (2018) provide an initial roadmap of challenges and suggestions, which makes the work more than just an informative survey of the state of Big Data; that roadmap is a major strength of the paper as well.





Siqueira, I. (1999). Automated cost estimating system using neural networks. Project Management Journal, 30(1), 11–18. https://doi.org/10.1177/875697289903000103

The paper (Siqueira, 1999) provides a historical use of neural networks in cost estimating for low-rise buildings. The author combines the C++ programming language, a spreadsheet application, and a neural network software package (NeuroShell 2) to manage and improve multivariate cost estimating within the construction industry. Siqueira (1999) shows that neural networks can be applied in real-world situations through their ability to learn from prior information and training to produce more refined estimates of overall construction costs. The trained network was able to predict direct costs of material, labor, and planning through its generalization capabilities. Siqueira (1999) concludes that such tools are vital to the decision-making support process for companies using their own experiential data to create better predictions of future costs.
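The underlying idea, learning a cost mapping from past projects and generalizing it to a new one, can be illustrated with a minimal sketch. The features, project data, and single-neuron model below are invented stand-ins for the spreadsheet-plus-network workflow the paper describes, not the author's actual system.

```python
import random

random.seed(1)

# Hypothetical past projects: (floor_area_m2, storeys) -> cost,
# generated from a made-up underlying rule the model must recover.
projects = [((a, s), 450.0 * a + 12000.0 * s) for a, s in
            [(800, 2), (1200, 3), (600, 1), (1500, 4), (1000, 2)]]

w = [0.0, 0.0]  # weights of a single linear "neuron"

def predict(x):
    return w[0] * x[0] + w[1] * x[1]

# Train by stochastic gradient descent on squared error; the tiny learning
# rate keeps updates stable given the large feature magnitudes.
for _ in range(2000):
    for x, y in projects:
        err = predict(x) - y
        w[0] -= 1e-7 * err * x[0]
        w[1] -= 1e-7 * err * x[1]

new_project = (900, 2)  # unseen project; true synthetic cost is 429000
print(round(predict(new_project)))
```

The point, as in the paper, is that the estimator is derived entirely from the company's own historical data, and the prediction for the unseen project lands close to the true cost.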

The intended audience is private industry, though the paper also affords a historical business case for academia, demonstrating the value of neural networks beyond the scientific and technological fields.

While this is a historical use of neural networks, it offers a simplified example for general readers interested in what neural networks are and what they are capable of accomplishing; this accessibility is an overall strength of the article and of the author's presentation.



Somers, M. J., & Casal, J. C. (2009). Using artificial neural networks to model nonlinearity: The case of the job satisfaction–job performance relationship. Organizational Research Methods, 12(3), 403–417. https://doi.org/10.1177/1094428107309326

Artificial neural networks (ANNs) perform advanced pattern recognition and can identify nonlinear relationships among the variables of a large data set (Somers & Casal, 2009). Early applications of ANNs in business and finance research included predicting outcomes such as individual credit scores and the likelihood of bankruptcy; ANNs grew in use because of their greater predictive capability relative to less successful statistical methods. Somers and Casal (2009) focus on a direct application of ANNs to theory development. One specific finding shows how ANNs can be used to determine when an employee reaches a frustration threshold, one that causes, for example, high-performing employees to quit their jobs. The work identifies the ability of neural networks to afford new research methodologies for improving business insight into employee retention and job satisfaction (Somers & Casal, 2009).
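The nonlinearity point can be shown with a toy example: a small neural network fits a threshold-shaped relationship that no straight line can capture. Everything here (the data, the threshold, the network size) is invented for illustration and is not taken from the authors' study.

```python
import math
import random

random.seed(2)

# Hypothetical threshold relationship: the outcome is flat until the input
# passes 0.5, then rises linearly. A straight line cannot fit this shape.
def target(x):
    return max(0.0, x - 0.5)

data = [(x / 50, target(x / 50)) for x in range(51)]

# One hidden layer of 8 tanh units, trained by plain per-sample backprop.
H = 8
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return h, b2 + sum(w2[i] * h[i] for i in range(H))

lr = 0.05
for _ in range(3000):
    for x, y in data:
        h, out = forward(x)
        err = out - y
        b2 -= lr * err
        for i in range(H):
            grad_h = err * w2[i] * (1 - h[i] ** 2)  # backprop through tanh
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * grad_h * x
            b1[i] -= lr * grad_h

mse = sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)
print(mse)
```

For this data the best possible straight-line fit leaves a mean squared error of roughly 0.005, so an MSE well below that is evidence the network has captured the threshold, which is the same kind of comparison against regression models that Somers and Casal draw.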

The target audience ranges from average readers seeking an appreciation of the potential of ANNs to advanced readers with a deeper understanding of neural networks and multiple regression modeling.

The overall strength of the article is its explanation of ANNs and their application to the real-world challenge of modeling what causes an employee to leave a job, along with its discussion of how an ANN learns in an artificial manner (Somers & Casal, 2009, p. 406).



Warwick, K. (2010). Cultured neural networks. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 224(2), 109–111. https://doi.org/10.1243/09596518JSCE916

Warwick (2010) describes the use of cultured biological neural networks, grown in vitro, as a potential bridge between biological decision-making and robotic control systems. Neurons from both lamprey and rat brains have been cultured and interfaced with electronic components through non-destructive electrode arrays to control simple robots in a laboratory environment. These studies are more attuned to biological learning, and the robotic component's performance was observed to improve progressively. The study (Warwick, 2010) also found that as the in vitro culture grew, the added complexity made useful insight difficult for the experimenters to extract. The work identified cultured neural networks as a novel mechanism for providing needed feedback to control systems, although time delays in the robotic element remained problematic for the overall study.

The targeted audience includes both biomedical professionals and technical readers interested in the challenges of the biological-machine interface.

The article gives the reader reinforcing context on how the similarities between biological and artificial neural networks can improve the man-machine interface. It also raises potential ethical challenges for the community, including Frankenstein-like concerns. It would have been helpful if the author had at least cursorily explored the future legal, moral, and ethical implications of this work.



Wilner, A. S. (2018). Cybersecurity and its discontents: Artificial intelligence, the Internet of Things, and digital misinformation. International Journal, 73(2), 308–316. https://doi.org/10.1177/0020702018782496

Wilner (2018) observes that the nature of cybersecurity is, unsurprisingly, in flux owing to ongoing challenges in Artificial Intelligence (AI), the Internet of Things (IoT), and digital misinformation. He notes the contentious nature of the term cybersecurity and prefers information security as the more accurate label, treating cybersecurity as the broader, evolving term with information security as a sub-component. He then explores the three major topics; of specific relevance to neural networks, AI affords the most promising ability to analyze large volumes of data and identify critical insights. He concludes that the greatest problem is that AI-based decisions may be too difficult for even the humans who create them to understand: his concern rests with AI algorithms whose outputs remain unclear even to the experts.

The target audience ranges from novice to expert, anyone attempting to understand the strategic issues facing cybersecurity overall.

The article's relevance to this topic lies in the concerns it raises within the cybersecurity community specific to AI and neural networks: can we reconcile our understanding of the underlying AI algorithms with their final outputs?

There are several observed weaknesses. For example, the author suggests that the new problem with digital misinformation campaigns is simply the prevalence of the data itself; the real issue is the ability of nation-state actors and others to make false information appear realistic, whether through Photoshop or by forging letterheads and signatures.



Yilmaz, A., & Sabuncuoglu, I. (2000). Input data analysis using neural networks. SIMULATION, 74(3), 128–137. https://doi.org/10.1177/003754970007400301

Yilmaz and Sabuncuoglu (2000) explore the use of neural networks to select appropriate probability distributions. They accomplish this by identifying and analyzing the input data in order to categorize the best-fitting distribution. They consider three types of neural networks: (1) back-propagation, (2) counter-propagation, and (3) probabilistic neural networks. Through exhaustive computational experiments, they find that counter-propagation provides the best performance for this analysis; the selected network was 97.4% successful in determining the best probability distribution. Furthermore, as the sample size increased, overall performance improved, as expected. They conclude that their input data analysis could serve as a reference model for applying neural networks and that the approach is worthy of future research to improve the field of probability distribution selection (Yilmaz & Sabuncuoglu, 2000).
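The core idea, mapping summary features of an input sample to the best-fitting distribution family, can be sketched very simply. The code below substitutes the simplest prototype-matching classifier, a nearest-centroid rule over sample skewness and kurtosis, for the authors' counter-propagation network; the distribution families, features, and any accuracy achieved here are illustrative assumptions, not their results.

```python
import random
import statistics as st

random.seed(3)

def moments(sample):
    """Standardized skewness and kurtosis of a sample: the input features."""
    m, s = st.mean(sample), st.pstdev(sample)
    skew = sum(((x - m) / s) ** 3 for x in sample) / len(sample)
    kurt = sum(((x - m) / s) ** 4 for x in sample) / len(sample)
    return (skew, kurt)

def draw(dist, n=500):
    """Draw a sample from one of three hypothetical candidate families."""
    if dist == "normal":
        return [random.gauss(0, 1) for _ in range(n)]
    if dist == "uniform":
        return [random.uniform(0, 1) for _ in range(n)]
    return [random.expovariate(1.0) for _ in range(n)]  # exponential

# "Training": average the feature vectors of several samples per family
# to form one prototype (centroid) per distribution.
families = ["normal", "uniform", "exponential"]
centroids = {}
for f in families:
    feats = [moments(draw(f)) for _ in range(20)]
    centroids[f] = tuple(st.mean(c) for c in zip(*feats))

def classify(sample):
    """Assign the sample to the family whose prototype is nearest."""
    sk, ku = moments(sample)
    return min(families,
               key=lambda f: (centroids[f][0] - sk) ** 2 + (centroids[f][1] - ku) ** 2)

print(classify(draw("exponential")))  # expected: "exponential"
```

The paper's observation that accuracy improves with sample size also holds in this sketch: the moment estimates stabilize as `n` grows, so the prototypes separate more cleanly.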

This technical article is written for advanced data science practitioners who require deeper knowledge of the field of neural networks and its applications.

The topic is relevant in that it delves further into the sub-categories and types of neural networks available. The article provides a deeper statistical discussion of using neural networks to identify the best probability distribution.

One major weakness deserved further development: it is unclear how Yilmaz and Sabuncuoglu (2000) were able to down-select so quickly to counter-propagation as the network type of choice. This is also a highly technical paper requiring a strong prior understanding of neural networks.
