Nanotechnology and Artificial Intelligence


    Context - Is Nanotechnology Used In Artificial Intelligence?

    Biotechnology, information technology, and nanotechnology are three fields on which current technical and scientific advancement increasingly relies. 

    The concept of combining bioscience, artificial intelligence (AI), and nanotechnology will usher in yet another revolution in science and technology, one that has been in the works for more than a decade. 

    Nonetheless, this anticipated interdisciplinary research integration remains a work in progress. 

    • Nanotechnology combines engineering and physical-science knowledge; it is one of the most significant emerging technological sectors, with applications in medicine, engineering, and agriculture. 
    • AI is a method of incorporating human-like reasoning into any technological device. It studies how the human brain thinks, learns, chooses, and functions as it tries to solve problems.


    The construction of the most common and successful models, such as artificial neural networks (ANNs) and other similar algorithms, is largely influenced by biological anatomy. 

    Improving machine functions linked to human intellect, such as reasoning, thinking, and problem solving, is an important AI aim. 

    • AI is being used in a growing number of disciplines: machine learning, deep learning, and ANNs are increasingly effective approaches in their own right, and the number of domains and businesses where AI now reigns supreme keeps expanding. 
    • AI, in conjunction with the Internet of Things (IoT) and other emerging sectors, has already changed many production and monitoring processes across a variety of industries, and the trend is continuing. 

    Nanotechnology is largely made up of sophisticated systems that aren't always compatible with certain parts of AI. 

    Nanotechnology, on the other hand, is thought to be a technique that AI will employ as the two fields converge. 

    Though such a picture may still seem futuristic, existing technology has begun to show signs of a similar harmonization. 

    • From fast-paced AI-assisted nanotechnology research to generating state-of-the-art materials to expanding the application area of AI utilizing nanotechnology-based computing devices, combining these two technologies may result in significant advances. 
    • Combined research may not only merge the two technologies but also boost research in each area, perhaps leading to a slew of new techniques for information and communication technologies. 
    • Biotechnology, cognitive studies, nanotechnology, robotics, AI, information and communication technology (ICT), and the sciences dealing with such issues are all exposed to larger political and societal debates. 
    • Meanwhile, AI has been used in nanoscience research for a variety of purposes, including analyzing exploratory procedures and assisting in the creation of novel nanodevices and nanomaterials. 

    There are various reasons why AI paradigms are used in nanoresearch. 

    • Nanotechnology is constrained by the natural limits of size; the governing physical laws differ vastly from those that apply at larger scales. 
    • As a result, one of the challenges nanotechnology must solve is correctly interpreting the results obtained from any such system (Ly et al. 2011). 
    • To make matters worse, in many systems the signal is strongly influenced by numerous components. 
    • In these instances, deriving theoretical approximations is difficult, so simulation approaches have been used to obtain accurate interpretations of the experimental outcomes. 

    Various AI machine learning paradigms may be used to generate research results as well as produce nanoapplications in the future. 

    • These strategies are particularly useful when dealing with a large number of connected factors at the same time, and they may effectively express and simplify complex/unknown data or functions (Mitchell 1997; Bishop 2006). 
    • Machine learning methodologies such as ANNs, collections of weighted, linked nodes, investigate these kinds of functions by adjusting the link weights through supervised or unsupervised algorithms. 
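As a minimal sketch of the weighted-node idea above, the following toy perceptron, a single ANN node, adjusts its link weights from labelled examples under a supervised rule; the data points, learning rate, and labels are illustrative inventions, not drawn from any cited study.

```python
import random

# Minimal supervised ANN node: a perceptron with two inputs.
# Illustrative training data: classify points by whether x + y > 1.
data = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.7, 0.9), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # link weights
b = 0.0                                        # bias term
lr = 0.1                                       # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Adjust link weights from labelled examples (supervised learning rule).
for _ in range(100):
    for x, target in data:
        err = target - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # matches the labels once trained
```

Because the toy data are linearly separable, the perceptron rule is guaranteed to converge; an unsupervised variant would instead cluster the inputs without the labels.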

    Other AI approaches can tackle a variety of optimization and search challenges. 

    • There are various machine learning approaches that may be used in nanotechnology research for complex categorization, prediction, correlation, data mining, clustering, and other control issues. 
    • These techniques include decision trees, support vector machines, Bayesian networks, and others. 
    • A few studies have also examined how AI techniques can exploit the computational power boost offered by future nanomaterials developed by nanoscience and used to fabricate nanodevices; nanocomputing is expected to provide powerful dedicated architectures for applying machine learning techniques. 
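As a toy illustration of the decision-tree technique listed above, its simplest one-split form is a decision stump; the "particle size" data, class names, and threshold search below are invented for illustration only.

```python
# Decision stump: the one-split special case of a decision tree.
# Synthetic, illustrative data: (particle size in nm, class label).
samples = [(5, "quantum dot"), (8, "quantum dot"), (40, "bulk-like"), (60, "bulk-like")]

def best_stump(samples):
    """Pick the size threshold that best separates the two classes."""
    sizes = sorted(s for s, _ in samples)
    best = (None, -1)
    # Candidate thresholds: midpoints between consecutive sizes.
    for i in range(len(sizes) - 1):
        t = (sizes[i] + sizes[i + 1]) / 2
        correct = sum(
            (label == "quantum dot") == (size <= t) for size, label in samples
        )
        if correct > best[1]:
            best = (t, correct)
    return best[0]

threshold = best_stump(samples)

def classify(size):
    return "quantum dot" if size <= threshold else "bulk-like"

print(threshold, classify(10), classify(50))
```

A full decision tree simply applies this split search recursively to each resulting subset; SVMs and Bayesian networks would tackle the same classification task with margin-based and probabilistic criteria instead.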


    The following section explores the bidirectional link between AI and nanotechnology via a variety of examples and applications. 


    In the nanoworld, scanning probe microscopy (SPM) is the most widely used imaging technology. 

    • This notion encompasses a variety of methods for obtaining pictures via the interaction of a sample and a probe. 
    • The tunneling current between the sample and the probe characterizes the sample topography through their interaction. 
    • After the creation of the first such microscope, other techniques were developed by altering the interactions between the tip and the sample. 
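The tunneling-current dependence just described is, to a first approximation, exponential in the tip–sample gap, I ∝ exp(−2κd) with κ = √(2mφ)/ħ; the sketch below assumes a typical metal work function of about 4.5 eV, an illustrative value.

```python
import math

# Exponential distance dependence of the tunneling current:
# I(d) ∝ exp(-2 * kappa * d), with kappa = sqrt(2 * m * phi) / hbar.
M_E = 9.109e-31     # electron mass, kg
HBAR = 1.055e-34    # reduced Planck constant, J s
EV = 1.602e-19      # joules per electronvolt

phi = 4.5 * EV                           # assumed work function (typical metal)
kappa = math.sqrt(2 * M_E * phi) / HBAR  # inverse decay length, 1/m

def relative_current(d_angstrom):
    """Tunneling current at gap d (in angstroms), relative to d = 0."""
    return math.exp(-2 * kappa * d_angstrom * 1e-10)

# Widening the gap by one angstrom cuts the current by roughly an order
# of magnitude, which is what gives the technique its height sensitivity.
print(relative_current(1.0))
```

This steep decay is why tiny changes in topography produce large, measurable current changes, and also why the raw signal is so sensitive to the many confounding factors discussed below.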

    SPM may also be used to manipulate individual atoms at the smallest scale. 

    Despite numerous attempts to improve judgment and the ability to control atoms, challenges remain in interpreting nanoscale information. 

    The probe–sample interactions are difficult to understand and are influenced by a variety of factors. 

    AI solutions might be a lifesaver in resolving such problems. 

    • In recent years, advances in multimodal SPM imaging for acquiring more complementary information about the sample have created a massive quantity of data, making it even more difficult to understand individual sample attributes. 
    • To address this problem, a technique known as functional recognition imaging (FR-SPM) has been developed, which seeks direct identification of local behaviors detected from spectroscopic responses, using neural networks trained on examples supplied by an expert. 
    • The cellular genetic algorithm (cGA), a subclass of GAs, is based on evolutionary optimization and is used to automate the imaging operation in SPM via software capable of tuning the probe's exact state and related control parameters. 
    • As a result, superior atomic resolution images may be acquired with no human involvement other than the preparation of samples and tips (Huy et al. 2009; Woolley et al. 2011). 
    • ANNs are widely utilized for categorizing numerous behavioral, structural, and physical aspects of nanomaterials on the nanoscale, which are employed in a wide range of applications, including carbon nanotubes, quantum-dot semiconductor optics and devices, chemical technology, and manufacturing. 
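The evolutionary loop behind such genetic-algorithm automation can be sketched in a few lines; this toy version evolves a single real-valued "control parameter" toward the maximum of an invented fitness function, standing in for the real probe-control parameters.

```python
import random

# Toy genetic algorithm: evolve a real-valued "control parameter" x
# to maximize an illustrative fitness function (peak at x = 3).
random.seed(1)

def fitness(x):
    return -(x - 3.0) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(60):
    # Selection: keep the fitter half as parents (elitism).
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Crossover + mutation: children average two parents, plus noise.
    children = [
        (random.choice(parents) + random.choice(parents)) / 2
        + random.gauss(0, 0.1)
        for _ in range(10)
    ]
    population = parents + children

best = max(population, key=fitness)
print(round(best, 2))  # converges near the optimum at 3
```

A real cGA differs in arranging individuals on a grid so that selection and crossover are restricted to local neighborhoods, but the select–recombine–mutate cycle is the same.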


    ANNs have recently been employed to investigate the nonlinear connection between input factors and output responses in the transparent conductive oxide deposition process. 

    • In optoelectronic devices such as solar cells, organic LEDs, and flat-panel displays, this kind of thin film is now utilized as an electrode (Bhosle et al. 2006). 
    • Better nanoantenna shapes were also developed through evolutionary optimization, outperforming the best existing radio-wave type of reference antennas. 
    • The fittest antenna shape among the linear dipole antennas, according to the GA, combines the magnetic resonance of the split ring with the electrical one (Feichtner et al. 2012). 
    • By thoroughly studying the working principles of the generated geometries, this strategy can yield nanoantenna structures for unique uses and supply novel design techniques. 
    • GAs have also been applied in the area of nano-optics. 
    • A careful design of nanoparticle light concentrators will have a significant influence on a variety of nano-optics applications, including optical manipulators, solar cells, plasmon-enriched photodetectors, modulators, and nonlinear optical devices. 



    One of the primary challenges that scientists encounter while working at the nanoscale is simulating the device under investigation, since true optical photographs at the nanoscale are not possible. 

    At this scale, images must be interpreted, and numerical simulations are sometimes the best method for obtaining an exact scheme of what is there in the picture. 

    • Nonetheless, they are difficult to use in many situations, and numerous factors must be considered in order to get an acceptable system representation. 
    • AI can help here by improving simulation performance and making data collection and interpretation easier. 
    • When functioning at the nanoscale, the use of ANNs in numerical simulations has been shown to be useful in a variety of ways. 
    • To begin, the software may be manually tuned to maintain numerical accuracy and physical meaning. 
    • Another use of ANNs in simulation software is to reduce the complexity of associated settings (Castellano-Hernández et al. 2012). 


    The combination of AI with existing and emerging nanocomputing technologies yields a wide range of applications (Service 2001; Bourianoff 2003). 

    • Since the inception of nanocomputers, AI paradigms have been utilized for different degrees of modeling, developing, and building prototypes of nanocomputing devices. 
    • Machine learning techniques applied to semiconductor-based hardware, aided by nano-hardware, may also offer a basis for a new, less expensive, and portable age of computing that spans high-performance computing, sensory data processing, and control activities (Uusitalo et al. 2011; Arlat et al. 2012). 

    Such challenges emerge in a variety of situations, but primarily with Big Data, which necessitates "computational intelligence" (Ladd et al. 2010; Maurer et al. 2012). 

    In this context, natural computing is usually done using several methodologies. 

    • Apart from various natural computing approaches, techniques such as DNA computing or quantum computing are now being thoroughly investigated (Darehmiraki 2010; Razzazi and Roayaei 2011; Ortlepp et al. 2012; Zha et al. 2013). 
    • DNA computing manipulates many variables in parallel. This is an example of how AI methodologies built on DNA computing may obtain a final result from a small preliminary data set, avoiding the enumeration of all possible solutions. 

    Other approaches worth examining are evolutionary and genetic algorithms. 

    Eventually, nanocomputing systems—of which only a handful are bioinspired—will include a broad range of new nanotechnologies. 

    As new physical working bases, reconfigurable architectures, and computational methods accumulate, these technologies will be able to leverage new data representations to apply machine learning paradigms to complicated issues in a broad range of applications. 


    Food science is rapidly evolving in tandem with nanotechnology. 

    • Nanotechnology answers the food market's need for technology that is critical to maintaining market leadership within the food processing business: generating dependable, appropriate, and tasty fresh food items. 
    • Preservatives and wrapping are both done using nanoparticles ("nano inside," "nano outside"). 
    • Nanoscale food additives may be utilized to affect product flavor, nutritional content, shelf life, and texture; they may even be used to detect infections; and they may serve as quality indicators. 
    • Nanotechnology opens up a wide range of possibilities for new product development and food system applications. 
    • Research & development opportunities for food additives and packaging are aided by AI tactics. 


    It has been demonstrated that a large number of nanosystems are capable of interacting with living neurons. 

    • Because the detectors are threshold devices similar to spiking neurons, a few CNT features enable us to lay down nanotube detectors that could help implement the pulse-train neural network function (Lee et al. 2003). 
    • The identification of volatile chemical compounds using CNT-coated acoustic and optical sensors is a promising application of ANN algorithms. 
    • A significant classification gain may be realized by combining multiple modules of acoustic and optical sensors, which is the setting in which ANNs can fully realize their potential (Penza et al. 2005). 
    • In the domain of pharmacology and nanomedicine, ANNs have become a well-established tool for nanoparticle formulation analysis and modeling, with high potential impact on chronic disease (Zarogoulidis et al. 2012). 
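The classification gain from combining sensor modules can be illustrated with a hypothetical fusion rule: average the per-channel confidence scores of independent acoustic and optical channels before thresholding. All scores and labels below are invented, not measured CNT sensor data.

```python
# Hypothetical multi-sensor fusion: average per-channel confidence
# scores for "volatile compound present" before thresholding.
acoustic = [0.8, 0.6, 0.9, 0.3]  # acoustic channel misfires on sample 1
optical  = [0.6, 0.3, 0.4, 0.2]  # optical channel misfires on sample 2
labels   = [1, 0, 1, 0]          # illustrative ground truth

def decide(scores, threshold=0.5):
    """Threshold each confidence score into a binary detection."""
    return [1 if s > threshold else 0 for s in scores]

# Fusion: average the two channels, then threshold the combined score.
fused = [(a + o) / 2 for a, o in zip(acoustic, optical)]

print(decide(acoustic), decide(optical), decide(fused))
```

Each channel alone makes one error on this toy data, while the fused scores recover all four labels; an ANN generalizes this by learning the per-channel weighting instead of fixing it at an even average.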

    Nanobots developed by researchers at the University of California, San Diego (UCSD) are capable of purifying the blood of toxins produced by bacteria. 

    These nanobots are roughly a quarter of the width of a human hair and can travel about 35 micrometers per second by "swimming" through the blood, propelled by ultrasound. 

    • Nanobots developed by MIT researchers in 2018 are so small and light that they might float through the air. 
    • Linking 2D electronic components to minute particles measuring between one billionth and one millionth of a meter may make this nanotechnology feasible. 

    The end product is a robot about the size of a human egg cell or a grain of sand. 

    • The combination of photodiode semiconductors, which can detect light and convert it to an electrical signal, allows for a constant supply of power to the environmental sensors installed in these robots. 
    • The modest electrical charge created is sufficient to allow this technology to function without the need for a battery. 

    As for the value of these nanobots, the researchers want to send them on missions to hard-to-reach regions, such as inspecting pipelines and the human digestive system. 

    This minuscule emissary may be released into the pipe's inlet, allowed to flow along the pipe, and then retrieved at its exit. 

    The data acquired by its sensors, which include the spatiotemporal concentration of specific chemical compounds such as hormones and enzymes, may then be downloaded and examined once the robot is recovered. 



    Many difficulties that arise in the research of nanotechnology might be solved using AI. 

    • The usage of ANNs and GAs has been investigated in a variety of scenarios, ranging from data interpretation in scanning probe microscopy to the characterization and classification of nanoscale material characteristics. 
    • Numerous initiatives to build nanomachines and use them to implement cutting-edge artificial intelligence paradigms have also been examined. 
    • These ground-breaking initiatives call for a true confluence of nanotechnology and artificial intelligence in high-performance computer systems enabled by biomaterial-based nanocomputing devices. 

    Finally, the considerable potential impact of AI techniques has been shown. 

    • Nanotechnology, on the other hand, has been applied in biomedical research, therapeutic applications, and food science. 
    • Nanotechnology focuses on bottom-up design, while AI research usually takes a top-down approach to solving problems. 
    • The merging of those areas will create approaches for a variety of complex problems that need several layers of explanation and relationships. 
    • As previously said, the convergence of nanotechnology and artificial intelligence (AI) may assist in this endeavor.

    ~ Jai Krishna Ponnappan



    Arlat, Jean, Zbigniew Kalbarczyk, and Takashi Nanya. “Nanocomputing: Small devices, large dependability challenges.” IEEE Security & Privacy 10, no. 1 (2012): 69–72.

    Bhosle, V., A. Tiwari, and J. Narayan. “Metallic conductivity and metal-semiconductor transition in Ga-doped ZnO.” Applied Physics Letters 88, no. 3 (2006): 032106.

    Bishop, Christopher M. Pattern Recognition and Machine Learning. Berlin: Springer (2006).

    Bourianoff, George. “The future of nanocomputing.” Computer 36, no. 8 (2003): 44–53.

    Castellano-Hernández, Elena, Francisco B. Rodríguez, Eduardo Serrano, Pablo Varona, and Gomez Monivas Sacha. “The use of artificial neural networks in electrostatic force microscopy.” Nanoscale Research Letters 7, no. 1 (2012): 1–6.

    Darehmiraki, Majid. “A semi-general method to solve the combinatorial optimization problems based on nanocomputing.” International Journal of Nanoscience 9, no. 5 (2010): 391–398.

    Feichtner, Thorsten, Oleg Selig, Markus Kiunke, and Bert Hecht. “Evolutionary optimization of optical antennas.” Physical Review Letters 109, no. 12 (2012): 127701.

    Huy, Nguyen Quang, Ong Yew Soon, Lim Meng Hiot, and Natalio Krasnogor. “Adaptive cellular memetic algorithms.” Evolutionary Computation 17, no. 2 (2009): 231–256.

    Ladd, Thaddeus D., Fedor Jelezko, Raymond Laflamme, Yasunobu Nakamura, Christopher Monroe, and Jeremy Lloyd O’Brien. “Quantum computers.” Nature 464, no. 7285 (2010): 45–53.

    Lee, Ian Y., Xiaolei Liu, Bart Kosko, and Chongwu Zhou. “Nanosignal processing: Stochastic resonance in carbon nanotubes that detect subthreshold signals.” Nano Letters 3, no. 12 (2003): 1683–1686.

    Ly, Dung Q., Leonid Paramonov, Calvin Davidson, Jeremy Ramsden, Helen Wright, Nick Holliman, Jerry Hagon, Malcolm Heggie, and Charalampos Makatsoris. “The matter compiler-towards atomically precise engineering and manufacture.” Nanotechnology Perceptions 7, no. 3 (2011): 199–217.

    Maurer, P.C., G. Kucsko, C. Latta, L. Jiang, N.Y. Yao, S.D. Bennett, F. Pastawski, D. Hunger, N. Chisholm, M. Markham, and D.J. Twitchen. “Room-temperature quantum bit memory exceeding one second.” Science 336, no. 6086 (2012): 1283–1286.

    Mitchell, Tom M. Machine Learning. Maidenhead: McGraw Hill (1997).

    Ortlepp, Thomas, Stephen R. Whiteley, Lizhen Zheng, Xiaofan Meng, and Theodore Van Duzer. “High-speed hybrid superconductor-to-semiconductor interface circuit with ultra-low power consumption.” IEEE Transactions on Applied Superconductivity 23, no. 3 (2012): 1400104.

    Penza, M., G. Cassano, P. Aversa, A. Cusano, A. Cutolo, M. Giordano, and L. Nicolais. “Carbon nanotube acoustic and optical sensors for volatile organic compound detection.” Nanotechnology 16, no. 11 (2005): 2536.

    Razzazi, Mohammadreza, and Mehdy Roayaei. “Using sticker model of DNA computing to solve domatic partition, kernel and induced path problems.” Information Sciences 181, no. 17 (2011): 3581–3600. 

    Service, Robert F. “Nanocomputing. Assembling nanocircuits from the bottom up.” Science 293, no. 5531 (2001): 782.

    Uusitalo, Mikko A., Jaakko Peltonen, and Tapani Ryhänen. “Machine learning: How it can help nanocomputing.” Journal of Computational and Theoretical Nanoscience 8, no. 8 (2011): 1347–1363.

    Woolley, Richard A. J., Julian Stirling, Adrian Radocea, Natalio Krasnogor, and Philip Moriarty. “Automated probe microscopy via evolutionary optimization at the atomic scale.” Applied Physics Letters 98, no. 25 (2011): 253104.

    Zarogoulidis, Paul, Ekaterini Chatzaki, Konstantinos Porpodis, Kalliopi Domvri, Wolfgang Hohenforst-Schmidt, Eugene P. Goldberg, Nikos Karamanos, and Konstantinos Zarogoulidis. “Inhaled chemotherapy in lung cancer: Future concept of nanomedicine.” International Journal of Nanomedicine 7 (2012): 1551.

    Zha, Xinwei, Chenzhi Yuan, and Yanpeng Zhang. “Generalized criterion for a maximally multi-qubit entangled state.” Laser Physics Letters 10, no. 4 (2013): 045201.

    Applied Artificial Intelligence for a Post COVID-19 Era


    In the post-COVID-19 era, businesses will use artificial intelligence (AI) in a variety of ways, according to this article. We demonstrate how AI can be used to create an inclusive paradigm that can be applied to companies of all sizes.

    Researchers may find the advice useful in identifying many approaches to address the challenges that businesses may face in the post-COVID-19 period. 

    Here we examine a few key global challenges that policymakers can remember before designing a business model to help the international economy recover once the recession is over.

    Overall, this article aims to improve business stakeholders' awareness of the value of AI application in companies in a competitive market in the post-COVID-19 timeframe.

    The COVID-19 epidemic, which began in December 2019 in Wuhan, China, has had a devastating effect on the global economy. 

    During this unparalleled socioeconomic crisis for business, it is too early to propose a business model that would serve companies until the planet is free of the COVID-19 pandemic. 

    Researchers have begun forecasting the effects of COVID-19 on global capital markets and its direct or indirect impact on economic growth based on current literature on financial crises or related exogenous shocks.

    Following the failure of Lehman Brothers in 2008, a body of literature has emerged that focuses on applying emerging technologies such as artificial intelligence (AI) to the ‘Space Economy'. Existing AI research demonstrates AI's applicability and usefulness in restructuring and reorganizing economies and financial markets around the world.

    The implementation of this technology is extremely important in academia and practice to kick-start economic growth and reduce inequalities in resource distribution for stakeholders' development. 

    Based on the above, the aim of this article is to determine the extent of AI use by companies in the post-COVID-19 era, as there are few comprehensive studies on using AI to resolve a pandemic shock like the one we witnessed at the start of 2020. 

    To the best of our understanding, this is the first report to demonstrate the potential for AI use by businesses in the COVID-19 recovery process.

    Advantages in Using AI Until COVID-19 Is Over.

    Companies may increase the value of their businesses by lowering operational costs. According to Porter, firms use their sustainability models to gain a comparative edge over their competitors. Dealing with the big data generated by fast information traffic across the Internet has been one of the biggest problems faced by businesses over the last decade.

    To fix this problem, businesses have begun to use artificial intelligence (AI) to boost the global economy. 

    Small and medium-sized businesses, as well as large corporations, benefit from government interventions that push them to think creatively. 

    Furthermore, when implementing AI, these firms make some disruptive improvements to their operations.

    The construction of such infrastructure by large, medium, and small businesses has a positive effect on many countries' jobs, GDP, and inflation rates, to name a few. 

    Furthermore, the use of a super-intelligent device opens new possibilities for businesses of all sizes, allowing for the transfer of critical data in a matter of nanoseconds.

    As a result, the economy's growth is noticeable because businesses of all sizes, especially in advanced countries, can use this sophisticated and effective business model built on advanced technology like AI. 

    Big data processing enables businesses to reduce the percentage of error in their business models.

    Furthermore, the deployment of these emerging technologies has expanded global collaboration and engagement as awareness and research and development (R&D) continue to spread globally from one country to another. 

    Competition among rivals in the same market, as well as between large and small companies, drives the search for a long-term business model. 

    By incorporating user-friendly technology into everyday life, AI-based models allow businesses to enter rural or underdeveloped areas.

    In the absence of a specialist, a digital-to-biological converter, for example, can produce copies of flu vaccines remotely to benefit the local health system. 

    As a result, different sectors such as health, transportation, manufacturing, and agriculture contribute to the growth of the country's economy, which has an impact on the global economy.

    During the financial crisis of 2008, businesses' use of AI remained relatively constrained. Companies are now attempting to use a hybrid Monte Carlo decision-making method in the increasingly unstable post-coronavirus timeframe due to rapid technological advancements. 

    Companies must understand the extraordinary harm inflicted by the novel coronavirus before adapting AI-based models to stabilize the economy from the current recession, which is not equivalent to past financial crises, such as the crash of Lehman Brothers.


    AI for Global Development in the Post-COVID-19 Era

    One of the main environmental problems of recent decades has been to limit global warming below 2 degrees Celsius in order to minimize the chance of biodiversity loss. The human and animal kingdoms' livelihoods are also at risk because of accelerated climate change.

    According to several reports, failing to protect biodiversity can pose a challenge to humans. Furthermore, modern business practices affect the climate and may enable a dangerous virus to take up residence in a human being. 

    As a result, biodiversity conservationists must maintain a broad archive related to industry that is impossible to obtain manually.

    Businesses must first find ecosystems to preserve before establishing wildlife corridors, which are extremely important biologically. 

    Consider the states of Montana and Idaho in the United States, where wildlife conservation scientists use AI-assisted devices to monitor and document the movements of wild animals. As a result, organizations will use AI-embedded technology to reduce biodiversity threats and continue to focus on mitigating climate change throughout the post-COVID-19 era.

    The vast application of AI can be seen in the healthcare industry, which is a major problem for all countries. During the recent pandemic, we saw the relevance of active learning and cross-population test models, as well as the use of AI-driven methods. 

    For example, robots can clean hospitals to aid health workers, 3D printers can produce personal protective equipment (PPE) for health workers in hospitals and nursing homes, and a smartphone-enabled monitoring device can detect close contact between infected people, to name a few examples. 

    AI had already been introduced among healthcare businesses in the decades before the COVID-19 pandemic.

    For example, IBM Watson Health's AI scheme has been used in conjunction with Barrow Neurological Institute to coordinate the study of several trials to draw conclusions regarding the genes linked to Amyotrophic Lateral Sclerosis (ALS) disorder.

    Furthermore, only modern equipment allows for remote treatment without endangering the health care provider's safety. As a result, after we've recovered from the recession, businesses will need to analyze a massive amount of data from any impacted country using their AI-based forecasting model.

    This will help to reduce the chances of another pandemic occurring in the future. In recent days, we've seen a massive investment in renewable energy from both the public and private sectors in both developed and developing countries (Bloomberg NEF). 

    With the assistance of AI-based technologies, businesses will start using their invested capital and produce more units of renewable energy (or green energy) in the post-COVID-19 period. 

    Quantum computing, for example, could help control the plasma reaction in a nuclear fusion reactor, reducing the use of fossil fuels and producing renewable energy.

    AI can also help major companies find a technology-enhanced way to manage the expense of cooling systems in big data centers. DeepMind's system, used by large corporations such as Google, is an example of cost-effective, smarter energy management.

    We may observe a dead subjectivity in metaphysical zombies (p-zombies) generated by non-self-improving AI. By integrating AI with current technologies, companies can solve complex issues using biological or artificial neural networks, or they can use AI that does not self-improve even when communicating with government systems. 

    Industry should concentrate on a limited time span to develop an accurate early forecasting model with a specific dataset to test the suitability of an AI program.

    If companies learn how to reduce the cost of applying AI, how to integrate AI over time, and how to manage the different parameters of global issues using AI, they can be more effective. As a result, the global control mechanism would be able to implement a small superintelligence for the good of humanity. 

    An Investigation into the Use of Artificial Intelligence in Cryptocurrency Forecasting 

    Let's look at an example of AI in action with real-time details. In this part, we demonstrate how artificial intelligence can be used in time-series forecasting, specifically using an artificial neural network (ANN).

    The ANN is made up of a vast number of strongly interconnected processing elements, much like the human brain. Neural networks are now considered among the most advanced approaches for natural language processing and computer vision.

    For example, the ANN algorithm outperforms several single or hybrid classical forecasting techniques such as ARIMA and GARCH in a study on bank and company bankruptcy prediction. In this short experiment, we forecast a sample time series using a mixture of well-known neural network algorithms, including long short-term memory (LSTM), time-lagged neural network (TLNN), feed-forward neural network (FNN), and seasonal artificial neural network (SANN). To keep the study straightforward, we take the monthly average closing price in each year from the regular observations, using one portion of the data for testing and the remainder for training.

    The four models listed above use this training approach to try to recognize regularities and trends in the input data, learn from historical data, and then provide us with generalized forecast values based on previously established knowledge. 

    As a result, the system is self-adaptive and non-linear, and it therefore defies a priori statistical distribution assumptions. Our experiment shows that the LSTM model is the safer approach for forecasting bitcoin market movement based on error metrics such as root mean square error (RMSE).
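The RMSE criterion used to compare the models above is straightforward to compute; the price series and forecasts below are invented numbers, included only to make the metric concrete.

```python
import math

def rmse(actual, forecast):
    """Root mean square error between actual and forecast series."""
    return math.sqrt(
        sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)
    )

# Illustrative monthly closing prices (actual) vs. two model forecasts.
actual     = [100.0, 102.0, 101.0, 105.0]
forecast_a = [99.0, 103.0, 100.0, 106.0]  # e.g. an LSTM-style model
forecast_b = [95.0, 110.0, 97.0, 112.0]   # e.g. a weaker baseline

# The model with the lower RMSE fits the held-out data better.
print(rmse(actual, forecast_a), rmse(actual, forecast_b))
```

Model comparison then reduces to computing this score on the held-out test portion for each trained network and preferring the lowest value.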

    It shows that the price of cryptocurrencies has been declining since January of this year. However, as transaction costs and other financial or environmental exogenous shocks, such as economic lockdown due to COVID-19, are factored in, the model becomes more complicated. 

    Note that the aim of the above-mentioned experiment is to demonstrate the applicability of ANN rather than to draw policy conclusions from the findings.

    The Difficulties of Using AI After the COVID-19 Crisis

    AI ushers in a new era in the global economy. However, several reports, such as those by Roubini and Stiglitz presented at the World Economic Forum (WEF), raise significant concerns about the use of AI. They state that a significant amount of money and R&D investment is needed for AI-enabled robots that can perform complex tasks.

    In a rising economy, there is limited potential to incorporate both small and large enterprises in the same model, which might not be viable. According to current research, a large loss of jobs will stifle economic growth. As businesses turn to alternative digital money such as cryptocurrencies, the economy's uncertainty may increase.

    A lack of resources for small businesses can result in a wider performance gap between the public and private sectors, or between small and large businesses.

    This could limit the reliability and precision of big data processing and the implementation of a universal business model. The ability of a small group of businesses to use AI to their advantage could stifle global economic growth. Furthermore, there is the possibility of a disastrous AI risk.

    The problems associated with AI safety or alignment can be a major source of concern for businesses, particularly in the aftermath of the coronavirus outbreak, when there could be a shortage of qualified personnel. Companies should rely on a forward-thinking taxonomy because it is difficult to be certain about future uncertainty.

    For example, a biohacking company might use AI to decipher reported genomes, potentially triggering multiple pandemics, and such a business model could build neural interfaces that negatively impact human brains. As a result, it is also unclear to what degree businesses will be able to use AI efficiently and successfully once the global economy has recovered from the COVID-19 pandemic.

    We discuss a few challenges and major benefits that any company can take advantage of in the post-COVID-19 timeframe.

    However, we recognize that we face enormous problems, and policymakers from all over the world should work together to address these concerns.

    One of the key challenges facing policymakers is determining how to incorporate responsible commercial practices in order to safely transfer data so that it can be analyzed by AI-based technologies for the good of society. Local and foreign decision-makers must share their expertise in order to inform the general public about these technologies and reduce the chance of job loss.

    Furthermore, the world's economic growth can be restructured through "Artificial Intelligence Marketing" if regulators enable businesses to use AI to improve production-led profitability and mitigate risk through creative methods.

    Based on other studies' forecasts, we expect AI-led businesses to outperform humans at many tasks as soon as the global economy recovers from the COVID-19 pandemic.

    In a nutshell, AI technologies in the post-COVID-19 period will allow individuals and businesses to collaborate for accelerated global growth by outweighing the negative aspects of technology use in society.


    Software Development - Science, Engineering, Art, or Craft

    An opinionated look at our trade


    There is general consensus that the software development process is imperfect, sometimes grossly so, for any number of human (management, skill, communication, clarity, etc) and technological (tooling, support, documentation, reliability, etc) reasons.  And yet, when it comes to talking about software development, we apply a variety of scientific/formal terms:
    1. Almost every college / university has a Computer Science curriculum.
    2. We use terms like "software engineer" on our resumes.
    3. We use the term "art" (as in "art of software development") to acknowledge the creative/creation process (the first volume of Donald Knuth's The Art of Computer Programming was published in 1968).
    4. There is even a "Manifesto for Software Craftsmanship" created in 2009 (the "Further Reading" link is a lot more interesting than the Manifesto itself.)
    The literature on software development is full of phrases that talk about methodologies, giving the ignorant masses, newbie programmers, managers, and even senior developers the warm fuzzy illusion that there is some repeatable process to software development that warrants words like "science" and "engineer."  Those who recognize the loosey-goosey quality of those methodologies probably feel more comfortable describing the software development process as an "art" or a "craft", possibly bordering on "witchcraft."
    The Etymology of the Terms we Use
    By way of introduction, we'll use the etymology of these terms as a baseline of meaning.
    Science
    "what is known, knowledge (of something) acquired by study; information;" also "assurance of knowledge, certitude, certainty," from Old French science "knowledge, learning, application; corpus of human knowledge" (12c.), from Latin scientia "knowledge, a knowing; expertness," from sciens (genitive scientis) "intelligent, skilled," present participle of scire "to know," probably originally "to separate one thing from another, to distinguish," related to scindere "to cut, divide," from PIE root *skei- "to cut, to split" (source also of Greek skhizein "to split, rend, cleave," Gothic skaidan, Old English sceadan "to divide, separate;" see shed (v.)).
    From late 14c. in English as "book-learning," also "a particular branch of knowledge or of learning;" also "skillfulness, cleverness; craftiness." From c. 1400 as "experiential knowledge;" also "a skill, handicraft; a trade." From late 14c. as "collective human knowledge" (especially "that gained by systematic observation, experiment, and reasoning). Modern (restricted) sense of "body of regular or methodical observations or propositions concerning a particular subject or speculation" is attested from 1725; in 17c.-18c. this concept commonly was called philosophy. Sense of "non-arts studies" is attested from 1670s.
    Engineer
    mid-14c., enginour, "constructor of military engines," from Old French engigneor "engineer, architect, maker of war-engines; schemer" (12c.), from Late Latin ingeniare (see engine); general sense of "inventor, designer" is recorded from early 15c.; civil sense, in reference to public works, is recorded from c. 1600 but not the common meaning of the word until 19c (hence lingering distinction as civil engineer). Meaning "locomotive driver" is first attested 1832, American English. A "maker of engines" in ancient Greece was a mekhanopoios.
    Art
    early 13c., "skill as a result of learning or practice," from Old French art (10c.), from Latin artem (nominative ars) "work of art; practical skill; a business, craft," from PIE *ar-ti- (source also of Sanskrit rtih "manner, mode;" Greek arti "just," artios "complete, suitable," artizein "to prepare;" Latin artus "joint;" Armenian arnam "make;" German art "manner, mode"), from root *ar- "fit together, join" (see arm (n.1)).

    In Middle English usually with a sense of "skill in scholarship and learning" (c. 1300), especially in the seven sciences, or liberal arts. This sense remains in Bachelor of Arts, etc. Meaning "human workmanship" (as opposed to nature) is from late 14c. Sense of "cunning and trickery" first attested c. 1600. Meaning "skill in creative arts" is first recorded 1610s; especially of painting, sculpture, etc., from 1660s. Broader sense of the word remains in artless.
    Craft
    Old English cræft (West Saxon, Northumbrian), -creft (Kentish), originally "power, physical strength, might," from Proto-Germanic *krab-/*kraf- (source also of Old Frisian kreft, Old High German chraft, German Kraft "strength, skill;" Old Norse kraptr "strength, virtue"). Sense expanded in Old English to include "skill, dexterity; art, science, talent" (via a notion of "mental power"), which led by late Old English to the meaning "trade, handicraft, calling," also "something built or made." The word still was used for "might, power" in Middle English.
    Methodology (Method)
    And for good measure:
    early 15c., "regular, systematic treatment of disease," from Latin methodus "way of teaching or going," from Greek methodos "scientific inquiry, method of inquiry, investigation," originally "pursuit, a following after," from meta- "after" (see meta-) + hodos "a traveling, way" (see cede). Meaning "way of doing anything" is from 1580s; that of "orderliness, regularity" is from 1610s. In reference to a theory of acting associated with Russian director Konstantin Stanislavsky, it is attested from 1923.
    Common Themes
    Looking at the etymology of these words reveals some common themes (created using the amazing online mind-mapping tool MindMup):

    What associations do we glean from this?
    • Science: Skill
    • Art: Craft, Skill
    • Craft: Art, Skill, Science
    • Methodology: Science
    Interestingly, the term "engineer" does not directly associate with the etymology of science, art, craft, or methodology, but one could say those concepts are facets of "engineering" (diagram created with the old horse Visio):

    This is arbitrary and doesn't mean we're claiming outright that software development is engineering, but it does give one an idea of what it might mean to call oneself a "software engineer."

    The Scientific Method

    How many software developers could actually, off the top of their head, describe the scientific method?  Here is one description:

    The scientific method is an ongoing process, which usually begins with observations about the natural world. Human beings are naturally inquisitive, so they often come up with questions about things they see or hear and often develop ideas (hypotheses) about why things are the way they are. The best hypotheses lead to predictions that can be tested in various ways, including making further observations about nature. In general, the strongest tests of hypotheses come from carefully controlled and replicated experiments that gather empirical data. Depending on how well the tests match the predictions, the original hypothesis may require refinement, alteration, expansion or even rejection. If a particular hypothesis becomes very well supported a general theory may be developed.
    Although procedures vary from one field of inquiry to another, identifiable features are frequently shared in common between them. The overall process of the scientific method involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments based on those predictions. A hypothesis is a conjecture, based on knowledge obtained while formulating the question. The hypothesis might be very specific or it might be broad. Scientists then test hypotheses by conducting experiments. Under modern interpretations, a scientific hypothesis must be falsifiable, implying that it is possible to identify a possible outcome of an experiment that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.
    We can summarize this as an iterative process of:
    1. Observation
    2. Question
    3. Hypothesize
    4. Create tests
    5. Gather data
    6. Revise hypothesis (go to step 3)
    7. Develop general theories and test for consistency with other theories and existing data

    Software Development: Knowledge Acquisition and Prototyping / Proof of Concept
    We can map the steps of the scientific method onto the parts of the software development process that involve knowledge acquisition and prototyping:
    1. Observe current processes.
    2. Ask questions about current processes to establish patterns.
    3. Hypothesize how those processes can be automated and/or improved.
    4. Create some tests that measure success/failure and performance/accuracy improvement.
    5. Gather some test data for our tests.
    6. Create some prototypes / proof of concepts and revise our hypotheses as a result of feedback as to whether the new processes meet existing process requirements, are successful, and/or improve performance/accuracy.
    7. Abstract the prototypes into general solutions and verify that they are consistent with other processes and data.

    Where we Fail
    Software development actually fails at most or all of these steps.  The scientific method is typically applied to the observation of nature.  As with nature, our observations sometimes come to the wrong conclusions.  Just as we observe the sun going around the earth and erroneously come up with a geocentric theory of the earth, sun, moon, and planets, our understanding of processes is fraught with errors and omissions.  As with observing nature, we eventually hit the hard reality that what we understood about the process is incomplete or incorrect.  One pitfall is that, unlike nature, we also have the drawback that as we're prototyping and trying to prove that our new software processes are better, the old processes are also evolving, so by the time we publish the application, it is, like a new car being driven off the lot, already obsolete.
    Also, the software development process in general, and the knowledge acquisition phase in particular, typically doesn't determine how to measure success/failure and performance improvement/accuracy of an existing process, simply because we, as developers, lack the tools and resources to measure existing processes (there are exceptions of course.)  We are, after all, attempting to replace a process, not just understand an existing, immutable process and build on that knowledge.  How do we accurately measure the cost/time savings of a new process when we can barely describe an existing process?  How do we compare the training required to efficiently utilize a new process when everyone has "cultural knowledge" of the old process?  How do we overcome the resistance to new technology?  Nature doesn't care if we improve on a biological process by creating vaccines and gene therapies to cure cancer, but secretaries and entrenched engineers alike do very much care about new processes that require new and different skills and potentially threaten to eliminate their jobs.
    And frequently enough, the idea of how an existing process can be improved comes from incomplete (or completely omitted) observation and questions, but begins at step #3, imagining some software solution that "fixes" whatever concept the manager or CEO thinks can be improved upon, and yes, in many cases, the engineer thinks can be improved upon--we see this in the glut of new frameworks, new languages and forked GitHub repos that all claim to improve upon someone else's solution to a supposedly broken process.  The result is a house of cards of poorly documented and buggy implementations.  The result is why Stack Overflow exists.
    As to the last step, abstracting the prototypes into general solutions, this is the biggest failure of all of software development -- re-use.  As Douglas Schmidt wrote in C++ Report magazine, in 1999 (source):
    Although computing power and network bandwidth have increased dramatically in recent years, the design and implementation of networked applications remains expensive and error-prone. Much of the cost and effort stems from the continual re-discovery and re-invention of core patterns and framework components throughout the software industry.
    Note that he wrote 17 years ago (at the time of this article) and it still remains true today.
    Reuse has been a popular topic of debate and discussion for over 30 years in the software community. Many developers have successfully applied reuse opportunistically, e.g., by cutting and pasting code snippets from existing programs into new programs. Opportunistic reuse works fine in a limited way for individual programmers or small groups. However, it doesn't scale up across business units or enterprises to provide systematic software reuse. Systematic software reuse is a promising means to reduce development cycle time and cost, improve software quality, and leverage existing effort by constructing and applying multi-use assets like architectures, patterns, components, and frameworks.
    Like many other promising techniques in the history of software, however, systematic reuse of software has not universally delivered significant improvements in quality and productivity.
    The Silver Lining, Sort Of
    Now granted, we can install a variety of flavors of Linux on everything from desktop computers to single board computers like the Raspberry Pi or Beaglebone.  Cross-compilers let us compile and re-use code on multiple processors and hardware architectures.  .NET Core is open source, running on Windows, Mac, and Linux.  And script languages like Python, Ruby, and Javascript hide the low-level compiled implementations in the subconscious mind of the programmer, enabling code portability for our custom applications.  However, we are still stuck in the realm of, as Mr. Schmidt puts it: "Component- and framework-based middleware technologies."  Using those technologies, we still have to "re-discover and re-invent" much of the glue that binds these components.

    The Engineering Method

    What is engineering?
    Engineering is the application of mathematics, empirical evidence and scientific, economic, social, and practical knowledge in order to invent, innovate, design, build, maintain, research, and improve structures, machines, tools, systems, components, materials, processes and organizations. (source)

    Seriously?  And we have the hubris to call what we do software engineering?
    "Application of" can only assume a deep understanding of whatever items in the list are applicable to what we're building.
    "In order to" can only assume proficiency in the set of necessary skills.
    And the "New stuff" certainly assumes that 1) it works, 2) it works well enough to be used, and 3) people will want it and know how to use it.
    Engineering implies knowledge, skill, and successful adoption of the end product.  To that end, there are formal methodologies that have been developed, for example, the Department of Energy Systems Engineering Methodology (SEM):
    The Department of Energy Systems Engineering Methodology (SEM) provides guidance for information systems engineering, project management, and quality assurance practices and procedures. The primary purpose of the methodology is to promote the development of reliable, cost-effective, computer-based solutions while making efficient use of resources. Use of the methodology will also aid in the status tracking, management control, and documentation efforts of a project. (source)
    Engineering Process Overview

    Notice some of the language:
    • to promote the development of:
    • reliable
    • cost-effective
    • computer-based solutions
    • the methodology will also aid in the:
    • status tracking
    • management control
    • and documentation efforts
    This actually sounds like an attempt to apply the scientific method successfully to software development.
    Another Engineering Model - the Spiral Development Model
    There are many engineering models to choose from, but here is one more, the Spiral Development Model.  It consists of Phases, Reviews, and Iterations (source):
    • Phases
    • Review Milestones Process
    • Iterations
    • Inception Phase
    • Elaboration Phase
    • Construction Phase
    • Transition Phase
    • Life Cycle Objectives Review
    • Life Cycle Architecture Review
    • Initial Operating Capability Review
    • Product Readiness Review + Functional Configuration Audit
    • Iteration Structure Description: Composition of an 8-Week Iteration
    • Design Period Activities
    • Development Period Activities
    • Wrap-Up Period Activities

    In each phase, the Custom Engineering Group (CEG) works closely with the customer to establish clear milestones to measure progress and success. (source)
    This harkens a wee bit to Agile's "customer collaboration", but notice again the concepts of:
    • a project plan
    • a test plan
    • supporting documentation
    • customer sign-off of specifications
    And the formality implied in:
    • investigation
    • design
    • development
    • test and user acceptance
    True software engineering practically demands skilled professionals, whether they are developers, managers, technical writers, contract negotiators, etc.  Certainly there is room for the people of all skill levels, but the structure of a real engineering team is such that "juniors" are mentored, monitored, and given appropriate tasks for their skill level and in fact, even "senior" people review each other's work.
    Where we Fail
    Instead, we have Agile Development and its Manifesto (source, bold is mine):
    • Individuals and interactions over processes and tools
    • Working software over comprehensive documentation
    • Customer collaboration over contract negotiation
    • Responding to change over following a plan
    How can we conclude that Agile Development is engineering at all?
    The Agile Manifesto appears to specifically de-emphasize a scientific method for software development, and it also de-emphasizes the skills actual engineering requires of both software developers and managers, instead emphasizing an ill-defined psychological approach to software development involving people, interactions, collaboration, and flexibility.  While these are necessary skills as well, they are not more important than the skills and formal processes that software development requires.  In fact, Agile creates an artificial tension between the two sides of each of the bullet points above, often leading to an extreme adoption of the left side, for example, "the code is the documentation."
    From real world experience, the Manifesto would be better written replacing the word "over" with "and":
    • Individuals and interactions and processes and tools
    • Working software and comprehensive documentation
    • Customer collaboration and contract negotiation
    • Responding to change and following a plan
    However, this approach is rarely embraced by over-eager coders who want to dive into a new-fangled technology and start coding, and it also isn't embraced by managers who (incorrectly) view "over" as reducing cost and time to market, and "and" as (incorrectly) increasing them.  As with any approach, a balance must be struck that is appropriate for the business domain and product to be delivered, but rarely does that conversation happen.
    Regardless, Agile Development is not engineering.  But is it a methodology?

    Software Development Methodologies: Are They Really?

    As Chris Hohmann writes:
    [I] chose the Macmillan online dictionary and found this:

    Approach (noun): a particular way of thinking about or dealing with something
    Philosophy: a system of beliefs that influences someone’s decisions and behaviour. A belief or attitude that someone uses for dealing with life in general
    Methodology: the methods and principles used for doing a particular kind of work, especially scientific or academic research
    Do we view whatever structure we impose on the software development process as an approach, a philosophy, or a methodology?
    So-Called Methodologies
    Wikipedia has pages listing dozens of philosophies, and IT Knowledge Portal lists 13 software development methodologies:
    1. Agile
    2. Crystal
    3. Dynamic Systems Development
    4. Extreme Programming
    5. Feature Driven Development
    6. Joint Application Development
    7. Lean Development
    8. Rapid Application Development
    9. Rational Unified Process
    10. Scrum
    11. Spiral (not the Spiral Development Model described earlier)
    12. System Development Life Cycle
    13. Waterfall (aka Traditional)
    And even this list is missing so-called methodologies like Test Driven Development and Model Driven Development.
    As methodology derives from Greek methodos: "scientific inquiry, method of inquiry, investigation", the definition of Methodology from Macmillan seems reasonable enough.  Let's look at two of the most common (and seen as opposing) so-called methodologies, Waterfall and Agile.
    The waterfall model is a sequential (non-iterative) design process, used in software development processes, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of conception, initiation, analysis, design, construction, testing, production/implementation and maintenance.  The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Because no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.
    The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce, although Royce did not use the term waterfall in that article. Royce presented this model as an example of a flawed, non-working model; which is how the term is generally used in writing about software development—to describe a critical view of a commonly used software development practice.
    The earliest use of the term "waterfall" may have been in a 1976 paper by Bell and Thayer.
    In 1985, the United States Department of Defense captured this approach in DOD-STD-2167A, their standards for working with software development contractors, which stated that "the contractor shall implement a software development cycle that includes the following six phases: Software Requirements Analysis, Preliminary Design, Detailed Design, Coding and Unit Testing, Integration, and Testing." (source)
    Waterfall might actually be categorized as an approach, as there is no specific guidance for the six phases mentioned above -- they need to be part of the development cycle, but any specific scientific rigor applied to each of the phases is entirely missing.  It is ironic that the process originates from manufacturing and construction industries, where there is often considerable mathematical analysis / modeling performed before construction begins.  And certainly in the 1960's, construction of electronic hardware was expensive, and again, rigorous electrical circuit analysis using scientific methods would greatly mitigate the cost of after-the-fact changes.
    There are a variety of diagrams for Agile Software Development, I've chosen one that maps somewhat to the "Scientific Method" above:

    Ironically, IT Knowledge Portal describes Agile as "a conceptual framework for undertaking software engineering projects."  It's difficult to understand how a conceptual framework can be described as a methodology.  It seems at best to be an approach.  More telling though is that the term "methodology" appears to have no foundation in "scientific inquiry, method of inquiry, investigation" when it comes to the software development process.

    Software Development as a Craft

    A craft is a pastime or a profession that requires particular skills and knowledge of skilled work. (source)
    Historically, one's skill at a craft was described in three levels:
    1. Apprentice
    2. Journeyman
    3. Master Craftsman
    When we look at software development as a craft, one of the advantages is that we move away from the Scientific Method and its emphasis on the natural world and physical processes.  We also move away from the conundrum of applying an approach, philosophy, or methodology to the software development process.  Instead, viewing software development as a craft emphasizes the skill of the craftsman (or woman.)  We can also begin to establish rankings of "craft skill" with the general skills discussed above in the section on Engineering.
    An apprenticeship is a system of training a new generation of practitioners of a trade or profession with on-the-job training and often some accompanying study (classroom work and reading). Apprenticeship also enables practitioners to gain a license to practice in a regulated profession. Most of their training is done while working for an employer who helps the apprentices learn their trade or profession, in exchange for their continued labor for an agreed period after they have achieved measurable competencies. Apprenticeships typically last 3 to 6 years. People who successfully complete an apprenticeship reach the "journeyman" or professional certification level of competence. (source)
    A "journeyman" is a skilled worker who has successfully completed an official apprenticeship qualification in a building trade or craft. They are considered competent and authorized to work in that field as a fully qualified employee. A journeyman earns their license through education, supervised experience, and examination. Although a journeyman has completed a trade certificate and is able to work as an employee, they are not yet able to work as a self-employed master craftsman. The term journeyman was originally used in the medieval trade guilds. Journeymen were paid each day, and this is where the word "journey" derives from: journée, meaning "a day" in French. Each individual guild generally recognized three ranks of workers: apprentices, journeymen, and masters. A journeyman, as a qualified tradesman, could become a master, running their own business, although most continued working as employees. (source)
    Master Craftsman
    A master craftsman or master tradesman (sometimes called only master or grandmaster) was a member of a guild. In the European guild system, only masters and journeymen were allowed to be members of the guild.  An aspiring master would have to pass through the career chain from apprentice to journeyman before he could be elected to become a master craftsman. He would then have to produce a sum of money and a masterpiece before he could actually join the guild. If the masterpiece was not accepted by the masters, he was not allowed to join the guild, possibly remaining a journeyman for the rest of his life.  Originally, holders of the academic degree of "Master of Arts" were also considered, in the Medieval universities, as master craftsmen in their own academic field. (source)
    Where do we Fit in This Scheme?
    While we all begin as apprentices and our "training" is often through the anonymity of books and the Internet as well as code camps, our peers, and looking at other people's code, we eventually develop various technical and interpersonal skills.  And software development is somewhat unique as a tradecraft because we are always at different levels depending on the technology that we are attempting to use.  So when we use the term "senior developer," we're really making an implied statement about what skills we have that rank us at least as a journeyman.  One of the common threads of the three levels of a tradecraft is that there is a master craftsman working with the apprentice and journeyman, who in fact certifies the person to move on to the next step.  Also, becoming a master craftsman required developing a "masterpiece," which is ironic because I know a few software developers that consider themselves "senior" and only have failed software projects under their belt.
    And as Donald Knuth wrote in his preface to The Art of Computer Programming: "This book is...designed to train the reader in various skills that go into a programmer's craft."
    Where we Fail
    Analogies to the software development process include code reviews, peer programming, and even employee vs. contractor or business owner.  Unfortunately, the craft model of software development is not really adopted, perhaps because employers don't want to use a system that harks back at least to the medieval days.  Code reviews, if they are done at all, can be very superficial and not part of a systematic development process.  Peer programming is usually considered a waste of resources.  Companies rarely provide the equivalent of a "certificate of accomplishment," which would be particularly beneficial when people are no longer in school and are learning skills that school never taught them (the entire Computer Science degree program is questionable as well in its ability to actually produce, at a minimum, a journeyman in software development.)
    Furthermore, given the newness of the software development field and the rapidly changing tools and technologies, the opportunity to work with a master craftsman is often hard to come by.  In many situations, one can at best apply general principles gleaned from skills in other technologies to apprentice work in a new technology (besides the time-honored practice of "hitting the books" and looking up simple problems on Stack Overflow.)  In many cases, the process of becoming a journeyman or master craftsman is a bootstrapping process with one's peers.

    Software Development as an Art

    (With apologies to the reader, but Knuth states the matter so eloquently, I cannot express it better.)
    Quoting Donald Knuth again: "The process of preparing programs for a digital computer is especially attractive, not only because it can be economically and scientifically rewarding, but also because it can be an aesthetic experience much like composing poetry or music," and: "I have tried to include all of the known ideas about sequential computer programming that are both beautiful and easy to state."
    So there is an aesthetic experience associated with programming, and programs can be beautiful; perhaps the beauty of a piece of code lies in its ability to clearly describe (or self-describe) its algorithm.  In his excellent talk on Computer Programming as an Art (online PDF), Knuth states: "Meanwhile we have actually succeeded in making our discipline a science, and in a remarkably simple way: merely by deciding to call it 'computer science.'"  He makes a salient statement about science vs. art: "Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it.  Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something."
    What a concise description of software development as an art - the process of learning something sufficiently to automate it!  But there's more: "...we should continually be striving to transform every art into a science: in the process, we advance the art."  An intriguing notion.  He also quotes Emma Lehmer (11/6/1906 - 5/7/2007, mathematician), when she wrote in 1956 that she had found coding to be "an exacting science as well as an intriguing art."  Knuth states that "The chief goal of my work as educator and author is to help people learn how to write beautiful programs...The possibility of writing beautiful programs, even in assembly language, is what got me hooked on programming in the first place."
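    Knuth's test for the art-to-science transition can be made concrete with a deliberately tiny, hypothetical example (not from the lecture): once a rule is understood well enough to state precisely, we can teach it to a computer, and at that point the "art" has become "science."  The Gregorian leap-year rule is such a case.

```python
# Knuth's criterion in miniature: the leap-year rule, once folklore,
# is now understood well enough to be "taught to a computer."

def is_leap_year(year: int) -> bool:
    """Divisible by 4, except centuries, except centuries divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2000), is_leap_year(1900), is_leap_year(2024))
# True False True
```

    The rule reads almost exactly as it would be stated in prose, which is the point: full understanding and automation arrive together.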
    And: "Some programs are elegant, some are exquisite, some are sparkling.  My claim is that it is possible to write grand programs, noble programs, truly magnificent ones!"
    Knuth then continues with some specific elements of the aesthetics of a program:
    • It of course has to work correctly.
    • It won't be hard to change.
    • It is easily readable and understandable.
    • It should interact gracefully with its users, especially when recovering from human input errors and in providing meaningful error messages or flexible input formats.
    • It should use a computer's resources efficiently (but in the right places and at the right times, without prematurely optimizing the program.)
    • Aesthetic satisfaction is accomplished with limited tools.
    • "The use of our large-scale machines with their fancy operating systems and languages doesn't really seem to engender any love for programming, at least not at first."
    • "...we should make use of the idea of limited resources in our own education."
    • "Please, give us tools that are a pleasure to use, especially for our routine assignments, instead of providing something we have to fight with.  Please, give us tools that encourage us to write better programs, by enhancing our pleasure when we do so."
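    Two of those bullets, graceful recovery from human input errors and flexible input formats, can be sketched in a few lines.  This is a minimal, hypothetical illustration (the function name and formats are invented for the example, not taken from Knuth):

```python
# Graceful interaction, Knuth-style: tolerate reasonable input variation,
# and fail with a meaningful message rather than a bare stack trace.

def parse_quantity(text: str) -> int:
    """Parse a user-supplied quantity, tolerating surrounding whitespace,
    thousands separators, and an optional leading '+' sign."""
    cleaned = text.strip().replace(",", "").lstrip("+")
    if not cleaned.isdigit():
        raise ValueError(
            f"Expected a whole number like '1,500' or '42', got: {text!r}"
        )
    return int(cleaned)

print(parse_quantity("  1,500 "))   # flexible input is accepted -> 1500
try:
    parse_quantity("twelve")
except ValueError as e:
    print(e)                        # a meaningful error, not a hex code
```

    The design choice is the aesthetic one: the program meets the user halfway on format, and when it must refuse, it explains itself.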
    Knuth concludes his talk with: "We have seen that computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty.  A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better."
    Reading what Knuth wrote, one can understand the passion that accompanied the advent of the 6502 processor and the first Apple II, Commodore PET, and TRS-80.  One can see the resurrection of that passion with single-board computers such as the Raspberry Pi, Arduino, and BeagleBone.  In fact, whenever any new technology appears on the horizon (for example, the Internet of Things) we often see hobbyists passionately embracing the idea.  Why?  Because as a new idea, it speaks to our imagination.  Similarly, new languages and frameworks ignite passion in some programmers, again because of that imaginative, creative quality that something new and shiny holds.
    Where we Fail
    Unfortunately, we also fail at bringing artistry to software development.  Copy and paste is not artistic.  Inconsistent styling of code is not beautiful.  An operating system that produces a 16-byte hex code for an error message is not graceful.  The nightmare of NPM dependencies (as an example) is not elegant, not beautiful, and certainly not a pleasure to use.  Knuth reminds us that programming should for the most part be pleasurable.  So why have we created languages like VB and JavaScript that we love to hate, why do we use syntaxes like HTML and CSS that are complex and not fully supported across all the flavors of browsers and browser versions, and why do we put up with them?  Worse, why do we ourselves write ugly, unreadable, unmaintainable code?  Did we ever even look at programming from an aesthetic point of view?  It is unfortunate that in a world of deadlines and performance reviews we seem to have lost (if we ever had it to begin with) the ability to program artistically.

    Conclusion: What Then is the Software Development Process?

    Most software development processes attempt to replicate well-developed processes for working with nature and physical construction.  This is the wrong approach.  Software rarely involves things on which everyone can agree about the observations and "science."  Rather, software development involves people (along with their foibles) and concepts (about which people often disagree.)  One distinguishing case is when software is designed to control hardware (a thing), whether it's the cash dispenser of an ATM, a self-driving car, or a spacecraft.  Because the "thing" is then well known, the software development process has a firm foundation for construction.  However, much in software development lacks that firm foundation.  The debate over which methodology is best is misplaced because none of the so-called methodologies put forward over the years are actual methodologies.  Discussing approaches and philosophy might be fun over a beer, but does it really advance the process?
    In fact, the question of defining a software development process is wrong because there simply is no right answer.  Emphasis should be placed on the skills of the developers themselves and how they transform the unknown into the known, with artistry.  If anything, the software development process needs to be approached like a craft, in which skilled people mentor the less skilled and in which masters of the craft recognize, in some tangible manner, the "leveling up" of an apprentice or journeyman.  The "process" is best described as a transformation of art into science, through formalization and knowledge that can be shared with others.  Within that process, the work that we do must be personally satisfying.  While there is satisfaction in "it works," the true artisan also finds personal satisfaction in creating something that is aesthetically pleasing, whether it is (to name a few) in their coding style, in writing a beautiful algorithm, or in applying some technology or language feature elegantly to solve a complex problem.  When we critique someone else's work, it is not sufficient to count how many unit tests pass.  A true masterpiece includes the process as well as the code and the user experience, all of which should combine aspects of artistry and science.  If we looked at software development in this way, we might eventually get to a better place, where processes could actually be called methodologies and where languages and tools could truly be said to have been engineered.
    What Then is A Senior Developer or a Software Engineer?
    Perhaps this: someone who is senior, or who calls themselves a software engineer, is able to apply scientific methods and has formal methodologies for their work process; demonstrates skill in the domain, tools, and languages; would be considered a master craftsman (i.e., has a proven track record, the ability to teach others, etc.); and also treats development as an art, meaning that it requires creativity, imagination, and the ability to think outside the box of those skills, and that those skills are applied with an aesthetic sense.