Search results for: artificial animal intelligence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3634

2074 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou

Abstract:

Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated from in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line with the formulation L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2^n − 1 (n is the quantization number of the payload). Meanwhile, the sensor's response linearity (δ) is described as the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr−1m−2µm−1 and −3.5 W·sr−1m−2µm−1, respectively; the low point of the dynamic range is −3.5 W·sr−1m−2µm−1 and the high point is 30.5 W·sr−1m−2µm−1; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR, and the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr−1m−2µm−1; subsequently, the radiometric resolution is calculated to be about 0.1845 W·sr−1m−2µm−1.
Moreover, in order to validate the result, the measured radiances over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively, are compared with radiative-transfer-code-predicted radiances. The relative error of the calibration is within 6.6%.
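The calibration procedure above reduces to a least-squares line fit plus two closed-form dynamic-range expressions. A minimal sketch in Python: the DN/radiance pairs are illustrative, and the 12-bit quantization in the usage example is our inference from the reported high point, not stated in the abstract.

```python
def fit_calibration(dns, radiances):
    """Least-squares fit of L = G * DN + B; returns (G, B, r), where r
    is the correlation coefficient used as the response linearity."""
    n = len(dns)
    mx = sum(dns) / n
    my = sum(radiances) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(dns, radiances))
    sxx = sum((x - mx) ** 2 for x in dns)
    syy = sum((y - my) ** 2 for y in radiances)
    G = sxy / sxx
    B = my - G * mx
    return G, B, sxy / (sxx * syy) ** 0.5

def dynamic_range(G, B, n_bits):
    """Low point L_L = B; high point L_H = G * (2^n - 1) + B."""
    dn_high = 2 ** n_bits - 1
    return B, G * dn_high + B
```

With G = 0.0083 and B = −3.5 and an assumed 12-bit DN, `dynamic_range` reproduces a high point close to the reported 30.5 W·sr−1m−2µm−1.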

Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity

Procedia PDF Downloads 263
2073 Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction

Authors: Celso L. Fernando, Toshio Yoshii, Takahiro Tsubota

Abstract:

Understanding the causes of road accidents and predicting their occurrence is key to preventing deaths and serious injuries from road accident events. Traditional statistical methods such as Poisson and logistic regressions have been used to find the association of traffic environmental factors with accident occurrence; recently, the artificial neural network (ANN), a computational technique that learns from historical data to make more accurate predictions, has emerged. Despite its ability to make accurate predictions, the ANN has difficulty dealing with a highly unbalanced distribution of attribute patterns in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in real-world data, the minority group is often the group of interest; e.g., in road traffic accident data, the accident events are the group of interest. This study proposes a combination of the k-means with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make predictions on a dataset with a highly unbalanced distribution of attribute patterns; on an evenly distributed dataset, however, the proposed method performs almost like a standard neural network.
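One common way to realise a k-means + ANN combination is cluster-based undersampling: replace the majority class with k-means centroids so the network trains on a balanced set. The abstract does not spell out how the two are combined, so the sketch below is an assumption, with made-up points:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on a list of coordinate tuples; returns k centroids."""
    dim = len(points[0])
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [tuple(sum(q[d] for q in cl) / len(cl) for d in range(dim))
                     if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids

def rebalance(majority, minority):
    """Shrink the majority class to len(minority) k-means centroids so
    the ANN sees a balanced training set."""
    return kmeans(majority, k=len(minority)) + list(minority)
```

The balanced output would then be fed to whatever network trainer is in use; the choice of k equal to the minority size is one convention among several.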

Keywords: accident risk estimation, artificial neural network, deep learning, k-means, road safety

Procedia PDF Downloads 146
2072 Informing Lighting Designs Through a Comprehensive Review of Light Pollution Impacts

Authors: Stephen M. Simmons, Stuart W. Baur, William L. Gillis

Abstract:

In recent years, increasing concern has been shown towards the issue of light pollution, especially with the spread of brighter, more blue-rich LED bulbs. Much research has been conducted in order to study the effects of artificial light at night, and many adverse impacts have been discovered, such as circadian disruption, degradation of the night sky, and interference with the processes and behaviors of plants and animals. Despite a plethora of information in the literature regarding the numerous ill effects of this type of pollution, there does not appear to be a complete summary of these impacts, including their magnitudes, which would facilitate the balancing of risks and benefits in the design of an exterior lighting system. This paper provides a comprehensive review of the known impacts of light pollution, divided into four categories - human health, night sky, plants, and animals; additionally, it includes a synopsis of what likely remains unknown at this point in time. This review will attempt to showcase the relative significance of different impacts within each category, as well as their sensitivity to changes in lighting specifications (brightness, color temperature, shielding, and mounting height). Methods to be employed in this research include an extensive literature review and the gathering of expert knowledge and opinions. The findings of this review will be used to inform the creation of an optimized lighting design for the Missouri University of Science and Technology campus. It is hoped that future research will explore the known impacts of light pollution further, as well as search for what still remains to be found regarding the consequences of artificial light at night.

Keywords: comprehensive review, impacts, light pollution, lighting design, literature review

Procedia PDF Downloads 125
2071 Neuroevolution Based on Adaptive Ensembles of Biologically Inspired Optimization Algorithms Applied for Modeling a Chemical Engineering Process

Authors: Sabina-Adriana Floria, Marius Gavrilescu, Florin Leon, Silvia Curteanu, Costel Anton

Abstract:

Neuroevolution is a subfield of artificial intelligence used to solve various problems in different application areas. Specifically, neuroevolution is a technique that applies biologically inspired methods to generate neural network architectures and optimize their parameters automatically. In this paper, we use different biologically inspired optimization algorithms in an ensemble strategy with the aim of training multilayer perceptron neural networks, resulting in regression models used to simulate the industrial chemical process of obtaining bricks from silicone-based materials. Installations in the raw ceramics industry, i.e., bricks, are characterized by significant energy consumption and large quantities of emissions. In addition, the initial conditions that were taken into account during the design and commissioning of the installation can change over time, which leads to the need to add new mixes to adjust the operating conditions for the desired purpose, e.g., material properties and energy saving. The present approach studies, by simulation, the process of obtaining bricks from silicone-based materials, i.e., the modeling and optimization of the process. Optimization aims to determine the working conditions that minimize the emissions, represented by nitrogen monoxide. We first use a search procedure to find the best values for the parameters of various biologically inspired optimization algorithms. Then, we propose an adaptive ensemble strategy that uses only a subset of the best algorithms identified in the search stage. The adaptive ensemble strategy combines the results of the selected algorithms and automatically assigns more processing capacity to the more efficient algorithms. Their efficiency may also vary at different stages of the optimization process. In a given ensemble iteration, the most efficient algorithms aim to maintain good convergence, while the less efficient algorithms can improve population diversity.
The proposed adaptive ensemble strategy outperforms both the individual optimizers and the non-adaptive ensemble strategy in convergence speed and yields lower error values.
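The adaptive allocation of processing capacity can be illustrated with a toy version: several candidate-proposal routines share a fixed evaluation budget per round, and each routine's share is proportional to its recent improvement. The step functions, objective, and parameters below are illustrative, not the paper's optimizers:

```python
import random

def adaptive_ensemble(steps, objective, x0, rounds=20, budget=30, seed=0):
    """Minimize `objective` with several proposal routines; each round,
    the evaluation budget is split among routines in proportion to the
    improvement they produced in the previous round (adaptive sketch)."""
    rnd = random.Random(seed)
    best_x = x0
    best = objective(x0)
    weights = [1.0] * len(steps)
    for _ in range(rounds):
        total = sum(weights)
        gains = []
        for i, step in enumerate(steps):
            n = max(1, round(budget * weights[i] / total))
            before = best
            for _ in range(n):
                x = step(best_x, rnd)
                f = objective(x)
                if f < best:
                    best, best_x = f, x
            gains.append(max(before - best, 1e-12))  # keep weights positive
        weights = gains
    return best, best_x
```

Here a "global" routine keeps exploring while a "local" routine refines the incumbent, echoing the convergence-versus-diversity division described above.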

Keywords: optimization, biologically inspired algorithm, neuroevolution, ensembles, bricks, emission minimization

Procedia PDF Downloads 103
2070 Advancements in Mathematical Modeling and Optimization for Control, Signal Processing, and Energy Systems

Authors: Zahid Ullah, Atlas Khan

Abstract:

This abstract focuses on the advancements in mathematical modeling and optimization techniques that play a crucial role in enhancing the efficiency, reliability, and performance of control, signal processing, and energy systems. In this era of rapidly evolving technology, mathematical modeling and optimization offer powerful tools to tackle the complex challenges faced by these systems. This abstract presents the latest research and developments in mathematical methodologies, encompassing areas such as control theory, system identification, signal processing algorithms, and energy optimization. It highlights the interdisciplinary nature of mathematical modeling and optimization, showcasing their applications in a wide range of domains, including power systems, communication networks, industrial automation, and renewable energy. It explores key mathematical techniques, such as linear and nonlinear programming, convex optimization, stochastic modeling, and numerical algorithms, that enable the design, analysis, and optimization of complex control and signal processing systems. Furthermore, the abstract emphasizes the importance of addressing real-world challenges in control, signal processing, and energy systems through innovative mathematical approaches. It discusses the integration of mathematical models with data-driven approaches, machine learning, and artificial intelligence to enhance system performance, adaptability, and decision-making capabilities. The abstract also underscores the significance of bridging the gap between theoretical advancements and practical applications, recognizing the need for practical implementation of mathematical models and optimization algorithms in real-world systems, considering factors such as scalability, computational efficiency, and robustness. In summary, this abstract showcases the advancements in mathematical modeling and optimization techniques for control, signal processing, and energy systems.
It highlights the interdisciplinary nature of these techniques, their applications across various domains, and their potential to address real-world challenges, and it emphasizes the importance of practical implementation and integration with emerging technologies to drive innovation and improve the performance of control, signal processing, and energy systems.

Keywords: mathematical modeling, optimization, control systems, signal processing, energy systems, interdisciplinary applications, system identification, numerical algorithms

Procedia PDF Downloads 103
2069 Prediction of Slaughter Body Weight in Rabbits: Multivariate Approach through Path Coefficient and Principal Component Analysis

Authors: K. A. Bindu, T. V. Raja, P. M. Rojan, A. Siby

Abstract:

The multivariate path coefficient approach was employed to study the effects of various production and reproduction traits on the slaughter body weight of rabbits. Information on 562 rabbits maintained at the university rabbit farm attached to the Centre for Advanced Studies in Animal Genetics and Breeding, Kerala Veterinary and Animal Sciences University, Kerala State, India, was utilized. The manifest variables used in the study were age and weight of dam, birth weight, litter size at birth and weaning, and weight at the first, second, and third months. Linear multiple regression analysis was performed keeping the slaughter weight as the dependent variable and the remaining traits as independent variables. The model explained 48.60 percent of the total variation present in the market weight of the rabbits. Even though the model was significant, the standardized beta coefficients for the independent variables, viz., age and weight of the dam, birth weight, and litter sizes at birth and weaning, were small, indicating their negligible influence on the slaughter weight. However, the standardized beta coefficient of the second-month body weight was the maximum, followed by that of the first-month weight, indicating their major role in the market weight. All the other factors influence it only indirectly, through these two variables. Hence, it was concluded that the slaughter body weight can be predicted using the first- and second-month body weights. Principal components were also developed so as to achieve more accuracy in the prediction of the market weight of rabbits.
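The core of the analysis above is a multiple regression followed by a comparison of standardized beta coefficients. A self-contained sketch (the data are made up for illustration and are not the rabbit records):

```python
import statistics

def ols(X, y):
    """Ordinary least squares via the normal equations, solved with
    Gaussian elimination; X is a list of predictor rows, and an
    intercept column is added internally."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for c in range(p):                      # forward elimination with pivoting
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):          # back substitution
        coef[r] = (b[r] - sum(A[r][k] * coef[k]
                              for k in range(r + 1, p))) / A[r][r]
    return coef                             # [intercept, b1, b2, ...]

def standardized_betas(X, y, coef):
    """beta_j = b_j * sd(x_j) / sd(y): the scale-free coefficients the
    abstract compares when ranking predictors."""
    sy = statistics.pstdev(y)
    return [bj * statistics.pstdev(col) / sy
            for bj, col in zip(coef[1:], zip(*X))]
```

Ranking predictors by these betas mirrors the abstract's conclusion that the first- and second-month weights dominate.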

Keywords: component analysis, multivariate, slaughter, regression

Procedia PDF Downloads 159
2068 Evaluation of the Self-Organizing Map and the Adaptive Neuro-Fuzzy Inference System Machine Learning Techniques for the Estimation of Crop Water Stress Index of Wheat under Varying Application of Irrigation Water Levels for Efficient Irrigation Scheduling

Authors: Aschalew C. Workneh, K. S. Hari Prasad, C. S. P. Ojha

Abstract:

The crop water stress index (CWSI) is a cost-effective, non-destructive, and simple technique for tracking the onset of crop water stress. This study investigated the feasibility of using CWSI derived from canopy temperature to detect the water status of wheat crops. Artificial intelligence (AI) techniques have become increasingly popular in recent years for determining CWSI. In this study, the performances of two AI techniques, the adaptive neuro-fuzzy inference system (ANFIS) and self-organizing maps (SOM), are compared in determining the CWSI of wheat crops. Field experiments were conducted for varying irrigation water applications during two seasons in 2022 and 2023 at the irrigation field laboratory of the Civil Engineering Department, Indian Institute of Technology Roorkee, India. The ANFIS- and SOM-simulated CWSI values were compared with the experimentally calculated CWSI (EP-CWSI). Multiple regression analysis was used to determine the upper and lower CWSI baselines. The upper CWSI baseline was found to be a function of crop height and wind speed, while the lower CWSI baseline was a function of crop height, air vapor pressure deficit, and wind speed. The performances of ANFIS and SOM were compared based on mean absolute error (MAE), mean bias error (MBE), root mean squared error (RMSE), index of agreement (d), Nash-Sutcliffe efficiency (NSE), and coefficient of correlation (R²). Both models successfully estimated the CWSI of the wheat crop, with high correlation coefficients and low statistical errors. However, the ANFIS (R² = 0.81, NSE = 0.73, d = 0.94, RMSE = 0.04, MAE = 0.00 to 1.76, and MBE = −2.13 to 1.32) outperformed the SOM model (R² = 0.77, NSE = 0.68, d = 0.90, RMSE = 0.05, MAE = 0.00 to 2.13, and MBE = −2.29 to 1.45). Overall, the results suggest that ANFIS is a more reliable tool than SOM for accurately determining CWSI in wheat crops.
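The CWSI definition and two of the reported agreement scores can be written out directly. The baseline and series values in the test are illustrative, not the experimental ones:

```python
def cwsi(dt, dt_lower, dt_upper):
    """Crop water stress index from the canopy-air temperature
    difference dt and the lower/upper baselines obtained by the
    regression described above: (dt - lower) / (upper - lower)."""
    return (dt - dt_lower) / (dt_upper - dt_lower)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, one of the agreement scores used to
    compare the ANFIS and SOM estimates (1.0 is a perfect match)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def rmse(obs, sim):
    """Root mean squared error between observed and simulated CWSI."""
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5
```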

Keywords: adaptive neuro-fuzzy inference system, canopy temperature, crop water stress index, self-organizing map, wheat

Procedia PDF Downloads 42
2067 Comparative Scanning Electron Microscopic Observations of Anthelminthic Effect of Trigonella foenum-graecum on Paramphistomum cervi in Buffalo

Authors: Kiran Roat, Bhanupriya Sanger, Gayatri Swarnakar

Abstract:

Amphistomiasis is a major health problem throughout the world and is responsible for great economic losses to the cattle industry, mostly affecting poor cattle farmers in developing countries. Among the rumen parasites, Paramphistomum cervi specimens were collected from the rumen of freshly slaughtered buffalo for the subsequent treatment process. Trigonella foenum-graecum, commonly known as methi or fenugreek, has seeds known for their therapeutic value. The present study was designed to evaluate the in vitro efficacy of an aqueous extract of Trigonella foenum-graecum on P. cervi. A 130 mg/ml concentration of the aqueous extract caused total mortality of P. cervi at 5 hours. The ultrastructural surface topography of untreated parasites was compared with that of treated parasites by scanning electron microscopy (SEM). The body of untreated P. cervi is conical in shape; the tegumental surface is highly ridged with transverse folds, and an abundant number of papillae are present. Observations demonstrated that the body of treated P. cervi became shrunken and elongated. Treated parasites showed deep breakage of the tegument and disappearance of the tegumental folds and papillae. Severe bleb formation was also found. From the above findings, it can be concluded that the seeds of Trigonella foenum-graecum can be used as an anthelminthic agent to eliminate P. cervi from the body of buffalo.

Keywords: Paramphistomum cervi, Trigonella foenum-graecum, scanning electron microscope, buffalo

Procedia PDF Downloads 237
2066 Contribution of Artificial Intelligence in the Studies of Natural Compounds Against SARS-CoV-2

Authors: Salah Belaidi

Abstract:

We have carried out extensive and in-depth research to identify bioactive compounds from Algerian plants, selecting 50 ligands from Algerian medicinal plants. Several compounds used in herbal medicine were drawn using the Marvin Sketch software. We determined the three-dimensional structures of the ligands with the MMFF94 force field in order to prepare them for molecular docking. The 3D structure of the SARS-CoV-2 main protease was taken from the Protein Data Bank. We used the AutoDock Vina software for molecular docking. Hydrogen atoms were added during the docking process, and all torsional bonds of the ligands were set using the ligand module in the AutoDock software. The COVID-19 main protease (Mpro) is a key enzyme that plays a vital role in mediating viral transcription and replication, so it is a very attractive drug target for SARS-CoV-2. In this work, the biologically active compounds present in the selected medicinal plants were evaluated as inhibitors of the COVID-19 protease enzyme through in-depth computational molecular docking with the AutoDock Vina software. The top 7 ligands (Phloroglucinol, Afzelin, Myricetin-3-O-rutinoside, Tricin 7-neohesperidoside, Silybin, Silychristin, and Kaempferol) were selected among the 50 molecules studied, on the basis of binding energies that are low relative to the reference molecule, with binding affinities of −9.3, −9.3, −9.0, −8.9, −8.5, −8.3, and −8.3 kcal·mol−1, respectively. We then analyzed the ADME properties of the best 7 ligands using the SwissADME web server. Two ligands (Silybin and Silychristin) were found to be potential candidates for the discovery and design of novel drug inhibitors of the SARS-CoV-2 protease enzyme.
The stability of the two ligands in complex with the Mpro protease was validated by molecular dynamics simulation; both the RMSD and RMSF analyses revealed stable trajectories, showing coherent molecular interactions in the simulations. Finally, we conclude that the Silybin ligand forms a more stable complex with the Mpro protease than the Silychristin ligand.
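Selecting the top ligands from a docking run is a sort-and-filter step over the affinity table. A sketch using some of the affinities reported above; the −8.0 kcal/mol reference threshold and the weak-binder entry are hypothetical, not from the study:

```python
def top_ligands(scores, reference, n=7):
    """Rank docked ligands by Vina binding affinity (kcal/mol; more
    negative means stronger predicted binding) and keep those scoring
    at least as well as the reference molecule, best first."""
    hits = [(name, s) for name, s in scores.items() if s <= reference]
    return sorted(hits, key=lambda item: item[1])[:n]
```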

Keywords: COVID-19, medicinal plants, molecular docking, ADME properties, molecular dynamics

Procedia PDF Downloads 11
2065 Automated Feature Extraction and Object-Based Detection from High-Resolution Aerial Photos Based on Machine Learning and Artificial Intelligence

Authors: Mohammed Al Sulaimani, Hamad Al Manhi

Abstract:

With the development of remote sensing technology, the resolution of optical remote sensing images has greatly improved, and images have become widely available. Numerous detectors have been developed for detecting different types of objects. In the past few years, remote sensing has benefited greatly from deep learning, particularly deep convolutional neural networks (CNNs). Deep learning holds great promise for fulfilling the challenging needs of remote sensing and solving various problems within different fields and applications. The use of Unmanned Aerial Systems (UAS) for acquiring aerial photos has become widespread and is preferred by most organizations to support their activities because of the high resolution and accuracy, which make the identification and detection of very small features much easier than with satellite images. This has opened a new era for deep learning in different applications, not only in feature extraction and prediction but also in analysis. This work addresses the capacity of machine learning and deep learning to detect and extract oil leaks from onshore flowlines using high-resolution aerial photos acquired by a UAS fitted with an RGB sensor, to support early detection of these leaks and prevent losses from the leaks and, most importantly, environmental damage. Two different approaches using different deep learning methods are demonstrated. The first approach focuses on detecting the oil leaks from the raw (unprocessed) aerial photos using the Single Shot Detector (SSD) deep learning model. The model draws bounding boxes around the leaks, and the results were extremely good. The second approach focuses on detecting the oil leaks from the ortho-mosaiced (georeferenced) images by developing three deep learning models (Mask R-CNN, U-Net, and a PSPNet classifier).
Post-processing is then performed to combine the results of these three deep learning models to achieve better detection results and improved accuracy. Although a relatively small amount of data was available for training, the trained deep learning models have shown good results in extracting the extent of the oil leaks, with excellent and accurate detection.
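The abstract does not specify the fusion rule used in post-processing; one simple realisation is a pixel-wise majority vote over the three models' binary masks, sketched here under that assumption:

```python
def combine_masks(masks):
    """Pixel-wise majority vote over same-sized binary segmentation
    masks: a pixel is leak (1) when more than half the models agree."""
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[int(sum(m[i][j] for m in masks) * 2 > len(masks))
             for j in range(cols)]
            for i in range(rows)]
```

In practice, the vote could be weighted by each model's validation accuracy; the unweighted version keeps the idea visible.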

Keywords: GIS, remote sensing, oil leak detection, machine learning, aerial photos, unmanned aerial systems

Procedia PDF Downloads 22
2064 Artificial Intelligence Impact on the Australian Government Public Sector

Authors: Jessica Ho

Abstract:

AI has helped governments, businesses, and industries transform the way they do things. AI is used to automate tasks to improve decision-making and efficiency. AI is embedded in sensors and used in automation to help save time and eliminate human errors in repetitive tasks. Today, we see the growth of AI using the collection of vast amounts of data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions, and offer more personalised service based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can also help streamline government processes to deliver more seamless and intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as we are unable to determine the risk brought by the unprecedented pace of adoption of AI solutions in government. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property rights, and its impact on job security. Within NSW's public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are also attracted to the ease of use and accessibility of AI solutions that do not require specialised technical skills. This increased accessibility also comes with a higher risk and exposure to the health and safety of citizens.
On the other side, public agencies struggle to keep up with this pace while minimising risks, but the low entry cost and open-source nature of generative AI have led to a rapid, organic increase in the development of AI-powered apps: "There is an AI for That" in government. Other challenges include the fact that there appear to be no legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are too many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Therefore, traditional corporate risk and governance frameworks and regulation and legislation frameworks will need to be evaluated for AI's unique challenges due to its rapidly evolving nature, ethical considerations, and heightened regulatory scrutiny impacting the safety of consumers and increasing risks for government. Creating an effective, efficient NSW Government governance regime, adapted to the range of different approaches to the applications of AI, is not a mere matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence to show that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage. If such harm is not addressed, the public's confidence in this kind of innovation will be weakened. This paper suggests several AI regulatory approaches for consideration that are forward-looking and agile while simultaneously fostering innovation and human rights. The anticipated outcome is to ensure that the NSW Government matches the rising levels of innovation in AI technologies with appropriate and balanced innovation in AI governance.

Keywords: artificial intelligence, machine learning, rules, governance, government

Procedia PDF Downloads 63
2063 The Short-Term Stress Indicators in Home and Experimental Dogs

Authors: Madara Nikolajenko, Jevgenija Kondratjeva

Abstract:

Stress is a response of the body to physical or psychological environmental stressors. The cortisol level in blood serum is regarded as the main indicator of stress, but blood collection, animal preparation, and other activities can cause unpleasant conditions and induce an increase in these hormones. Therefore, less invasive methods are sought to determine stress hormone levels, for example, by measuring the cortisol level in saliva. The aim of the study is to find out the changes in stress hormones in blood and saliva in home and experimental dogs under simulated short-term stress conditions. The study included clinically healthy experimental beagle dogs (n=6) and clinically healthy home American Staffordshire terriers (n=6). The animals were let into a fenced area to adapt. Loud drum sounds (in cooperation with 'Andžeja Grauda drum school') were used as a stressor. Blood serum samples were taken for sodium, potassium, glucose, and cortisol level determination, and saliva samples for cortisol determination only. Control parameters were taken immediately before the start of the stressor, and the next samples were taken immediately after the stress. The last measurements were taken two hours after the stress. Electrolyte levels in blood serum were determined using the ion-selective electrode method (ILab Aries analyzer), and cortisol in blood serum and saliva using the electrochemical luminescence method (Roche Diagnostics). The blood glucose level was measured with a glucometer (ACCU-CHEK Active test strips). The cortisol level in the blood increased immediately after the stress in all home dogs (P < 0.05), but in only 33% (P < 0.05) of the experimental dogs. After two hours, the measurement decreased in 83% (P < 0.05) of home dogs (in 50%, returning to the control point) and in 83% (P < 0.05) of the experimental dogs. Cortisol in saliva immediately after the stress increased in 50% (P > 0.05) of home dogs and in 33% (P > 0.05) of the experimental dogs.
After two hours, the measurements decreased in 83% (P > 0.05) of the home animals, while in only 17% of the experimental dogs did they decrease; in 49%, the measurement was undetectable due to the lack of material. Blood sodium, potassium, and glucose measurements did not show any significant changes. The combination of short-term stress indicators, whereby, after the stressor, all indicators should immediately increase and then decrease after two hours, was confirmed in none of the animals. Therefore, the authors conclude that each animal responds to a stressful situation with different physiological mechanisms and hormonal activity. Cortisol in saliva and blood is released at different speeds and is not an objective indicator of acute stress.

Keywords: animal behavior, cortisol, short-term stress, stress indicators

Procedia PDF Downloads 257
2062 Identification and Characterisation of Oil Sludge Degrading Bacteria Isolated from Compost

Authors: O. Ubani, H. I. Atagana, M. S. Thantsha, R. Adeleke

Abstract:

The oil sludge components (polycyclic aromatic hydrocarbons, PAHs) have been found to be cytotoxic, mutagenic, and potentially carcinogenic, and microorganisms such as bacteria and fungi can degrade the oil sludge to less toxic compounds such as carbon dioxide, water, and salts. In the present study, we isolated different bacteria with PAH-degrading potential from the co-composting of oil sludge and different animal manures. These bacteria were isolated on a mineral base medium, with mineral salt agar plates as a growth control. A total of 31 morphologically distinct isolates were carefully selected from 5 different compost treatments for identification using polymerase chain reaction (PCR) of the 16S rDNA gene with specific primers (16S-P1 PCR and 16S-P2 PCR). The amplicons were sequenced, and the sequences were compared with known nucleotide sequences from the GenBank database. The phylogenetic analyses of the isolates showed that they belong to 3 different clades, namely Firmicutes, Proteobacteria, and Actinobacteria. The bacteria identified were closely related to the genera Bacillus, Arthrobacter, Staphylococcus, Brevibacterium, Variovorax, Paenibacillus, Ralstonia, and Geobacillus. The results showed that Bacillus species were dominant in all treated compost piles. Based on their characteristics, these bacterial isolates have a high potential to utilise PAHs of different molecular weights as carbon and energy sources. The identified bacteria are of special significance in their capacity to emulsify the PAHs and their ability to utilise them. Thus, they could be potentially useful for bioremediation of oil sludge and composting processes.

Keywords: bioaugmentation, biodegradation, bioremediation, composting, oil sludge, PAHs, animal manures

Procedia PDF Downloads 244
2061 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in big data and advancements in information and communication technologies, the healthcare industry is at the stage of transition from clinician-oriented to technology-oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of machine learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree), and Artificial Neural Network (ANN). The evaluation is carried out based on the standard evaluation metrics of precision (P), recall (R), F1-score, and accuracy. The experimental results based on these metrics show that ANN achieved the highest accuracy (99.4%) when tested on the breast cancer dataset. On the other hand, when the classifiers were tested on the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) than the other classifiers.
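The four evaluation metrics named above follow directly from the confusion counts of a binary classifier. A minimal sketch (the label vectors in the test are illustrative, not from the cancer datasets):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Precision, recall, F1-score, and accuracy computed from the
    true-positive, false-positive, and false-negative counts."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1, correct / len(y_true)
```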

Keywords: artificial neural networks, breast cancer, classifiers, cervical cancer, f-score, machine learning, precision, recall

Procedia PDF Downloads 268
2060 The Classification Accuracy of Finance Data through Holder Functions

Authors: Yeliz Karaca, Carlo Cattani

Abstract:

This study focuses on the local Holder exponent as a measure of function regularity for time series of finance data. The attributes of a finance dataset belonging to 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America and South America) have been examined. These are the countries most affected by the attributes related to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the classification accuracy outcomes have been compared with respect to the attributes that affect the countries’ financial development. This study reveals, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).
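
Purely as an illustration of the central quantity (not the authors' multifractal procedure), a local Holder exponent h at a point t0 can be estimated from the scaling |f(t0 + d) − f(t0)| ≈ d**h as the slope of a log-log least-squares fit; the helper below is a hypothetical sketch:

```python
import math

# Estimate a local Holder exponent at t0 as the log-log slope of
# |f(t0 + d) - f(t0)| against the offset d, over a range of scales.
def holder_exponent(f, t0, scales):
    xs = [math.log(d) for d in scales]
    ys = [math.log(abs(f(t0 + d) - f(t0))) for d in scales]
    n = len(scales)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Sanity check: the cusp |t|**0.5 has local Holder exponent 0.5 at t0 = 0.
h = holder_exponent(lambda t: abs(t) ** 0.5, 0.0, [2.0 ** -k for k in range(1, 10)])
print(round(h, 3))  # 0.5
```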

Keywords: artificial neural networks, finance data, Holder regularity, multifractals

Procedia PDF Downloads 239
2059 A Hybrid Genetic Algorithm and Neural Network for Wind Profile Estimation

Authors: M. Saiful Islam, M. Mohandes, S. Rehman, S. Badran

Abstract:

The increasing need for wind power is directing us toward precise knowledge of wind resources. Methodical investigation of potential locations is required for wind power deployment. High penetration of wind energy into the grid is leading to multi-megawatt installations with huge investment costs. This fact makes it important to determine appropriate places for wind farm operation. For accurate assessment, detailed examination of the wind speed profile, relative humidity, temperature and other geological or atmospheric parameters is required. Among all of these uncertainty factors influencing wind power estimation, vertical extrapolation of wind speed is perhaps the most difficult and critical one. Different approaches have been used for the extrapolation of wind speed to hub height, mainly based on the log law, the power law and various modifications of the two. This paper proposes an Artificial Neural Network (ANN) and Genetic Algorithm (GA) based hybrid model, namely GA-NN, for vertical extrapolation of wind speed. This model is very simple in the sense that it does not require any parametric estimates such as wind shear coefficient, roughness length or atmospheric stability, and it is also reliable compared to other methods. The model uses available measured wind speeds at 10 m, 20 m and 30 m heights to estimate wind speeds up to 100 m. A good agreement is found between measured and estimated wind speeds at 30 m and 40 m, with approximately 3% mean absolute percentage error. Comparisons with ANN and the power law further demonstrate the feasibility of the proposed method.
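
The power-law baseline that the GA-NN model is compared against can be sketched in a few lines: a shear exponent is fitted from speeds at two measured heights and then used to extrapolate. The speeds and heights below are made-up illustrations, not the paper's data:

```python
import math

# Power-law wind profile: v(h) = v_ref * (h / h_ref) ** alpha.
def shear_exponent(v1, h1, v2, h2):
    # Fit the shear exponent alpha from speeds measured at two heights.
    return math.log(v2 / v1) / math.log(h2 / h1)

def extrapolate(v_ref, h_ref, h, alpha):
    return v_ref * (h / h_ref) ** alpha

alpha = shear_exponent(v1=5.0, h1=10.0, v2=5.5, h2=20.0)  # fit from 10 m and 20 m
v100 = extrapolate(5.0, 10.0, 100.0, alpha)               # estimate at 100 m hub height
print(round(v100, 2))  # 6.86
```

Note that this parametric route needs alpha to be estimated per site, which is exactly the dependency the GA-NN model avoids.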

Keywords: wind profile, vertical extrapolation of wind, genetic algorithm, artificial neural network, hybrid machine learning

Procedia PDF Downloads 483
2058 Sterols Regulate the Activity of Phospholipid Scramblase by Interacting through Putative Cholesterol Binding Motif

Authors: Muhasin Koyiloth, Sathyanarayana N. Gummadi

Abstract:

Biological membranes are ordered associations of lipids, proteins, and carbohydrates. Lipids other than sterols are distributed asymmetrically across the bilayer. Eukaryotic membranes possess a group of lipid translocators called scramblases that disrupt phospholipid asymmetry. Their action is implicated in cell activation during wound healing and in the phagocytic clearance of apoptotic cells. Cholesterol is one of the major membrane lipids, is distributed evenly across both leaflets, and can directly influence membrane fluidity through its ordering effect. This fluidity has an impact on the activity of several membrane proteins. Palmitoylated phospholipid scramblases localize to lipid rafts, which are characterized by a higher sterol content. Here we propose that cholesterol can interact with scramblases through a putative CRAC motif and can modulate their activity. To test this, we reconstituted phospholipid scramblase 1 of C. elegans (SCRM-1) in proteoliposomes containing different amounts of cholesterol (liquid-ordered, Lo). We noted that the presence of cholesterol reduced the scramblase activity of wild-type SCRM-1. The interaction between SCRM-1 and cholesterol was confirmed by fluorescence spectroscopy using NBD-Chol. We also observed loss of this interaction when I273 in the CRAC motif was mutated to Asp. Interestingly, the point mutant partially retained scramblase activity in Lo vesicles. The current study elucidates an important interaction between cholesterol and SCRM-1 that fine-tunes its activity in artificial membranes.

Keywords: artificial membranes, CRAC motif, plasma membrane, PL scramblase

Procedia PDF Downloads 171
2057 Disparities in Language Competence and Conflict: The Moderating Role of Cultural Intelligence in Intercultural Interactions

Authors: Catherine Peyrols Wu

Abstract:

Intercultural interactions are becoming increasingly common in organizations and in everyday life. These interactions are often the stage for miscommunication and conflict. In management research, these problems are commonly attributed to cultural differences in values and interactional norms. As a result, the notion that intercultural competence can minimize these challenges is widely accepted. Cultural differences, however, are not the only source of challenge during intercultural interactions. The need to rely on a lingua franca – a common language between people who have different mother tongues – is another important one. In theory, a lingua franca can improve communication and ease coordination. In practice, however, disparities in people’s ability and confidence to communicate in that language can exacerbate tensions and generate inefficiencies. In this study, we draw on power theory to develop a model of disparities in language competence and conflict in a multicultural work context. Specifically, we hypothesized that differences in language competence between interaction partners would be positively related to conflict, such that people would report greater conflict with partners whose level of language competence is more dissimilar to their own and lesser conflict with partners whose level is more similar. Furthermore, we proposed that cultural intelligence (CQ), an intercultural competence that denotes an individual’s capability to be effective in intercultural situations, would weaken the relationship between disparities in language competence and conflict, such that people would report less conflict with partners who have more dissimilar levels of language competence when the partner has high CQ and more conflict when the partner has low CQ. We tested this model with a sample of 135 undergraduate students working in multicultural teams for 13 weeks. We used a round-robin design to examine conflict in 646 dyads nested within 21 teams.
Results of analyses using social relations modeling provided support for our hypotheses. Specifically, we found that in intercultural dyads with large disparities in language competence, the partner with the lower level of language competence reported higher levels of interpersonal conflict. However, this relationship disappeared when the partner with higher language competence was also high in CQ. These findings suggest that communication in a lingua franca can be a source of conflict in intercultural collaboration when partners differ in their level of language competence, and that CQ can alleviate these effects during collaboration with partners who have relatively lower levels of language competence. Theoretically, this study underscores the benefits of CQ as a complement to language competence for intercultural effectiveness. Practically, these results further attest to the benefits of investing resources to develop language competence and CQ in employees engaged in multicultural work.

Keywords: cultural intelligence, intercultural interactions, language competence, multicultural teamwork

Procedia PDF Downloads 158
2056 Agricultural Mechanization for Transformation

Authors: Lawrence Gumbe

Abstract:

Kenya Vision 2030 is the country's programme for transformation covering the period 2008 to 2030. Its objective is to help transform Kenya into a newly industrializing, middle-income country (income exceeding US$10,000) providing a high quality of life to all its citizens by 2030, in a clean and secure environment. Increased agricultural production and productivity is crucial for the realization of Vision 2030. Mechanization of agriculture in order to achieve greater yields is the only way to achieve these objectives. There are contending groups and views on the strategy for agricultural mechanization. The first group comprises those who oppose the widespread adoption of advanced technologies (mostly internal combustion engines and tractors) in agricultural mechanization as entirely inappropriate in most situations in developing countries. This group argues that mechanically powered agricultural mechanization often leads to displacement of labour and hence increased unemployment, which results in a host of other socio-economic problems, among them rural-urban migration, inequitable distribution of wealth, in many cases an increase in absolute poverty, and balance-of-payments problems due to the need to import machinery, fuel and sometimes technical assistance to manage them. The second group comprises those who view improved hand tools and animal-powered technology as a transitional step between the most rudimentary stage of technological development (characterized by entire reliance on human muscle power) and the advanced technologies (characterized by reliance on tractors and other machinery). The third group comprises those who regard these intermediate technologies (i.e., improved hand tools and draught animal technology in agriculture) as a ‘delaying’ tactic and advocate the mechanical technologies as the most appropriate.
This group argues that alternatives to the mechanical technologies do not exist as a practical matter or, if they are available, are inefficient and cannot be compared to the mechanical technologies in terms of economics and productivity. The fourth group advocates a compromise between the second and third groups above. This group views improved hand tools and draught animal technology as more of an 18th-century technology and the modern tractor and combine harvester as too advanced for developing countries. This group has been busy designing an ‘intermediate’, ‘appropriate’, ‘mini’, or ‘micro’ tractor for use by farmers in developing countries. This paper analyses, and draws conclusions on, the different agricultural mechanization strategies available to Kenya and other developing countries.

Keywords: agriculture, mechanization, transformation, industrialization

Procedia PDF Downloads 328
2055 Intrathecal, Not Intravenous, Administration of Evans Blue Reduces Pain Behavior in Neuropathic Rats

Authors: Kun Hua O., Dong Woon Kim, Won Hyung Lee

Abstract:

Introduction: Neuropathic pain induced by spinal or peripheral nerve injury is highly resistant to common painkillers, nerve blocks, and other pain management approaches. Recently, several new therapeutic drug candidates have been developed to control neuropathic pain. In this study, we used the spinal nerve L5 ligation (SNL) model to investigate the ability of intrathecal or intravenous Evans blue to decrease pain behavior and to study the relationship between Evans blue and the neural structure of pain transmission. Method: Neuropathic pain (allodynia) of the left hind paw was induced by unilateral SNL in Sprague-Dawley rats (n=10 in each group). Evans blue (5, 15, or 50 μg/10 μl) or phosphate-buffered saline (PBS, 10 μl) was injected intrathecally at 3 days post-ligation, or Evans blue was injected intravenously (1 mg/200 μl) at 3 and 5 days post-ligation. Mechanical sensitivity was assessed using von Frey filaments at 3 days post-ligation; at 2 hours and on days 1, 2, 3, 5, and 7 after intrathecal Evans blue injection; and on days 2, 4, 7, 11, and 14 after intravenous injection. In the intrathecal group, microglia and glutaminergic neurons in the dorsal horn and VNUT (vesicular nucleotide transporter) in the dorsal root ganglia were examined for co-staining with Evans blue. The experimental procedures were performed in accordance with the animal care guidelines of the Korean Academy of Medical Science (Animal Ethics Committee of Chungnam National University Hospital: CNUH-014-A0005-1). Results: Tight ligation of the L5 spinal nerve induced allodynia in the left hind paw 3 days post-ligation. Intrathecal, but not intravenous, Evans blue significantly (P<0.001) alleviated allodynia, most markedly at 2 days after injection. Glutaminergic neurons in the dorsal horn and VNUT in the dorsal root ganglia were co-stained with Evans blue. On the other hand, microglia in the dorsal horn were only partially co-stained with Evans blue.
Conclusion: We confirmed that Evans blue may exert an analgesic effect through the central nervous system, rather than another system, in the SNL animal model of neuropathic pain. These results suggest Evans blue may be a potential new drug for the treatment of chronic pain. This research was supported by the National Research Foundation of Korea (NRF-2020R1A2C100757512), funded by the Ministry of Education.

Keywords: neuropathic pain, Evans blue, intrathecal, intravenous

Procedia PDF Downloads 86
2054 A Good Start for Digital Transformation of the Companies: A Literature and Experience-Based Predefined Roadmap

Authors: Batuhan Kocaoglu

Abstract:

Nowadays digital transformation is a hot topic in both service and production businesses. Companies that want to stay alive in the coming years must change how they do business. Industry leaders have started to extend backbone technologies such as ERP (Enterprise Resource Planning) with digital advances such as analytics, mobility, sensor-embedded smart devices, AI (Artificial Intelligence) and more. Selecting the appropriate technology for the related business problem is also a hot topic. Moreover, to operate in the modern environment and fulfill rapidly changing customer expectations, a digital transformation of the business is required, one that changes the way the business runs. Even though the term digital transformation is trendy, the literature is limited and covers just the philosophy instead of a solid implementation plan. Current studies urge firms to start their digital transformation, but few tell us how to do so. The huge investments required, together with blurry definitions and concepts, scare companies. The aim of this paper is to solidify the steps of digital transformation and offer a roadmap for companies and academicians. The proposed roadmap is developed based upon insights from a literature review, semi-structured interviews, and expert views to explore and identify the crucial steps. We introduce our roadmap in the form of 8 main steps: Awareness; Planning; Operations; Implementation; Go-live; Optimization; Autonomation; and Business Transformation; including a total of 11 sub-steps with examples. This study also emphasizes four dimensions of digital transformation: readiness assessment; building organizational infrastructure; building technical infrastructure; and maturity assessment. Finally, the roadmap relates these steps to the three main terms used in the digital transformation literature: Digitization; Digitalization; and Digital Transformation.
The resulting model shows that 'business process' and 'organizational issues' should be resolved before technology decisions and 'digitization'. Companies can start their journey with these solid steps, using the proposed roadmap to increase the success of their project implementation. The roadmap is also adaptable to related Industry 4.0 and enterprise application projects, and will be useful for companies in persuading their top management to invest. Our results can serve as a baseline for further research on readiness assessment and maturity assessment.

Keywords: digital transformation, digital business, ERP, roadmap

Procedia PDF Downloads 154
2053 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area

Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya

Abstract:

In this paper, we propose an optimized brain computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG Acquisition module, based on a non-invasive headset with 14 electrodes; (ii) the Preprocessing module, which removes noise and artifacts using the Common Average Reference method; (iii) the Feature Extraction module, using the Wavelet Packet Transform (WPT); (iv) the Classification module, based on a one-hidden-layer artificial neural network. The present study compares the recognition accuracy for 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the effect of selecting the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction of the dataset size, to approximately 19% of the total dataset, with an accuracy of 67.5%. This reduction appears particularly important for the design of a low-cost, simple-to-use BCI trained for several words.
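
The Common Average Reference step in module (ii) is simple to state: at every timepoint, subtract the mean across all electrodes from each channel. A minimal sketch with toy numbers (not real EEG data):

```python
# Common Average Reference (CAR): remove the instantaneous mean across
# electrodes from each channel, suppressing signal common to all sites.
def common_average_reference(samples):
    # samples: list of per-timepoint lists, one value per electrode
    referenced = []
    for frame in samples:
        mean = sum(frame) / len(frame)
        referenced.append([v - mean for v in frame])
    return referenced

eeg = [[1.0, 2.0, 3.0, 6.0]]          # one timepoint, four electrodes
print(common_average_reference(eeg))  # [[-2.0, -1.0, 0.0, 3.0]]
```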

Keywords: brain-computer interface, speech recognition, artificial neural network, electroencephalography, EEG, Wernicke area

Procedia PDF Downloads 264
2052 Comparative and Combined Toxicity of NiO and Mn₃O₄ Nanoparticles as Assessed in vitro and in vivo

Authors: Ilzira A. Minigalieva, Tatiana V. Bushueva, Eleonore Frohlich, Vladimir Panov, Ekaterina Shishkina, Boris A. Katsnelson

Abstract:

Background: The overwhelming majority of experimental studies in the field of metal nanotoxicology have been performed on cultures of established cell lines, with very few researchers focusing on animal experiments, and a juxtaposition of the conclusions inferred from these two types of research is blatantly lacking. The least studied aspect of this problem relates to characterizing and predicting the combined toxicity of metallic nanoparticles. Methods: Comparative and combined toxic effects of purposefully prepared spherical NiO and Mn₃O₄ nanoparticles (mean diameters 16.7 ± 8.2 nm and 18.4 ± 5.4 nm, respectively) were estimated on cultures of human cell lines: MRC-5 fibroblasts, THP-1 monocytes, and SH-SY5Y neuroblastoma cells, as well as on the latter two lines differentiated to macrophages and neurons, respectively. The combined cytotoxicity was mathematically modeled using the response surface methodology. Results: The comparative assessment of the studied NPs' nonspecific toxicity previously obtained in vivo was satisfactorily reproduced by the present in vitro tests. However, with respect to the manganese-specific brain damage which we had demonstrated in animal experiments with the same NPs, testing on neuronal cell culture showed only a certain enhancing effect of Mn₃O₄-NPs on the toxic action of NiO-NPs, while the role of the latter prevailed. Conclusion: From the point of view of preventive toxicology, experimental modeling of the combined toxicity of metallic NPs on cell cultures can give unreliable predictions of in vivo effects.

Keywords: manganese oxide, nickel oxide, nanoparticles, in vitro toxicity

Procedia PDF Downloads 286
2051 The Combination of Curcuma Extract and IgG Colostrum on Strongyloides Infection in CD1 Mice

Authors: Laurentius J. M. Rumokoy, Jimmy Posangi, Wisje Lusia Toar, Julio Lopez Aban

Abstract:

The threat of infectious pathogens to neonates is a major health problem for newborn livestock. Neonatal losses are an important problem worldwide, as well as in Indonesia. This condition can be triggered by nematode infection in conjunction with a failure of immunoglobulin passive transfer. The study was conducted to evaluate the role of curcuma combined with colostrum IgG on the development of parasites in the gut of CD1 mice. The animal experiments were divided into four groups (G) based on the treatment: G1 (infection only); G2 (curcuma + infection); G3 (IgG + infection); and G4 (curcuma + IgG + infection). The parameters measured were EPG (eggs per gram of feces) and the number of females in the intestine. The results showed that the treatments had no significant influence on the number of eggs per gram of feces in the infected groups compared to the control group that received neither IgG nor curcuma. However, the EPG response tended to decrease at day 6 in G3 and G4, reaching a minimum of zero eggs. This showed that immunoglobulin G and curcuma could slightly decrease the number of eggs in animals infected with Strongyloides. The results also showed that the treatments had no significant effect (P > 0.05) on female larvae in the gut of the experimental CD1 mice. On the other hand, we found that the best performance in inhibiting the number of females in the gut was achieved by the treatment with IgG plus parasite infection (G3), in which only five females were found in the gut. The results indicate that the IgG response was better than curcuma alone at reducing female parasites in the gut. This positive response of IgG compared to the other groups was associated with the function of colostrum antibodies.

Keywords: parasites, livestock, curcuma, colostrum

Procedia PDF Downloads 161
2050 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning

Authors: Madhawa Basnayaka, Jouni Paltakari

Abstract:

Fully passive backscattering chipless RFID tags are an emerging wireless technology with low cost, longer reading distance, and fast automatic identification without human interference, unlike already available technologies such as optical barcodes. Design optimization of chipless RFID tags is crucial, as it requires replacing the integrated chips found in conventional RFID tags with printed geometric designs. These designs enable data encoding and decoding through backscattered electromagnetic (EM) signatures. The applications of chipless RFID tags have been limited by the constraints of data encoding capacity and the difficulty of designing accurate yet efficient configurations. The traditional approach to finding design parameters for a desired EM response involves iteratively adjusting the parameters and simulating until the desired EM spectrum is achieved. However, traditional numerical simulation methods are limited in how efficiently they can optimize design parameters, due to their speed and resource consumption. In this work, a deep neural network (DNN) is utilized to establish a correlation between the EM spectrum and the dimensional parameters of nested concentric rings, specifically square and octagonal. The proposed bi-directional DNN has two simultaneously running neural networks, namely spectrum prediction and design parameter prediction. First, the spectrum prediction DNN was trained to minimize the mean squared error (MSE). After the training process was completed, the spectrum prediction DNN was able to accurately predict the EM spectrum for given input design parameters within a few seconds. Then, the trained spectrum prediction DNN was connected to the design parameter prediction DNN and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN. The model was evaluated using a randomly generated spectrum, and a tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. The number of iterative computer simulations has been significantly decreased by this approach. Therefore, highly efficient and ultrafast bi-directional DNN models allow rapid yet complicated chipless RFID tag designs.
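
The mean-squared-error objective that the spectrum prediction network minimizes is simply the averaged squared residual between predicted and target spectra; a toy sketch with illustrative values (not the RFID data):

```python
# Mean squared error between a predicted and a target vector.
def mse(predicted, target):
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(target)

print(mse([0.1, 0.9, 0.5], [0.0, 1.0, 0.5]))  # ~0.00667
```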

Keywords: artificial intelligence, chipless RFID, deep learning, machine learning

Procedia PDF Downloads 33
2049 Improved Technology Portfolio Management via Sustainability Analysis

Authors: Ali Al-Shehri, Abdulaziz Al-Qasim, Abdulkarim Sofi, Ali Yousef

Abstract:

The oil and gas industry has played a major role in improving the prosperity of mankind and driving the world economy. According to estimates by the International Energy Agency (IEA) and the U.S. Energy Information Administration (EIA), the world will continue to rely heavily on hydrocarbons for decades to come. This growing energy demand mandates taking sustainability measures to prolong the availability of reliable and affordable energy sources and to lower their environmental impact. Unlike those of any other industry, upstream oil and gas operations are energy-intensive and scattered over large zonal areas. These challenging conditions require unique sustainability solutions. In recent years there has been a concerted effort by the oil and gas industry to develop and deploy innovative technologies to maximize efficiency, reduce carbon footprint, reduce CO2 emissions, and optimize resource and material consumption. In the past, research and development (R&D) in the exploration and production sector was driven primarily by maximizing profit through higher hydrocarbon recovery and new discoveries. Environmentally friendly and sustainable technologies are increasingly being deployed to balance sustainability and profitability. Analyzing a technology and its sustainability impact is increasingly used in corporate decision-making for improved portfolio management and for allocating valuable resources to technology R&D. This paper articulates and discusses a novel workflow to identify strategic sustainable technologies for improved portfolio management by addressing existing and future upstream challenges. It uses a systematic approach that relies on sustainability key performance indicators (KPIs), including energy efficiency quotient, carbon footprint, and CO2 emissions. The paper provides examples of various technologies, including CCS, reducing water cuts, automation, using renewables, energy efficiency, etc.
The use of 4IR technologies such as Artificial Intelligence, Machine Learning, and Data Analytics is also discussed. Overlapping technologies, areas of collaboration, and synergistic relationships are identified. The sustainability analyses support improved decision-making on technology portfolio management.
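
Purely as an illustration of KPI-driven portfolio ranking (the paper's actual workflow, weights, and KPI values are not given here; everything below is a made-up placeholder), candidate technologies can be scored as a weighted sum over normalized sustainability KPIs:

```python
# Hypothetical weights over the KPIs named in the abstract; KPI values
# are assumed pre-normalized to [0, 1], higher is better.
KPI_WEIGHTS = {"energy_efficiency": 0.4, "carbon_footprint": 0.3, "co2_reduction": 0.3}

def score(tech_kpis):
    return sum(KPI_WEIGHTS[k] * v for k, v in tech_kpis.items())

portfolio = {  # illustrative technologies and made-up KPI values
    "CCS": {"energy_efficiency": 0.4, "carbon_footprint": 0.9, "co2_reduction": 0.9},
    "automation": {"energy_efficiency": 0.8, "carbon_footprint": 0.5, "co2_reduction": 0.4},
}
ranked = sorted(portfolio, key=lambda t: score(portfolio[t]), reverse=True)
print(ranked)  # ['CCS', 'automation']
```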

Keywords: sustainability, oil and gas, technology portfolio, key performance indicator

Procedia PDF Downloads 177
2048 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order to perform, for example, question-and-answer tasks that parse real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains; wide-context cues remain elusive in parsing words and sentences; and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons. First, it addresses one of the difficulties that standard machine learning techniques face by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion.
In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the arrays of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those demonstrated by older, storage-based models of human memory. The kind of emergent stability found in the attractor-network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
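
The attractor-network picture sketched above can be made concrete with a toy Hopfield-style network: the remembered pattern is not held in an addressable slot but is the stable equilibrium of the dynamics, recovered from a corrupted cue. This is a minimal illustrative sketch, not any specific published model:

```python
# Toy Hopfield network: Hebbian weights make each stored pattern a
# stable equilibrium; recall settles a corrupted cue into that attractor.
def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):  # asynchronous sign-threshold updates
            total = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if total >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1]
w = train([pattern])
cue = [1, -1, 1, -1, -1, 1]    # last two bits flipped
print(recall(w, cue))           # settles back to the stored pattern
```

There is no lookup anywhere in `recall`; the pattern re-emerges from the dynamics, which is the sense in which such models avoid storage-and-retrieval.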

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 261
2047 The National Socialist and Communist Propaganda Activities in the Turkish Press during the World War II

Authors: Asuman Tezcan Mirer

Abstract:

This paper discusses the national socialist and communist propaganda struggles in the Turkish press during World War II. The paper analyzes how government agencies directed and organized the Turkish press to prevent the "5th column" from influencing public opinion. During the Second World War, one of the most emphasized issues was propaganda and how Turkish citizens could be protected from the effects of disinformation. Istanbul became a significant headquarters for the belligerent countries' intelligence services, which were involved in gathering intelligence and disseminating propaganda. The main motive of national socialist propaganda in Turkey was "anti-communism". Subsidizing certain magazines, controlling German companies' advertisements and the paper trade, spreading rumors, printing propaganda brochures, and showing German propaganda films are some of the tactics that the national socialists applied before and during the Second World War. On the other hand, the communists targeted Turkish racist/ultra-nationalist groups and their publications, which were influenced by the Nazi regime. They were also involved in distributing Marxist publications, printing brochures, and broadcasting radio programs. This study comprises three parts. The first part describes the national socialist and communist propaganda activities in Turkey during the Second World War. The second part addresses the debates over propaganda among selected newspapers representing different ideologies. Finally, the last part analyzes the Turkish government's press policy. It explains why the government allowed ideological debates in the press despite its authoritarian press policy and its stance of "active neutrality" in the international arena.

Keywords: propaganda, press, 5th column, World War II, Turkey

Procedia PDF Downloads 92
2046 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger with forced air and turbulent water flow. An experimental investigation was performed to create a new dataset of Nusselt number and pressure drop values over the following ranges of dimensionless parameters: plate corrugation angle from 0° to 60°, Reynolds number from 10000 to 40000, pitch-to-height ratio from 1 to 4, and Prandtl number from 0.7 to 200. Based on the ANN performance graph, a structure with three hidden layers of {12-8-6} neurons was chosen. The training procedure includes feed-forward propagation of the input parameters, evaluation of the loss function on the training and validation datasets, and back-propagation with weight and bias adjustment. A linear activation function was used at the output layer, while the rectified linear unit (ReLU) activation was used in the hidden layers. To accelerate training, the loss function was minimized with the adaptive moment estimation (Adam) algorithm. "MinMax" normalization of the inputs was applied to avoid the increase in training time caused by drastic differences in the loss-function gradients with respect to the weight values. Since the test dataset is not used for training, a cross-validation technique was applied to the network using the new data. This procedure was repeated until the loss function converged, or for at most 4000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source ANN libraries such as Scikit-learn, TensorFlow, and Keras. The model achieved mean absolute percentage errors of 9.4% for the Nusselt number and 8.2% for the pressure drop, i.e., higher accuracy than the generalized correlations. The model was validated by comparing its predictions against the experimental results, yielding excellent accuracy.
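The training setup described in the abstract (three hidden layers of {12-8-6} neurons, ReLU hidden activations, a linear output, Adam optimization, MinMax input normalization, a batch size of 200, and up to 4000 epochs) can be sketched roughly as follows. This is an illustrative reconstruction using Scikit-learn (one of the libraries the authors mention), not the authors' code; the synthetic dataset and the toy Nusselt-number correlation are stand-ins for the unpublished experimental data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for the experimental dataset, sampled over the
# parameter ranges quoted in the abstract: corrugation angle [deg],
# Reynolds number, pitch-to-height ratio, Prandtl number.
X = np.column_stack([
    rng.uniform(0, 60, 500),
    rng.uniform(10_000, 40_000, 500),
    rng.uniform(1, 4, 500),
    rng.uniform(0.7, 200, 500),
])
# Toy target loosely shaped like a Dittus-Boelter-style Nu correlation
# (illustrative only, not the paper's measured values).
y = 0.023 * X[:, 1] ** 0.8 * X[:, 3] ** 0.4 * (1 + 0.01 * X[:, 0])

# "MinMax" normalization of the inputs, as in the abstract
X_scaled = MinMaxScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)

# Three hidden layers {12-8-6}, ReLU hidden activations, identity
# (linear) output, Adam optimizer, batch size 200, up to 4000 epochs.
model = MLPRegressor(hidden_layer_sizes=(12, 8, 6), activation="relu",
                     solver="adam", batch_size=200, max_iter=4000,
                     random_state=0)
model.fit(X_tr, y_tr)

# Mean absolute percentage error on the held-out split
mape = np.mean(np.abs((model.predict(X_te) - y_te) / y_te)) * 100
print(f"MAPE: {mape:.1f}%")
```

The same architecture could equally be expressed as a Keras `Sequential` model with a linear final `Dense(1)` layer; the Scikit-learn form is used here because `MLPRegressor` already defaults to an identity output activation for regression.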

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 76
2045 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors in determining the cost of operating a heavy-duty vehicle, so a customer may request an FC estimate for a desired vehicle. The modular design of heavy-duty vehicles allows them to be constructed by specifying building blocks such as the gearbox, engine, and chassis type. If a combination of building blocks is unprecedented, measuring the FC is infeasible, since that would first require constructing the vehicle. This paper proposes a machine learning approach to predicting FC. The study uses data on around 40,000 vehicles, covering vehicle specifications and operational environmental conditions such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbors (KNN), and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance is evaluated by reporting the prediction error on both simulated data and operational measurements, and the algorithms are compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation finds that ANNs have the lowest prediction error of the three in estimating fuel consumption on both simulated and operational data, with a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.
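A nested cross-validation comparison of LR, KNN, and ANN regressors, as described above, might be sketched as follows. The synthetic features and target are placeholders for the paper's vehicle-specification and operational data, which are not public, and the hyperparameter grids are illustrative; the inner loop selects hyperparameters while the outer loop yields an unbiased error estimate per model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                 # stand-in vehicle/operational features
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)  # stand-in FC target

inner = KFold(3, shuffle=True, random_state=0)  # hyperparameter selection
outer = KFold(5, shuffle=True, random_state=1)  # unbiased error estimation

models = {
    "LR": make_pipeline(StandardScaler(), LinearRegression()),
    "KNN": GridSearchCV(
        make_pipeline(StandardScaler(), KNeighborsRegressor()),
        {"kneighborsregressor__n_neighbors": [3, 5, 9]}, cv=inner),
    "ANN": GridSearchCV(
        make_pipeline(StandardScaler(),
                      MLPRegressor(max_iter=300, random_state=0)),
        {"mlpregressor__hidden_layer_sizes": [(8,), (16,)]}, cv=inner),
}

# Outer-loop mean absolute error for each model family
scores = {name: -cross_val_score(est, X, y, cv=outer,
                                 scoring="neg_mean_absolute_error").mean()
          for name, est in models.items()}
for name, mae in scores.items():
    print(f"{name}: outer-CV MAE = {mae:.3f}")
```

The per-fold outer-loop errors of the three models could then be fed to a statistical test such as the Friedman test (one of the paper's keywords, available as `scipy.stats.friedmanchisquare`) to check whether the observed ranking is significant.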

Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing

Procedia PDF Downloads 171