Search results for: artificial neural network approach
16702 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks
Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer
Abstract:
New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA) unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a period of time. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and to ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting chemical footprints because of strong non-linear relationships between predictors and predictands in the large sample set, likely reflecting honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) compared to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics
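For a concrete picture of the 1D-CNN approach, a minimal sketch in PyTorch follows. The layer sizes, kernel widths, and the RPD helper are illustrative assumptions, not the authors' published architecture; each hyperspectral pixel is assumed to arrive as a vector of n_bands reflectance values.

```python
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    """Sketch of a 1D-CNN regressor mapping a spectrum to one quality value."""
    def __init__(self, n_bands: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_bands // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),          # regressed quality value (e.g., MGO-related)
        )

    def forward(self, x):              # x shape: (batch, 1, n_bands)
        return self.head(self.features(x))

def rpd(y_true, y_pred):
    """Residual predictive deviation: SD of reference values over RMSE."""
    rmse = torch.sqrt(torch.mean((y_true - y_pred) ** 2))
    return torch.std(y_true) / rmse

model = Spectral1DCNN(n_bands=200)
x = torch.randn(8, 1, 200)             # a batch of 8 dummy spectra
print(model(x).shape)                  # torch.Size([8, 1])
```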
Procedia PDF Downloads 139
16701 Design and Implementation of 2D Mesh Network on Chip Using VHDL
Authors: Boudjedra Abderrahim, Toumi Salah, Boutalbi Mostefa, Frihi Mohammed
Abstract:
Nowadays, with advances in semiconductor device fabrication, many transistors can be integrated into a single chip (VLSI). Although growing chip density potentially eases the integration of thousands of processing elements (PEs), such as memory, processors, and interface cores, into systems-on-chip (SoCs), system complexity, high-performance interconnect, and scalable on-chip communication architecture have become major challenges for digital and embedded system designers. Networks-on-chip (NoCs) have emerged as a new paradigm that makes it possible to integrate heterogeneous devices while meeting many communication constraints and performance requirements. In this paper, we target good performance and low implementation area through a behavioral model of a mesh-topology network on chip designed in the VHDL hardware description language, together with performance evaluation and FPGA implementation results.
Keywords: design, implementation, communication system, network on chip, VHDL
Procedia PDF Downloads 379
16700 Presenting Internals of Networks Using Bare Machine Technology
Authors: Joel Weymouth, Ramesh K. Karne, Alexander L. Wijesinha
Abstract:
Bare Machine Internet is part of the Bare Machine Computing (BMC) paradigm. It is used to program applications that run directly on a device: software that runs directly against the hardware using the CPU, memory, and I/O, without an operating system or resident mass storage. An important part of the BMC paradigm is the Bare Machine Internet. It utilizes an application development model in which software interfaces directly with the hardware on a network server and file server. Because it is "bare," it is a powerful teaching and research tool that can readily display the internals of the network protocols, software, and hardware of the applications running on the bare server. It was also demonstrated that the bare server was accessible from a laptop and from an Android smartphone. The purpose was to show the further practicality of the Bare Internet in computer engineering and computer science education and research, and to show that an undergraduate student could take advantage of a bare server with any device and any browser, at any release version, connected to the internet. This paper presents the Bare Web Server as an educational tool and discusses possible applications of this paradigm.
Keywords: bare machine computing, online research, network technology, visualizing network internals
Procedia PDF Downloads 172
16699 Groundwater Monitoring Using a Community Science Approach
Authors: Shobha Kumari Yadav, Yubaraj Satyal, Ajaya Dixit
Abstract:
In addressing groundwater depletion, it is important to develop an evidence base that can be used to assess the state of its degradation. Groundwater data are limited compared to meteorological data, which impedes groundwater use and management planning. Monitoring of groundwater levels provides an information base to assess the condition of aquifers and their responses to water extraction, land-use change, and climatic variability. It is important to maintain a network of spatially distributed, long-term monitoring wells to support a groundwater management plan. Monitoring involving the local community is a cost-effective approach that generates real-time data to manage groundwater use effectively. This paper presents the relationship between rainfall and spring flow, which are the main sources of freshwater for drinking, household consumption, and agriculture in the hills of Nepal. The supply and withdrawal of water from springs depend upon local hydrology and meteorological characteristics such as rainfall, evapotranspiration, and interflow. The study offers evidence of the use of the scientific method and a community-based initiative for managing groundwater and springsheds. The approach presents a method for replicating similar initiatives in other parts of the country to maintain the integrity of springs.
Keywords: citizen science, groundwater, water resource management, Nepal
Procedia PDF Downloads 202
16698 Staying When Everybody Else Is Leaving: Coping with High Out-Migration in Rural Areas of Serbia
Authors: Anne Allmrodt
Abstract:
Regions of South-East Europe have been characterised by high out-migration for decades. The reasons for leaving range from the hope of a better work situation to a better health care system and beyond. In Serbia, this high out-migration hits the rural areas in particular, so that population growth is repeatedly negative. It is not hard to guess that this negative population growth has the potential to create different challenges for those who stay in rural areas. So how are they coping with the statistically proven high out-migration? With this in mind, the study investigates people's individual awareness of the social phenomenon of high out-migration and their daily life strategies in rural areas. Furthermore, the study seeks to find out people's resilience skills in that context. Is the condition of high out-migration conducive to resilience? The methodology combines a quantitative and a qualitative approach (mixed methods). For the quantitative part, a standardised questionnaire was developed, including a multiple-choice section and a choice experiment. The questionnaire was handed out to people living in rural areas of Serbia only (n = 100). The sheet included questions about people's awareness of high out-migration, their own daily life strategies or challenges, and their social network situation (data about the social network were necessary here, since the network is supposed to be an influencing variable for resilience). Furthermore, test persons were asked to make different choices for coping with high out-migration in a self-designed choice experiment. Additionally, the study included qualitative interviews with citizens from rural areas of Serbia. The topics raised during the interviews focused on their awareness of high out-migration, their daily life strategies and challenges, as well as their social network situation. The results show the following major findings. Awareness of high out-migration is not the same across test persons: some declare it as something positive for their own life, others as negative or as having no effect at all. The way of coping generally depended, perhaps unsurprisingly, on a person's social network. However, and this might be the most important finding, not everybody with a certain number of contacts had better coping strategies and was therefore more resilient. Here the results show that especially people with high affiliation and proximity inside their network were able to cope better and showed higher resilience skills. The study takes one step forward in terms of knowledge about societal resilience as well as coping strategies of societies in rural areas. It shows part of the other side of today's migration coin and gives a hint towards more sustainable rural development and community empowerment.
Keywords: coping, out-migration, resilience, rural development, social networks, south-east Europe
Procedia PDF Downloads 129
16697 User-Perceived Quality Factors for Certification Model of Web-Based System
Authors: Jamaiah H. Yahaya, Aziz Deraman, Abdul Razak Hamdan, Yusmadi Yah Jusoh
Abstract:
One of the most essential issues in software products is maintaining their relevancy to the dynamics of users' requirements and expectations. Many studies have been carried out on the quality aspect of software products to overcome these problems. Previous software quality assessment models and metrics have been introduced, with strengths and limitations. In order to enhance the assurance and buoyancy of software products, certification models have been introduced and developed. From our previous experiences in certification exercises and case studies collaborating with several agencies in Malaysia, the requirements for a user-based software certification approach were identified and found to be in demand. The emergence of social network applications, new development approaches such as the agile method, and other varieties of software in the market have led to the domination of users over the software. As software becomes more accessible to the public through internet applications, users are becoming more critical of the quality of the services provided by the software. There are several categories of users in web-based systems, with different interests and perspectives. The classifications and metrics were identified through a brainstorming approach that included researchers, users, and experts in this area. This new paradigm in software quality assessment is the main focus of our research. This paper discusses the classifications of users in web-based software system assessment and their associated factors and metrics for quality measurement. The quality model is derived based on the IEEE structure and the FCM model. The developments are beneficial and valuable for overcoming the constraints and improving the application of software certification models in the future.
Keywords: software certification model, user centric approach, software quality factors, metrics and measurements, web-based system
Procedia PDF Downloads 405
16696 Drought Risk Analysis Using Neural Networks for Agri-Businesses and Projects in Lejweleputswa District Municipality, South Africa
Authors: Bernard Moeketsi Hlalele
Abstract:
Drought is a complicated natural phenomenon that creates significant economic, social, and environmental problems. An analysis of paleoclimatic data indicates that severe and extended droughts are an inevitable part of the natural climatic cycle. This study characterised drought in Lejweleputswa using both the Standardised Precipitation Index (SPI), to quantify drought, and neural networks (NN), to predict it. A monthly 37-year-long time series of precipitation data was obtained from the online NASA database. Prior to the final analysis, this dataset was checked for outliers using SPSS; outliers were removed and replaced using the Expectation Maximization algorithm in SPSS. This was followed by both homogeneity and stationarity tests to ensure non-spurious results. A non-parametric Mann-Kendall test was used to detect monotonic trends present in the dataset. Two temporal scales, SPI-3 and SPI-12, corresponding to agricultural and hydrological drought events, showed statistically significant decreasing trends with p-values of 0.0006 and 4.9 x 10⁻⁷, respectively. The study area has been plagued by severe drought events on SPI-3, while SPI-12 showed approximately a 20-year cycle. The study concluded the analyses with a seasonal analysis that showed no significant trend patterns, and as such NN was used to predict possible SPI-3 values for the last season of 2018/2019 and four seasons of 2020. The predicted drought intensities ranged from mild to extreme drought events to come. It is therefore recommended that farmers, agri-business owners, and other relevant stakeholders resort to drought-resistant crops as a means of adaptation.
Keywords: drought, risk, neural networks, agri-businesses, project, Lejweleputswa
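The SPI quantification step can be sketched in a few lines of Python. This is a minimal version, assuming SciPy's gamma fit over the whole series and no special zero-precipitation handling (operational SPI fits each calendar month separately and treats zero months explicitly):

```python
import numpy as np
from scipy import stats

def spi(precip_monthly, scale=3):
    """Standardised Precipitation Index sketch: fit a gamma distribution to
    `scale`-month rolling precipitation sums, then map the cumulative
    probabilities onto a standard normal. Negative values indicate drought."""
    x = np.convolve(precip_monthly, np.ones(scale), mode="valid")  # rolling sums
    shape, _, gamma_scale = stats.gamma.fit(x, floc=0)   # location fixed at 0
    cdf = stats.gamma.cdf(x, shape, loc=0, scale=gamma_scale)
    return stats.norm.ppf(cdf)

# SPI-3 tracks agricultural drought; SPI-12 tracks hydrological drought.
rain = np.random.default_rng(0).gamma(2.0, 30.0, size=444)  # 37 years of dummy data
print(spi(rain, scale=3)[:6])
```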
Procedia PDF Downloads 126
16695 Taguchi Method for Analyzing a Flexible Integrated Logistics Network
Authors: E. Behmanesh, J. Pannek
Abstract:
Logistics network design is known as one of the strategic decision problems. As these kinds of problems belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we attempt to involve reverse flow through an integrated design of a forward/reverse supply chain network, formulated as a mixed integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a memetic algorithm based on a revised random-path direct encoding method is considered as the solution methodology. Every algorithm has parameters that need to be investigated to reveal the best performance. In this regard, the Taguchi method is adopted to identify the optimum operating condition of the proposed memetic algorithm and improve the results. In this study, four factors are considered: population size, crossover rate, local search iterations, and number of iterations. Analyzing the parameters and the resulting improvements are the focus of this research.
Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method
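Four factors at three levels fit the standard L9 orthogonal array, so nine runs suffice instead of 3⁴ = 81. The sketch below, assuming a smaller-the-better objective (total network cost) and illustrative level values that are not the paper's design, shows the core of the Taguchi analysis:

```python
import numpy as np

# Standard L9(3^4) array, 0-indexed levels; columns map to the four factors:
# population size, crossover rate, local-search iterations, iterations.
L9 = np.array([
    [0,0,0,0],[0,1,1,1],[0,2,2,2],
    [1,0,1,2],[1,1,2,0],[1,2,0,1],
    [2,0,2,1],[2,1,0,2],[2,2,1,0],
])
levels = {"pop": [50, 100, 150], "cx": [0.6, 0.8, 0.9],
          "ls": [5, 10, 20], "iters": [100, 200, 300]}  # assumed values

def sn_smaller_better(costs):
    """Taguchi signal-to-noise ratio for a smaller-the-better response."""
    return -10 * np.log10(np.mean(np.square(costs)))

# For each of the 9 runs, execute the memetic algorithm a few times with the
# row's factor levels, record the costs, and compute the S/N ratio; averaging
# S/N per factor level then identifies the optimum operating condition.
print(sn_smaller_better(np.array([105.2, 98.7, 101.4])))  # one run's replicates
```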
Procedia PDF Downloads 187
16694 Agent Based Location Management Protocol for Mobile Adhoc Networks
Authors: Mallikarjun B. Channappagoudar, Pallapa Venkataram
Abstract:
The dynamic nature of mobile ad hoc networks (MANETs), due to the mobility and disconnection of mobile nodes, leads to various problems in predicting node movement and updating location information for efficient interaction among application-specific nodes. Location management is one of the main challenges to be considered for efficient service provision to the applications of a MANET. In this paper, we propose a location management protocol for locating the nodes of a MANET and maintaining uninterrupted, high-quality service for distributed applications by intelligently anticipating the change of location of its nodes. The protocol predicts node movement and application resource scarcity, performs replacement with chosen nearby nodes that have low mobility and are rich in resources, with the help of both static and mobile agents, and maintains application continuity by providing the required network resources. The protocol has been simulated using the Java Agent Development Environment (JADE) framework for agent generation, migration, and communication. It consumes much less time (response time), gives better location accuracy, utilizes fewer network resources, and reduces location management overhead.
Keywords: mobile agent, location management, distributed applications, mobile adhoc network
Procedia PDF Downloads 394
16693 Nafion Nanofiber Composite Membrane Fabrication for Fuel Cell Applications
Authors: C. N. Okafor, M. Maaza, T. A. E. Mokrani
Abstract:
A proton exchange membrane has been developed for the Direct Methanol Fuel Cell (DMFC). The nanofiber network composite membranes were prepared from an interconnected network of Nafion (perfluorosulfonic acid) nanofibers embedded in an uncharged and inert polymer matrix, by electrospinning. The spinning solution contained Nafion with a low concentration (1 wt.% relative to Nafion) of high-molecular-weight poly(ethylene oxide) as a carrier polymer. The interconnected network of Nafion nanofibers, with average fiber diameters in the range of 160-700 nm, was used to make the membranes, with the nanofibers occupying up to 85% of the membrane volume. The matrix polymer was cross-linked with Norland Optical Adhesive 63 under UV light. The resulting membranes showed a proton conductivity of 0.10 S/cm at 25°C and 80% RH, and a methanol permeability of 3.6 x 10⁻⁶ cm²/s.
Keywords: composite membrane, electrospinning, fuel cell, nanofibers
Procedia PDF Downloads 266
16692 F-VarNet: Fast Variational Network for MRI Reconstruction
Authors: Omer Cahana, Maya Herman, Ofer Levi
Abstract:
Magnetic resonance imaging (MRI) is a lengthy medical scan, owing to its long acquisition time. This length is mainly due to the traditional sampling theorem, which defines a lower bound for sampling. However, it is still possible to accelerate the scan by using a different approach, such as compressed sensing (CS) or parallel imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. In order to achieve that, two properties must hold: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm needs to be applied to recover the signal. Despite the rapid advances in the deep learning (DL) field, which has demonstrated tremendous success in various computer vision tasks, the field of MRI reconstruction is still at an early stage. In this paper, we present an extension of the state-of-the-art model in MRI reconstruction, VarNet. We extend VarNet by using dilated convolutions at different scales, which enlarges the receptive field to capture more contextual information. Moreover, we simplified the sensitivity map estimation (SME), which holds many unnecessary layers for this task. Those improvements yield significant decreases in computation costs as well as higher accuracy.
Keywords: MRI, deep learning, variational network, computer vision, compressed sensing
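The multi-scale dilated-convolution idea can be illustrated with a small PyTorch block: parallel branches with growing dilation rates enlarge the receptive field, and a 1x1 convolution fuses them. The channel counts and dilation rates here are assumptions for illustration, not the F-VarNet configuration:

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Parallel 3x3 convolutions at several dilation rates, fused by 1x1 conv.
    padding=r with dilation=r keeps the spatial size unchanged."""
    def __init__(self, ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.fuse = nn.Conv2d(ch * len(rates), ch, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.fuse(multi)) + x   # residual connection

block = DilatedBlock(32)
y = block(torch.randn(1, 32, 320, 320))         # shape preserved: (1, 32, 320, 320)
```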
Procedia PDF Downloads 162
16691 COVID-19 Analysis with Deep Learning Model Using Chest X-Rays Images
Authors: Uma Maheshwari V., Rajanikanth Aluvalu, Kumar Gautam
Abstract:
The COVID-19 disease is a highly contagious viral infection with major worldwide health implications. The global economy suffers as a result of COVID. The spread of this pandemic disease can be slowed if positive patients are found early. COVID-19 disease prediction is beneficial for identifying health problems in patients at risk of COVID. Deep learning and machine learning algorithms for COVID prediction using X-rays have the potential to be extremely useful in addressing the scarcity of doctors and clinicians in remote places. In this paper, a convolutional neural network (CNN) with deep layers is presented for recognizing COVID-19 patients using real-world datasets. We gathered around 6000 X-ray scan images from various sources and split them into two categories: normal and COVID-impacted. Our model examines chest X-ray images to recognize such patients. Because X-rays are commonly available and affordable, our findings show that X-ray analysis is effective in COVID diagnosis. The predictions performed well, with an average accuracy of 99% on training images and 88% on X-ray test images.
Keywords: deep CNN, COVID-19 analysis, feature extraction, feature map, accuracy
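A minimal two-class CNN of the kind described might look like the following sketch; the layer stack and the 224x224 grayscale input are illustrative assumptions, not the authors' exact network:

```python
import torch
import torch.nn as nn

class ChestXrayCNN(nn.Module):
    """Small CNN producing logits for [normal, COVID-impacted]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling, input-size agnostic
        )
        self.classifier = nn.Linear(128, 2)

    def forward(self, x):                   # x: (batch, 1, 224, 224)
        return self.classifier(self.features(x).flatten(1))

model = ChestXrayCNN()
criterion = nn.CrossEntropyLoss()           # trained against 0/1 class labels
print(model(torch.randn(4, 1, 224, 224)).shape)   # torch.Size([4, 2])
```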
Procedia PDF Downloads 79
16690 DNpro: A Deep Learning Network Approach to Predicting Protein Stability Changes Induced by Single-Site Mutations
Authors: Xiao Zhou, Jianlin Cheng
Abstract:
A single amino acid mutation can have a significant impact on the stability of a protein structure. Thus, the prediction of the protein stability change induced by single-site mutations is critical and useful for studying protein function and structure. Here, we present a deep learning network with the dropout technique for predicting protein stability changes upon single amino acid substitution. While using only the protein sequence as input, the overall prediction accuracy of the method on a standard benchmark is >85%, which is higher than existing sequence-based methods and is comparable to methods that use not only the protein sequence but also the tertiary structure, pH value, and temperature. The results demonstrate that deep learning is a promising technique for protein stability prediction. The good performance of this sequence-based method makes it a valuable tool for predicting the impact of mutations on most proteins whose experimental structures are not available. Both the downloadable software package and the user-friendly web server (DNpro) that implement the method for predicting protein stability changes induced by amino acid mutations are freely available for the community to use.
Keywords: bioinformatics, deep learning, protein stability prediction, biological data mining
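A dropout-regularised feed-forward network in the spirit of DNpro can be sketched as below. The fixed-length input vector, assumed here to encode the sequence window around the mutation site, and the layer sizes are illustrative; the paper's exact features and architecture are not reproduced:

```python
import torch
import torch.nn as nn

def stability_classifier(n_features: int, p_drop: float = 0.5) -> nn.Module:
    """Sketch: MLP with dropout after each hidden layer, emitting
    P(stability increases) for a single-site mutation."""
    return nn.Sequential(
        nn.Linear(n_features, 256), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(256, 64), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(64, 1), nn.Sigmoid(),
    )

net = stability_classifier(n_features=60)
print(net(torch.randn(2, 60)))   # two dummy mutation encodings
```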
Procedia PDF Downloads 469
16689 Integrated Waste-to-Energy Approach: An Overview
Authors: Tsietsi J. Pilusa, Tumisang G. Seodigeng
Abstract:
This study evaluates the benefits of advanced waste management practices in unlocking waste-to-energy opportunities within the solid waste industry. The key drivers of sustainable waste management practices, specifically with respect to packaging waste-to-energy technology options, are discussed. The success of a waste-to-energy system depends significantly on the appropriateness of available technologies, including those that are well established as well as those that are less so. There are hard and soft interventions to be considered when packaging an integrated waste treatment solution. Technology compatibility with variation in feedstock (waste) quality and quantity remains a key factor. These factors influence technology reliability in terms of production efficiency and product consistency, which in turn drives the supply and demand network. Waste treatment technologies rely on waste material as feedstock, and the feedstock varies in quality and quantity depending on several factors; hence, the technology may fail as a result. It is critical to design an advanced waste treatment technology in an integrated approach to minimize the possibility of technology failure due to unpredictable feedstock quality, quantities, conversion efficiencies, and inconsistent product yield or quality. An integrated waste-to-energy approach offers a secure system design that considers sustainable waste management practices.
Keywords: emerging markets, evaluation tool, interventions, waste treatment technologies
Procedia PDF Downloads 273
16688 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The world wide web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model based on already existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
Keywords: clustering coefficient, preferential attachment, small world, web community
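The preferential-attachment mechanism named in the keywords is the classic route to a power-law degree distribution: each new page links to existing pages with probability proportional to their current degree. A minimal Barabási-Albert-style sketch, with an illustrative graph size, follows:

```python
import random

def preferential_attachment(n: int, m: int = 2, seed: int = 1):
    """Each new node attaches m edges to existing nodes chosen with
    probability proportional to degree (sampled via a repeated-node list)."""
    rng = random.Random(seed)
    edges, seeds = [], list(range(m))   # m initial target nodes
    repeated = []                       # each node appears once per incident edge
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            pool = repeated if repeated else seeds
            chosen.add(rng.choice(pool))
        for t in chosen:
            edges.append((new, t))
            repeated += [new, t]
    return edges

edges = preferential_attachment(1000)
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
# a log-log histogram of deg.values() exhibits the power-law tail
print(max(deg.values()), min(deg.values()))
```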
Procedia PDF Downloads 272
16687 Fault Tree Analysis and Bayesian Network for Fire and Explosion of Crude Oil Tanks: Case Study
Authors: B. Zerouali, M. Kara, B. Hamaidi, H. Mahdjoub, S. Rouabhia
Abstract:
In this paper, a safety analysis is presented for crude oil tanks, aimed at preventing undesirable events that may cause catastrophic accidents. The estimation of the probability of damage to industrial systems is carried out through a series of steps, in accordance with a specific methodology. In this context, this work involves developing an assessment and risk analysis tool at the level of the crude oil tank system, based primarily on the identification of various potential causes of crude oil tank fire and explosion through Fault Tree Analysis (FTA), followed by improved risk modelling using Bayesian Networks (BNs). The Bayesian approach to the evaluation of failure and the quantification of risks is a dynamic analysis approach; for this reason, it has been selected as the analytical tool in this study. The research concludes that Bayesian networks are a distinct and effective method for safety analysis because of the flexibility of their structure, which is suitable for a wide variety of accident scenarios.
Keywords: bayesian networks, crude oil tank, fault tree, prediction, safety
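The FTA step reduces to propagating basic-event probabilities through AND/OR gates. The sketch below uses placeholder probabilities and a simplified gate structure, not the study's actual fault tree; the closing comment notes what the Bayesian network adds:

```python
# Illustrative basic-event probabilities (placeholders, not the study's data).
p = {"overfill": 0.02, "static_spark": 0.01,
     "hot_work": 0.005, "vent_blocked": 0.03}

def p_or(*probs):
    """OR gate: union of independent events, 1 - prod(1 - p_i)."""
    q = 1.0
    for pi in probs:
        q *= (1.0 - pi)
    return 1.0 - q

def p_and(*probs):
    """AND gate: all independent events occur, prod(p_i)."""
    out = 1.0
    for pi in probs:
        out *= pi
    return out

ignition = p_or(p["static_spark"], p["hot_work"])
flammable_atmosphere = p_or(p["overfill"], p["vent_blocked"])
p_top = p_and(ignition, flammable_atmosphere)     # fire/explosion top event
print(f"P(top event) = {p_top:.5f}")

# A Bayesian network relaxes the independence assumptions above by attaching
# a conditional probability table to each gate node, and lets the estimates
# be updated dynamically as evidence about individual events arrives.
```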
Procedia PDF Downloads 660
16686 Local Boundary Analysis for Generative Theory of Tonal Music: From the Aspect of Classic Music Melody Analysis
Authors: Po-Chun Wang, Yan-Ru Lai, Sophia I. C. Lin, Alvin W. Y. Su
Abstract:
The Generative Theory of Tonal Music (GTTM) provides systematic approaches to recognizing local boundaries in music. Its rules have been implemented in several automated melody segmentation algorithms, and there are also deep learning methods that apply GTTM features to boundary detection tasks. However, these studies face constraints such as a lack of, or inconsistent, label data. The GTTM database is currently the most widely used GTTM dataset; it includes manually labeled GTTM rules and local boundaries. Even so, we found some problems with these labels: they sometimes show discrepancies with the GTTM rules, and, since they were labeled at different times by multiple musicians, they are not within the same scope in some cases. Therefore, in this paper, we examine this database with musicians from the perspective of classical music and relabel the scores. The relabeled database - GTTM Database v2.0 - will be released for academic research use. Although the experimental and statistical results show that the relabeled database is more consistent, the improvement in boundary detection is not substantial. It seems that we will need more clues than the GTTM rules for boundary detection in the future.
Keywords: dataset, GTTM, local boundary, neural network
Procedia PDF Downloads 146
16685 A Time-Reducible Approach to Compute Determinant |I-X|
Authors: Wang Xingbo
Abstract:
Computation of determinants of the form |I-X| is primary and fundamental because it can help to compute many other determinants. This article puts forward a time-reducible approach to computing the determinant |I-X|. The approach is derived from Newton's identities, and its time complexity is no more than that of computing the eigenvalues of the square matrix X. Mathematical deductions and a numerical example are presented in detail for the approach. By comparison with classical approaches, the new approach is shown to be superior, and its computational time naturally falls as methods for computing the eigenvalues of a square matrix become more efficient.
Keywords: algorithm, determinant, computation, eigenvalue, time complexity
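The Newton's-identities route can be made concrete: the power sums p_k = tr(X^k) determine the elementary symmetric polynomials e_k of the eigenvalues via k*e_k = sum_{i=1..k} (-1)^(i-1) e_{k-i} p_i, and det(I - X) = sum_k (-1)^k e_k. A sketch of this reading of the approach (the paper's exact algorithm may differ in how it obtains the power sums):

```python
import numpy as np

def det_I_minus_X(X: np.ndarray) -> float:
    """det(I - X) from traces of matrix powers via Newton's identities."""
    n = X.shape[0]
    p = np.zeros(n + 1)
    Xk = np.eye(n)
    for k in range(1, n + 1):
        Xk = Xk @ X
        p[k] = np.trace(Xk)              # power sum p_k = sum of lambda_i^k
    e = np.zeros(n + 1)
    e[0] = 1.0
    for k in range(1, n + 1):            # Newton's identity for e_k
        e[k] = sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1)) / k
    # char. poly at t = 1: det(I - X) = sum_k (-1)^k e_k
    return float(sum((-1) ** k * e[k] for k in range(n + 1)))

X = np.random.default_rng(0).normal(size=(5, 5))
assert np.isclose(det_I_minus_X(X), np.linalg.det(np.eye(5) - X))
```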
Procedia PDF Downloads 415
16684 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought
Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan
Abstract:
Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, magnifying its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. This encompasses discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects it generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability," facing the challenges of the complexity of artificial intelligence systems and their effects. The article then proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fractured between a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as fully ethically non-neutral actors, is put forward by a revealing-ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of actors and the distance between them. Thus, a dilution of responsibility arises from a split in decision-making between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Accountability is also confronted with the challenge of the transparency of complex and scalable algorithmic systems: non-human actors self-learning via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging the non-ethical neutrality of algorithmic systems, inherently imbued with the values and biases of their creators and of society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizational recursiveness, akin to the "transparency" of the system, promotes a systemic analysis to account for the induced effects and guides the incorporation of modifications into the system to rectify its drifts. In conclusion, this contribution serves as an inception for contemplating the accountability of "artificial intelligence" systems despite the evident ethical implications and potential deviations.
Edgar Morin's principles, providing a lens to contemplate this complexity, offer valuable perspectives to address these challenges concerning accountability.Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin
Procedia PDF Downloads 63
16683 On the Network Packet Loss Tolerance of SVM Based Activity Recognition
Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir
Abstract:
In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model, and its multi-activity classification performance when data are received over a lossy wireless sensor network, are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss, with 3D acceleration sensor data for the sitting, lying, walking, and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality-of-service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
Keywords: activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss
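The random-loss experiment can be sketched with scikit-learn: train an RBF-kernel SVM on windowed acceleration features, then zero out a random fraction of test features to mimic dropped packets. The synthetic data and feature layout below are stand-ins, not the study's dataset:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 30))     # e.g., statistics over 3D accel windows
y_train = rng.integers(0, 4, size=400)   # sit / lie / walk / stand (dummy labels)
clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

X_test = rng.normal(size=(100, 30))
baseline = clf.predict(X_test)           # predictions with no loss
for loss_rate in (0.0, 0.2, 0.4):
    X_lossy = X_test.copy()
    mask = rng.random(X_lossy.shape) < loss_rate
    X_lossy[mask] = 0.0                  # dropped samples arrive as zeros
    changed = np.mean(clf.predict(X_lossy) != baseline)
    print(f"loss {loss_rate:.0%}: {changed:.1%} of predictions change")
```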
Procedia PDF Downloads 475
16682 Distribution Network Optimization by Optimal Placement of Photovoltaic-Based Distributed Generation: A Case Study of the Nigerian Power System
Authors: Edafe Lucky Okotie, Emmanuel Osawaru Omosigho
Abstract:
This paper examines the impacts of introducing distributed energy generation (DEG) technology into the Nigerian power system as an alternative means of energy generation at distribution ends, using the Otovwodo 15 MVA, 33/11 kV injection substation as a case study. The overall idea is to increase the generated energy in the system, improve the voltage profile, and reduce system losses. A photovoltaic-based distributed energy generator (PV-DEG) was considered and optimally placed in the network using a Genetic Algorithm (GA) in the MATLAB/Simulink environment. The simulation results obtained show that the dynamic performance of the network was optimized with DEG-grid integration.
Keywords: distributed energy generation (DEG), genetic algorithm (GA), power quality, total load demand, voltage profile
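The GA siting step can be sketched as follows. A chromosome is a (bus index, DG size) pair, and the fitness function below is a toy stand-in for the load-flow evaluation of the feeder; the bus count, load-centre bus, and all constants are assumptions for illustration:

```python
import random

BUSES = list(range(1, 34))               # assumed 33-bus feeder
rng = random.Random(42)

def fitness(bus, size_mw):
    """Toy stand-in for load-flow losses + voltage deviation: losses fall
    as the DG nears an assumed load centre (bus 20) at moderate sizes."""
    return abs(bus - 20) * 0.02 + (size_mw - 2.5) ** 2 * 0.1

def evolve(pop_size=30, generations=50):
    pop = [(rng.choice(BUSES), rng.uniform(0.1, 5.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(*c))          # minimise
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = (a[0], b[1])                     # one-point crossover
            if rng.random() < 0.1:                   # mutation on the bus gene
                child = (rng.choice(BUSES), child[1])
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda c: fitness(*c))

print(evolve())   # -> (best bus, best DG size in MW)
```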
Procedia PDF Downloads 84
16681 The Effects of Circadian Rhythms Change in High Latitudes
Authors: Ekaterina Zvorykina
Abstract:
Nowadays, the Arctic and Antarctic regions are distinguished as among the most important strategic resources for global development. Nonetheless, living conditions in Arctic regions still demand certain improvements. Since the region is sparsely populated, one of the main points of interest is health accommodation of the people who migrate to the Arctic region for permanent and shift work. At Arctic and Antarctic latitudes, personnel face polar day and polar night conditions depending on the time of year: they are deprived of natural sunlight in the winter season and have continuous daylight in summer. Firstly, the change in light intensity over the 24-hour period due to migration affects circadian rhythms. Moreover, controlled artificial light in winter is also an issue. The results of recent studies on night-shift medical professionals, who were exposed to permanent artificial light, have already demonstrated higher risks of cancer, depression, and Alzheimer's disease. Moreover, people exposed to frequent time zone changes are also subject to higher risks of heart attack and cancer. Thus, our main goals are to understand how work and living conditions at high latitudes can affect human health and how this can be prevented. In our study, we analyze molecular and cellular factors that play an important role in circadian rhythm change and distinguish the main risk groups among people migrating to high latitudes. The main well-studied index of circadian timing is melatonin, or its metabolite 6-sulfatoxymelatonin. At low light intensity, melatonin synthesis is disturbed, and as a result the human organism requires more time for sleep, which is still disregarded when it comes to working time organization. Lack of melatonin also causes a shortage in serotonin production, which leads to a higher depression risk. Melatonin is also known to inhibit oncogenes and increase the level of apoptosis in cells, two of the main factors restraining tumor growth, as are circadian clock genes (for example, Per2). Thus, people who work at high latitudes can be distinguished as a risk group for cancer and demand more attention. Clock/Clock genes, known as some of the main circadian clock regulators, decrease the sensitivity of the hypothalamus to estrogen and decrease glucose sensitivity, which leads to premature aging and oestrous cycle disruption. Permanent light exposure also leads to the accumulation of superoxide dismutase and oxidative stress, which is one of the main factors for early dementia and Alzheimer's disease. We propose a new screening system adjusted for people migrating from middle to high latitudes, together with accommodation therapy. Screening is focused on melatonin and estrogen levels, sleep deprivation and neural disorders, depression level, cancer risks, and heart and vascular disorders. Accommodation therapy includes different types of artificial light exposure, additional melatonin, and neuroprotectors. Preventive procedures can lead to an increase in migration intensity to high latitudes and, as a result, the prosperity of the Arctic region.
Keywords: circadian rhythm, high latitudes, melatonin, neuroprotectors
Procedia PDF Downloads 156
16680 Modeling of Surface Roughness in Hard Turning of DIN 1.2210 Cold Work Tool Steel with Ceramic Tools
Authors: Mehmet Erdi Korkmaz, Mustafa Günay
Abstract:
Nowadays, grinding is frequently replaced with hard turning to reduce setup time and achieve higher accuracy. This paper focuses on the mathematical modeling of average surface roughness (Ra) in the hard turning of AISI L2 grade (DIN 1.2210) cold work tool steel with ceramic tools. The steel was hardened to 60±1 HRC after the heat treatment process. Cutting speed, feed rate, depth of cut, and tool nose radius were chosen as the cutting conditions. Uncoated ceramic cutting tools were used in the machining experiments. The machining experiments were performed according to a Taguchi L27 orthogonal array on a CNC lathe. Ra values were calculated by averaging three roughness values obtained from three different points on the machined surface. The influence of the cutting conditions on surface roughness was evaluated statistically and experimentally. Analysis of variance (ANOVA) with a 95% confidence level was applied for the statistical analysis of the experimental results. Finally, mathematical models were developed using artificial neural networks (ANN). The ANOVA results show that feed rate is the dominant factor affecting surface roughness, followed by tool nose radius and cutting speed.
Keywords: ANN, hard turning, DIN 1.2210, surface roughness, Taguchi method
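An ANN model of this kind maps the four cutting conditions from the L27 runs to measured Ra. A minimal scikit-learn sketch follows; the three sample rows, target values, and hidden-layer size are placeholders, not the paper's measurements or architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: cutting speed (m/min), feed (mm/rev), depth of cut (mm), nose radius (mm).
X = np.array([[150, 0.10, 0.4, 0.8],
              [200, 0.15, 0.6, 1.2],
              [250, 0.20, 0.8, 1.6]])   # ... 27 rows in the real L27 design
y = np.array([0.45, 0.80, 1.30])        # Ra in micrometres (placeholders)

# Standardising inputs matters here: the speed column dwarfs the others in scale.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[180, 0.12, 0.5, 1.0]]))   # predicted Ra for a new condition
```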
Procedia PDF Downloads 371
16679 Evolution under Length Constraints for Convolutional Neural Networks Architecture Design
Authors: Ousmane Youme, Jean Marie Dembele, Eugene Ezin, Christophe Cambier
Abstract:
In recent years, convolutional neural network (CNN) architectures designed by evolutionary algorithms have proven to be competitive with handcrafted architectures designed by experts. However, these algorithms need a lot of computational power, which is beyond the capabilities of most researchers and engineers. To overcome this problem, we propose evolution of architectures under length constraints. It consists of two algorithms: a length search strategy to find an optimal space, and an architecture search strategy based on a genetic algorithm to find the best individual in the optimal space. Our algorithms drastically reduce resource costs while keeping good performance. On the Cifar-10 dataset, our framework presents outstanding performance, with an error rate of 5.12% and only 4.6 GPU-days to converge to the optimal individual, 22 GPU-days less than the lowest-cost automatic evolutionary algorithm in the peer competition.
Keywords: CNN architecture, genetic algorithm, evolution algorithm, length constraints
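The two-stage idea can be sketched as a GA restricted to genomes within a length bound. The block vocabulary and the accuracy proxy below are illustrative stand-ins for real CNN layers and training, not the authors' search space:

```python
import random

BLOCKS = ["conv3x3", "conv5x5", "pool", "skip"]
rng = random.Random(0)

def proxy_score(genome):
    """Stand-in for validation accuracy after short training."""
    return sum(1.0 for g in genome if g.startswith("conv")) / len(genome)

def evolve(max_len, pop_size=20, gens=30):
    """Search only within architectures of length <= max_len; the outer
    length-search stage would call this for candidate bounds."""
    pop = [[rng.choice(BLOCKS) for _ in range(rng.randint(2, max_len))]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=proxy_score, reverse=True)
        parents = pop[: pop_size // 2]
        offspring = []
        while len(offspring) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randint(1, min(len(a), len(b)))
            child = (a[:cut] + b[cut:])[:max_len]    # length constraint enforced
            if rng.random() < 0.2:                   # point mutation
                child[rng.randrange(len(child))] = rng.choice(BLOCKS)
            offspring.append(child)
        pop = parents + offspring
    return max(pop, key=proxy_score)

print(evolve(max_len=8))
```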
Procedia PDF Downloads 128
16678 Dual Active Bridge Converter with Photovoltaic Arrays for DC Microgrids: Design and Analysis
Authors: Ahmed Atef, Mohamed Alhasheem, Eman Beshr
Abstract:
In this paper, an enhanced DC microgrid design is proposed, using the DAB converter as the conversion unit in order to harvest the maximum power from the PV array. Each connected DAB converter is controlled with an enhanced control strategy. The controller is based on an artificial intelligence (AI) technique that regulates the terminal PV voltage through the phase shift angle of each DAB converter. In this manner, there is no need for a Maximum Power Point Tracking (MPPT) unit to set the reference for the PV terminal voltage. This strategy overcomes the stability issues of the DC microgrid, as the response of the converters is superior to that of conventional strategies. The proposed PV interface system is modelled and simulated using MATLAB/Simulink. The simulation results reveal an accurate and fast response of the proposed design in the case of irradiance changes.
Keywords: DC microgrid, DAB converter, parallel operation, artificial intelligence, fast response
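The lever the controller acts through is the standard single-phase-shift DAB power relation, P = V1*V2*phi*(pi - |phi|) / (2*pi^2*f*L) with phi in radians (maximum V1*V2/(8*f*L) at phi = pi/2). A sketch follows; the component values are assumptions for illustration, not the paper's converter design:

```python
import math

def dab_power(v1, v2, phi, f_sw, L):
    """Transferred power of a single-phase-shift DAB (phi in radians)."""
    return v1 * v2 * phi * (math.pi - abs(phi)) / (2 * math.pi**2 * f_sw * L)

def phase_for_power(p_target, v1, v2, f_sw, L):
    """Invert the quadratic phi*(pi - phi) = k, taking the root <= pi/2."""
    k = 2 * math.pi**2 * f_sw * L * p_target / (v1 * v2)
    return (math.pi - math.sqrt(math.pi**2 - 4 * k)) / 2

# Assumed operating point: 48 V PV side, 400 V bus, 100 kHz, 25 uH leakage.
p = dab_power(48.0, 400.0, math.pi / 6, 100e3, 25e-6)
print(p, phase_for_power(p, 48.0, 400.0, 100e3, 25e-6))
```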
Procedia PDF Downloads 790
16677 An Integer Nonlinear Program Proposal for Intermodal Transportation Service Network Design
Authors: Laaziz El Hassan
Abstract:
The Service Network Design Problem (SNDP) is a tactical issue for freight transportation firms. Existing formulations of the problem for intermodal rail-road transportation have not always been adapted to intermodality in terms of full asset utilization and modal shift reinforcement. The objective of this article is to propose a model with a formulation more compliant with intermodality, including constraints that highlight the imperatives of asset management, reinforce the modal shift from road to rail, and thereby reduce road-mode CO2 emissions. The model is a fixed-charge, path-based integer nonlinear program. Its objective is to minimize total service cost while ensuring full asset utilization to satisfy the freight demand forecast. The model's main feature is that it outputs both the train sizes and the service frequencies for a planning period. We solved the program using a commercial solver and discuss the numerical results.
Keywords: intermodal transport network, service network design, model, nonlinear integer program, path-based, service frequencies, modal shift
Procedia PDF Downloads 118
16676 Structural Correlates of Reduced Malicious Pleasure in Huntington's Disease
Authors: Sandra Baez, Mariana Pino, Mildred Berrio, Hernando Santamaria-Garcia, Lucas Sedeno, Adolfo Garcia, Sol Fittipaldi, Agustin Ibanez
Abstract:
Schadenfreude refers to the perceiver's experience of pleasure at another's misfortune. This is a multidetermined emotion that can be evoked by hostile feelings and envy. The experience of Schadenfreude engages mechanisms implicated in diverse social cognitive processes. For instance, Schadenfreude involves heightened reward processing, accompanied by increased striatal engagement, and it interacts with mentalizing and perspective-taking abilities. Patients with Huntington's disease (HD) exhibit reductions in the experience of Schadenfreude, suggesting a role of striatal degeneration in this impairment. However, no study has directly assessed the relationship between regional brain atrophy in HD and reduced Schadenfreude. This study investigated whether gray matter (GM) atrophy in HD patients correlates with ratings of Schadenfreude. First, we compared the performance of 20 HD patients and 23 controls on an experimental task designed to trigger Schadenfreude and envy (another social emotion, acting as a control condition). Second, we compared GM volume between groups. Third, we examined brain regions where atrophy might be associated with specific impairments in the patients. Results showed that while both groups reported similar ratings of envy, HD patients reported lower Schadenfreude. The latter pattern was related to atrophy in regions of the reward system (ventral striatum) and the mentalizing network (precuneus and superior parietal lobule). Our results shed light on the intertwining of reward and socioemotional processes in Schadenfreude, while offering novel evidence about their neural correlates. In addition, our results open the door to future studies investigating social emotion processing in other clinical populations characterized by striatal or mentalizing network impairments (e.g., Parkinson's disease, schizophrenia, autism spectrum disorders).
Keywords: envy, gray matter atrophy, Huntington's disease, Schadenfreude, social emotions
Procedia PDF Downloads 336
16675 Intellectual Property in Digital Environment
Authors: Balamurugan L.
Abstract:
Artificial intelligence (AI) and its applications in Intellectual Property Rights (IPR) have been growing significantly in recent years. In the last couple of years, AI tools for patent research and patent analytics have stabilized in terms of the accuracy of references and the representation of identified patent insights. However, AI tools for patent prosecution and patent litigation are still at a nascent stage, and there may be significant potential if this market is explored further. Our research is primarily focused on identifying potential whitespaces and schematic algorithms to automate the patent prosecution and patent litigation processes of intellectual property. These schematic algorithms may assist leading AI tool developers in exploring such opportunities in the field of intellectual property. Our research is also focused on identifying the pitfalls of AI, for example, information security and its impact on IPR, and potential remediations to sustain IPR in the digital environment.
Keywords: artificial intelligence, patent analytics, patent drafting, patent litigation, patent prosecution, patent research
Procedia PDF Downloads 67
16674 The Searching Artificial Intelligence: Neural Evidence on Consumers' Less Aversion to Algorithm-Recommended Search Product
Authors: Zhaohan Xie, Yining Yu, Mingliang Chen
Abstract:
As research has shown a convergent tendency toward aversion to AI recommendation, it is imperative to find a way to promote AI usage and better harness the technology. In the context of e-commerce, this study found evidence that people show less avoidance of algorithms when they recommend search products compared to experience products. This is due to people's different attribution of mind to AI versus humans, as suggested by mind perception theory. While people hold the belief that an algorithm possesses sufficient capability to think and calculate, which makes it competent to evaluate search product attributes that can be assessed before actual use, they doubt its capability to sense and feel, which is essential for evaluating experience product attributes that must be assessed after experiencing the product in person. The results of the behavioral investigation (Study 1, N=112) validated that consumers show low purchase intention toward experience products recommended by AI. A further consumer neuroscience study (Study 2, N=26) using event-related potentials (ERP) showed that consumers have a higher level of cognitive conflict when faced with an AI-recommended experience product, as reflected by a larger N2 component, while the effect disappears for search products. This research has implications for the effective employment of AI recommenders, and it extends the literature on e-commerce and marketing communication.
Keywords: algorithm recommendation, consumer behavior, e-commerce, event-related potential, experience product, search product
Procedia PDF Downloads 154
16673 Enhancing Communicative Skills for Students in Automatics
Authors: Adrian Florin Busu
Abstract:
The communicative approach, or communicative language teaching (CLT), used for enhancing communicative skills in students in automatics, is a modern teaching approach based on the concept of learning a language through having to communicate real meaning. In the communicative approach, real communication is both the objective of learning and the means through which it takes place. This approach was initiated during the 1970s and quickly became prominent, as it proposed an alternative to the previous systems-oriented approaches. In other words, instead of focusing on the acquisition of grammar and vocabulary, the communicative approach aims at developing students' competence to communicate in the target language, with an enhanced focus on real-life situations. In a nutshell, CLT considers using the language to be just as important as actually learning the language.
Keywords: communication, approach, objective, learning
Procedia PDF Downloads 161