Search results for: optimization algorithms
3202 A Graph Library Development Based on the Service-Oriented Architecture: Used for Representation of the Biological Systems in the Computer Algorithms
Authors: Mehrshad Khosraviani, Sepehr Najjarpour
Abstract:
Considering the use of graph-based approaches in systems and synthetic biology, and the various types of graphs they employ, a comprehensive graph library based on the three-tier architecture (3TA) was previously introduced for full representation of biological systems. Despite having proposed a 3TA-based graph library, the following three reasons motivated us to redesign the graph library based on the service-oriented architecture (SOA): (1) Maintaining the accuracy of the data related to an input graph (including its edges, vertices, topology, etc.) without involving the end user: since, in the case of 3TA, the library files are available to the end users, they may be used incorrectly, and consequently invalid graph data may be provided to the computer algorithms. Under the SOA, by contrast, graph registration is specified as a service that encapsulates the library files; in other words, all the control operations needed to register valid data become the responsibility of the services. (2) Partitioning the library product into separate parts: under 3TA, the library was provided as a whole, whereas here the product can be divided into smaller ones, such as an AND/OR graph drawing service, each provided individually. As a result, the end user can select any part of the library product, instead of all its features, to add to a project. (3) Reduction of complexity: while 3TA requires several additional libraries to be added for connecting to the database, in the SOA-based graph library the provision of the needed resources is entrusted to the services themselves. The end user of the graph library is therefore not exposed to its complexity. In the end, to make the library easier to control in the system and to restrict end-user access to the files, the service-oriented architecture (SOA) was preferred over the three-tier architecture (3TA), and the previously proposed graph library was redeveloped based on it.
Keywords: Bio-Design Automation, Biological System, Graph Library, Service-Oriented Architecture, Systems and Synthetic Biology
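For illustration, a minimal Python sketch of the service idea described above: a registration service that encapsulates validation so end users never touch the library files. All class and method names here are hypothetical, not taken from the library itself.

```python
# Hypothetical sketch of an SOA-style graph-registration service: validation
# is encapsulated behind the service so only well-formed graphs reach the
# downstream algorithms. Names and structure are illustrative assumptions.

class GraphRegistrationService:
    """Controls registration so invalid graph data never enters the system."""

    def __init__(self):
        self._graphs = {}

    def register(self, name, vertices, edges):
        vertex_set = set(vertices)
        if len(vertex_set) != len(vertices):
            raise ValueError("duplicate vertices in input")
        for u, v in edges:
            # Reject edges that reference unknown vertices.
            if u not in vertex_set or v not in vertex_set:
                raise ValueError(f"edge ({u}, {v}) references an unknown vertex")
        self._graphs[name] = {"vertices": vertex_set, "edges": list(edges)}
        return name

service = GraphRegistrationService()
service.register("toy_pathway", ["A", "B", "C"], [("A", "B"), ("B", "C")])
```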
Procedia PDF Downloads 315
3201 Autonomic Sonar Sensor Fault Manager for Mobile Robots
Authors: Martin Doran, Roy Sterritt, George Wilkie
Abstract:
NASA, ESA, and NSSC space agencies have plans to put planetary rovers on Mars in 2020. For these future planetary rovers to succeed, they will depend heavily on sensors to detect obstacles. This will become even more important if rovers grow less dependent on commands received from earth-based control and more dependent on self-configuration and self-decision making. These planetary rovers will face harsh environments, and the possibility of hardware failure is high, as seen in past missions. In this paper, we focus on Autonomic principles, exploring self-healing, self-optimization, and self-adaption through the MAPE-K model, and expanding this model to encapsulate the attributes of Awareness, Analysis, and Adjustment (AAA-3). In the experimentation, a Pioneer P3-DX research robot is used to simulate a planetary rover, and its sonar sensors stand in for the sensors on a planetary rover (even though, in reality, sonar sensors cannot operate in a vacuum). The experiments focus on how our software system can adapt to the loss of sonar sensor functionality. The autonomic manager is responsible for deciding how to make use of the remaining 'enabled' sonar sensors to compensate for those that are 'disabled'. The key result of this research is that the robot can still detect objects even with reduced sonar sensor capability.
Keywords: autonomic, self-adaption, self-healing, self-optimization
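A minimal sketch of the monitor-analyze-plan-execute idea behind such an autonomic manager, assuming an invented ring of eight sonars; the layout, failure model, and compensation rule are illustrative only, not the paper's implementation.

```python
def monitor(readings):
    # A sensor that reports None is treated as failed/disabled.
    return {sid: r for sid, r in readings.items() if r is not None}

def analyze_and_plan(enabled, all_ids):
    disabled = set(all_ids) - set(enabled)
    plan = {}
    for sid in disabled:
        # Plan: immediate neighbours take over the lost sonar's sector.
        for n in (sid - 1, sid + 1):
            if n in enabled:
                plan.setdefault(n, []).append(sid)
    return plan

def execute(plan):
    for sensor, covered in plan.items():
        print(f"sonar {sensor} widens its cone to also cover sector(s) {covered}")

all_ids = list(range(8))              # an invented array of 8 sonars
readings = {i: 1.2 for i in all_ids}  # nominal range readings in metres
readings[3] = None                    # simulate one disabled sonar
execute(analyze_and_plan(monitor(readings), all_ids))
```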
Procedia PDF Downloads 353
3200 An Evaluation of Solubility of Wax and Asphaltene in Crude Oil for Improved Flow Properties Using a Copolymer Solubilized in Organic Solvent with an Aromatic Hydrocarbon
Authors: S. M. Anisuzzaman, Sariah Abang, Awang Bono, D. Krishnaiah, N. M. Ismail, G. B. Sandrison
Abstract:
Wax and asphaltene are high-molecular-weight compounds that contribute to the stability of crude oil in a dispersed state. Transporting crude oil along pipelines from the oil rig to the refineries causes temperature fluctuations that lead to the coagulation of wax and the flocculation of asphaltenes. This paper focuses on preventing the deposition of wax and asphaltene precipitates on the inner surface of pipelines by using a wax inhibitor and an asphaltene dispersant. The novelty of this prevention method is the combination of three substances: a wax inhibitor dissolved in a wax inhibitor solvent and an asphaltene solvent, namely, ethylene-vinyl acetate (EVA) copolymer dissolved in methylcyclohexane (MCH) and toluene (TOL), to inhibit the precipitation and deposition of wax and asphaltene. The objective of this paper was to optimize the percentage composition of each component of the inhibitor so as to maximize the viscosity reduction of crude oil. The optimization was divided into two stages: a laboratory experimental stage, in which the viscosity of crude oil samples containing inhibitors of different component compositions was tested at decreasing temperatures, and a data optimization stage using response surface methodology (RSM) to design an optimizing model. The experimental results showed that the combination of 50% EVA + 25% MCH + 25% TOL gave a maximum viscosity reduction of 67%, while the RSM model indicated that the combination of 57% EVA + 20.5% MCH + 22.5% TOL gave a maximum viscosity reduction of up to 61%.
Keywords: asphaltene, ethylene-vinyl acetate, methylcyclohexane, toluene, wax
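For illustration, a sketch of the RSM step under stated assumptions: a second-order response surface is fitted to viscosity-reduction measurements and searched for the best blend. The nine data points below are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from itertools import product

# (EVA fraction, MCH fraction); TOL is the remainder of the blend.
X = np.array([(e, m) for e, m in product([0.4, 0.5, 0.6], [0.15, 0.25, 0.35])])
y = np.array([52, 60, 55, 58, 67, 61, 54, 62, 57], dtype=float)  # % reduction

def design_matrix(X):
    # Full quadratic model in two mixture variables.
    e, m = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(e), e, m, e * m, e**2, m**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Grid-search the fitted surface for the predicted optimum.
grid = np.array([(e, m) for e in np.linspace(0.4, 0.6, 41)
                        for m in np.linspace(0.15, 0.35, 41)])
pred = design_matrix(grid) @ beta
e_opt, m_opt = grid[np.argmax(pred)]
print(f"predicted optimum: {e_opt:.1%} EVA, {m_opt:.1%} MCH, "
      f"{1 - e_opt - m_opt:.1%} TOL")
```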
Procedia PDF Downloads 419
3199 Wind Power Forecasting Using Echo State Networks Optimized by Big Bang-Big Crunch Algorithm
Authors: Amir Hossein Hejazi, Nima Amjady
Abstract:
In recent years, due to environmental concerns, traditional energy sources have increasingly been replaced by renewable ones. Wind energy, as the fastest-growing renewable energy, accounts for a considerable share of electricity markets. With this fast worldwide growth of wind energy, owners and operators of wind farms, transmission system operators, and energy traders need reliable and secure forecasts of wind energy production. In this paper, a new forecasting strategy is proposed for short-term wind power prediction based on Echo State Networks (ESN). The forecast engine utilizes a state-of-the-art training process, including a dynamical reservoir with a high capability to learn the complex dynamics of wind power or wind vector signals. The study is made more interesting by incorporating the prediction of wind direction into the forecast strategy. The Big Bang-Big Crunch (BB-BC) evolutionary optimization algorithm is adopted to adjust the free parameters of the ESN-based forecaster. The proposed method is tested on real-world hourly data to show the efficiency of the forecasting engine for predicting both the wind vector and the wind power output of aggregated wind power production.
Keywords: wind power forecasting, echo state network, big bang-big crunch, evolutionary optimization algorithm
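A compact sketch of the Big Bang-Big Crunch loop used to tune a forecaster's free parameters; the objective below is a stand-in sphere function, whereas in the paper it would be the ESN's validation error.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Placeholder for the ESN validation error of parameter vector x.
    return np.sum(x**2)

def bb_bc(dim=3, pop=30, iters=50, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pts = rng.uniform(lo, hi, size=(pop, dim))              # initial Big Bang
    for k in range(1, iters + 1):
        fit = np.array([objective(p) for p in pts])
        w = 1.0 / (fit + 1e-12)                             # fitter -> heavier
        center = (w[:, None] * pts).sum(axis=0) / w.sum()   # Big Crunch
        spread = (hi - lo) / (2 * k)                        # shrinks over time
        pts = center + rng.normal(0.0, spread, size=(pop, dim))  # new Big Bang
        pts = np.clip(pts, lo, hi)
    return center, objective(center)

best, err = bb_bc()
print(best, err)
```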
Procedia PDF Downloads 577
3198 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science that aims to create intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was predetermined by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems have typically not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law, and for the purpose of this paper patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be more easily described by analogy to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes such as China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the state of the art and the associated obviousness of the solution arise in current patenting processes. Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of persons. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to a part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
Procedia PDF Downloads 112
3197 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning
Authors: Saahith M. S., Sivakami R.
Abstract:
In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, conducting data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. The evaluation of each model's performance utilizing metrics such as Mean Squared Error (MSE) and R-squared will be executed to gauge their efficacy in predicting player performance. Furthermore, this investigation will encompass a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis
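As a hedged sketch of the pipeline described above, the following combines RFE-based feature selection with a Random Forest regressor evaluated by MSE and R²; the dataset is synthetic rather than real player-attribute data.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a player-attribute dataset.
X, y = make_regression(n_samples=500, n_features=20, n_informative=8,
                       noise=5.0, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# RFE with a linear estimator ranks the attributes; keep the top 8.
selector = RFE(LinearRegression(), n_features_to_select=8).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_tr_sel, y_tr)
pred = model.predict(X_te_sel)
print(f"MSE: {mean_squared_error(y_te, pred):.2f}, "
      f"R^2: {r2_score(y_te, pred):.3f}")
```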
Procedia PDF Downloads 41
3196 Parametric Influence and Optimization of Wire-EDM on Oil Hardened Non-Shrinking Steel
Authors: Nixon Kuruvila, H. V. Ravindra
Abstract:
Wire-cut Electro Discharge Machining (WEDM) is a special form of the conventional EDM process in which the electrode is a continuously moving conductive wire. The present study aims at determining the parametric influence and optimum process parameters of Wire-EDM using Taguchi's technique and a genetic algorithm. The variation of the performance parameters with the machining parameters was mathematically modeled by the regression analysis method. The objective functions are Dimensional Accuracy (DA) and Material Removal Rate (MRR). Experiments were designed as per Taguchi's L16 Orthogonal Array (OA), wherein pulse-on duration, pulse-off duration, current, bed speed, and flushing rate were considered the important input parameters. The matrix experiments were conducted on Oil Hardened Non-Shrinking Steel (OHNS) with a thickness of 40 mm. The results of the study reveal that, among the machining parameters, it is preferable to use a lower pulse-off duration to achieve overall good performance. Regarding MRR, OHNS is to be eroded with a medium pulse-off duration and a higher flush rate. Finally, a validation exercise was performed with the optimum levels of the process parameters. The results confirm the efficiency of the approach employed for the optimization of process parameters in this study.
Keywords: dimensional accuracy (DA), regression analysis (RA), Taguchi method (TM), volumetric material removal rate (VMRR)
Procedia PDF Downloads 414
3195 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence
Authors: Muhammad Bilal Shaikh
Abstract:
Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. 
The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry.
Keywords: multimodal AI, computer vision, NLP, mineral processing, mining
Procedia PDF Downloads 74
3194 Modeling and Minimizing the Effects of Ferroresonance for Medium Voltage Transformers
Authors: Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee, Arian Amirnia, Atena Taheri, Mohammadreza Arabi, Mahmud Fotuhi-Firuzabad
Abstract:
Ferroresonance effects cause overvoltages in medium voltage transformers and isolators used in electrical networks. Ferroresonance effects are nonlinear and occur between the network capacitance and the nonlinear inductance of the voltage transformer during saturation. This phenomenon is unwanted for transformers since it causes overheating, introduces high dynamic forces in the primary coils, and raises the voltage in the primary coils of the voltage transformer. Furthermore, it results in electrical and thermal failure of the transformer. The expansion of distribution lines, the design of transformers in smaller sizes, and the increase of harmonics in distribution networks all result in an increase of ferroresonance. There is limited literature available on mitigating the effects of ferroresonance; therefore, reducing its effects on voltage transformers is of great importance. In this study, comprehensive modeling of a medium voltage block-type voltage transformer is performed. In addition, a new model is proposed to improve the performance of voltage transformers during the occurrence of ferroresonance using damping oscillations. Transformer design optimization is also presented in this study to show further improvements in the performance of the voltage transformer. The proposed model is experimentally tested and verified on a medium voltage transformer in the laboratory, and simulation results show a large reduction of the effects of ferroresonance.
Keywords: optimization, voltage transformer, ferroresonance, modeling, damper
Procedia PDF Downloads 105
3193 Rotorcraft Performance and Environmental Impact Evaluation by Multidisciplinary Modelling
Authors: Pierre-Marie Basset, Gabriel Reboul, Binh DangVu, Sébastien Mercier
Abstract:
Rotorcraft provide invaluable services thanks to their Vertical Take-Off and Landing (VTOL), hover, and low-speed capabilities. Yet their use is still often limited by their cost and environmental impact, especially noise and energy consumption. One of the main obstacles to the expanded use of rotorcraft for urban missions is the environmental impact, and the first concern for the population is noise. In order to develop the transversal competency to assess the rotorcraft environmental footprint, a collaboration has been launched between six research departments within ONERA. The progress in terms of models and methods is capitalized into the numerical workshop C.R.E.A.T.I.O.N. "Concepts of Rotorcraft Enhanced Assessment Through Integrated Optimization Network". A typical mission for which the environmental impact issue is of great relevance has been defined. The first milestone is to perform the pre-sizing of a reference helicopter for this mission. In a second milestone, an alternate rotorcraft concept has been defined: a tandem rotorcraft with optional propulsion. The key design trends are given for the pre-sizing of this rotorcraft, aiming at a significant reduction of the global environmental impact while still giving equivalent flight performance and safety with respect to the reference helicopter. The models and methods have been improved to capture sooner, and more globally, the relative variations in environmental impact when changing the rotorcraft architecture, the pre-design variables, and the operation parameters.
Keywords: environmental impact, flight performance, helicopter, multi objectives multidisciplinary optimization, rotorcraft
Procedia PDF Downloads 273
3192 An Energy-Balanced Clustering Method on Wireless Sensor Networks
Authors: Yu-Ting Tsai, Chiun-Chieh Hsu, Yu-Chun Chu
Abstract:
In recent years, due to the development of wireless network technology, many researchers have devoted themselves to the study of wireless sensor networks. Applications of wireless sensor networks mainly use the sensor nodes to collect the required information and send it back to the users. Since the sensed area is difficult to reach, there are many restrictions on the design of the sensor nodes, of which the most important is the limited energy of the nodes. Because of this limited energy, researchers have proposed a number of ways to reduce energy consumption and balance the load of sensor nodes in order to increase the network lifetime. In this paper, we propose the Energy-Balanced Clustering method with Auxiliary Members on Wireless Sensor Networks (EBCAM), based on cluster routing. The main purpose is to balance the energy consumption over the sensed area and even out the distribution of dead nodes, in order to avoid excessive energy consumption caused by increasing transmission distance. In addition, we use the residual energy and the average energy consumption of the nodes within a cluster to choose the cluster heads, use a multi-hop transmission method to deliver the data, and dynamically adjust the transmission radius according to the load conditions. We also use the auxiliary cluster members to change the delivery path according to the residual energy of the cluster head, in order to reduce its load. Finally, we compare the proposed method with related algorithms via simulated experiments and analyze the results. The comparison reveals that the proposed method outperforms the other algorithms in the number of rounds achieved and the average energy consumption.
Keywords: auxiliary nodes, cluster, load balance, routing algorithm, wireless sensor network
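A toy sketch of the cluster-head choice described above, assuming invented energy figures: within each cluster, the node scoring best on residual energy relative to its average consumption is elected head.

```python
# Hypothetical per-node energy bookkeeping; values are illustrative only.
clusters = {
    "c1": {"n1": {"residual": 4.2, "avg_draw": 0.10},
           "n2": {"residual": 3.1, "avg_draw": 0.05},
           "n3": {"residual": 4.0, "avg_draw": 0.20}},
}

def pick_cluster_head(members):
    # Score = residual energy / average consumption, so long-lived,
    # frugal nodes are favoured as cluster heads.
    return max(members,
               key=lambda n: members[n]["residual"] / members[n]["avg_draw"])

for cid, members in clusters.items():
    print(cid, "->", pick_cluster_head(members))
```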
Procedia PDF Downloads 280
3191 An Explanatory Study Approach Using Artificial Intelligence to Forecast Solar Energy Outcome
Authors: Agada N. Ihuoma, Nagata Yasunori
Abstract:
Artificial intelligence (AI) techniques play a crucial role in predicting the expected energy outcome and in the performance analysis, modeling, and control of renewable energy. Renewable energy is becoming more popular for economic and environmental reasons. In the face of growing global energy consumption and the increasing depletion of most fossil fuels, the world faces the challenge of meeting ever-increasing energy demands. Therefore, incorporating artificial intelligence to predict solar radiation outcomes from intermittent sunlight is crucial to enable a balance between the supply and demand of energy on loads, to predict the performance and outcome of solar energy, to enhance production planning and energy management, and to ensure the proper sizing of parameters when generating clean energy. However, one of the major problems in forecasting lies in the algorithms used to control, model, and predict the performance of the energy systems, which are complicated and involve large computing power, differential equations, and time series. Also, unreliable (poor quality) solar radiation data for a geographical location, as well as insufficiently long series, can be a bottleneck. To overcome these problems, this study employs Anaconda Navigator (Jupyter Notebook) for machine learning, which can combine large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features to predict the performance and outcome of solar energy. This, in turn, enables the balancing of supply and demand on loads and enhances production planning and energy management.
Keywords: artificial intelligence, backward elimination, linear regression, solar energy
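A small sketch of the backward-elimination step named in the keywords, under the assumption of synthetic weather features: an OLS model of solar output is refitted while the least significant predictor is repeatedly dropped.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 4))        # stand-ins: irradiance, temp, humidity, wind
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

cols = ["irradiance", "temperature", "humidity", "wind"]
Xd = sm.add_constant(X)
keep = list(range(Xd.shape[1]))    # column indices still in the model

while len(keep) > 1:
    model = sm.OLS(y, Xd[:, keep]).fit()
    pvals = model.pvalues[1:]      # skip the intercept
    worst = int(np.argmax(pvals))
    if pvals[worst] < 0.05:        # all remaining predictors significant
        break
    dropped = keep.pop(worst + 1)  # +1 realigns past the intercept column
    print("dropped:", cols[dropped - 1])

print("kept:", [cols[i - 1] for i in keep[1:]])
```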
Procedia PDF Downloads 162
3190 Multiscale Hub: An Open-Source Framework for Practical Atomistic-To-Continuum Coupling
Authors: Masoud Safdari, Jacob Fish
Abstract:
Despite the vast amount of existing theoretical knowledge, the implementation of a universal multiscale modeling, analysis, and simulation software framework remains challenging. Existing multiscale software and solutions are often domain-specific and closed-source, and they demand a high level of experience and skill in both multiscale analysis and programming. Furthermore, tools currently available for Atomistic-to-Continuum (AtC) multiscaling are developed under assumptions such as users' access to high-performance computing facilities. These issues, among many other challenges, have reduced the adoption of multiscale methods in academia and especially in industry. In the current work, we introduce Multiscale Hub (MsHub), an effort towards making AtC more accessible through cloud services. As a joint effort between academia and industry, MsHub provides a universal web-enabled framework for practical multiscaling. Developed on top of the widely acclaimed scientific programming language Python, the package currently provides an open-source, comprehensive, easy-to-use framework for AtC coupling. MsHub offers an easy-to-use interface to prominent molecular dynamics and multiphysics continuum mechanics packages such as LAMMPS and MFEM (a free, lightweight, scalable C++ library for finite element methods). In this work, we first report on the design philosophy of MsHub and the challenges and issues faced in its implementation. MsHub takes advantage of a comprehensive set of tools and algorithms developed for AtC that can be used for a variety of governing physics. We then briefly report key AtC algorithms implemented in MsHub. Finally, we conclude with a few examples illustrating the capabilities of the package and its future directions.
Keywords: atomistic, continuum, coupling, multiscale
Procedia PDF Downloads 179
3189 Using Real Truck Tours Feedback for Address Geocoding Correction
Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle
Abstract:
When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent by the trucks in their tours, and on maximizing the number of visited customers. They assume that the upstream real data provided to carry out the optimization of a transporter's tours is free from errors, such as the customers' real constraints, the customers' addresses, and their GPS coordinates. However, in real transport situations, upstream data is often of poor quality because of address geocoding errors and the irrelevance of addresses received from EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates. Also, even with a good geocoder, an inaccurate address can lead to a bad geocoding: for instance, when the geocoder has trouble geocoding an address, it returns the coordinates of the city center. A further obvious geocoding issue is that the maps used by geocoders are not regularly updated, so new buildings may not exist on the maps until the next update. Trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is not really useful and will lead to bad and incoherent solution tours, because the customer locations used for the optimization are very different from their real positions. Our work is supported by a logistics software editor, Tedies, and a transport company, Upsilon. We work with Upsilon's truck route data to carry out our experiments. These trucks are equipped with TOMTOM GPS units that continuously record their tour data (positions, speeds, tachograph information, etc.), and we retrieve these data to extract the real truck routes to work with. The aim of this work is to use the driver's experience and the feedback of the real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct badly geocoded ones. In this way, when a vehicle makes its tour, it might have trouble finding a given customer's address at most once; in other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: we are able to automatically correct an average of 70% of the GPS coordinates of a tour's addresses. The remaining GPS coordinates are corrected manually, with the system giving the user indications to help with the correction. This study shows the importance of taking into account the feedback of the trucks to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization. Unfortunately, address writing errors are very frequent. This feedback is naturally and usually taken into account by transporters (by asking drivers, calling customers…) to learn about their tours and bring corrections to upcoming tours. Hence, we develop a method to do a large part of this automatically.
Keywords: driver experience feedback, geocoding correction, real truck tours
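For illustration, a minimal sketch of the validation idea, with invented coordinates and a hypothetical 200 m threshold: the geocoded position of a customer is compared against where the truck actually stopped for that delivery.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in metres.
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

geocoded = (47.3220, 5.0415)    # coordinates returned by the geocoder (invented)
truck_stop = (47.3251, 5.0483)  # stop position recorded by the truck's GPS

if haversine_m(*geocoded, *truck_stop) > 200:
    print("geocoding suspect -> replace with observed stop", truck_stop)
else:
    print("geocoding validated by the tour feedback")
```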
Procedia PDF Downloads 677
3188 Artificial Law: Legal AI Systems and the Need to Satisfy Principles of Justice, Equality and the Protection of Human Rights
Authors: Begum Koru, Isik Aybay, Demet Celik Ulusoy
Abstract:
The discipline of law is quite complex and has its own terminology. Apart from written legal rules, there is also living law, which refers to legal practice. Basic legal rules aim at the happiness of individuals in social life and have different characteristics in different branches, such as public or private law. On the other hand, law is a national phenomenon: the law of one nation and the legal system applied on the territory of another nation may be completely different, and people who are experts in a particular field of law in one country may have insufficient expertise in the law of another country. Today, in addition to the local nature of law, international and even supranational legal rules are applied in order to protect basic human values and ensure the protection of human rights around the world. Systems that offer algorithmic solutions to legal problems using artificial intelligence (AI) tools may well serve to produce very meaningful results in terms of human rights. However, the algorithms to be used should not be developed only by computer experts; they also need the contribution of people who are familiar with the law, values, judicial decisions, and even the social and political culture of the society to which they will provide solutions. Otherwise, even if the algorithm works perfectly, it may not be compatible with the values of the society in which it is applied. The latest developments involving the use of AI techniques in legal systems indicate that artificial law will emerge as a new field in the discipline of law. More AI systems are already being applied in the field of law, with examples such as prediction of judicial decisions, text summarization, decision support systems, and classification of documents. Algorithms for legal systems employing AI tools, especially in the fields of prediction of judicial decisions and decision support systems, have the capacity to create automatic decisions instead of judges. When the judge is removed from this equation, artificial-intelligence-made law, created by an intelligent algorithm on its own, emerges, whether the domain is national or international law. In this work, the aim is to make a general analysis of this new topic. Such an analysis needs both a literature survey and the perspectives of computer experts and lawyers. In some societies, the use of prediction or decision support systems may be useful to integrate international human rights safeguards; in this case, artificial law can serve to produce more comprehensive and human-rights-protective results than written or living law. In non-democratic countries, it may even be thought that direct decisions and artificial-intelligence-made law would be more protective than a decision 'support' system. Since the values of law are directed towards 'human happiness or well-being', AI algorithms should always be capable of serving this purpose and be based on the rule of law, the principle of justice and equality, and the protection of human rights.
Keywords: AI and law, artificial law, protection of human rights, AI tools for legal systems
Procedia PDF Downloads 79
3187 Finite Element Analysis of Connecting Rod
Authors: Mohammed Mohsin Ali H., Mohamed Haneef
Abstract:
The connecting rod transmits the piston load to the crank, causing the latter to turn, thus converting the reciprocating motion of the piston into the rotary motion of the crankshaft. Connecting rods are subjected to forces generated by mass and fuel combustion. This study investigates and compares the fatigue behavior of forged steel, powder forged, and ASTM A514 cold-quenched steel connecting rods. The objective is to suggest a new material with reduced weight and cost and increased fatigue life. This has entailed performing a detailed load analysis. Therefore, this study has dealt with two subjects: first, dynamic load and stress analysis of the connecting rod, and second, optimization for material, weight, and cost. In the first part of the study, the loads acting on the connecting rod as a function of time were obtained. Based on the observations of the dynamic FEA, the static FEA, and the load analysis results, the load for the optimization study was selected. It is the conclusion of this study that the connecting rod can be designed and optimized under a load range comprising a tensile load and a compressive load. The tensile load corresponds to a 360° crank angle at the maximum engine speed; the compressive load corresponds to the peak gas pressure. Furthermore, the existing connecting rod can be replaced with a new connecting rod made of ASTM A514 cold-quenched steel that is 12% lighter and 28% cheaper.
Keywords: connecting rod, ASTM A514 cold-quenched material, static analysis, fatigue analysis, stress life approach
Procedia PDF Downloads 304
3186 Signs, Signals and Syndromes: Algorithmic Surveillance and Global Health Security in the 21st Century
Authors: Stephen L. Roberts
Abstract:
This article offers a critical analysis of the rise of syndromic surveillance systems for the advanced detection of pandemic threats within contemporary global health security frameworks. The article traces the iterative evolution and ascendancy of three such novel syndromic surveillance systems for the strengthening of health security initiatives over the past two decades: 1) The Program for Monitoring Emerging Diseases (ProMED-mail); 2) The Global Public Health Intelligence Network (GPHIN); and 3) HealthMap. This article demonstrates how each newly introduced syndromic surveillance system has become increasingly oriented towards the integration of digital algorithms into core surveillance capacities to continually harness and forecast upon infinitely generating sets of digital, open-source data, potentially indicative of forthcoming pandemic threats. This article argues that the increased centrality of the algorithm within these next-generation syndromic surveillance systems produces a new and distinct form of infectious disease surveillance for the governing of emergent pathogenic contingencies. Conceptually, the article also shows how the rise of this algorithmic mode of infectious disease surveillance produces divergences in the governmental rationalities of global health security, leading to the rise of an algorithmic governmentality within contemporary contexts of Big Data and these surveillance systems. Empirically, this article demonstrates how this new form of algorithmic infectious disease surveillance has been rapidly integrated into diplomatic, legal, and political frameworks to strengthen the practice of global health security – producing subtle, yet distinct shifts in the outbreak notification and reporting transparency of states, increasingly scrutinized by the algorithmic gaze of syndromic surveillance.
Keywords: algorithms, global health, pandemic, surveillance
Procedia PDF Downloads 191
3185 Optimization of Culture Conditions of Paecilomyces tenuipes, Entomopathogenic Fungi Inoculated into the Silkworm Larva, Bombyx mori
Authors: Sunghee Nam
Abstract:
Entomopathogenic fungi include Cordyceps species isolated from dead silkworms and cicadas. Fungi on cicadas were described in old Chinese medicinal books, and from ancient times, vegetable wasps and plant worms were widely known to contain active substances and have been studied for pharmacological use. Among the many fungi belonging to the genus Cordyceps, Cordyceps sinensis has been demonstrated to yield natural products possessing various biological activities and many bioactive components. It is commonly used to replenish the kidney and soothe the lung, and for the treatment of fatigue. Due to their commercial and economic importance, the demand for Cordyceps has rapidly increased. However, the supply of Cordyceps specimens could not meet the increasing demand because of its sole dependence on field collection and habitat destruction: it is difficult to obtain many insect hosts in nature, and the edibility of the host insect needs to be verified from a pharmacological standpoint. Recently, this setback was overcome when P. tenuipes was cultivated on a large scale using the silkworm as host. Pharmacological effects of P. tenuipes cultured on silkworm, such as strengthening immune function, anti-fatigue and anti-tumor activity, and controlling the liver, have been demonstrated, and the products are widely commercialized. In this study, we attempted to establish a method for the stable inoculation of P. tenuipes on silkworm hosts and the optimal conditions for synnemata formation. To determine the optimum culturing conditions, temperature and light conditions were varied. The length and number of synnemata were highest at a temperature of 25 ℃ and an illumination of 100~300 lux. On average, the synnemata of wild P. tenuipes measure 70 ㎜ in length and 20 in number; those of the cultured strain were relatively shorter and more numerous. The number of synnemata may have increased as a result of inoculating the host with highly concentrated conidia, while the length may have decreased due to limited nutrition per individual. It is notable that changes in light illumination cause morphological variations in the synnemata. However, regulation of only light and temperature could not produce stromata with perithecia, asci, and ascospores.
Keywords: optimization of culture conditions, Paecilomyces tenuipes, entomopathogenic fungi, silkworm larva, Bombyx mori
Procedia PDF Downloads 254
3184 Detecting Geographically Dispersed Overlay Communities Using Community Networks
Authors: Madhushi Bandara, Dharshana Kasthurirathna, Danaja Maldeniya, Mahendra Piraveenan
Abstract:
Community detection is an extremely useful technique for understanding the structure and function of a social network. The Louvain algorithm, which is based on the Newman-Girvan modularity optimization technique, is extensively used as a computationally efficient method to extract the communities in social networks. It has been suggested that nodes in close geographical proximity have a higher tendency to form communities. Variants of the Newman-Girvan modularity measure, such as dist-modularity, try to normalize out the effect of geographical proximity to extract geographically dispersed communities, at the expense of losing the information about the geographically proximate communities. In this work, we propose a method to extract geographically dispersed communities while preserving the information about the geographically proximate communities, by analyzing the 'community network', in which the centroids of communities are considered as network nodes. We suggest that the inter-community link strengths, normalized over the community sizes, may be used to identify and extract the 'overlay communities'. The overlay communities would have relatively higher link strengths despite being relatively far apart in their spatial distribution. We apply this method to the Gowalla online social network, which contains the geographical signatures of its users, and identify the overlay communities within it.
Keywords: social networks, community detection, modularity optimization, geographically dispersed communities
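A sketch of the 'community network' construction under stated assumptions: communities are detected with Louvain (here via networkx's built-in implementation, available in networkx >= 2.8, on a toy graph), and communities are linked with strengths normalised by community sizes; the geographic centroids themselves are abstracted away in this sketch.

```python
import itertools
import networkx as nx

G = nx.karate_club_graph()
comms = nx.community.louvain_communities(G, seed=7)

# Normalised inter-community strength: number of edges between the two
# communities divided by the product of their sizes.
H = nx.Graph()
H.add_nodes_from(range(len(comms)))
for i, j in itertools.combinations(range(len(comms)), 2):
    between = sum(1 for u, v in G.edges
                  if (u in comms[i] and v in comms[j])
                  or (u in comms[j] and v in comms[i]))
    if between:
        H.add_edge(i, j, weight=between / (len(comms[i]) * len(comms[j])))

# Pairs with unusually high normalised strength are candidate 'overlay
# communities' even if their members are geographically far apart.
print(sorted(H.edges(data="weight"), key=lambda e: -e[2]))
```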
Procedia PDF Downloads 240
3183 Optimization of Two Quality Characteristics in Injection Molding Processes via Taguchi Methodology
Authors: Joseph C. Chen, Venkata Karthik Jakka
Abstract:
The main objective of this research is to optimize tensile strength and dimensional accuracy in injection molding processes using Taguchi parameter design. An L16 orthogonal array (OA) is used in the Taguchi experimental design, with five control factors at four levels each and with vibration as a non-controllable factor. A total of 32 experiments were designed to obtain the optimal parameter settings for the process. The optimal parameters identified for shrinkage are shot volume, 1.7 cubic inches (A4); mold temperature, 130 ºF (B1); hold pressure, 3200 psi (C4); injection speed, 0.61 inch³/sec (D2); and hold time, 14 seconds (E2). The optimal parameters identified for tensile strength are shot volume, 1.7 cubic inches (A4); mold temperature, 160 ºF (B4); hold pressure, 3100 psi (C3); injection speed, 0.69 inch³/sec (D4); and hold time, 14 seconds (E2). The Taguchi-based optimization framework was systematically and successfully implemented to obtain an adjusted optimal setting in this research. The mean shrinkage of the confirmation runs is 0.0031%, and the tensile strength value was found to be 3148.1 psi. Both outcomes are far better than the baseline, and defects have been further reduced in the injection molding processes.
Keywords: injection molding processes, Taguchi parameter design, tensile strength, high-density polyethylene (HDPE)
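For illustration, the standard Taguchi signal-to-noise ratios for the two quality characteristics, computed on made-up replicate measurements (shrinkage is smaller-the-better, tensile strength larger-the-better); these numbers are placeholders, not the study's data.

```python
import numpy as np

def sn_smaller_the_better(y):
    # S/N = -10 log10(mean(y^2)); higher is better for a defect like shrinkage.
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y**2))

def sn_larger_the_better(y):
    # S/N = -10 log10(mean(1/y^2)); higher is better for tensile strength.
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y**2))

shrinkage_reps = [0.0032, 0.0030, 0.0031]     # % shrinkage per replicate
tensile_reps = [3120.0, 3155.0, 3148.0]       # psi per replicate

print(f"S/N shrinkage: {sn_smaller_the_better(shrinkage_reps):.2f} dB")
print(f"S/N tensile:   {sn_larger_the_better(tensile_reps):.2f} dB")
```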
Procedia PDF Downloads 201
3182 Condition Optimization for Trypsin and Chymotrypsin Activities in Economic Animals
Authors: Mallika Supa-Aksorn, Buaream Maneewan, Jiraporn Rojtinnakorn
Abstract:
For animals, trypsin and chymotrypsin are the two proteases that play an important role in protein digestion and are involved in growth rate. In many animals, these two enzymes are used as growth parameters in feeding studies. Assaying an enzyme at its optimal condition is essential for accurate activity determination, yet there are few reports on such conditions for trypsin and chymotrypsin. Therefore, in this study, the optimization of pH and temperature for trypsin (T) and chymotrypsin (C) in economic species, i.e., Nile tilapia (Oreochromis niloticus), sand goby (Oxyeleotoris marmoratus), giant freshwater prawn (Macrobachium rosenberchii), and native chicken (Gallus gallus), was investigated. Each enzyme of each species was assayed for its specific activity with pH varied in the range of 2-12 and temperature in the range of 30-80 °C. It was revealed that, for Nile tilapia, T had its optimal condition at pH 9 and 50-80 °C, whereas C had its optimal condition at pH 8 and 60 °C. For sand goby, T had its optimal condition at pH 7 and 50 °C, while C had its optimal condition at pH 11 and 70-75 °C. For juvenile freshwater prawn, T had its optimal condition at pH 10-11 and 60-65 °C, and C at pH 8 and 70 °C. For starter native chicken, T had its optimal condition at pH 7 and 70 °C, whereas C had its optimal condition at pH 8 and 60 °C. This information on optimal conditions will be highly valuable for actual measurements of T and C activities, benefiting growth and feed analysis.
Keywords: trypsin, chymotrypsin, Oreochromis niloticus, Oxyeleotoris marmoratus, Macrobachium rosenberchii, Gallus gallus
Procedia PDF Downloads 261
3181 Through Additive Manufacturing. A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
The recent evolution of innovation processes and the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity that describe contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of Additive Manufacturing, but also of IoT and AI technologies, continuously puts us in front of new paradigms regarding design as a social activity. Taken together, from the point of view of application, these technologies raise a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of a provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. This contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, with a focus on specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory described thus becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature that defines all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The result envisaged indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with design culture.
Keywords: decision making, design heuristics, product design, product design process, design paradigms
Procedia PDF Downloads 121
3180 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms
Authors: Ramin Mansouri
Abstract:
Due to the contrast between rivers' discharge regimes and water demands, one of the best ways to use water resources is to regulate the natural flow of rivers and supply water needs by constructing dams. In the optimal utilization of reservoirs, the consideration of multiple important goals together at the same time is of very high importance. To analyze this method, statistical data of the Bakhtiari and Roudbar dams over 46 years (1955 to 2001) are used. Initially, an appropriate objective function was specified, and the rule curve was developed using the DE algorithm. Then, the operation policy using rule curves was compared to the standard operation policy. The proposed method distributed the shortage over the whole year, and the lowest damage was inflicted on the system. The standard deviation of the monthly shortfall in each year was lower with the proposed algorithm than with the other two methods. The results show that median values for the F and Cr coefficients provide the optimum situation and prevent the DE algorithm from being trapped in a local optimum. The best values are 0.6 and 0.5 for the F and Cr coefficients, respectively. After finding the best combination of the F and CR coefficient values, the algorithm was examined for independent populations. For this purpose, populations of 4, 25, 50, 100, 500, and 1000 members were studied over two generation settings (G = 50 and 100). The results indicate that a generation number of 200 is suitable for optimization. The increase in runtime with population size follows an almost linear trend, which indicates the effect of the population on the algorithm's runtime; hence, specifying a suitable population size to obtain optimal results is very important. The standard operation policy had a better reversibility percentage but inflicted severe vulnerability on the system. The results obtained in years of low rainfall were very good compared to the other methods.
Keywords: reservoirs, differential evolution, dam, optimal operation
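A compact DE/rand/1/bin sketch using the reported settings F = 0.6 and CR = 0.5; the objective below is a placeholder, whereas in the study it would score a candidate rule curve against shortage and damage criteria.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    # Placeholder release-rule cost; 12 variables stand in for monthly releases.
    return np.sum((x - 0.7)**2)

def differential_evolution(dim=12, pop=50, gens=200, F=0.6, CR=0.5):
    P = rng.uniform(0, 1, size=(pop, dim))
    cost = np.array([objective(p) for p in P])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = P[idx]
            mutant = np.clip(a + F * (b - c), 0, 1)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True    # guarantee one gene crosses over
            trial = np.where(cross, mutant, P[i])
            tc = objective(trial)
            if tc <= cost[i]:                  # greedy selection
                P[i], cost[i] = trial, tc
    best = np.argmin(cost)
    return P[best], cost[best]

x_best, c_best = differential_evolution()
print(c_best)
```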
Procedia PDF Downloads 81
3179 Solar Building Design Using GaAs PV Cells for Optimum Energy Consumption
Authors: Hadis Pouyafar, D. Matin Alaghmandan
Abstract:
Gallium arsenide (GaAs) solar cells are widely used in applications like spacecraft and satellites because they have a high absorption coefficient and efficiency and can withstand high-energy particles such as electrons and protons. With the energy crisis, there is a growing need for efficient and cost-effective solar cells. GaAs cells, with their 46% efficiency compared to silicon cells' 23%, can be utilized in buildings to achieve nearly zero emissions; this way, more of the incident irradiation can be converted into electricity. The III-V semiconductors used in these cells offer high performance compared to the other technologies available. However, despite these advantages, Si cells dominate the market due to their lower prices. In our study, we took the approach of using software from the start to gather all information; by doing so, we aimed to design the optimal building that harnesses the full potential of solar energy. Our modeling results reveal a future for GaAs cells. We utilized the Grasshopper plugin for modeling and optimization purposes, and to assess radiation, weather data, solar energy levels, and other factors, we relied on the Ladybug and Honeybee plugins. We have shown that silicon solar cells may not always be the best choice for meeting electricity demands, particularly when higher power output is required. Therefore, with respect to power consumption and the available surface area for photovoltaic (PV) installation, it may be necessary to consider more efficient solar cell options, like GaAs solar cells. By considering the building requirements and utilizing GaAs technology, we were able to optimize the PV surface area.
Keywords: gallium arsenide (GaAs), optimization, sustainable building, GaAs solar cells
Procedia PDF Downloads 101
3178 Recent Advances in Data Warehouse
Authors: Fahad Hanash Alzahrani
Abstract:
This paper describes some recent advances in the quickly developing area of data storage and processing based on data warehouses and data mining techniques, covering the software, hardware, data mining algorithms, and visualisation techniques that share common features across the specific problems and tasks of their implementation.
Keywords: data warehouse, data mining, knowledge discovery in databases, on-line analytical processing
Procedia PDF Downloads 406
3177 Methodology: A Review in Modelling and Predictability of Embankment in Soft Ground
Authors: Bhim Kumar Dahal
Abstract:
Transportation network development in developing countries is proceeding at a rapid pace. The majority of these networks consist of railways and expressways, which pass through diverse topography, landforms, and geological conditions despite the avoidance principle applied during route selection. The construction of such networks demands many low to high embankments, which require improvement of the foundation soil. This paper focuses on the various advanced ground improvement techniques used to improve soft soil, the modelling approaches, and their predictability for embankment construction. The ground improvement techniques can be broadly classified into three groups, i.e., the densification group, the drainage and consolidation group, and the reinforcement group, which are discussed with some case studies. Various methods have been used to model embankments, from simple one-dimensional to complex three-dimensional models, using a variety of constitutive models. However, the reliability of the predictions is not found to improve systematically with the level of sophistication, and sometimes the predictions deviate by more than 60% from the monitored values despite the same level of erudition. This deviation is mainly due to the selection of the constitutive model, assumptions made during different stages, deviations in the selection of model parameters, and simplifications during the physical modelling of the ground conditions. The deviation can be reduced by using optimization processes, optimization tools, and sensitivity analysis of the model parameters, which guide the selection of appropriate model parameters.
Keywords: cement, improvement, physical properties, strength
Procedia PDF Downloads 178
3176 Refining Scheme Using Amphibious Epistemologies
Authors: David Blaine, George Raschbaum
Abstract:
The evaluation of DHCP has synthesized SCSI disks, and current trends suggest that the exploration of e-business that would allow for further study into robots will soon emerge. Given the current status of embedded algorithms, hackers worldwide obviously desire the exploration of replication, which embodies the confusing principles of programming languages. In our research, we concentrate our efforts on arguing that erasure coding can be made "fuzzy", encrypted, and game-theoretic.
Keywords: SCSI disks, robot, algorithm, hacking, programming language
Procedia PDF Downloads 434
3175 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 96% with the proposed method.
Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
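As a small, hedged baseline in the spirit of the PCA-based detectors the paper compares against (not the OCCNN2 method itself): fit PCA on healthy-condition frequency vectors and flag points with large reconstruction error. The frequencies below are synthetic, only loosely inspired by typical Z-24 values.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
healthy = rng.normal([3.9, 5.0, 9.9, 10.3], 0.02, size=(500, 4))  # Hz
damaged = rng.normal([3.7, 4.9, 9.6, 10.1], 0.02, size=(50, 4))

pca = PCA(n_components=2).fit(healthy)

def reconstruction_error(X):
    # Distance between each point and its projection onto the PCA subspace.
    Z = pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(X - Z, axis=1)

# Threshold on the 99th percentile of healthy-condition errors.
threshold = np.percentile(reconstruction_error(healthy), 99)
flags = reconstruction_error(damaged) > threshold
print(f"flagged {flags.mean():.0%} of damaged samples as anomalous")
```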
Procedia PDF Downloads 128
3174 Predictive Maintenance of Industrial Shredders: Efficient Operation through Real-Time Monitoring Using Statistical Machine Learning
Authors: Federico Pittino, Thomas Arnold
Abstract:
The shredding of waste materials is a key step in the recycling process towards the circular economy. Industrial shredders for waste processing operate in very harsh conditions, leading to the need for frequent maintenance of critical components. Maintenance optimization is particularly important also to increase the machine's efficiency, thereby reducing operational costs. In this work, a monitoring system has been developed and deployed on an industrial shredder located at a waste recycling plant in Austria. The machine has been monitored for one year, and methods for predictive maintenance have been developed for two key components: the cutting knives and the drive belt. The large amount of collected data is leveraged by statistical machine learning techniques, thereby not requiring very detailed knowledge of the machine or its live operating conditions. The results show that, despite the wide range of operating conditions, a reliable estimate of the optimal time for maintenance can be derived. Moreover, the trade-off between the cost of maintenance and the increase in power consumption due to the wear state of the monitored components is investigated. This work proves the benefits of a real-time monitoring system for the efficient operation of industrial shredders.
Keywords: predictive maintenance, circular economy, industrial shredder, cost optimization, statistical machine learning
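A toy illustration of the trade-off investigated above, with invented prices and wear rates: maintain once the accumulated extra energy cost of a worn component exceeds the cost of the maintenance action.

```python
# All figures below are illustrative assumptions, not plant data.
wear_extra_kw = 0.8          # extra power draw per 100 operating hours of wear
price_per_kwh = 0.25         # EUR per kWh
maintenance_cost = 1_500.0   # EUR per knife change

def extra_energy_cost(hours):
    # Wear grows linearly, so extra energy is the integral of a ramp: 0.5*k*t^2.
    k = wear_extra_kw / 100.0
    return 0.5 * k * hours**2 * price_per_kwh

hours = next(h for h in range(0, 5000, 10)
             if extra_energy_cost(h) >= maintenance_cost)
print(f"break-even after about {hours} operating hours")
```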
Procedia PDF Downloads 131
3173 Procedure Model for Data-Driven Decision Support Regarding the Integration of Renewable Energies into Industrial Energy Management
Authors: M. Graus, K. Westhoff, X. Xu
Abstract:
Climate change causes change in all aspects of society. While the expansion of renewable energies proceeds, industry has not been convinced by general studies of the potential of demand-side management to reinforce smart grid considerations in its operational business. In this article, a procedure model for case-specific, data-driven decision support for industrial energy management, based on a holistic data analytics approach, is presented. The model is executed on the example of the strategic decision problem of integrating renewable energies into industrial energy management. This question arises from considerations of changing the electricity contract model from a standard rate to volatile energy prices corresponding to the energy spot market, which is increasingly affected by renewable energies. The procedure model corresponds to a data analytics process consisting of a data model, analysis, simulation, and optimization step. This procedure helps to quantify the potential of sustainable production concepts based on the data from a factory. The model is validated with data from a printer, in analogy to a simple production machine. The overall goal is to establish smart grid principles for industry via the transformation from knowledge-driven to data-driven decisions within manufacturing companies.
Keywords: data analytics, green production, industrial energy management, optimization, renewable energies, simulation
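For illustration, a minimal sketch of the optimization step under a volatile-price contract, assuming an interruptible production job and invented hourly prices: the job is simply scheduled into the cheapest hours of a day-ahead spot curve.

```python
# Hypothetical day-ahead spot prices in EUR/MWh for 24 hours.
prices = [42, 38, 35, 33, 31, 30, 34, 45, 58, 62, 60, 55,
          50, 48, 47, 49, 54, 66, 70, 64, 55, 50, 46, 44]
job_hours = 6  # the flexible job needs 6 hours of machine time

# Pick the cheapest hours (assumes the job can be interrupted freely).
cheapest = sorted(range(24), key=lambda h: prices[h])[:job_hours]
cost = sum(prices[h] for h in cheapest)
print(f"run in hours {sorted(cheapest)}; total price exposure {cost} EUR/MWh")
```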
Procedia PDF Downloads 439