Search results for: machine failures
1655 Integrating Inference, Simulation and Deduction in Molecular Domain Analysis and Synthesis with Peculiar Attention to Drug Discovery
Authors: Diego Liberati
Abstract:
Standard molecular modeling is traditionally done by solving Schrödinger equations with the help of powerful tools that manage them atom by atom, often requiring high-performance computing. Here, a full portfolio of new tools, joining statistical inference in the so-called eXplainable Artificial Intelligence framework (in the form of machine learning of understandable rules) with the more traditional modeling, simulation, and control theory of mixed logic-dynamic hybrid processes, is offered as a quite general-purpose approach, exemplified on a popular set of chemical physics problems.
Keywords: understandable rules ML, k-means, PCA, PieceWise Affine Auto Regression with eXogenous input
Procedia PDF Downloads 29
1654 Roboweeder: A Robotic Weeds Killer Using Electromagnetic Waves
Authors: Yahoel Van Essen, Gordon Ho, Brett Russell, Hans-Georg Worms, Xiao Lin Long, Edward David Cooper, Avner Bachar
Abstract:
Weeds reduce farm and forest productivity, invade crops, smother pastures, and some can harm livestock. Farmers need to spend a significant amount of money to control weeds by biological, chemical, cultural, and physical methods. To address the global agricultural labor shortage and remove poisonous chemicals, a fully autonomous, eco-friendly, and sustainable weeding technology is developed. This takes the form of a weeding robot, ‘Roboweeder’. Roboweeder includes a four-wheel-drive self-driving vehicle, a 4-DOF robotic arm mounted on top of the vehicle, an electromagnetic wave generator (magnetron) mounted on the “wrist” of the robotic arm, 48V battery packs, and a control/communication system. Cameras are mounted on the front and two sides of the vehicle. Using image processing and recognition, different types of weeds are distinguished and detected before being eliminated. The electromagnetic wave technology is applied to heat individual weeds and clusters dielectrically, causing them to wilt and die. The 4-DOF robotic arm was modeled mathematically based on its structure/mechanics, each joint’s load, brushless DC motor and worm gear characteristics, forward kinematics, and inverse kinematics. A Proportional-Integral-Derivative control algorithm is used to control the robotic arm’s motion so that the waveguide aperture points at the detected weeds. GPS and machine vision are used to traverse the farm and avoid obstacles without the need for supervision. A Roboweeder prototype has been built. Multiple test trials show that Roboweeder is able to detect, point at, and kill the pre-defined weeds successfully, although further improvements are needed, such as reducing the weed-killing time and developing a new waveguide with a smaller aperture to avoid killing surrounding crops.
This technology replaces tedious, time-consuming, and expensive weeding processes, and allows farmers to grow more, go organic, and eliminate operational headaches. A patent on this technology is pending.
Keywords: autonomous navigation, machine vision, precision heating, sustainable and eco-friendly
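The Proportional-Integral-Derivative loop used for the arm control can be sketched in a few lines. This is an illustrative example only: the gains, time step, and toy first-order plant below are invented and are not the Roboweeder controller.

```python
# Minimal discrete PID sketch (illustrative; gains are made up).
# Control law: u = Kp*e + Ki*integral(e) + Kd*de/dt.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_err": 0.0}

    def step(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt
        deriv = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv

    return step

# Drive a toy first-order plant toward a 1.0 rad joint angle.
pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(2000):                 # simulate 20 s at 10 ms steps
    u = pid(1.0, angle)
    angle += u * 0.01                 # toy plant: angle rate follows u
print(round(angle, 1))                # settles near the 1.0 rad setpoint
```

The integral term removes steady-state error; the derivative term damps the response so the arm does not overshoot the target weed.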
Procedia PDF Downloads 252
1653 Material Fracture Dynamic of Vertical Axis Wind Turbine Blade
Authors: Samir Lecheb, Ahmed Chellil, Hamza Mechakra, Brahim Safi, Houcine Kebir
Abstract:
In this paper we studied the fracture and dynamic behavior of a vertical axis wind turbine (VAWT) blade. The VAWT is a machine with a long history; it has many properties, structural features, advantages, and components that enable it to produce electricity. We modeled the blade design and then imported it into Abaqus software to analyze the mode shapes, frequencies, stress, strain, displacement, and stress intensity factor (SIF); after comparison, we chose the ideal material. Finally, the CTS test of glass epoxy reinforced polymer plates was performed to obtain the material fracture toughness Kc.
Keywords: blade, crack, frequency, material, SIF
Procedia PDF Downloads 550
1652 Comparative Study between Direct Torque Control and Sliding Mode Control of Sensorless Induction Machine
Authors: Fouad Berrabah, Saad Salah, Zaamouche Fares
Abstract:
In this paper, Direct Torque Control (DTC) and Sliding Mode Control for the induction motor are presented and compared. The performance of the two control schemes is evaluated in terms of torque and current ripple, transient response to variations of torque and speed, robustness, and trajectory tracking. In order to identify the more suitable solution for a given application, both techniques are analyzed mathematically, simulation results are compared, and their advantages and drawbacks are discussed.
Keywords: induction motor, DTC-MRAS control, sliding mode control, robustness, trajectory tracking
Procedia PDF Downloads 597
1651 FloodNet: Classification for Post-Flood Scenes with a High-Resolution Aerial Imagery Dataset
Authors: Molakala Mourya Vardhan Reddy, Kandimala Revanth, Koduru Sumanth, Beena B. M.
Abstract:
Emergency response and recovery operations are severely hampered by natural catastrophes, especially floods. Understanding post-flood scenarios is essential to disaster management because it facilitates quick evaluation and decision-making. To this end, we introduce FloodNet, a brand-new high-resolution aerial picture collection created especially for comprehending post-flood scenes. A varied collection of excellent aerial photos taken during and after flood occurrences makes up FloodNet, which offers comprehensive representations of flooded landscapes, damaged infrastructure, and changed topographies. The dataset provides a thorough resource for training and assessing computer vision models designed to handle the complexity of post-flood scenarios, including a variety of environmental conditions and geographic regions. Pixel-level semantic segmentation masks are used to label the pictures in FloodNet, allowing for a more detailed examination of flood-related characteristics, including debris, water bodies, and damaged structures. Furthermore, temporal and positional metadata improve the dataset's usefulness for longitudinal research and spatiotemporal analysis. For activities like flood extent mapping, damage assessment, and infrastructure recovery projection, we provide baseline standards and evaluation metrics to promote research and development in the field of post-flood scene comprehension. By integrating FloodNet into machine learning pipelines, it will be easier to create reliable algorithms that help politicians, urban planners, and first responders make choices both before and after floods. The goal of the FloodNet dataset is to support advances in computer vision, remote sensing, and disaster response technologies by providing a useful resource for researchers.
FloodNet helps to create creative solutions for boosting communities' resilience in the face of natural catastrophes by tackling the particular problems presented by post-flood situations.
Keywords: image classification, segmentation, computer vision, natural disaster, unmanned aerial vehicle (UAV), machine learning
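A common evaluation metric for the pixel-level segmentation masks described above is per-class intersection-over-union (IoU). The sketch below uses tiny invented masks for illustration; it is not FloodNet's label format or official metric.

```python
# Illustrative per-class IoU for semantic segmentation masks.
# The 3x3 "masks" below are invented, not FloodNet data.

def iou(pred, truth, cls):
    """IoU for one class over flattened masks: |intersection| / |union|."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 0.0

# 0 = background, 1 = water, 2 = damaged structure (flattened 3x3 masks)
truth = [1, 1, 0,  1, 2, 0,  0, 2, 0]
pred  = [1, 1, 0,  0, 2, 0,  0, 2, 2]
print(round(iou(pred, truth, 1), 2))  # water IoU     -> 0.67
print(round(iou(pred, truth, 2), 2))  # damage IoU    -> 0.67
```

Averaging the per-class values gives mean IoU, the usual single-number summary for benchmarks of this kind.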
Procedia PDF Downloads 78
1650 Understand the Concept of Agility for the Manufacturing SMEs
Authors: Adel H. Hejaaji
Abstract:
The need for organisations to be flexible to meet the rapidly changing requirements of their customers is now well appreciated and can be witnessed within companies in their use of techniques such as single-minute exchange of die (SMED) for machine change-over or Kanban as the visual production and inventory control for just-in-time manufacture and delivery. What is not so well appreciated by companies is the need for agility. Put simply, it is the need to be alert to a new and unexpected opportunity and quick to respond with the changes necessary in order to profit from it. This paper aims to study the literature on agility in manufacturing to understand the concept of agility and why it is important and critical for small and medium-sized manufacturing organisations (SMEs), and to define the specific benefits of moving towards agility and thus what benefit it can bring to an organisation.
Keywords: SMEs, agile manufacturing, manufacturing, industrial engineering
Procedia PDF Downloads 606
1649 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks, which aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. The adversarial attack slightly alters the image to move it over the decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient with respect to the image and updates the image once based on the gradient to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the targeted attack. This adversarial attack is designed to make the machine classify an image to a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples, we can explicitly let the neural network learn from the adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively.
If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use a stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than FGSM methods.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
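The FGSM update described in the abstract can be sketched in a few lines. This is an illustrative example, not the paper's MNIST setup: the "model" is a one-feature logistic classifier with invented weights, and the input is nudged by eps along the sign of the loss gradient.

```python
import math

# Illustrative FGSM sketch on a hand-built logistic "classifier".
# Weights, inputs, and eps are made-up numbers, not the paper's DNN.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, w, b, y, eps):
    """One FGSM step: shift x by eps along sign(dL/dx), where L is the
    binary cross-entropy loss of the prediction sigmoid(w*x + b)."""
    p = sigmoid(w * x + b)
    grad_x = (p - y) * w                    # dL/dx for cross-entropy
    sign = (grad_x > 0) - (grad_x < 0)      # -1, 0, or +1
    return x + eps * sign

w, b = 2.0, 0.0
x, y = 1.0, 1                               # clean input, true class 1
x_adv = fgsm(x, w, b, y, eps=1.5)
print(sigmoid(w * x + b) > 0.5)             # clean input: classified 1
print(sigmoid(w * x_adv + b) > 0.5)         # adversarial input: flipped
```

A PGD attack would repeat this update with smaller steps (e.g. eps/10), projecting back into an eps-ball around the original input after each step, which is why it finds stronger adversarial examples than the single FGSM step.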
Procedia PDF Downloads 195
1648 Wealth Creation and its Externalities: Evaluating Economic Growth and Corporate Social Responsibility
Authors: Zhikang Rong
Abstract:
The 4th industrial revolution has introduced technologies like interconnectivity, machine learning, and real-time big data analytics that improve operations and business efficiency. This paper examines how these advancements have led to a concentration of wealth, specifically among the top 1%, and investigates whether this wealth provides value to society. Through analyzing impacts on employment, productivity, supply-demand dynamics, and potential externalities, it is shown that successful businesspeople, by enhancing productivity and creating jobs, contribute positively to long-term economic growth. Additionally, externalities such as environmental degradation are managed by social entrepreneurship and government policies.
Keywords: wealth creation, employment, productivity, social entrepreneurship
Procedia PDF Downloads 28
1647 Mining Big Data in Telecommunications Industry: Challenges, Techniques, and Revenue Opportunity
Authors: Hoda A. Abdel Hafez
Abstract:
Mining big data represents a major challenge nowadays, and much research is concerned with mining massive amounts of data and big data streams. Mining big data faces many challenges, including scalability, speed, heterogeneity, accuracy, provenance, and privacy. In the telecommunication industry, mining big data is like mining for gold; it represents a big opportunity for maximizing revenue streams in this industry. This paper discusses the characteristics of big data (volume, variety, velocity, and veracity), data mining techniques and tools for handling very large data sets, mining big data in telecommunication, and the benefits and opportunities gained from them.
Keywords: mining big data, big data, machine learning, telecommunication
Procedia PDF Downloads 410
1646 Classification of Emotions in Emergency Call Center Conversations
Authors: Magdalena Igras, Joanna Grzybowska, Mariusz Ziółko
Abstract:
A study of emotions expressed in emergency phone calls is presented, covering both a statistical analysis of emotion configurations and an attempt to automatically classify emotions. An emergency call is a situation usually accompanied by intense, authentic emotions. They influence (and may inhibit) the communication between caller and responder. In order to support responders in their responsible and psychologically exhausting work, we studied when and in which combinations emotions appeared in calls. A corpus of 45 hours of conversations (about 3300 calls) from an emergency call center was collected. Each recording was manually tagged with labels of emotion valence (positive, negative or neutral), type (sadness, tiredness, anxiety, surprise, stress, anger, fury, calm, relief, compassion, satisfaction, amusement, joy) and arousal (weak, typical, varying, high) on the basis of the perceptual judgment of two annotators. As we concluded, basic emotions tend to appear in specific configurations depending on the overall situational context and the attitude of the speaker. After performing statistical analysis, we distinguished four main types of emotional behavior of callers: worry/helplessness (sadness, tiredness, compassion), alarm (anxiety, intense stress), mistake or neutral request for information (calm, surprise, sometimes with amusement) and pretension/insisting (anger, fury). The frequency of the profiles was, respectively, 51%, 21%, 18% and 8% of recordings. A model of presenting the complex emotional profiles on a two-dimensional (tension-insecurity) plane was introduced. In the stage of acoustic analysis, a set of prosodic parameters, as well as Mel-Frequency Cepstral Coefficients (MFCC), was used. Using these parameters, complex emotional states were modeled with machine learning techniques including Gaussian mixture models, decision trees and discriminant analysis.
Results of classification with several methods will be presented and compared with state-of-the-art results obtained for the classification of basic emotions. Future work will include optimization of the algorithm to perform in real time in order to track changes of emotions during a conversation.
Keywords: acoustic analysis, complex emotions, emotion recognition, machine learning
Procedia PDF Downloads 398
1645 Assessing the Efficacy of Artificial Intelligence Integration in the FLO Health Application
Authors: Reema Alghamdi, Rasees Aleisa, Layan Sukkar
Abstract:
The primary objective of this research is to conduct an examination of the Flo menstrual cycle application. We do so by evaluating the user experience and user satisfaction with the integrated AI features. The study gathers data from primary sources, primarily through surveys, to obtain insights about the application, such as its usability and functionality, in addition to overall user satisfaction. The focus of our project is directed particularly towards the impact and user perspectives regarding the integration of artificial intelligence features within the application, contributing to an understanding of the holistic user experience.
Keywords: period, women's health, machine learning, AI features, menstrual cycle
Procedia PDF Downloads 76
1644 Safety Testing of Commercial Lithium-Ion Batteries and Failure Modes Analysis
Authors: Romeo Malik, Yashraj Tripathy, Anup Barai
Abstract:
Transportation safety is a major concern for vehicle electrification on a large scale. The failure cost of lithium-ion batteries is substantial and is significantly impacted by higher liability and replacement costs. With continuous advancement on the material front in terms of higher energy density, upgrading safety characteristics is becoming more crucial for broader integration of lithium-ion batteries. Understanding and impeding thermal runaway is the prime issue for battery safety researchers. In this study, a comprehensive comparison of thermal runaway mechanisms for two different cathode types, Li(Ni₀.₃Co₀.₃Mn₀.₃)O₂ and Li(Ni₀.₈Co₀.₁₅Al₀.₀₅)O₂, is explored. Both chemistries were studied at different states of charge, and the various abuse scenarios that lead to thermal runaway were investigated. Abuse tests include mechanical, electrical, and thermal abuse. Batteries undergo thermal runaway due to a series of combustible reactions taking place internally; this is observed as multiple jets of flame reaching temperatures of the order of 1000ºC. Physicochemical characterisation was performed on cells prior to and after abuse. A battery's state of charge and chemistry have a significant effect on the flame temperature profiles, which is otherwise quantified as heat released. The majority of failures during transportation are due to external short circuits. Finally, a mitigation approach is proposed to impede the thermal runaway hazard: transporting lithium-ion batteries at low states of charge. Batteries at low states of charge have demonstrated minimal heat release under thermal runaway, reducing the risk of secondary hazards such as thermal runaway propagation.
Keywords: battery reliability, lithium-ion batteries, thermal runaway characterisation, tomography
Procedia PDF Downloads 122
1643 Reliability Qualification Test Plan Derivation Method for Weibull Distributed Products
Authors: Ping Jiang, Yunyan Xing, Dian Zhang, Bo Guo
Abstract:
The reliability qualification test (RQT) is widely used in product development to qualify whether the product meets predetermined reliability requirements, which are mainly described in terms of reliability indices, for example, MTBF (Mean Time Between Failures). In engineering practice, RQT plans mandatorily refer to standards, such as MIL-STD-781 or GJB899A-2009. But these conventional RQT plans are not preferred, as they often require long test times or carry high risks for both producer and consumer, because the methods in the standards only use the test data of the product itself. The standards also usually assume that the product lifetime is exponentially distributed, which is not suitable for complex products other than electronics. So it is desirable to develop an RQT plan derivation method that safely shortens test time while keeping the two risks under control. To this end, for products whose lifetime follows a Weibull distribution, an RQT plan derivation method is developed. The merit of the method is that expert judgment is taken into account. This is implemented by applying the Bayesian method, which translates the expert judgment into prior information on product reliability. The producer's risk and the consumer's risk are then calculated accordingly. The procedures to derive RQT plans are also proposed in this paper. As extra information and expert judgment are added to the derivation, the derived test plans have the potential to shorten the required test time and have satisfactorily low risks for both producer and consumer, compared with conventional test plans.
A case study is provided to prove that, when using expert judgment in deriving product test plans, the proposed method is capable of finding ideal test plans that not only reduce the two risks but also shorten the required test time.
Keywords: expert judgment, reliability qualification test, test plan derivation, producer's risk, consumer's risk
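To make the producer's/consumer's risk trade-off concrete, here is a hedged sketch of the simplest case: a zero-failure Weibull test plan with a known shape parameter. This is not the paper's Bayesian derivation, and all plan numbers below are invented for illustration.

```python
import math

# Illustrative zero-failure Weibull test plan: run n units for time t
# each and accept the product if no failures occur. Shape beta is
# assumed known; eta is the Weibull scale (characteristic life).

def p_accept(n, t, eta, beta):
    """P(zero failures in n units over time t) = exp(-n * (t/eta)^beta)."""
    return math.exp(-n * (t / eta) ** beta)

def risks(n, t, beta, eta_good, eta_bad):
    producer = 1 - p_accept(n, t, eta_good, beta)  # reject a good product
    consumer = p_accept(n, t, eta_bad, beta)       # accept a bad product
    return producer, consumer

# Made-up plan: 10 units, 500 h each, beta = 2,
# "good" life eta = 5000 h vs "bad" life eta = 1500 h.
prod, cons = risks(n=10, t=500, beta=2, eta_good=5000, eta_bad=1500)
print(round(prod, 3), round(cons, 3))  # -> 0.095 0.329
```

Lengthening t or adding units lowers the consumer's risk at the cost of the producer's risk; prior information in a Bayesian treatment, as the abstract describes, lets both be held down with less test time.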
Procedia PDF Downloads 137
1642 Theoretical Modelling of Molecular Mechanisms in Stimuli-Responsive Polymers
Authors: Catherine Vasnetsov, Victor Vasnetsov
Abstract:
Context: Thermo-responsive polymers are materials that undergo significant changes in their physical properties in response to temperature changes. These polymers have gained significant attention in research due to their potential applications in various industries and medicine. However, the molecular mechanisms underlying their behavior are not well understood, particularly in relation to cosolvency, which is crucial for practical applications. Research Aim: This study aimed to theoretically investigate the phenomenon of cosolvency in long-chain polymers using the Flory-Huggins statistical-mechanical framework. The main objective was to understand the interactions between the polymer, solvent, and cosolvent under different conditions. Methodology: The research employed a combination of Monte Carlo computer simulations and advanced machine-learning methods. The Flory-Huggins mean field theory was used as the basis for the simulations. Spinodal graphs and ternary plots were utilized to develop an initial computer model for predicting polymer behavior. Molecular dynamic simulations were conducted to mimic real-life polymer systems. Machine learning techniques were incorporated to enhance the accuracy and reliability of the simulations. Findings: The simulations revealed that the addition of very low or very high volumes of cosolvent molecules resulted in smaller radii of gyration for the polymer, indicating poor miscibility. However, intermediate volume fractions of cosolvent led to higher radii of gyration, suggesting improved miscibility. These findings provide a possible microscopic explanation for the cosolvency phenomenon in polymer systems. Theoretical Importance: This research contributes to a better understanding of the behavior of thermo-responsive polymers and the role of cosolvency. The findings provide insights into the molecular mechanisms underlying cosolvency and offer specific predictions for future experimental investigations. 
The study also presents a more rigorous analysis of the Flory-Huggins free energy theory in the context of polymer systems. Data Collection and Analysis Procedures: The data for this study were collected through Monte Carlo computer simulations and molecular dynamic simulations. The interactions between the polymer, solvent, and cosolvent were analyzed using the Flory-Huggins mean field theory. Machine learning techniques were employed to enhance the accuracy of the simulations. The collected data were then analyzed to determine the impact of cosolvent volume fractions on the radii of gyration of the polymer. Question Addressed: The research addressed the question of how cosolvency affects the behavior of long-chain polymers. Specifically, the study aimed to investigate the interactions between the polymer, solvent, and cosolvent under different volume fractions and understand the resulting changes in the radii of gyration. Conclusion: In conclusion, this study utilized theoretical modeling and computer simulations to investigate the phenomenon of cosolvency in long-chain polymers. The findings suggest that moderate cosolvent volume fractions can lead to improved miscibility, as indicated by higher radii of gyration. These insights contribute to a better understanding of the molecular mechanisms underlying cosolvency in polymer systems and provide predictions for future experimental studies. The research also enhances the theoretical analysis of the Flory-Huggins free energy theory.
Keywords: molecular modelling, Flory-Huggins, cosolvency, stimuli-responsive polymers
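For readers unfamiliar with the framework, the Flory-Huggins mean-field free energy has a compact closed form. The sketch below covers only the binary polymer-solvent case for brevity (not the full ternary cosolvent system of the study), and the parameter values are invented for illustration.

```python
import math

# Flory-Huggins free energy of mixing per lattice site for a polymer of
# n_seg segments at volume fraction phi with interaction parameter chi,
# plus the spinodal condition used for spinodal graphs. Binary case only.

def fh_free_energy(phi, n_seg, chi):
    """Delta G_mix / kT per site:
    (phi/N) ln(phi) + (1 - phi) ln(1 - phi) + chi * phi * (1 - phi)."""
    return ((phi / n_seg) * math.log(phi)
            + (1 - phi) * math.log(1 - phi)
            + chi * phi * (1 - phi))

def chi_spinodal(phi, n_seg):
    """Setting d2(Delta G)/dphi2 = 0 gives chi_s = (1/(N phi) + 1/(1-phi)) / 2."""
    return 0.5 * (1.0 / (n_seg * phi) + 1.0 / (1.0 - phi))

# Long chains demix at a much smaller chi than a symmetric small-molecule blend:
print(round(chi_spinodal(0.5, 1), 3))     # -> 2.0 (symmetric small-molecule blend)
print(round(chi_spinodal(0.1, 1000), 3))  # -> 0.561 (dilute long polymer)
```

The strong N-dependence of the spinodal is why long-chain polymers are so sensitive to small changes in effective solvent quality, which is the lever that cosolvency pulls.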
Procedia PDF Downloads 70
1641 The Sustainable Governance of Aquifer Injection Using Treated Coal Seam Gas Water in Queensland, Australia: Lessons for Integrated Water Resource Management
Authors: Jacqui Robertson
Abstract:
The sustainable governance of groundwater is of the utmost importance in an arid country like Australia. Groundwater has been relied on by our agricultural and pastoral communities since the State was settled by European colonialists. Nevertheless, the rapid establishment of a coal seam gas (CSG) industry in Queensland, Australia, has had extensive impacts on the pre-existing groundwater users. Managed aquifer recharge of important aquifers in Queensland, Australia, using treated coal seam gas produced water has been used to reduce the impacts of CSG development in Queensland Australia. However, the process has not been widely adopted. Negative environmental outcomes are now acknowledged as not only engineering, scientific or technical problems to be solved but also the result of governance failures. An analysis of the regulatory context for aquifer injection using treated CSG water in Queensland, Australia, using Ostrom’s Common Pool Resource (CPR) theory and a ‘heat map’ designed by the author, highlights the importance of governance arrangements. The analysis reveals the costs and benefits for relevant stakeholders of artificial recharge of groundwater resources in this context. The research also reveals missed opportunities to further active management of the aquifer and resolve existing conflicts between users. The research illustrates the importance of strategically and holistically evaluating innovations in technology that impact water resources to reveal incentives that impact resource user behaviors. The paper presents a proactive step that can be adapted to support integrated water resource management and sustainable groundwater development.
Keywords: managed aquifer recharge, groundwater regulation, common-pool resources, integrated water resource management, Australia
Procedia PDF Downloads 237
1640 Underwater Image Enhancement and Reconstruction Using CNN and the MultiUNet Model
Authors: Snehal G. Teli, R. J. Shelke
Abstract:
CNN and MultiUNet models form the framework of the proposed method for enhancing and reconstructing underwater images. Multiscale merging of features and regeneration are both performed by the MultiUNet, while the CNN collects relevant features. Extensive tests on benchmark datasets show that the proposed strategy performs better than the latest methods. As a result of this work, underwater images can be represented and interpreted with greater clarity in a number of underwater applications. This strategy will advance underwater exploration and marine research by enhancing real-time underwater image processing systems, underwater robotic vision, and underwater surveillance.
Keywords: convolutional neural network, image enhancement, machine learning, MultiUNet, underwater images
Procedia PDF Downloads 75
1639 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems. Association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. There exist many algorithms to find frequent patterns, but the Apriori algorithm always remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine-based Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing huge data, there is a need for a multiple-machine-based Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computations. Earlier we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. Therefore, this work is a natural sequel of our work and targets implementing, testing and benchmarking Apriori, Reduced-Apriori, and our new algorithm ReducedAll-Apriori on Apache Flink, comparing them with the Spark implementation.
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows starting the next iteration as soon as partial results of the earlier iteration are available; there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining
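As a point of reference for the candidate-generation-and-prune loop that Apriori iterates (the loop whose per-iteration disk I/O makes Hadoop inefficient), here is a minimal single-machine sketch. It is illustrative only, not the distributed R-Apriori/RA-Apriori of the paper, and the transactions and threshold are invented.

```python
# Minimal single-machine Apriori sketch. Each iteration joins frequent
# (k-1)-itemsets into k-candidates, then prunes them by support count.

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]) for i in items
                if support(frozenset([i])) >= min_support}
    result = set(frequent)
    k = 2
    while frequent:
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}                   # join step
        frequent = {c for c in candidates
                    if support(c) >= min_support}           # prune step
        result |= frequent
        k += 1
    return result

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(sorted(sorted(s) for s in apriori(txns, min_support=2)))
# -> [['a'], ['a', 'b'], ['a', 'c'], ['b'], ['b', 'c'], ['c']]
```

In the distributed setting, the support counting inside each iteration becomes a map-reduce round, which is exactly where Flink's pipelined iterations save time over Hadoop's per-round disk writes.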
Procedia PDF Downloads 294
1638 Corporate Governance Development in Mongolia: The Role of Professional Accountants
Authors: Ernest Nweke
Abstract:
The work of professional accountants and corporate governance are inseparable and cannot be divorced from each other. Organizations, profit and non-profit alike, cannot implement sound corporate practices without inputs from professional accountants. In today’s dynamic corporate world, good corporate governance practice is a sine qua non. More so, following the corporate failures of the past decades, like Enron and WorldCom, governments around the world, including Mongolia, are becoming more proactive in ensuring sound corporate governance mechanisms. In the past fifteen years, the Mongolian government has taken several measures to establish and strengthen internal corporate governance structures in firms. This paper highlights the roles professional accountants and auditors play in ensuring that good corporate governance mechanisms are entrenched in listed companies in Mongolia. Both primary and secondary data are utilized in this research. In the collection of primary data, the Delphi method was used, securing responses from only knowledgeable senior employees, top managers, and some CEOs. Using this method, a total of 107 top-level company employees and executives randomly selected from 22 companies were surveyed; a maximum of 5 and a minimum of 4 from each company. These companies cut across several sectors. It was concluded that professional accountants play key roles in setting and maintaining firm governance. They do this by ensuring full compliance with all the requirements of good and sound corporate governance; establishing reporting, monitoring and evaluating standards; assisting in the setting up of proper controls, efficient and effective audit systems, and sound fraud risk management; and putting in place an overall vision for the enterprise. Companies with effective corporate governance mechanisms are usually strong and fraud-resilient.
It was also discovered that companies audited by Big 4 firms tend to have better governance structures in Mongolia.
Keywords: accountants, corporate disclosure, corporate failure, corporate governance
Procedia PDF Downloads 278
1637 Beyond Possibilities: Re-Reading Republican Ankara
Authors: Zelal Çınar
Abstract:
This paper aims to expose the effects of the ideological program of the Turkish Republic on city planning through the first plan of Ankara. As the new capital, Ankara was planned to be the ‘showcase’ of modern Turkey. It was to represent all the new ideologies and the country’s cultural affinities with the West. At the same time, it was to underline the national identity and independence of the Turkish Republic. To this end, a new plan for the capital was designed by the German city planner Carl Christopher Lörcher. Diametrically opposed to the existing fabric of the city, this plan was built on the basis of papers and plans, and on ideological aims. In contrast, this paper argues that the city is a machine of possibilities rather than a clear, materialized system.
Keywords: architecture, ideology, modernization, urban planning
Procedia PDF Downloads 273
1636 Health Status Monitoring of COVID-19 Patients through Blood Tests and Naïve-Bayes
Authors: Carlos Arias-Alcaide, Cristina Soguero-Ruiz, Paloma Santos-Álvarez, Adrián García-Romero, Inmaculada Mora-Jiménez
Abstract:
Analysing clinical data with computers in a way that has an impact on practitioners’ workflow is a challenge nowadays. This paper provides a first approach to monitoring the health status of COVID-19 patients through the use of some biomarkers (blood tests) and the simplest Naïve Bayes classifier. Data from two Spanish hospitals were considered, showing the potential of our approach to estimate reasonable posterior probabilities even some days before the event.
Keywords: Bayesian model, blood biomarkers, classification, health tracing, machine learning, posterior probability
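As a minimal sketch of how such a classifier turns biomarker readings into posterior probabilities, the Gaussian Naïve Bayes below is fit on two hypothetical blood features; the values, labels, and class names are illustrative, not the hospitals' data.

```python
import math

def gaussian_pdf(x, mean, std):
    """Normal density; a tiny floor on std avoids division by zero."""
    std = max(std, 1e-9)
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def fit_naive_bayes(X, y):
    """Estimate per-class priors and per-feature (mean, std) from training data."""
    model = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        prior = len(rows) / len(X)
        stats = []
        for j in range(len(X[0])):
            col = [r[j] for r in rows]
            mean = sum(col) / len(col)
            var = sum((v - mean) ** 2 for v in col) / len(col)
            stats.append((mean, math.sqrt(var)))
        model[label] = (prior, stats)
    return model

def posterior(model, x):
    """Posterior probability of each class given feature vector x."""
    joint = {}
    for label, (prior, stats) in model.items():
        p = prior
        for xi, (mean, std) in zip(x, stats):
            p *= gaussian_pdf(xi, mean, std)  # conditional independence assumption
        joint[label] = p
    total = sum(joint.values())
    return {label: p / total for label, p in joint.items()}

# Hypothetical biomarkers (e.g., CRP, LDH) for "stable" (0) vs "deteriorating" (1).
X = [[5.0, 210.0], [6.1, 230.0], [4.4, 205.0],
     [48.0, 410.0], [55.0, 470.0], [60.0, 430.0]]
y = [0, 0, 0, 1, 1, 1]
model = fit_naive_bayes(X, y)
print(posterior(model, [50.0, 440.0]))
```

The posterior can be tracked over successive blood tests, which is what makes the simple model usable for day-by-day health tracing.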
Procedia PDF Downloads 233
1635 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data
Authors: Minjuan Sun
Abstract:
Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed that of bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during the processes of credit evaluation and risk control; for example, an individual’s bank credit records are not available for online lenders to see, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore whether and how alternative digital footprint data can be utilized to assess a borrower’s creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumer loan market. Two different types of digital footprint data are utilized and matched with banks' loan default records. Each separately captures distinct dimensions of a person’s characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate either acceptable or excellent prediction results, and that different types of data tend to complement each other to achieve better performance.
Typically, the traditional types of data that banks normally use, such as income, occupation, and credit history, update over longer cycles and hence cannot reflect more immediate changes, such as a deterioration in financial status caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of the borrower’s credit capabilities and risks. From this empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment because of their near-universal data coverage, and because they can by and large resolve the "thin-file" issue, given that digital footprints come in much larger volume and at higher frequency.
Keywords: credit score, digital footprint, Fintech, machine learning
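To make the default-prediction step concrete, here is a toy logistic-regression scorer trained on two hypothetical footprint features; the feature names, values, and labels are invented for illustration and are not the paper's data or method.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Per-sample gradient descent on the logistic loss; returns weights and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - t  # gradient of the log-loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def default_probability(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical footprint features per borrower:
# [late-night purchase ratio, profile completeness]; label 1 = defaulted.
X = [[0.10, 0.90], [0.20, 0.80], [0.15, 0.95],
     [0.80, 0.20], [0.90, 0.10], [0.70, 0.30]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
print(round(default_probability(w, b, [0.85, 0.15]), 3))
```

In practice, higher-frequency footprint features would simply be appended to the feature vector, which is what lets the score react faster than slowly-updating bank records.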
Procedia PDF Downloads 161
1634 Causes of Jaundice and Skin Rashes Amongst Children in Selected Rural Communities in the Gambia
Authors: Alhage Drammeh
Abstract:
The research examines the occurrence of certain diseases among children in rural and far-flung parts of The Gambia and the extent to which they are caused by lack of access to clean water. A baseline survey was used to discover, describe, and explain the actual processes. The paper explains the purpose of the research, which is chiefly to improve the health condition of children, especially those living in rural communities. The paper also gives a brief overview of the socio-economic situation of The Gambia, emphasizing its status as a Least Developed Country (LDC), with the majority of its population living below the poverty line and women and children hardest hit. The research used two rural communities in The Gambia, Basse Dampha Kunda Village and Foni Besse, as case studies. Data was collected through oral interviews and medical tests conducted among people in both villages, with an emphasis on children. The demographic detail of those tested is tabulated for a clearer understanding. The results were compared, revealing that skin rashes, hepatitis, and certain other diseases are more prevalent in communities lacking access to safe drinking water. These results are also presented in tabular form. The study established how policy failures and neglect on the part of the Government of The Gambia are imperiling the health of many rural dwellers in the country; most glaringly, the research team was unable to test water samples collected from the two communities, as there are no laboratory reagents for testing water anywhere in The Gambia. Many rural communities lack basic amenities, especially clean and potable water, as well as health facilities. The study findings also highlight the need for healthcare providers and medical NGOs to voice the plight of rural dwellers and collaborate with the government to set up health facilities in rural areas of The Gambia.
Keywords: jaundice, skin rashes, children, rural communities, the Gambia, causes
Procedia PDF Downloads 65
1633 Use of Waste Tire Rubber Alkali-Activated-Based Mortars in Repair of Concrete Structures
Authors: Mohammad Ebrahim Kianifar, Ehsan Ahmadi
Abstract:
Reinforced concrete structures experience local defects, such as cracks, over their lifetime under various environmental loadings. Consequently, they are repaired with mortars to avoid detrimental effects such as corrosion of reinforcement, which in the long term may lead to strength loss of a member or collapse of the structure. However, repaired structures may need multiple repairs due to changes in load distribution and, thus, lack of compatibility between mortar and substrate concrete. On the other hand, waste tire rubber alkali-activated (WTRAA)-based materials have very high potential to be used as repair mortars because of their ductility and flexibility, which may delay the failure of the repair mortar and thus provide sufficient compatibility. Hence, this work presents a pioneering study on the suitability of WTRAA-based materials as mortars for the repair of concrete structures through an experimental program. To this end, WTRAA mortars with 15% aggregate replacement, alkali-activated (AA) mortars, and ordinary mortars are made to repair a number of concrete beams. The WTRAA mortars are composed of slag as the base material, sodium hydroxide as the alkaline activator, and different gradations of waste tire rubber (fine and coarse). Flexural tests are conducted on the concrete beams repaired with the ordinary, AA, and WTRAA mortars. It is found that, despite having lower compressive strength and modulus of elasticity, the WTRAA and AA mortars increase the flexural strength of the repaired beams, exhibit compatible failures, and provide sufficient mortar-concrete interface bonding. The ordinary mortars, however, show incompatible failure modes. This study demonstrates the promising application of WTRAA mortars in practical repairs of concrete structures.
Keywords: alkali-activated mortars, concrete repair, mortar compatibility, flexural strength, waste tire rubber
Procedia PDF Downloads 156
1632 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms
Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli
Abstract:
Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges. Delayed BOD5 results from the lab, which take 7 to 8 analysis days, hinder a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals; this work presents a solution. Reducing BOD turnaround time from days to hours is our quest. The solution is based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process to catch anomalies sooner. In our system for continuous monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm for analyzing the data uses ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors.
Each bioreactor contains input/output access to the wastewater sample (influent and effluent), hydraulic conduction tubes, pumps and valves for the batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical DO sensor based on fluorescence quenching, pH, ORP, temperature, and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically from its initial conditions: DO (saturated) and the initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in São Paulo, Brazil.
Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning
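The four-equation structure can be sketched numerically. The simple forward-Euler integration below uses an illustrative rate law, yield coefficient, and parameter values that are assumptions for the sketch, not the authors' parameterization; it shows how oxygen depletion in the bioreactor maps to BOD exerted.

```python
def simulate_bod(s0, b0, do_sat, k=0.23, y_coef=0.4, dt=0.01, days=5.0):
    """Forward-Euler integration of a simplified four-ODE kinetic model.

    State variables: dissolved oxygen (do), organic substrate (s),
    biomass (b), and reaction products (p, i.e. CO2 + H2O).
    The rate law and parameters are illustrative assumptions only.
    """
    do, s, b, p = do_sat, s0, b0, 0.0
    steps = int(days / dt)
    for _ in range(steps):
        r = k * s * b                    # oxidation rate driven by substrate and biomass
        s += -r * dt                     # substrate consumed
        b += y_coef * r * dt             # a fraction assimilated as new biomass
        do += -(1.0 - y_coef) * r * dt   # oxygen spent oxidizing the rest
        p += (1.0 - y_coef) * r * dt     # CO2 + H2O formed
    bod_exerted = do_sat - do            # oxygen depletion = BOD exerted so far
    return bod_exerted, s, b, p

# Diluted sample, as in a standard BOD bottle (values are illustrative).
bod5, s_left, biomass, products = simulate_bod(s0=6.0, b0=1.0, do_sat=9.1)
print(round(bod5, 2))
```

In the real system, the ML layer would fit the initial organic-matter and biomass values by minimizing mean square deviations against the measured DO curve, as the abstract describes.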
Procedia PDF Downloads 731631 An Algorithm Based on the Nonlinear Filter Generator for Speech Encryption
Authors: A. Belmeguenai, K. Mansouri, R. Djemili
Abstract:
This work presents a new algorithm based on the nonlinear filter generator for speech encryption and decryption. The proposed algorithm combines a linear feedback shift register (LFSR), whose feedback polynomial is primitive, with a nonlinear Boolean filter function. The purpose of this system is to construct a keystream with good statistical properties that is also easily computable on a machine with limited computing capacity. The proposed speech encryption scheme is very simple, highly efficient, and fast to implement for both encryption and decryption. We conclude the paper by showing that the system can resist certain known attacks.
Keywords: nonlinear filter generator, stream ciphers, speech encryption, security analysis
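A minimal sketch of the construction, assuming a toy 4-stage register and an illustrative filter function (the paper's actual polynomial, register length, and Boolean function are not given here): the LFSR is clocked linearly, and the keystream bit is a nonlinear function of selected stage bits.

```python
def lfsr_filter_keystream(state, taps, filter_fn, filter_inputs, nbits):
    """Nonlinear filter generator: one LFSR clocked regularly, with the
    keystream taken as a nonlinear Boolean function of selected stage bits."""
    s = list(state)
    out = []
    for _ in range(nbits):
        out.append(filter_fn(*(s[i] for i in filter_inputs)))
        fb = 0
        for t in taps:           # linear feedback from the tap positions
            fb ^= s[t]
        s = [fb] + s[:-1]        # shift the register one step
    return out

def filt(a, b, c):
    return a ^ (b & c)  # toy nonlinear Boolean function (illustrative only)

# 4-stage register; the tap set is assumed to realize a primitive polynomial.
key = [1, 0, 0, 1]  # secret initial state (must never be all-zero)
ks = lfsr_filter_keystream(key, taps=[0, 3], filter_fn=filt,
                           filter_inputs=[0, 1, 3], nbits=8)

# Speech samples would be digitized to bits; XOR encrypts, XOR again decrypts.
plaintext = [1, 0, 1, 1, 0, 0, 1, 0]
cipher = [p ^ k for p, k in zip(plaintext, ks)]
decrypted = [c ^ k for c, k in zip(cipher, ks)]
print(decrypted == plaintext)  # → True
```

Because encryption and decryption are the same XOR with the same keystream, the receiver only needs the secret initial state to reconstruct the stream, which is what keeps the scheme cheap on limited hardware.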
Procedia PDF Downloads 2961630 A Machine Learning-Assisted Crime and Threat Intelligence Hunter
Authors: Mohammad Shameel, Peter K. K. Loh, James H. Ng
Abstract:
Cybercrime is a new category of crime which poses a different challenge for crime investigators and incident responders. Attackers can mask their identities using a suite of tools and with the help of the deep web, which makes them difficult to track down. Scouring the deep web manually takes time and is inefficient. There is a growing need for a tool to scour the deep web to obtain useful evidence or intel automatically. In this paper, we will explain the background and motivation behind the research, present a survey of existing research on related tools, describe the design of our own crime/threat intelligence hunting tool prototype, demonstrate its capability with some test cases and, lastly, conclude with proposals for future enhancements.
Keywords: cybercrime, deep web, threat intelligence, web crawler
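The crawling core of such a tool can be sketched as a breadth-first traversal. The in-memory `SITE` graph below stands in for fetched pages (a real deep-web crawler would fetch over an anonymizing network and parse links); the page names are invented for illustration.

```python
from collections import deque

# In-memory stand-in for pages and their outgoing links.
SITE = {
    "index": ["forum", "market"],
    "forum": ["thread1"],
    "market": ["listing1", "forum"],
    "thread1": [],
    "listing1": [],
}

def crawl(start, max_pages=100):
    """Breadth-first crawl that visits every reachable page exactly once."""
    seen, queue, order = {start}, deque([start]), []
    while queue and len(order) < max_pages:
        page = queue.popleft()
        order.append(page)          # here a real tool would extract intel
        for link in SITE.get(page, []):
            if link not in seen:    # de-duplicate to avoid crawling loops
                seen.add(link)
                queue.append(link)
    return order

print(crawl("index"))  # → ['index', 'forum', 'market', 'thread1', 'listing1']
```

The `seen` set is what keeps the crawler from looping on the forum/market back-links; the ML-assisted part of the tool would then classify the collected pages as relevant intel or noise.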
Procedia PDF Downloads 173
1629 Mood Recognition Using Indian Music
Authors: Vishwa Joshi
Abstract:
The study of mood recognition in the field of music has gained a lot of momentum in recent years, with machine learning and data mining techniques and many audio features contributing considerably to analyzing and identifying the relation between mood and music. In this paper we carry the same idea forward and make an effort to build a system for automatic recognition of the mood underlying audio song clips by mining their audio features. We evaluated several data classification algorithms in order to learn, train, and test the model describing the moods of these audio songs, and developed an open source framework. Before classification, preprocessing and feature extraction phases are necessary for removing noise and gathering features, respectively.
Keywords: music, mood, features, classification
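As an illustrative sketch of the classify-after-feature-extraction pipeline (not the paper's framework), a nearest-centroid classifier over hypothetical audio features such as scaled tempo and spectral energy shows the train/predict loop in miniature:

```python
import math

def fit_centroids(X, y):
    """Compute the mean feature vector (centroid) per mood label."""
    centroids = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict_mood(centroids, x):
    """Return the label of the nearest centroid by Euclidean distance."""
    return min(centroids, key=lambda lab: math.dist(x, centroids[lab]))

# Hypothetical features per clip: [tempo (BPM, scaled / 200), spectral energy].
X = [[0.90, 0.80], [0.85, 0.90], [0.30, 0.20], [0.25, 0.30]]
y = ["happy", "happy", "sad", "sad"]
centroids = fit_centroids(X, y)
print(predict_mood(centroids, [0.80, 0.85]))  # → happy
```

Swapping in a richer classifier (k-NN, SVM, trees) changes only `fit_centroids`/`predict_mood`; the preprocessing and feature-extraction stages feeding `X` stay the same.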
Procedia PDF Downloads 498
1628 Creativity as a National System: An Exploratory Model towards Enhancing Innovation Ecosystems
Authors: Oscar Javier Montiel Mendez
Abstract:
The link between knowledge, creativity, innovation, and entrepreneurship is well established, and the importance of national innovation systems (NIS) is broadly emphasized; this approach stresses that the flow of information and technology among people, organizations, and institutions is key to the innovation process. Understanding the linkages among the actors involved in innovation is relevant to NIS. Creativity is supposed to fuel NIS, but attention has mainly focused on the personal, group, or organizational level, leaving aside a fourth one: creativity as a national system. It is suggested that NIS takes creativity for granted, as an ex-ante stage already solved through some mechanism, like programs for nurturing it in elementary and secondary schools, universities, or specific public/organizational programs; or worse, that the individual already has this competence and that the elements of the NIS will interact in a way that leads to the creation of S curves, with an impact on national systems and programs for entrepreneurship, clusters, and the economy. But creativity constantly appears at any point in a NIS, being its key input. Through an initial, exploratory, focused, and refined literature review based on Csikszentmihalyi’s systemic model, Amabile's componential theory, Kaufman and Beghetto’s 4C model, and the OECD’s (Organisation for Economic Co-operation and Development) expanded NIS model, an NCS theoretical model is elaborated. It is suggested that its implementation could become a significant factor in helping strengthen local, regional, and national economies. The results also suggest that establishing a national creativity system (NCS), something that appears not to have been previously addressed, as a strategic and vital companion for a NIS, installing it not only as a national education strategy but as its foundation, and managing it and measuring its impact on the NIS, entrepreneurship, and the rest of the ecosystem, could make public policies more effective.
Likewise, it should have a beneficial impact on the efforts of all the stakeholders involved and should help prevent some of the possible failures that NIS present.
Keywords: national creativity system, national innovation system, entrepreneurship ecosystem, systemic creativity
Procedia PDF Downloads 430
1627 Imputation Technique for Feature Selection in Microarray Data Set
Authors: Younies Saeed Hassan Mahmoud, Mai Mabrouk, Elsayed Sallam
Abstract:
Analysing DNA microarray data sets is a great challenge for bioinformaticians because of the complexity of applying statistical and machine learning techniques. The challenge is compounded when the microarray data sets contain missing data, which happens regularly, because these techniques cannot handle missing data. One of the most important analysis processes on microarray data sets is feature selection, which finds the most important genes affecting a given disease. In this paper, we introduce a technique for imputing the missing data in microarray data sets while performing feature selection.
Keywords: DNA microarray, feature selection, missing data, bioinformatics
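A minimal sketch of the impute-then-select idea, using per-gene mean imputation and a simple two-sample t-like score for gene ranking; the expression values, labels, and scoring choice are illustrative assumptions, not the paper's method.

```python
import math

def impute_column_means(data):
    """Replace None entries with the per-gene (column) mean of observed values."""
    ncols = len(data[0])
    filled = [row[:] for row in data]
    for j in range(ncols):
        observed = [row[j] for row in data if row[j] is not None]
        mean = sum(observed) / len(observed)
        for row in filled:
            if row[j] is None:
                row[j] = mean
    return filled

def t_score(data, labels, j):
    """Absolute two-sample t-like statistic for gene j between two classes."""
    a = [row[j] for row, lab in zip(data, labels) if lab == 0]
    b = [row[j] for row, lab in zip(data, labels) if lab == 1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / len(a)
    vb = sum((v - mb) ** 2 for v in b) / len(b)
    return abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b) + 1e-9)

# Rows = samples, columns = genes; None marks a missing expression value.
data = [[2.0, None, 5.1], [2.2, 0.9, 5.0], [8.1, 1.0, 5.2], [7.9, None, 4.9]]
labels = [0, 0, 1, 1]
filled = impute_column_means(data)
ranked = sorted(range(3), key=lambda j: -t_score(filled, labels, j))
print(ranked[0])  # gene 0 separates the two classes most strongly
```

More sophisticated imputers (k-NN, SVD-based) slot into the same place as `impute_column_means` without changing the downstream feature-selection step.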
Procedia PDF Downloads 574
1626 The Development, Validation, and Evaluation of the Code Blue Simulation Module in Improving the Code Blue Response Time among Nurses
Authors: Siti Rajaah Binti Sayed Sultan
Abstract:
Managing a code blue event is stressful for nurses, the patient, and the patient's family. A rapid response from the first and second responders in the code blue event will improve patient outcomes and prevent the tissue hypoxia that leads to brain injury and other organ failures. Initiating cardiac massage within 1 minute and defibrillation within 2 minutes will significantly improve patient outcomes. As we know, the American Heart Association has issued guidelines for managing cardiac arrest patients. The hospital must provide competent staff to manage this situation. This can be achieved when the staff are well equipped with the skill, attitude, and knowledge to manage the situation through well-planned strategies, i.e., clear guidelines for managing the code blue event, competent staff, and functional equipment. Code blue simulation (CBS) was chosen for the code blue management training program because it can mimic real scenarios. Having a code blue simulation module will allow staff to appreciate what they will face during a code blue event, especially since it rarely happens in their area. This CBS module training will help staff familiarize themselves with the activities that happen during actual events and be able to operate the equipment accordingly. Being challenged and independent in managing the code blue in the early phase gives the patient a better outcome. The CBS module will also provide the assessor and the hospital management team with proper tools and guidelines for conducting the code blue drill. As we know, prompt action will benefit the patient and their family. It also indirectly increases confidence and job satisfaction among nurses, raises the standard of care, reduces complications and hospital burden, and enhances cost-effective care.
Keywords: code blue simulation module, development of code blue simulation module, code blue response time, code blue drill, cardiorespiratory arrest, managing code blue
Procedia PDF Downloads 67