Search results for: machine vision operating system
20871 An Experimental Study on the Temperature Reduction of Exhaust Gas at a Snorkeling of Submarine
Authors: Seok-Tae Yoon, Jae-Yeong Choi, Gyu-Mok Jeon, Yong-Jin Cho, Jong-Chun Park
Abstract:
Conventional submarines obtain propulsive force from an electric propulsion system consisting of a diesel generator, battery, motor, and propeller. While submerged, the submarine uses the electric power stored in the battery; once a certain amount of that power has been consumed, the submarine rises near the sea surface and recharges the battery using the diesel generator. The voyage carried out while charging is called snorkeling, and the high-temperature exhaust gas from the diesel generator forms a heat distribution on the sea surface. This heat distribution can be detected by weapon systems equipped with thermo-detectors and is a main cause of reduced submarine survivability. In this paper, an experimental study was carried out to establish optimal operating conditions of a submarine for reducing the infrared signature radiated from the sea surface. For this purpose, a hot-gas generating system and a round acrylic water tank with an adjustable water level were built. The control variables of the experiment were the mass flow rate, the temperature difference between the water and the hot gas in the tank, and the water-level difference between the air outlet and the water surface. The instrumentation used a T-type thermocouple to measure the released air temperature at the water surface and a thermography system to measure the thermal energy distribution on the water surface. From the experimental results, we analyzed the correlation between the final released temperature at the exhaust pipe exit of a submarine and the snorkeling depth, and we present reasonable operating conditions for reducing the infrared signature of a submarine.
Keywords: experiment study, flow rate, infrared signature, snorkeling, thermography
Procedia PDF Downloads 352
20870 Designing, Manufacturing and Testing a Portable Tractor Unit Biocoal Harvester Combine of Agriculture and Animal Wastes
Authors: Ali Moharrek, Hosein Mobli, Ali Jafari, Ahmad Tabataee Far
Abstract:
Biomass is a material generally produced by plants living on soil or water and their derivatives. The remains of agricultural and forest products contain biomass that can be converted into fuel; in addition, biogas and ethanol can be obtained from the charcoal produced from biomass through specific processes. This technology was designed as a useful native fuel and technology for energy disaster management, when the flow of heat energy is suddenly interrupted. One of the problems confronting mankind in the future is the limitation of fossil energy, which necessitates the production of new energies such as biomass. In order to produce biomass from plant remains, different methods must be applied, considering factors such as production cost, production technology, area of requirement, speed of work, ease of use, etc. This article focuses on the design of a portable biomass briquetting machine, intended for installation on an MF 258 tractor estimated at 80 hp. A screw press is used in the design. The power needed to run the machine, estimated at 17.4 kW, is provided by the power take-off (PTO) shaft of the tractor, and the pressing speed is 375 RPM. Finally, the physical and mechanical properties of the product were compared with those of the input material, with satisfactory results. The machine gathers raw materials from the ground with its head section; while being delivered to the briquetting section, the raw materials are crushed, milled, and pre-heated in the transmission section. This is a portable tractor-unit combine machine that can use all types of agricultural, forest, and livestock residues as raw material for biofuel. The briquetting section was manufactured and successfully produced biofuel from sawdust; the machine also produced a biofuel with ethanol from sugarcane wastes. The machine uses the PTO as the power source for briquetting and a hydraulic power source for pre-processing of raw materials.
Keywords: biomass, briquette, screw press, sawdust, animal wastes, portable, tractors
Procedia PDF Downloads 316
20869 A Novel Software Model for Enhancement of System Performance and Security through an Optimal Placement of PMU and FACTS
Authors: R. Kiran, B. R. Lakshmikantha, R. V. Parimala
Abstract:
Secure operation of power systems requires monitoring of the system operating conditions. Phasor measurement units (PMUs) are devices that use synchronized signals from GPS satellites and provide phasor information of the voltages and currents at a given substation. The optimal locations for the PMUs must be determined in order to avoid redundant use of PMUs. The objective of this paper is to make the system observable using a minimum number of PMUs and to implement stability software on a 220 kV grid for on-line estimation of the power system transfer capability, based on voltage and thermal limitations, and for security monitoring. This software utilizes state estimator (SE) and synchrophasor PMU data sets to determine the power system operational margin under normal and contingency conditions. It improves the security of the transmission system by continuously monitoring the operational margin, expressed in MW or in bus voltage angles, and alarms the operator if the margin violates a pre-defined threshold.
Keywords: state estimator (SE), flexible AC transmission systems (FACTS), optimal location, phasor measurement units (PMU)
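A minimal sketch, not from the paper, of the kind of observability-driven placement the abstract describes. The abstract does not name its placement algorithm, so a greedy coverage heuristic is shown here as an illustration; the 5-bus connectivity matrix is an assumption.

```python
import numpy as np

# Hypothetical 5-bus system: A[i, j] = 1 if bus j is observed when a PMU sits at bus i
# (a PMU observes its own bus and all directly connected buses).
A = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
])

def greedy_pmu_placement(A):
    """Greedily add PMUs until every bus is observed at least once."""
    n = A.shape[0]
    observed = np.zeros(n, dtype=bool)
    placement = []
    while not observed.all():
        # Pick the bus whose PMU covers the most still-unobserved buses.
        gains = A[:, ~observed].sum(axis=1)
        best = int(np.argmax(gains))
        placement.append(best)
        observed |= A[best].astype(bool)
    return placement

print(greedy_pmu_placement(A))  # e.g. [1, 3] for this toy network
```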
Procedia PDF Downloads 410
20868 Predictive Machine Learning Model for Assessing the Impact of Untreated Teeth Grinding on Gingival Recession and Jaw Pain
Authors: Joseph Salim
Abstract:
This paper proposes the development of a supervised machine learning system to predict the consequences of untreated bruxism (teeth grinding) on gingival (gum) recession and jaw pain (most often bilateral jaw pain, with possible headaches and a limited ability to open the mouth). As a general dentist in a multi-specialty practice, the author has encountered many patients suffering from these issues due to uncontrolled bruxism at night. The most effective treatment for managing this problem involves wearing a nightguard during sleep and receiving therapeutic Botox injections to relax the masseter muscle responsible for grinding. However, some patients choose to postpone these treatments, leading to potentially irreversible and costlier consequences in the future. The proposed machine learning model aims to track patients who forgo the recommended treatments and to estimate the percentage of individuals who will experience worsening jaw pain, gingival recession, or both within a 3-to-5-year timeframe. By accurately predicting these outcomes, the model seeks to motivate patients to address the root cause proactively, ultimately saving time and avoiding pain while improving quality of life and avoiding much costlier treatments, such as full-mouth rehabilitation to recover the loss of vertical dimension of occlusion caused by clinical crowns shortened by bruxism, gingival grafts, etc.
Keywords: artificial intelligence, machine learning, predictive insights, bruxism, teeth grinding, therapeutic botox, nightguard, gingival recession, gum recession, jaw pain
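A minimal sketch, not from the paper, of the kind of supervised prediction described above. The abstract does not specify the model, so a logistic regression is used for illustration; the feature names and patient records are invented assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical patient records: [age, grinding_severity (1-3), months_untreated, wears_nightguard]
X = np.array([[34, 2, 6, 0], [51, 3, 24, 0], [45, 1, 3, 1],
              [62, 3, 36, 0], [29, 1, 12, 1], [58, 2, 30, 0]])
# Outcome after 3-5 years: 1 = worsened gum recession and/or jaw pain, 0 = stable
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Risk estimate for a new untreated patient (illustrative values only)
new_patient = np.array([[48, 3, 18, 0]])
print(f"Predicted probability of worsening: {model.predict_proba(new_patient)[0, 1]:.2f}")
```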
Procedia PDF Downloads 93
20867 Design and Control of a Brake-by-Wire System Using a Permanent Magnet Synchronous Motor
Authors: Daniel S. Gamba, Marc Sánchez, Javier Pérez, Juan J. Castillo, Juan A. Cabrera
Abstract:
The conventional hydraulic braking system operates through the activation of a master cylinder and solenoid valves that distribute and regulate the brake fluid flow, adjusting the pressure at each wheel to prevent locking during sudden braking. In recent years, however, there has been a significant increase in the integration of electronic units into various vehicle control systems. In this context, one of the most recently researched technologies is the brake-by-wire system, which combines electronic, hydraulic, and mechanical technologies to manage braking. This proposal introduces the design and control of a brake-by-wire system that will be part of a fully electric and teleoperated vehicle. This vehicle will have independent four-wheel drive, braking, and steering systems and will be operated by embedded controllers programmed into a Speedgoat test system, which allows programming through Simulink and offers real-time capabilities. The braking system comprises all mechanical and electrical components, a vehicle control unit (VCU), and an electronic control unit (ECU). The mechanical and electrical components include a permanent magnet synchronous motor from Odrive and its inverter, the mechanical transmission system responsible for converting torque into pressure, and the hydraulic system that transmits this pressure to the brake caliper. The VCU is responsible for controlling the pressure and communicates with the other components through the CAN protocol, minimizing response times. The ECU, in turn, transmits the information obtained by a sensor installed in the caliper to the central computer, enabling the control loop to continuously regulate pressure by controlling the motor's speed and current. To achieve this, three controllers are used, operating in a nested configuration for effective control. Since the computer allows programming in Simulink, a digital model of the braking system has been developed in Simscape, which makes it possible to reproduce different operating conditions, faithfully simulate the performance of alternative brake control systems, and compare the results with data obtained in various real tests. These tests involve evaluating the system's response to sinusoidal and square-wave inputs at different frequencies, with the results compared to those obtained from conventional braking systems.
Keywords: braking, CAN protocol, permanent magnet motor, pressure control
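A minimal sketch, not the authors' implementation, of the nested-loop idea described above: an outer pressure loop commands a speed setpoint, a speed loop commands a current setpoint, and a current loop outputs the motor command. The gains, sample time, and signal values are placeholder assumptions.

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.001  # assumed 1 kHz control loop
pressure_ctrl = PID(kp=2.0, ki=5.0, kd=0.0, dt=dt)   # outer loop: caliper pressure
speed_ctrl    = PID(kp=0.5, ki=1.0, kd=0.0, dt=dt)   # middle loop: motor speed
current_ctrl  = PID(kp=0.1, ki=0.5, kd=0.0, dt=dt)   # inner loop: motor current

def control_step(pressure_ref, pressure_meas, speed_meas, current_meas):
    """One nested control step: pressure error -> speed setpoint -> current setpoint -> motor command."""
    speed_ref = pressure_ctrl.update(pressure_ref, pressure_meas)
    current_ref = speed_ctrl.update(speed_ref, speed_meas)
    voltage_cmd = current_ctrl.update(current_ref, current_meas)
    return voltage_cmd

print(control_step(pressure_ref=30.0, pressure_meas=25.0, speed_meas=100.0, current_meas=2.0))
```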
Procedia PDF Downloads 20
20866 A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System
Authors: J. K. Adedeji, M. O. Oyekanmi
Abstract:
This paper critically examines the use of machine learning procedures in curbing unauthorized access to valuable areas of an organization. The use of passwords, PIN codes, and user identification has in recent times been only partially successful in curbing identity-related crimes, hence the need for a system that incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model used is the OpenCV library, which is based on the use of certain physiological features; a Raspberry Pi 3 module is used to compile the OpenCV library, which extracts and stores the detected faces in the datasets directory through the use of a camera. The model is trained with 50 epoch runs on the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in OpenCV. The training algorithm used by the neural network is back-propagation, coded in Python with 200 epoch runs to identify specific resemblance in the exclusive-OR (XOR) output neurons. The research confirmed that physiological parameters are more effective measures for curbing identity-related crimes.
Keywords: biometric characters, facial recognition, neural network, OpenCV
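A minimal sketch of the OpenCV LBPH workflow the abstract refers to: Haar-cascade face detection, LBPH training, and prediction. The file paths and label values are assumptions, and the cv2.face module requires the opencv-contrib-python package.

```python
import cv2
import numpy as np

# Detect faces with a Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(image_path):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    x, y, w, h = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)[0]
    return gray[y:y + h, x:x + w]

# Hypothetical dataset: two images of the authorized person (label 0).
faces = [extract_face("datasets/person0_1.jpg"), extract_face("datasets/person0_2.jpg")]
labels = np.array([0, 0])

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)

label, confidence = recognizer.predict(extract_face("datasets/query.jpg"))
print(label, confidence)  # a lower confidence value means a closer LBPH match
```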
Procedia PDF Downloads 256
20865 Player Experience: A Research on Cross-Platform Supported Games
Authors: Salih Akkemik
Abstract:
User Experience has a characteristic perspective based on two fundamentals: the usage process and the product. Digital games can be considered a special kind of interactive system. This system has a very specific purpose: to make the player feel good while playing. At this point, Player Experience (PX) and User Experience (UX) are similar: UX focuses on the user feeling good, PX focuses on the player feeling good. The most important difference between the two is the action taken, namely using versus playing. In this study, the player experience is examined primarily. PX may differ on different platforms. Nowadays, companies release successful and high-income games developed with cross-platform support. Cross-platform is the common term for an application that can run on different operating systems, in other words, one developed to support different operating systems. In terms of digital games, cross-platform support means that a game can be played on a computer, console, or mobile device; more specifically, the game is designed and programmed to be played in the same way on at least two different platforms, such as Windows, macOS, Linux, iOS, Android, Orbis OS, or Xbox OS. Different platforms also accommodate different player groups, profiles, and preferences. This study aims to examine these different player profiles in terms of player experience and to determine the effects of cross-platform support on player experience.
Keywords: cross-platform, digital games, player experience, user experience
Procedia PDF Downloads 206
20864 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
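A minimal sketch, not the authors' code, of two ingredients highlighted above: an XGBoost regressor trained on a lagged median latency plus accumulation, and a mutual-information ranking of the inputs. The synthetic data, feature names, and coefficients are assumptions.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 500
lag_15min = rng.uniform(10, 30, n)        # median latency 15 minutes ago (minutes)
accumulation = rng.uniform(100, 900, n)   # total vehicles on the segment
entrance_rate = rng.uniform(0, 50, n)     # vehicles/min entering (weakly informative here)
target = 0.8 * lag_15min + 0.01 * accumulation + rng.normal(0, 1, n)  # median latency to predict

X = np.column_stack([lag_15min, accumulation, entrance_rate])
model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, target)

print("feature importances:", model.feature_importances_)
print("mutual information :", mutual_info_regression(X, target))
```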
Procedia PDF Downloads 169
20863 Breast Cancer Prediction Using Score-Level Fusion of Machine Learning and Deep Learning Models
Authors: Sam Khozama, Ali M. Mayya
Abstract:
Breast cancer is one of the most common cancer types in women. Early prediction of breast cancer helps physicians detect cancer in its early stages. Big cancer data needs a very powerful tool to analyze and extract predictions. Machine learning and deep learning are two of the most efficient tools for predicting cancer based on textual data. In this study, we developed a fusion model of a machine learning model and a deep learning model. Long Short-Term Memory (LSTM) and ensemble learning with hyperparameter optimization are used, and their outputs are combined by score-level fusion to obtain the final prediction. Experiments are done on the Breast Cancer Surveillance Consortium (BCSC) dataset after balancing and grouping the class categories. Five different training scenarios are used, and the tests show that the designed fusion model improved the performance by 3.3% compared to the individual models.
Keywords: machine learning, deep learning, cancer prediction, breast cancer, LSTM, fusion
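A minimal sketch, illustrative only, of score-level fusion as described above: each model outputs a probability score, and a weighted average of the scores followed by a threshold gives the fused decision. The scores, weights, and threshold are placeholder assumptions.

```python
import numpy as np

def score_level_fusion(scores_a, scores_b, w_a=0.5, w_b=0.5, threshold=0.5):
    """Fuse two models' probability scores by weighted averaging, then threshold."""
    fused = w_a * np.asarray(scores_a) + w_b * np.asarray(scores_b)
    return fused, (fused >= threshold).astype(int)

# Hypothetical scores for 4 patients from an LSTM model and an ensemble model.
lstm_scores = [0.82, 0.40, 0.15, 0.67]
ensemble_scores = [0.74, 0.55, 0.22, 0.48]

fused, labels = score_level_fusion(lstm_scores, ensemble_scores, w_a=0.6, w_b=0.4)
print(fused)   # fused probability per patient
print(labels)  # final class decisions
```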
Procedia PDF Downloads 163
20862 Performance Analysis of Shunt Active Power Filter for Various Reference Current Generation Techniques
Authors: Vishal V. Choudhari, Gaurao A. Dongre, S. P. Diwan
Abstract:
A number of reference current generation techniques have been developed for the analysis of shunt active power filters used for load compensation, and the technique has to be chosen depending upon the type of load. In this paper, six reference current generation techniques, viz. instantaneous reactive power theory (IRP), synchronous reference frame theory (SRF), perfect harmonic cancellation (PHC), the unity power factor method (UPF), the self-tuning filter method (STF), and the predictive filtering method (PFM), are compared for different operating conditions. Harmonics are introduced because of non-linear loads in the system, and these harmonics are eliminated using the above techniques. The results and performance of the system are simulated on the MATLAB/Simulink platform, and the system is experimentally implemented using the DS1104 card of a dSPACE system.
Keywords: SAPF, power quality, THD, IRP, SRF, dSPACE module DS1104
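A minimal sketch, illustrative only, of the step common to the instantaneous reactive power (p-q) technique named above: a Clarke transform of three-phase voltages and currents followed by the instantaneous real and imaginary powers. The sample values are assumptions, and the sign convention for q varies in the literature.

```python
import numpy as np

def clarke(a, b, c):
    """Power-invariant Clarke (abc -> alpha-beta) transform."""
    alpha = np.sqrt(2 / 3) * (a - 0.5 * b - 0.5 * c)
    beta = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (b - c)
    return alpha, beta

# One sample of a hypothetical three-phase system (volts, amps).
va, vb, vc = 311.0, -155.5, -155.5
ia, ib, ic = 10.0, -3.0, -7.0

v_alpha, v_beta = clarke(va, vb, vc)
i_alpha, i_beta = clarke(ia, ib, ic)

# Instantaneous real and imaginary (reactive) power of p-q theory.
p = v_alpha * i_alpha + v_beta * i_beta
q = v_beta * i_alpha - v_alpha * i_beta   # sign convention per Akagi; some texts flip it
print(p, q)
```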
Procedia PDF Downloads 591
20861 Implementing Total Quality Management in Higher Education
Authors: Abbos Utkirov
Abstract:
Total Quality Management (TQM) in the context of educational institutions requires careful planning and the implementation of an annual quality program to achieve its vision effectively. By applying TQM concepts, the higher education system can experience significant improvements. This study aims to examine TQM in higher education, focusing on Critical Success Factors (CSF) and their implementation across all areas. The study ultimately concludes that CSF and their execution play a crucial role in higher education institutions. Some institutions have already benefited from TQM methods by dedicating themselves to the system and using it to achieve their objectives. Through this review, recent studies shed light on how the TQM system can employ various strategies and hypotheses to empower employees, foster a positive and supportive environment, and emphasize the importance of enabling students to unleash their full potential.
Keywords: total quality management (TQM), critical success factor (CSF), organizational performance, quality management practices
Procedia PDF Downloads 89
20860 Early Prediction of Diseases in a Cow for Cattle Industry
Authors: Ghufran Ahmed, Muhammad Osama Siddiqui, Shahbaz Siddiqui, Rauf Ahmad Shams Malick, Faisal Khan, Mubashir Khan
Abstract:
In this paper, a machine learning-based approach for the early prediction of diseases in cows is proposed, in which different ML algorithms are applied to extract useful patterns from the available dataset. Technology has changed today's world in every aspect of life, and advanced technologies have likewise been developed in livestock and dairy farming to monitor dairy cows. Dairy cattle monitoring is crucial as it plays a significant role in milk production around the globe. Moreover, it has become necessary for farmers to adopt the latest early prediction technologies as food demand increases with population growth, which highlights the importance of state-of-the-art technologies in analyzing dairy cows' activities. It is not easy to predict the activities of a large number of cows on a farm, so the system makes it very convenient for farmers by providing all the solutions under one roof. The cattle industry's productivity is boosted because any disease on a cattle farm is diagnosed early and hence treated early, based on the machine learning output received. The learning models are already set and interpret the data collected in a centralized system. Basically, different algorithms are run on the received data set to analyze milk quality and to track cows' health, location, and safety. The algorithm draws patterns from the data, which makes it easier for farmers to study any animal's behavioral changes. With the emergence of machine learning algorithms and the Internet of Things, accurate tracking of animals is possible as the rate of error is minimized; as a result, milk productivity is increased. IoT with ML capability has given a new phase to the cattle farming industry by increasing the yield in the most cost-effective and time-saving manner.
Keywords: IoT, machine learning, health care, dairy cows
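A rough illustration only, not from the paper (which does not name a specific algorithm): the sketch below flags unusual daily sensor readings for a cow with an Isolation Forest, the kind of early-warning signal the abstract describes. The feature names, value ranges, and contamination rate are invented assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical daily sensor features per cow: [activity_steps, rumination_min, milk_yield_kg, body_temp_C]
healthy = np.column_stack([
    rng.normal(3000, 300, 200),
    rng.normal(480, 40, 200),
    rng.normal(28, 3, 200),
    rng.normal(38.6, 0.2, 200),
])
model = IsolationForest(contamination=0.05, random_state=0).fit(healthy)

# A new reading with low activity/rumination and a fever-like temperature gets flagged early.
today = np.array([[1500, 300, 20, 40.1]])
print(model.predict(today))  # -1 -> anomalous, worth a veterinary check; 1 -> normal
```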
Procedia PDF Downloads 71
20859 Feasibility Study on Hybrid Multi-Stage Direct-Drive Generator for Large-Scale Wind Turbine
Authors: Jin Uk Han, Hye Won Han, Hyo Lim Kang, Tae An Kim, Seung Ho Han
Abstract:
Direct-drive generators for large-scale wind turbines, which are divided into AFPM (Axial Flux Permanent Magnet) and RFPM (Radial Flux Permanent Magnet) type machines, have attracted interest because of their higher energy density in comparison with gear-train-type generators. Each type of machine provides distinguishable geometrical features, such as a narrow width with a large diameter for the AFPM-type machine and a wide width with a certain diameter for the RFPM-type machine. When the AFPM-type machine is applied, an increase in electric power production through a multi-stage arrangement in the axial direction is easily achieved. On the other hand, the RFPM-type machine can be applied by using its geometric feature of wide width. In this study, a hybrid two-stage direct-drive generator for a 6.2 MW class wind turbine is proposed, in which the two-stage AFPM-type machine for 5 MW is composed of two models arranged in the axial direction with a hollow topology of the rotor with an annular disc, the stator, and the main shaft mounted on coupled slew bearings. In addition, the RFPM-type machine for 1.2 MW is installed in the empty space of the rotor. Analytic results obtained from an electro-magnetic and structural interaction analysis showed that the structural weight of the proposed hybrid two-stage direct-drive generator can be kept to 155 tonf while satisfying the requirements on structural behavior, such as allowable air-gap clearance and strength. Therefore, the 6.2 MW hybrid two-stage direct-drive generator is more competitive than conventional generators. (NRF grant funded by the Korea government MEST, No. 2017R1A2B4005405).
Keywords: AFPM-type machine, direct-drive generator, electro-magnetic analysis, large-scale wind turbine, RFPM-type machine
Procedia PDF Downloads 167
20858 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process, and it gets instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by just deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
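A minimal sketch, not the i2kit tool itself, of the embedding idea worked through in the A/B example above: instances of two communicating microservices are paired onto shared machines so each pair talks over localhost, and non-embedded services get their own machines. The service definitions and naming are assumptions.

```python
from itertools import zip_longest

# Hypothetical declarative definition: each service with its replica count,
# plus the communication edges that should be embedded on one machine.
services = {"A": 2, "B": 2, "C": 1}
embed_edges = [("A", "B")]  # A talks to B heavily -> co-locate their instances

def embedding_plan(services, embed_edges):
    """Assign instance pairs of communicating services to shared machines."""
    plan, machine_id = {}, 0
    placed = set()
    for left, right in embed_edges:
        for i, j in zip_longest(range(services[left]), range(services[right])):
            machine = f"m{machine_id}"
            machine_id += 1
            if i is not None:
                plan.setdefault(machine, []).append(f"{left}{i + 1}")
            if j is not None:
                plan.setdefault(machine, []).append(f"{right}{j + 1}")
        placed.update((left, right))
    for name in services:
        if name not in placed:                 # non-embedded services get their own machines
            for i in range(services[name]):
                plan[f"m{machine_id}"] = [f"{name}{i + 1}"]
                machine_id += 1
    return plan

print(embedding_plan(services, embed_edges))
# e.g. {'m0': ['A1', 'B1'], 'm1': ['A2', 'B2'], 'm2': ['C1']}
```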
Procedia PDF Downloads 203
20857 Urban Planning Compilation Problems in China and the Corresponding Optimization Ideas under the Vision of the Hyper-Cycle Theory
Authors: Hong Dongchen, Chen Qiuxiao, Wu Shuang
Abstract:
Systematic science reveals the complex nonlinear mechanisms of behaviour in urban systems. However, in China, when current city planners face this system, most of them still apply simple linear thinking to an open, complex, giant system. Based on an analysis of the reasons why current urban planning has failed, this paper introduces the hyper-cycle theory, one of the basic theories of systematic science, and proposes optimization ideas for how urban planning compilation should change: from controlling quantities to managing the changes of relationships, from blueprint planning to progressive planning based on nonlinear characteristics, and from management control to dynamic monitoring and feedback.
Keywords: systematic science, hyper-cycle theory, urban planning, urban management
Procedia PDF Downloads 406
20856 EZOB Technology, Biomass Gasification, and Microcogeneration Unit
Authors: Martin Lisý, Marek Baláš, Michal Špiláček, Zdeněk Skála
Abstract:
This paper deals with the gasification of biomass and sorted municipal waste and with cogeneration using a hot-air turbo set. It describes the designed pilot plant with an electrical output of 80 kWe. The generated gas is burned in a secondary combustion chamber located beyond the gas generator. The flue gas flows through a heat exchanger where compressed air is heated and subsequently brought to a micro turbine. In addition to this description, the paper presents our basic experience from operating the pilot plant (operating parameters, contributions, problems during operation, etc.). The principal advantage of the given cycle is that there is no contact between the generated gas and the turbine, so there is no need for the costly and complicated gas cleaning that is the main source of operating problems in direct use in combustion engines: the impurities in the gas cause operational problems due to clogging and tarring of the working surfaces of engines and turbines, which may lead to serious damage to the equipment. Another merit is the compact container package, which makes installation of the facility easier and relatively more mobile. We believe this solution for cogeneration from biomass or waste can be suitable for small industrial or communal applications and for low-output cogeneration.
Keywords: biomass, combustion, gasification, microcogeneration
Procedia PDF Downloads 330
20855 Biomass Gasification and Microcogeneration Unit–EZOB Technology
Authors: Martin Lisý, Marek Baláš, Michal Špiláček, Zdeněk Skála
Abstract:
This paper deals with the gasification of biomass and sorted municipal waste and with cogeneration using a hot-air turbo set. It describes the designed pilot plant with an electrical output of 80 kWe. The generated gas is burned in a secondary combustion chamber located beyond the gas generator. The flue gas flows through a heat exchanger where compressed air is heated and subsequently brought to a micro turbine. In addition to this description, the paper presents our basic experience from operating the pilot plant (operating parameters, contributions, problems during operation, etc.). The principal advantage of the given cycle is that there is no contact between the generated gas and the turbine, so there is no need for the costly and complicated gas cleaning that is the main source of operating problems in direct use in combustion engines: the impurities in the gas cause operational problems due to clogging and tarring of the working surfaces of engines and turbines, which may lead to serious damage to the equipment. Another merit is the compact container package, which makes installation of the facility easier and relatively more mobile. We believe this solution for cogeneration from biomass or waste can be suitable for small industrial or communal applications and for low-output cogeneration.
Keywords: biomass, combustion, gasification, microcogeneration
Procedia PDF Downloads 489
20854 Prediction of Disability-Adjustment Mental Illness Using Machine Learning
Authors: S. R. M. Krishna, R. Santosh Kumar, V. Kamakshi Prasad
Abstract:
Machine learning techniques are applied to the analysis of the impact of mental illness on the burden of disease. The burden is calculated using the disability-adjusted life year (DALY): the DALYs for a disease are the sum of the years of life lost due to premature mortality (YLLs) and the years of healthy life lost due to disability (YLDs). The critical analysis is done based on the data sources, the machine learning techniques, and the feature extraction method. The review is based on major databases. The extracted data are examined using statistical analysis, and machine learning techniques are applied. Predicting the impact of mental illness on the population using machine learning techniques is an alternative to the traditional strategies, which are time-consuming and may not be reliable. The approach requires comprehensive adoption, innovative algorithms, and an understanding of the limitations and challenges. The obtained prediction is a way of understanding the underlying impact of mental illness on the health of the population, and it enables us to estimate healthy life expectancy. The growing impact of mental illness and the challenges associated with the detection and treatment of mental disorders make it necessary to understand its complete effect on the majority of the population.
Procedia PDF Downloads 36
20853 Multi-Objective Optimization of the Thermal-Hydraulic Behavior for a Sodium Fast Reactor with a Gas Power Conversion System and a Loss of off-Site Power Simulation
Authors: Avent Grange, Frederic Bertrand, Jean-Baptiste Droin, Amandine Marrel, Jean-Henry Ferrasse, Olivier Boutin
Abstract:
CEA and its industrial partners are designing a gas Power Conversion System (PCS) based on a Brayton cycle for the ASTRID Sodium-cooled Fast Reactor. Investigations of the control and regulation requirements to operate this PCS during operating, incidental, and accidental transients are necessary to adapt core heat removal. To this aim, we developed a methodology to optimize the thermal-hydraulic behavior of the reactor during normal operations, incidents, and accidents. This methodology consists of a multi-objective optimization for a specific sequence, whose aim is to increase component lifetime by simultaneously reducing several thermal stresses and to bring the reactor into a stable state. Furthermore, the multi-objective optimization complies with safety and operating constraints. Operating, incidental, and accidental sequences use specific regulations to control the thermal-hydraulic reactor behavior, each of which is defined by a setpoint, a controller, and an actuator. In the multi-objective problem, the parameters used to solve the optimization are the setpoints and the settings of the controllers associated with the regulations included in the sequence. In this way, the methodology allows designers to define an optimized and specific control strategy of the plant for the studied sequence and hence to adapt PCS piloting accordingly. The multi-objective optimization is performed by evolutionary algorithms coupled to surrogate models built on variables computed by the thermal-hydraulic system code CATHARE2. The methodology is applied to a loss of off-site power sequence. Three variables are controlled: the sodium outlet temperature of the sodium-gas heat exchanger, the turbomachine rotational speed, and the water flow through the heat sink. These regulations are chosen in order to minimize thermal stresses on the gas-gas heat exchanger, on the sodium-gas heat exchanger, and on the vessel. The main results of this work are optimal setpoints for the three regulations. Moreover, Proportional-Integral-Derivative (PID) controller settings are considered, and efficient actuators used in the controls are chosen through sensitivity analysis results. Finally, the optimized regulation system and the reactor control procedure provided by the optimization process are verified through a direct CATHARE2 calculation.
Keywords: gas power conversion system, loss of off-site power, multi-objective optimization, regulation, sodium fast reactor, surrogate model
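A minimal sketch, illustrative only, of the non-dominated filtering step at the heart of such multi-objective optimization, applied to candidate controller settings scored on two thermal-stress objectives. The candidate values are invented assumptions; the evolutionary algorithm and surrogate models of the paper are not reproduced here.

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated points (all objectives minimized)."""
    n = len(costs)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical candidate regulations scored on two thermal stresses (lower is better):
# columns: [stress on sodium-gas heat exchanger, stress on gas-gas heat exchanger]
costs = np.array([[3.0, 8.0], [4.0, 4.0], [6.0, 3.0], [7.0, 7.0], [5.0, 5.0]])
print(pareto_front(costs))  # [0, 1, 2] -> the trade-off set handed back to the designer
```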
Procedia PDF Downloads 309
20852 An Assessment of Floodplain Vegetation Response to Groundwater Changes Using the Soil & Water Assessment Tool Hydrological Model, Geographic Information System, and Machine Learning in the Southeast Australian River Basin
Authors: Newton Muhury, Armando A. Apan, Tek N. Marasani, Gebiaw T. Ayele
Abstract:
The changing climate has degraded freshwater availability in Australia, influencing vegetation growth to a great extent. This study assessed vegetation responses to groundwater using Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) Normalised Difference Vegetation Index (NDVI) and soil water content (SWC). A hydrological model, SWAT, was set up in a southeast Australian river catchment for groundwater analysis. The model was calibrated and validated against monthly streamflow from 2001 to 2006 and from 2007 to 2010, respectively. The SWAT-simulated soil water content for 43 sub-basins and monthly MODIS NDVI data for three different vegetation types (forest, shrub, and grass) were used in the machine learning tool Waikato Environment for Knowledge Analysis (WEKA) with two supervised machine learning algorithms, i.e., support vector machine (SVM) and random forest (RF). The assessment shows that the responses of the different vegetation types and the soil water content vary between the dry and wet seasons. The WEKA model produced high positive relationships (r = 0.76, 0.73, and 0.81) between the NDVI values of all vegetation in the sub-basins and soil water content (SWC), groundwater flow (GW), and the combination of these two variables, respectively, during the dry season. However, these responses were reduced by 36.8% (r = 0.48) and 13.6% (r = 0.63) against GW and SWC, respectively, in the wet season. Although the rainfall pattern is highly variable in the study area, the summer rainfall is very effective for the growth of the grass vegetation type. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.
Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater
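A rough, illustrative sketch (not the study's WEKA workflow) of the season-split analysis described above: a random forest of NDVI on soil water content and groundwater flow is fitted per season, and the correlation of fitted versus observed NDVI is reported. All values and the 6-month season split are synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic monthly records: soil water content (mm), groundwater flow (mm), NDVI
swc = rng.uniform(50, 250, 120)
gw = rng.uniform(0, 40, 120)
ndvi = 0.2 + 0.002 * swc + 0.004 * gw + rng.normal(0, 0.02, 120)
season = np.tile(["dry"] * 6 + ["wet"] * 6, 10)          # alternating 6-month blocks

for s in ("dry", "wet"):
    mask = season == s
    X, y = np.column_stack([swc[mask], gw[mask]]), ndvi[mask]
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    r = np.corrcoef(rf.predict(X), y)[0, 1]               # correlation of fitted vs observed NDVI
    print(s, round(r, 2))
```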
Procedia PDF Downloads 101
20851 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score, and others are based on lexical similarity or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech tagging (POS). This paper presents a language-independent machine learning framework for classifying pairwise translations. This framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested with this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results when compared against some of the well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM). Better accuracy results are obtained when LSTM layers are used in our schema. In terms of a balance between the results, better accuracy is obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis is carried out. In this context, problems have been identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms (words related by etymology). It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian as compared to Greek, taking into account that the linguistic features employed are language-independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
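A minimal sketch, not the authors' architecture, of the pairwise set-up described above: the two MT outputs and the reference are each represented as fixed-length vectors, concatenated, and fed to a small dense network that predicts which translation is preferred. The embedding dimension, toy data, and preference label used here are assumptions.

```python
import numpy as np
import tensorflow as tf

dim = 64  # assumed sentence-embedding size
n = 200   # toy number of sentence triples

rng = np.random.default_rng(0)
smt_vec = rng.normal(size=(n, dim))   # vector of the SMT output
nmt_vec = rng.normal(size=(n, dim))   # vector of the NMT output
ref_vec = rng.normal(size=(n, dim))   # vector of the reference translation
# Toy label: 1 if the NMT output is closer to the reference than the SMT output.
y = (np.linalg.norm(nmt_vec - ref_vec, axis=1) <
     np.linalg.norm(smt_vec - ref_vec, axis=1)).astype(int)

X = np.concatenate([smt_vec, nmt_vec, ref_vec], axis=1)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(3 * dim,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(NMT preferred over SMT)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the toy data
```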
Procedia PDF Downloads 132
20850 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration
Authors: S. J. Addinell, T. Richard, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising downhole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 metres. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the key technical challenges. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power-law relationship for the particle size distributions. Several percussive drilling parameters, such as RPM, applied fluid pressure, and weight on bit, have been shown to influence the particle size distributions of the cuttings generated. This has a direct influence on other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of the percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear, and the applied weight on bit can influence the oscillation frequency as well. Due to the changing drilling conditions and therefore changing operating parameters, real-time understanding of the natural operating frequency is paramount to achieving system optimisation. Several techniques to determine the oscillating frequency have been investigated and presented. With a conventional top drive drilling rig, spectral analysis of the applied fluid pressure, hydraulic feed force pressure, hold-back pressure, and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. Unfortunately, with the implementation of a coiled tubing drilling rig, which uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and therefore another method must be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, have indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for the determination of the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis
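A minimal sketch, illustrative only, of the spectral-analysis step described above: a Welch power spectral density of a geophone trace, with the dominant peak taken as the hammer's operating frequency. The synthetic signal, sample rate, and hammer frequency are assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 2000.0                      # assumed geophone sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
hammer_hz = 37.0                 # hidden oscillation frequency to recover
signal = np.sin(2 * np.pi * hammer_hz * t) + 0.8 * np.random.default_rng(0).normal(size=t.size)

freqs, psd = welch(signal, fs=fs, nperseg=4096)
print(f"estimated percussive frequency: {freqs[np.argmax(psd)]:.1f} Hz")
```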
Procedia PDF Downloads 230
20849 Operating Characteristics of Point-of-Care Ultrasound in Identifying Skin and Soft Tissue Abscesses in the Emergency Department
Authors: Sathyaseelan Subramaniam, Jacqueline Bober, Jennifer Chao, Shahriar Zehtabchi
Abstract:
Background: Emergency physicians frequently evaluate skin and soft tissue infections in order to differentiate abscess from cellulitis, which helps determine which patients will benefit from incision and drainage. Our objective was to determine the operating characteristics of point-of-care ultrasound (POCUS) compared to clinical examination in identifying abscesses in emergency department (ED) patients with features of skin and soft tissue infections. Methods: We performed a comprehensive search in the following databases: Medline, Web of Science, EMBASE, CINAHL, and Cochrane Library. Trials were included if they compared the operating characteristics of POCUS with clinical examination in identifying skin and soft tissue abscesses. Trials that included patients with oropharyngeal abscesses or those requiring abscess drainage in the operating room were excluded. The presence of an abscess was determined by pus drainage; no pus seen on incision, or resolution of symptoms without pus drainage at follow-up, determined the absence of an abscess. The quality of the included trials was assessed using GRADE criteria. The operating characteristics of POCUS are reported as sensitivity, specificity, positive likelihood ratio (LR+), and negative likelihood ratio (LR-) with the respective 95% confidence intervals (CI). Summary measures were calculated by generating a hierarchical summary receiver operating characteristic (HSROC) model. Results: Out of 3203 references identified, 5 observational studies with 615 patients in aggregate were included (2 adult and 3 pediatric). We rated the quality of 3 trials as low and 2 as very low. The operating characteristics of POCUS and clinical examination in identifying soft tissue abscesses are presented in the table. The HSROC for POCUS revealed a sensitivity of 96% (95% CI = 89-98%), specificity of 79% (95% CI = 71-86%), LR+ of 4.6 (95% CI = 3.2-6.8), and LR- of 0.06 (95% CI = 0.02-0.2). Conclusion: Existing evidence indicates that POCUS is useful in identifying abscesses in ED patients with skin or soft tissue infections.
Keywords: abscess, point-of-care ultrasound, pocus, skin and soft tissue infection
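A minimal sketch, not the HSROC meta-analysis model, showing how the reported operating characteristics relate to a 2x2 contingency table; the counts below are made-up assumptions, not the pooled study data.

```python
def operating_characteristics(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return sensitivity, specificity, lr_pos, lr_neg

# Hypothetical counts: POCUS result vs. pus on incision / resolution at follow-up.
sens, spec, lr_pos, lr_neg = operating_characteristics(tp=230, fp=50, fn=10, tn=190)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```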
Procedia PDF Downloads 369
20848 Analysis of Photic Zone's Summer Period-Dissolved Oxygen and Temperature as an Early Warning System of Fish Mass Mortality in Sampaloc Lake in San Pablo, Laguna
Authors: Al Romano, Jeryl C. Hije, Mechaela Marie O. Tabiolo
Abstract:
The decline in water quality is a major factor in aquatic disease outbreaks and can lead to significant mortality among aquatic organisms. Understanding the relationship between dissolved oxygen (DO) and water temperature is crucial, as these variables directly impact the health, behavior, and survival of fish populations. This study investigated how DO levels, water temperature, and atmospheric temperature interact in Sampaloc Lake to assess the risk of fish mortality. By employing a combination of linear regression models and machine learning techniques, researchers developed predictive models to forecast DO concentrations at various depths. The results indicate that while DO levels generally decrease with depth, the predicted concentrations are sufficient to support the survival of common fish species in Sampaloc Lake during March, April, and May 2025.
Keywords: aquaculture, dissolved oxygen, water temperature, regression analysis, machine learning, fish mass mortality, early warning system
Procedia PDF Downloads 36
20847 Pose-Dependency of Machine Tool Structures: Appearance, Consequences, and Challenges for Lightweight Large-Scale Machines
Authors: S. Apprich, F. Wulle, A. Lechler, A. Pott, A. Verl
Abstract:
Large-scale machine tools for the manufacturing of large work pieces, e.g. blades, casings or gears for wind turbines, feature pose-dependent dynamic behavior. Small structural damping coefficients lead to long decay times for structural vibrations that have negative impacts on the production process. Typically, these vibrations are handled by increasing the stiffness of the structure by adding mass. That is counterproductive to the needs of sustainable manufacturing, as it leads to higher resource consumption both in material and in energy. Recent research activities have led to higher resource efficiency through radical mass reduction that relies on control-integrated active vibration avoidance and damping methods. These control methods depend on information describing the dynamic behavior of the controlled machine tools in order to tune the avoidance or reduction method parameters according to the current state of the machine. The paper presents the appearance, consequences and challenges of the pose-dependent dynamic behavior of lightweight large-scale machine tool structures in production. The paper starts with the theoretical introduction of the challenges of lightweight machine tool structures resulting from reduced stiffness. The statement of the pose-dependent dynamic behavior is corroborated by the results of the experimental modal analysis of a lightweight test structure. Afterwards, the consequences of the pose-dependent dynamic behavior of lightweight machine tool structures for the use of active control and vibration reduction methods are explained. Based on the state of the art on pose-dependent dynamic machine tool models and the modal investigation of an FE-model of the lightweight test structure, the criteria for a pose-dependent model for use in vibration reduction are derived. The description of an approach for a general pose-dependent model of the dynamic behavior of large lightweight machine tools, which provides the necessary input to the aforementioned vibration avoidance and reduction methods to properly tackle machine vibrations, forms the outlook of the paper.
Keywords: dynamic behavior, lightweight, machine tool, pose-dependency
Procedia PDF Downloads 459
20846 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions
Authors: Erva Akin
Abstract:
The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe on their intellectual property rights. In order to overcome the copyright hurdles against the sharing, access, and re-use of data, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then the use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such ‘copy-reliant technologies’ is on understanding language rules, styles, and syntax, and no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. The questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides for a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a ‘reproduction’ in the first place. Nevertheless, the use of machine learning with copyrighted material is difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data. The first is to introduce a broad exception for text and data mining, either mandatorily or for commercial and scientific purposes, or to permit the reproduction of works for non-expressive purposes. The second is that copyright laws should permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from the US into EU law. Both solutions aim to provide more space for AI developers to operate and encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance of general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-creation output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and business operation must be prioritised.
Keywords: artificial intelligence, copyright, data governance, machine learning
Procedia PDF Downloads 83
20845 Modelling the Photovoltaic Pump Output Using Empirical Data from Local Conditions in the Vhembe District
Authors: C. Matasane, C. Dwarika, R. Naidoo
Abstract:
Mathematical analysis of the obtained radiation and the development of a solar photovoltaic (PV) array for groundwater pumping are needed in the rural areas of Thohoyandou, Limpopo Province, for sizing and power performance under the climate conditions of the area. A simple methodology is developed for the direct-coupled solar, controller, and submersible groundwater pump system. The system consists of a PV array, a pump controller, a submerged pump, battery backup, and a charge controller. The theoretical solar radiation is obtained for optimal predictions and system performance in order to achieve the different design and operating parameters. Here, the PV module schematic in a direct current (DC) application is examined to obtain the maximum solar power for water pumping. In this paper, a simple, efficient photovoltaic water pumping system is presented, with its theoretical studies and the mathematical modeling of the photovoltaic (PV) system.
Keywords: renewable energy sources, solar groundwater pumping, theoretical and mathematical analysis of photovoltaic (PV) system, theoretical solar radiation
Procedia PDF Downloads 376
20844 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging
Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa
Abstract:
Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), finding 35 benign and 12 malignant. All MR images were acquired at 1.5 T, with a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and the extraction of 150 radiomic features (30 features at each of the 5 time points), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to learn better from the data. Results: The Naive Bayes algorithm, working on 79 features selected by the TWIST system, proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78%, and a global accuracy of 87% (average values of two training-testing procedures, ab-ba). The results showed that, in the subset of 47 non-specific nodules, the algorithm predicted the outcome of 45 nodules that an expert radiologist could not identify. Conclusion: In this pilot study, we identified a radiomic approach allowing ML systems to perform well in the diagnosis of non-specific nodules at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist is not able to identify the kind of lesion, and it reduces the need for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.
Keywords: breast, machine learning, MRI, radiomics
Procedia PDF Downloads 267
20843 Diagnosis of Induction Machine Faults by DWT
Authors: Hamidreza Akbari
Abstract:
In this paper, for the detection of inclined eccentricity in an induction motor, a time-frequency analysis of the stator startup current is carried out using the discrete wavelet transform. Data are obtained from simulations using the winding function approach. The results show the validity of the approach for detecting the fault and discriminating it from other faults.
Keywords: induction machine, fault, DWT, electric
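A minimal sketch, illustrative only, of a multilevel DWT of a simulated startup current using PyWavelets; the waveform, wavelet family, and decomposition depth are assumptions and do not reproduce the paper's winding-function simulations.

```python
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
# Crude stand-in for a startup current: decaying 50 Hz component plus a small low-frequency ripple.
current = np.exp(-t) * np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 8 * t)

coeffs = pywt.wavedec(current, wavelet="db8", level=6)   # [cA6, cD6, ..., cD1]
for i, c in enumerate(coeffs):
    name = "approximation A6" if i == 0 else f"detail D{len(coeffs) - i}"
    print(f"{name}: energy = {np.sum(c ** 2):.3f}")      # fault signatures appear as energy shifts
```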
Procedia PDF Downloads 350
20842 Diagnosis of Alzheimer Diseases in Early Step Using Support Vector Machine (SVM)
Authors: Amira Ben Rabeh, Faouzi Benzarti, Hamid Amiri, Mouna Bouaziz
Abstract:
Alzheimer's disease affects the brain. It causes degeneration of nerve cells (neurons), in particular cells involved in memory and intellectual functions. Early diagnosis of Alzheimer's disease (AD) raises ethical questions, since there is, at present, no cure to offer to patients, and medicines from therapeutic trials appear to slow the progression of the disease only moderately, with sometimes severe side effects. In this context, the analysis of medical images has become an essential tool for clinical applications because it provides effective assistance both for diagnosis and for therapeutic follow-up. Computer-assisted diagnostic (CAD) systems are one possible solution for efficiently managing these images. In our work, we propose an application to detect Alzheimer's disease. For detecting the disease at an early stage, we use three sections: frontal, to extract the hippocampus (H); sagittal, to analyze the corpus callosum (CC); and axial, to work with the variation features of the cortex (C). Our classification method is based on the support vector machine (SVM). The proposed system yields 90.66% accuracy in the early diagnosis of AD.
Keywords: Alzheimer Diseases (AD), Computer Assisted Diagnostic (CAD), hippocampus, Corpus Callosum (CC), cortex, Support Vector Machine (SVM)
Procedia PDF Downloads 384