Search results for: five-phase asynchronous machine
536 Voting Representation in Social Networks Using Rough Set Techniques
Authors: Yasser F. Hassan
Abstract:
Social networking involves the use of online platforms or websites that enable people to communicate, usually for a social purpose, through a variety of services, most of which are web-based and offer opportunities for interaction over the internet, e.g. via e-mail and instant messaging. This paper analyzes the voting behavior and ratings of judges on popular comments in social networks. While most of the party literature omits the electorate, this paper presents a model in which elites and parties are emergent consequences of the behavior and preferences of voters. Research in artificial intelligence and psychology has provided powerful illustrations of the way in which the emergence of intelligent behavior depends on the development of representational structure. As opposed to the classical voting system (one person, one decision, one vote), a new voting system is designed in which agents with opposed preferences are endowed with a given number of votes to distribute freely among a set of issues. The paper uses ideas from machine learning, artificial intelligence and soft computing to provide a model of the development of voting-system response in a simulated agent. The modeled development process involves (simulated) processes of evolution, learning and representation development. The main value of the model is that it illustrates how simple learning processes may lead to the formation of structure. We employ agent-based computer simulation to demonstrate the formation and interaction of coalitions that arise from individual voter preferences. We are interested in coordinating the local behavior of individual agents to provide appropriate system-level behavior.
Keywords: voting system, rough sets, multi-agent, social networks, emergence, power indices
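A minimal agent-based sketch of the cumulative-voting mechanism described above, in which each agent freely distributes a fixed budget of votes among several issues. The proportional preference model and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import Counter

N_AGENTS, N_ISSUES, VOTE_BUDGET = 100, 5, 10

def simulate(seed=0):
    rng = random.Random(seed)
    tally = Counter()
    for _ in range(N_AGENTS):
        # Each agent holds a private preference weight for every issue...
        prefs = [rng.random() for _ in range(N_ISSUES)]
        total = sum(prefs)
        # ...and splits its vote budget in proportion to preference strength.
        for issue, weight in enumerate(prefs):
            tally[issue] += round(VOTE_BUDGET * weight / total)
    return tally

print(simulate())  # accumulated votes per issue across the population
```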
535 Drilling Quantification and Bioactivity of Machinable Hydroxyapatite: Yttrium Phosphate Bioceramic Composite
Authors: Rupita Ghosh, Ritwik Sarkar, Sumit K. Pal, Soumitra Paul
Abstract:
The use of hydroxyapatite bioceramics as restorative implants is widely known. These materials can be manufactured by a pressing and sintering route to a particular shape. However, machining processes are still a basic requirement to give a near-net shape to those implants and to ensure dimensional and geometrical accuracy. In this context, optimising the machining parameters is an important factor in understanding the machinability of the materials and in reducing the production cost. In the present study, a method has been optimized to produce a true particulate drilled composite of hydroxyapatite and yttrium phosphate. The phosphates are used in varying ratios for a comparative study of the effects on flexural strength, hardness, machining (drilling) parameters and bioactivity. The maximum flexural strength and hardness of the composite that could be attained are 46.07 MPa and 1.02 GPa, respectively. Drilling is done with a conventional radial drilling machine aided with a dynamometer, using high-speed steel (HSS) and solid carbide (SC) drills. The effects of variation in drilling parameters (cutting speed and feed), cutting tool, and batch composition on torque, thrust force and tool wear are studied. It is observed that the thrust force and torque vary greatly with the increase in speed, feed and yttrium phosphate content in the composite. Significant differences in thrust and torque are noticed due to the change of drills as well. The bioactivity study is done in simulated body fluid (SBF) for up to 28 days. The growth of the bone-like apatite becomes denser with the increase in the number of days for all compositions of the composites, and it is comparable to that of pure hydroxyapatite.
Keywords: Bioactivity, Drilling, Hydroxyapatite, Yttrium Phosphate
534 Decision-Making Strategies on Smart Dairy Farms: A Review
Authors: L. Krpalkova, N. O' Mahony, A. Carvalho, S. Campbell, G. Corkery, E. Broderick, J. Walsh
Abstract:
Farm management and operations will drastically change due to access to real-time data, real-time forecasting, and tracking of physical items in combination with Internet of Things developments to further automate farm operations. Dairy farms have embraced technological innovations and procured vast amounts of permanent data streams during the past decade; however, the integration of this information to improve the whole farm-based management and decision-making does not exist. It is now imperative to develop a system that can collect, integrate, manage, and analyse on-farm and off-farm data in real-time for practical and relevant environmental and economic actions. The developed systems, based on machine learning and artificial intelligence, need to be connected for useful output, a better understanding of the whole farming issue, and environmental impact. Evolutionary computing can be very effective in finding the optimal combination of sets of some objects and, finally, in strategy determination. The system of the future should be able to manage the dairy farm as well as an experienced dairy farm manager with a team of the best agricultural advisors. All these changes should bring resilience and sustainability to dairy farming as well as improving and maintaining good animal welfare and the quality of dairy products. This review aims to provide an insight into the state-of-the-art of big data applications and evolutionary computing in relation to smart dairy farming and identify the most important research and development challenges to be addressed in the future. Smart dairy farming influences every area of management, and its uptake has become a continuing trend.
Keywords: big data, evolutionary computing, cloud, precision technologies
533 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes
Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo
Abstract:
Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information that is essential for doctors’ daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes like survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions: first, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by clinicalBERT and Bio_Discharge_Summary_BERT. The model, which is based on clinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embedding based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (Acc 96.67%, AUC 0.98, and precision 0.92).
Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation
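A hedged sketch of the kind of fine-tuning pipeline the abstract describes, using the Hugging Face transformers API. The base checkpoint name, hyperparameters, and toy dataset are assumptions; the authors' exact setup is not specified here.

```python
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# "emilyalsentzer/Bio_ClinicalBERT" is an assumed base checkpoint; the paper's
# exact base models and hyperparameters are not given in the abstract.
BASE = "emilyalsentzer/Bio_ClinicalBERT"
tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=2)

class NotesDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):          # labels: 0 = survived, 1 = deceased
        self.enc = tok(texts, truncation=True, padding=True, max_length=512)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train_ds = NotesDataset(["pt improving, extubated today"], [0])  # toy example
args = TrainingArguments(output_dir="covid_icu_bert", num_train_epochs=3,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```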
532 Accuracy/Precision Evaluation of Excalibur I: A Neurosurgery-Specific Haptic Hand Controller
Authors: Hamidreza Hoshyarmanesh, Benjamin Durante, Alex Irwin, Sanju Lama, Kourosh Zareinia, Garnette R. Sutherland
Abstract:
This study reports on a proposed method to evaluate the accuracy and precision of Excalibur I, a neurosurgery-specific haptic hand controller designed and developed at Project neuroArm. An efficient and successful robot-assisted telesurgery is considerably contingent on how accurately and precisely the haptic hand controller (master/local robot) can interpret the kinematic indices of motion, i.e., position and orientation, from the surgeon’s upper limb to the slave/remote robot. A test rig was designed and manufactured according to standard ASTM F2554-10 to determine the accuracy and precision range of Excalibur I at four different locations within its workspace: the central workspace, extreme forward, far left and far right. The test rig is metrologically characterized by a coordinate measuring machine (accuracy and repeatability < ±5 µm). Only the serial linkage of the haptic device is examined due to the use of the Structural Length Index (SLI). The results indicate that accuracy decreases when moving from the central area of the workspace towards its borders. In a comparative study, Excalibur I performs on par with the PHANToM Premium 3.0 and more accurately/precisely than the PHANToM Premium 1.5. The error in the Cartesian coordinate system shows a dominant component in one direction (δx, δy or δz) for movements on horizontal, vertical and inclined surfaces. The average error magnitude of three attempts is recorded, considering all three error components. This research is the first promising step towards quantifying the kinematic performance of Excalibur I.
Keywords: accuracy, advanced metrology, hand controller, precision, robot-assisted surgery, tele-operation, workspace
531 Enhancing Vehicle Efficiency Through Vapor Absorption Refrigeration Systems
Authors: Yoftahe Nigussie Worku
Abstract:
This paper explores the utilization of vapor absorption refrigeration systems (VARS) as an alternative to conventional vapor compression refrigeration systems (VCRS) in vehicle air conditioning (AC) systems. Currently, most vehicles employ VCRS, which relies on engine power to drive the compressor, leading to additional fuel consumption. In contrast, VARS harnesses low-grade heat, specifically from the exhaust of high-power internal combustion engines, reducing the burden on the vehicle's engine. The historical development of vapor absorption technology is outlined, dating back to Michael Faraday's discovery in 1824 and the subsequent creation of the first vapor absorption refrigeration machine by Ferdinand Carre in 1860. The paper delves into the fundamental principles of VARS, emphasizing the replacement of mechanical processes with physicochemical interactions that utilize heat rather than mechanical work. The study compares the basic concepts of current vapor compression systems with the proposed vapor absorption systems, highlighting the efficiency gains achieved by eliminating the need for engine-driven compressors. The vapor absorption refrigeration cycle (VARC) is detailed, focusing on the generator's role in separating and vaporizing ammonia, chosen for its low-temperature evaporation characteristics. The problem statement underscores the need for increased efficiency in vehicle AC systems beyond the limitations of VCRS. By introducing VARS driven by low-grade heat, the paper advocates a reduction in engine power consumption and, consequently, a decrease in fuel usage. This research contributes to the ongoing efforts to enhance sustainability and efficiency in automotive climate control systems.
Keywords: VCRS, VARS, efficiency, sustainability
530 Modelling of Pipe Jacked Twin Tunnels in a Very Soft Clay
Authors: Hojjat Mohammadi, Randall Divito, Gary J. E. Kramer
Abstract:
Tunnelling and pipe jacking in very soft soils (fat clays), even with an Earth Pressure Balance tunnel boring machine (EPBM), can cause large ground displacements. In this study, the short-term and long-term ground and tunnel response is predicted for twin, pipe-jacked EPBM 3-meter-diameter tunnels with a narrow pillar width. Initial modelling indicated complete closure of the annulus gap at the tail shield onto the centrifugally cast, glass-fiber-reinforced, polymer mortar jacking pipe (FRP). Numerical modelling was employed to simulate the excavation and support installation sequence, examine the ground response during excavation, confirm the adequacy of the pillar width and check the structural adequacy of the installed pipe. In the numerical models, the Mohr-Coulomb constitutive model with the effect of unloading was adopted for the fat clays, while for the bedrock layer, the generalized Hoek-Brown criterion was employed. The numerical models considered explicit excavation sequences and different levels of ground convergence prior to support installation. The well-studied excavation sequences made the analysis possible for this study on a very soft clay; otherwise, obtaining convergence in the numerical analysis would have been impossible. The predicted results indicate that the ground displacements around the tunnel and their effect on the pipe would be acceptable, despite predictions of large zones of plastic behaviour around the tunnels and within the entire pillar between them due to excavation-induced ground movements.
Keywords: finite element modeling (FEM), pipe-jacked tunneling, very soft clay, EPBM
529 Relevance of Brain Stem Evoked Potential in Diagnosis of Central Demyelination in Guillain Barre’ Syndrome
Authors: Geetanjali Sharma
Abstract:
Guillain Barre’ syndrome (GBS) is an autoimmune-mediated demyelinating polyradiculoneuropathy. Clinical features include progressive symmetrical ascending muscle weakness of more than two limbs and areflexia, with or without sensory, autonomic and brainstem abnormalities. The purpose of this study was to determine subclinical neurological changes of the CNS in GBS and to establish the presence of central demyelination in GBS. The study was prospective and conducted in the Department of Physiology, Pt. B. D. Sharma Post-graduate Institute of Medical Sciences, University of Health Sciences, Rohtak, Haryana, India, to find early central demyelination in clinically diagnosed patients of GBS. These patients were referred from the Department of Medicine of our institute to our department for electro-diagnostic evaluation. The study group comprised 40 subjects (20 clinically diagnosed GBS patients and 20 healthy individuals as controls) aged between 6 and 65 years. Brainstem auditory evoked potentials (BAEP) were recorded in both groups using an RMS EMG EP Mark II machine. BAEP parameters included the latencies of waves I to V and the interpeak latencies I-III, III-V and I-V. A statistically significant increase in absolute peak and interpeak latencies in the GBS group as compared with the control group was noted. The evoked potential results reflect impairment of auditory pathways, probably due to focal demyelination in the Schwann-cell-derived myelin sheaths that cover the extramedullary portion of the auditory nerves. Early detection of these subclinical abnormalities is important, as timely intervention reduces morbidity.
Keywords: brainstem, demyelination, evoked potential, Guillain Barre’
528 Surface-Enhanced Raman Spectroscopy on Gold Nanoparticles in Kidney Disease
Authors: Leonardo C. Pacheco-Londoño, Nataly J Galan-Freyle, Lisandro Pacheco-Lugo, Antonio Acosta-Hoyos, Elkin Navarro, Gustavo Aroca-Martinez, Karin Rondón-Payares, Alberto C. Espinosa-Garavito, Samuel P. Hernández-Rivera
Abstract:
At the Life Science Research Center at Simon Bolivar University, a primary focus is the diagnosis of various diseases, and the use of gold nanoparticles (Au-NPs) in diverse biomedical applications is continually expanding. In the present study, Au-NPs were employed as substrates for Surface-Enhanced Raman Spectroscopy (SERS) aimed at diagnosing kidney diseases arising from Lupus Nephritis (LN), preeclampsia (PC), and Hypertension (H). Discrimination models for distinguishing patients with and without kidney diseases were developed from the SERS signals of urine samples using partial least squares-discriminant analysis (PLS-DA). A comparative study of the Raman signals across the three conditions was conducted, leading to the identification of potential metabolite signals. Model performance was assessed through cross-validation and external validation, determining parameters such as sensitivity and specificity; the models showed average values of 0.9 for both parameters. Additionally, a secondary analysis was performed using machine learning (ML) models, wherein different ML algorithms were evaluated for their efficiency. It is also worth highlighting that this collaborative effort involved two university research centers and two healthcare institutions, ensuring ethical treatment of and informed consent for patient samples.
Keywords: SERS, Raman, PLS-DA, kidney diseases
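A minimal PLS-DA sketch of the discrimination step described above, implemented in the common way with scikit-learn's PLSRegression and a 0.5 threshold on the regressed class label. The synthetic spectra, component count, and split are stand-ins, not the study's data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# X: SERS spectra (rows = urine samples, columns = Raman shifts);
# y: 1 = kidney disease, 0 = control. Synthetic stand-in data below.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))
y = rng.integers(0, 2, size=60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

# PLS-DA: regress the class label, then threshold the continuous score at 0.5.
pred = (pls.predict(X_te).ravel() > 0.5).astype(int)
tp = np.sum((pred == 1) & (y_te == 1)); fn = np.sum((pred == 0) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0)); fp = np.sum((pred == 1) & (y_te == 0))
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
```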
527 Improving Efficiency and Effectiveness of FMEA Studies
Authors: Joshua Loiselle
Abstract:
This paper discusses the challenges engineering teams face in conducting Failure Modes and Effects Analysis (FMEA) studies, focusing specifically on improving their efficiency and effectiveness. Modern economic needs and increased business competition require engineers to constantly develop newer and better solutions within shorter timeframes and tighter margins. In addition, documentation requirements for meeting standards/regulatory compliance and customer needs are becoming increasingly complex and verbose. Managing open actions and continuous improvement activities across all projects, product variations, and processes, in addition to daily engineering tasks, is cumbersome, time-consuming, and susceptible to errors, omissions, and non-conformances. FMEA studies are proven methods for improving products and processes while subsequently reducing engineering workload and improving machine and resource availability through a pre-emptive, systematic approach of identifying, analyzing, and improving high-risk components. If implemented correctly, FMEA studies significantly reduce costs and improve productivity. However, the value of an effective FMEA is often shrouded by a lack of clarity and structure, misconceptions, and previous experiences; as such, FMEA studies are frequently grouped with other required information and documented retrospectively in preparation for customer requirements or audits. Performing studies in this way only adds cost to a project and perpetuates the misconception that FMEA studies are not value-added activities. This paper discusses the benefits of effective FMEA studies, the challenges related to conducting them, best practices for efficiently overcoming those challenges via structure and automation, and the benefits of implementing those practices.
Keywords: FMEA, quality, APQP, PPAP
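Part of the automation argued for above can be as simple as computing and ranking risk priority numbers (RPN = severity × occurrence × detection), the classic FMEA ranking metric. A minimal sketch with invented failure modes:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int      # 1-10
    occurrence: int    # 1-10
    detection: int     # 1-10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        # Classic risk priority number used to rank improvement actions.
        return self.severity * self.occurrence * self.detection

modes = [FailureMode("seal leak", 7, 4, 6), FailureMode("motor stall", 9, 2, 3)]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN={m.rpn}")
```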
526 An Explanatory Study Approach Using Artificial Intelligence to Forecast Solar Energy Outcome
Authors: Agada N. Ihuoma, Nagata Yasunori
Abstract:
Artificial intelligence (AI) techniques play a crucial role in predicting the expected energy outcome and in the performance analysis, modeling, and control of renewable energy. Renewable energy is becoming more popular for economic and environmental reasons. In the face of global energy consumption and the increased depletion of most fossil fuels, the world is faced with the challenge of meeting ever-increasing energy demands. Therefore, incorporating artificial intelligence to predict solar radiation outcomes from intermittent sunlight is crucial to enable a balance between the supply of and demand for energy on loads, predict the performance and outcome of solar energy, enhance production planning and energy management, and ensure proper sizing of parameters when generating clean energy. However, one of the major problems in forecasting is the algorithms used to control, model, and predict the performance of energy systems, which are complicated and involve large computing power, differential equations, and time series. Also, unreliable (poor-quality) solar radiation data for a geographical location, as well as insufficiently long series, can be a bottleneck. To overcome these problems, this study employs the Anaconda Navigator (Jupyter Notebook) environment for machine learning, which can combine large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features to predict the performance and outcome of solar energy. This, in turn, enables the balancing of supply and demand on loads as well as enhanced production planning and energy management.
Keywords: artificial intelligence, backward elimination, linear regression, solar energy
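A minimal sketch of the backward-elimination linear regression named in the keywords, using statsmodels p-values to drop the least significant predictor until all remaining ones are significant. The feature names and synthetic data are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in: predict solar output from weather features.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(200, 4)),
                 columns=["irradiance", "temperature", "humidity", "wind"])
y = 3.0 * X["irradiance"] - 0.5 * X["temperature"] + rng.normal(size=200)

def backward_eliminate(X, y, alpha=0.05):
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:       # all remaining predictors significant
            return model, cols
        cols.remove(worst)              # drop the least significant predictor
    return None, []

model, kept = backward_eliminate(X, y)
print("retained features:", kept)
```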
525 Determination of Selected Engineering Properties of Giant Palm Seeds (Borassus Aethiopum) in Relation to Its Oil Potential
Authors: Rasheed Amao Busari, Ahmed Ibrahim
Abstract:
The engineering properties of giant palm seeds are crucial for the rational design of processing and handling systems. This research investigated some engineering properties of giant palm seeds in relation to their oil potential. Ripe giant palm fruits were sourced from parts of Zaria in Kaduna State and Ado Ekiti in Ekiti State, Nigeria. The mesocarps of the collected fruits were removed to obtain the nuts, and the nuts were dried under ambient conditions for several days. The actual moisture content of the nuts at the time of the experiment was determined using a KT100S moisture meter, with values ranging from 17.9% to 19.15%. The physical properties determined were the axial dimensions, geometric mean diameter, arithmetic mean diameter, sphericity, true and bulk densities, porosity, angle of repose, and coefficients of friction. The nuts were measured with a vernier caliper for physical assessment of their sizes. The axial dimensions of 100 nuts were taken; the results show that the sizes range from 7.30 to 9.32 cm for the major diameter, 7.2 to 8.9 cm for the intermediate diameter, and 4.2 to 6.33 cm for the minor diameter. The mechanical properties determined were compressive force, compressive stress, and deformation, both at peak and at break, using an Instron hydraulic universal testing machine. The work also revealed that the giant palm seed can be classified as an oil-bearing seed, giving an oil yield of 18% by the solvent extraction method. The results obtained from this study will help in solving problems of equipment design, handling, and further processing of the seeds.
Keywords: giant palm seeds, engineering properties, oil potential, moisture content, giant palm fruit
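The size-derived properties named above follow standard definitions: geometric mean diameter GMD = (LWT)^(1/3), arithmetic mean diameter AMD = (L + W + T)/3, and sphericity = GMD/L. A small sketch using mid-range values of the reported axial dimensions (illustrative only, not the paper's computed results):

```python
def seed_properties(major, intermediate, minor):  # axial dimensions in cm
    gmd = (major * intermediate * minor) ** (1 / 3)   # geometric mean diameter
    amd = (major + intermediate + minor) / 3          # arithmetic mean diameter
    sphericity = gmd / major                          # standard definition
    return gmd, amd, sphericity

# Mid-range values from the reported axial dimensions (illustrative only).
gmd, amd, phi = seed_properties(8.31, 8.05, 5.27)
print(f"GMD={gmd:.2f} cm, AMD={amd:.2f} cm, sphericity={phi:.2f}")
```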
524 Effect of Shot Peening on the Mechanical Properties for Welded Joints of Aluminium Alloy 6061-T6
Authors: Muna Khethier Abbass, Khairia Salman Hussan, Huda Mohummed AbdudAlaziz
Abstract:
This work aims to study the effect of shot peening on the mechanical properties of welded joints produced by two different welding processes: tungsten inert gas (TIG) welding and friction stir welding (FSW) of aluminum alloy 6061-T6. The TIG arc welding process was carried out on sheets with dimensions of 100 x 50 x 6 mm to obtain welded joints, using an ER4043 (AlSi5) electrode as the filler metal and argon as the shielding gas. The friction stir welding process was carried out on a CNC milling machine with a tool rotational speed of 1000 rpm and a welding speed of 20 mm/min to obtain the same butt welded joints. The welded pieces were tested by X-ray radiography to detect internal defects, and faulty welded pieces were excluded. Tensile test specimens were prepared from the welded joints and the base alloy with dimensions according to ASTM17500 and then subjected to shot peening using steel balls of 0.9 mm diameter for 15 min. All specimens were subjected to Vickers hardness testing and microstructure examination to study the effect of the welding process (TIG and FSW) on the microstructure of the weld zones. Results showed a general decay of the mechanical properties of the TIG and FSW welded joints compared with the base alloy, while the FSW welded joint gave better mechanical properties than the TIG welded joint. This is due to the microstructural changes during the welding process. It was found that surface hardening by shot peening improved the mechanical properties of both welded joints; this is due to the compressive residual stress generated in the weld zones, which was measured using X-ray diffraction (XRD) inspection.
Keywords: friction stir welding, TIG welding, mechanical properties, shot peening
523 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations
Authors: Deepak Singh, Rail Kuliev
Abstract:
This paper highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, "reskilling and upskilling" employees, and establishing robust data management training programs play an essential and integral part in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. By embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in an ever-evolving industry.
Keywords: master data management, IoT, AI&ML, cloud computing, data optimization
522 Effect of High Intensity Ultrasonic Treatment on the Micro Structure, Corrosion and Mechanical Behavior of AC4C Aluminium Alloy
Authors: A.Farrag Farrag, A. M. El-Aziz Abdel Aziz, W. Khlifa Khlifa
Abstract:
Ultrasonic treatment is a promising process in the engineering field nowadays due to its high efficiency and low cost. It enhances the mechanical properties, corrosion resistance, and homogeneity of the microstructure. In this study, the effect of ultrasonic treatment and several casting conditions on the microstructure, hardness and corrosion behavior of AC4C aluminum alloy was examined. Various ultrasonic treatments of the AC4C alloy were carried out to prepare billets for the thixocasting process. Treatment started at about 630 °C, with the melt cooled down under the ultrasonic field; treatment time was about 90 s. A 600-watt ultrasonic system operating at 19.5 kHz with an intensity of 170 W/cm² was used. Billets were reheated to the semisolid state, held (soaked) for 5 minutes at 582 °C using a high-frequency induction system, and then thixocast using a die casting machine. Microstructures of the thixocast parts were studied using optical and SEM microscopes. For comparison, two samples were conventionally cast and poured at 634 °C and 750 °C. The microstructure showed globular, non-dendritic grains for AC4C with the application of UST at 630-582 °C; less dendritic grains when the sample was conventionally cast without UST and poured at 634 °C; and a fully dendritic microstructure when the sample was cast and poured at 750 °C without UST. Ultrasonic treatment during solidification thus has a positive influence on the microstructure, as it produced the finest and most globular grains, and it is therefore expected to increase the mechanical properties of the alloy. Higher values of corrosion resistance and hardness were recorded for the ultrasonically treated sample in comparison to the cast one.
Keywords: ultrasonic treatment, aluminum alloys, corrosion behaviour, mechanical behaviour, microstructure
521 Human Vibrotactile Discrimination Thresholds for Simultaneous and Sequential Stimuli
Authors: Joanna Maj
Abstract:
Body machine interfaces (BMIs) afford users a non-invasive way to coordinate movement. Vibrotactile stimulation has been incorporated into BMIs to provide real-time feedback and guide movement control, benefiting patients with cognitive deficits, such as stroke survivors. To advance research in this area, we examined vibration discrimination thresholds at four body locations to determine suitable application sites for future multi-channel BMIs that use vibration cues to guide movement planning and control. Twelve healthy adults had a pair of small vibrators (tactors) affixed to the skin at each location: forearm, shoulders, torso, and knee. A "standard" stimulus (186 Hz; 750 ms) and "probe" stimuli (11 levels ranging from 100 Hz to 235 Hz; 750 ms) were delivered. Probe and test stimulus pairs could occur sequentially or simultaneously (timing). Participants verbally indicated which stimulus felt more intense. Stimulus order was counterbalanced across tactors and body locations. The probability that each probe stimulus felt more intense than the standard stimulus was computed and fit with a cumulative Gaussian function; the discrimination threshold was defined as one standard deviation of the underlying distribution. Threshold magnitudes depended on stimulus timing and location. Discrimination thresholds were better for stimuli applied sequentially vs. simultaneously at the torso as well as the knee. Thresholds were small (better) and relatively insensitive to timing differences for vibrations applied at the shoulder. BMI applications requiring multiple channels of simultaneous vibrotactile stimulation should therefore consider the shoulder as a deployment site for a vibrotactile BMI interface.
Keywords: electromyography, electromyogram, neuromuscular disorders, biomedical instrumentation, controls engineering
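A minimal sketch of the threshold-estimation step described above: fit a cumulative Gaussian to the probability that the probe feels more intense, and read the discrimination threshold off as one standard deviation. The response proportions below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(f, mu, sigma):
    # P(probe judged more intense) as a cumulative Gaussian of probe frequency.
    return norm.cdf(f, loc=mu, scale=sigma)

# Illustrative data: 11 probe frequencies (Hz) vs. proportion "probe stronger".
freqs = np.linspace(100, 235, 11)
p_probe = np.array([0.02, 0.05, 0.1, 0.2, 0.35, 0.5, 0.66, 0.8, 0.9, 0.95, 0.98])

(mu, sigma), _ = curve_fit(psychometric, freqs, p_probe, p0=[186.0, 30.0])
print(f"PSE={mu:.1f} Hz, discrimination threshold (1 SD)={sigma:.1f} Hz")
```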
520 Modeling of a Pilot Installation for the Recovery of Residual Sludge from Olive Oil Extraction
Authors: Riad Benelmir, Muhammad Shoaib Ahmed Khan
Abstract:
The socio-economic importance of olive oil production is significant in the Mediterranean region, both in terms of wealth and tradition. However, the extraction of olive oil generates huge quantities of wastes that may have a great impact on land and water environments because of their high phytotoxicity. In particular, olive mill wastewater (OMWW) is one of the major environmental pollutants in the olive oil industry. This work aims to design smart and sustainable integrated thermochemical catalytic processing of olive mill residues, by hydrothermal carbonization (HTC) of olive mill wastewater (OMWW) and fast pyrolysis of olive mill wastewater sludge (OMWS). The byproducts resulting from OMWW-HTC treatment are a solid phase enriched in carbon, called biochar, and a liquid phase (residual water with less dissolved organic and phenolic compounds). The HTC biochar can be tested as a fuel in combustion systems and also utilized in high-value applications, such as a soil bio-fertilizer and as a catalyst or catalyst support. The HTC residual water is characterized, treated and used in soil irrigation, since the organic and toxic compounds are reduced below the permitted limits. The project's concept also includes the conversion of OMWS to green diesel through a catalytic pyrolysis process. The green diesel is then used as a biofuel in an internal combustion engine (IC engine) for automotive application in clean transportation. In this work, a theoretical study is also considered for using the heat of the non-condensable pyrolysis gases in a sorption-refrigeration machine to cool the pyrolysis gases and condense the bio-oil vapors.
Keywords: biomass, olive oil extraction, adsorption cooling, pyrolysis
519 Diabetes Mellitus and Blood Glucose Variability Increase the 30-Day Readmission Rate after Kidney Transplantation
Authors: Harini Chakkera
Abstract:
Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission in several patient cohorts. This has not been studied after kidney transplantation. Nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations were retrieved for September 2015 to December 2018. Information was linked to the electronic health record to determine a diagnosis of diabetes mellitus and to extract glucometric and insulin therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set on five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, of whom 224 (22%) experienced 30-day readmission. The machine learning algorithm was able to predict 30-day readmission with an average AUC of 77.3% (95% CI 75.3%-79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and the recipient and donor BMI values as the most predictive risk factors for 30-day readmission. Additionally, significant variations in the therapeutic management of blood glucose by providers were observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing hospital blood glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
Keywords: kidney, transplant, diabetes, insulin
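A hedged sketch of the XGBoost prediction step described in the Methods, with a synthetic stand-in for the glucometric/demographic feature matrix; the real feature set, hyperparameters, and bootstrapping protocol are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Synthetic stand-in for the glucometric / demographic feature matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(1036, 8))        # e.g. LOS, min/max glucose, BMI values
y = rng.binomial(1, 0.22, size=1036)  # 1 = 30-day readmission (~22% base rate)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                    eval_metric="logloss").fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("top features:", np.argsort(clf.feature_importances_)[::-1][:3])
```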
518 Remote Sensing through Deep Neural Networks for Satellite Image Classification
Authors: Teja Sai Puligadda
Abstract:
Detailed satellite images can serve an important role in geographic study. The quantitative and qualitative information provided by satellite and remote sensing images minimizes the complexity of work and saves time. Data and images are captured at regular intervals by satellite remote sensing systems, and the amount of data collected is often enormous and expands rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and researching distinct vegetation types such as agricultural land and forests are all part of satellite image categorization. One of the biggest challenges data scientists face while classifying satellite images is finding the most suitable classification algorithm available that can classify images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning algorithms. As the CNN algorithm gives high accuracy in image recognition problems and automatically detects important features without any human supervision, and the ANN algorithm stores information across the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) airborne dataset for classifying images. Thus, in this project the algorithms ANN and CNN are implemented, evaluated and compared, and their performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is identified.
Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss
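A minimal Keras sketch of a CNN of the kind described, sized for DeepSat SAT-4 patches (28×28 pixels, 4 spectral bands, 4 land-cover classes). The architecture and hyperparameters are illustrative assumptions, not the project's exact network.

```python
from tensorflow.keras import layers, models

# SAT-4 patches are 28x28 with 4 spectral bands (R, G, B, NIR), 4 classes.
model = models.Sequential([
    layers.Input(shape=(28, 28, 4)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=10) on SAT-4 data
```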
517 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs
Authors: H. M. Soroush
Abstract:
The problem of scheduling products and services for on-time delivery is of paramount importance in today’s competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers should frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery in order to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard to solve; however, we explore three scenarios wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and we introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well, yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single-machine models.
Keywords: number of late jobs, scheduling, single server, stochastic
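For scenario (iii), with deterministic processing times and stochastic due dates, the objective can be evaluated in closed form when due dates are normally distributed, since each job contributes w_j·P(due date < completion time). A minimal sketch; the greedy ordering rule is an illustrative heuristic, not the paper's.

```python
from math import erf, sqrt

def phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Each job: (processing time p, weight w, due-date mean mu, due-date std sigma)
jobs = [(4, 3, 10, 2.0), (2, 1, 6, 1.0), (5, 4, 12, 3.0), (3, 2, 7, 1.5)]

def expected_weighted_tardy(seq):
    t, total = 0.0, 0.0
    for p, w, mu, sigma in seq:
        t += p                              # completion time of this job
        total += w * phi((t - mu) / sigma)  # weight * P(due date < completion)
    return total

# Illustrative greedy heuristic: order by due-date mean, ties by weight/p ratio.
heur = sorted(jobs, key=lambda j: (j[2], -j[1] / j[0]))
print("E[weighted tardy jobs] =", round(expected_weighted_tardy(heur), 3))
```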
516 A High Content Screening Platform for the Accurate Prediction of Nephrotoxicity
Authors: Sijing Xiong, Ran Su, Lit-Hsin Loo, Daniele Zink
Abstract:
The kidney is a major target for toxic effects of drugs, industrial and environmental chemicals and other compounds. Typically, nephrotoxicity is detected late during drug development, and regulatory animal models could not solve this problem. Validated or accepted in silico or in vitro methods for the prediction of nephrotoxicity are not available. We have established the first and currently only pre-validated in vitro models for the accurate prediction of nephrotoxicity in humans and the first predictive platforms based on renal cells derived from human pluripotent stem cells. In order to further improve the efficiency of our predictive models, we recently developed a high content screening (HCS) platform. This platform employed automated imaging in combination with automated quantitative phenotypic profiling and machine learning methods. 129 image-based phenotypic features were analyzed with respect to their predictive performance in combination with 44 compounds with different chemical structures that included drugs, environmental and industrial chemicals and herbal and fungal compounds. The nephrotoxicity of these compounds in humans is well characterized. A combination of chromatin and cytoskeletal features resulted in high predictivity with respect to nephrotoxicity in humans. Test balanced accuracies of 82% or 89% were obtained with human primary or immortalized renal proximal tubular cells, respectively. Furthermore, our results revealed that a DNA damage response is commonly induced by different PTC-toxicants with diverse chemical structures and injury mechanisms. Together, the results show that the automated HCS platform allows efficient and accurate nephrotoxicity prediction for compounds with diverse chemical structures.
Keywords: high content screening, in vitro models, nephrotoxicity, toxicity prediction
515 Characteristics of the Particle Size Distribution and Exposure Concentrations of Nanoparticles Generated from the Laser Metal Deposition Process
Authors: Yu-Hsuan Liu, Ying-Fang Wang
Abstract:
The objectives of the present study are to characterize nanoparticles generated from the laser metal deposition (LMD) process and to estimate the particle concentrations deposited in the head (H), tracheobronchial (TB) and alveolar (A) regions of the respiratory tract, respectively. The studied LMD chamber (3.6 m × 3.8 m × 2.9 m) is installed with a robotic laser metal deposition machine. A direct-reading scanning mobility particle sizer (SMPS, Model 3082, TSI Inc., St. Paul, MN, USA) was used to conduct static sampling inside the chamber for nanoparticle number concentration and particle size distribution measurements. The SMPS recorded the particle number concentration every 3 minutes and covered diameters from 11 to 372 nm, with the aerosol and sheath flow rates set at 0.6 and 6 L/min, respectively. The resultant size distributions were used to predict the deposition of nanoparticles in the H, TB, and A regions of the respiratory tract using the UK National Radiological Protection Board’s (NRPB’s) LUDEP software. Results show that the number concentrations of nanoparticles in the indoor background and in the LMD chamber were 4.8×10³ and 4.3×10⁵ #/cm³, respectively. The nanoparticles emitted from the LMD process followed a unimodal distribution with a number median diameter (NMD) of 142 nm and a geometric standard deviation (GSD) of 1.86. The fraction of nanoparticles deposited in the alveolar region (A: 69.8%) was higher than in the other two regions (H: 10.9%; TB: 19.3%). This study conducted static sampling to measure the nanoparticles in the LMD process, and the results show that the fraction of particles deposited in the A region was the highest. Therefore, the characteristics of nanoparticles emitted from the LMD process could provide valuable scientific evidence for exposure assessments in the future.
Keywords: exposure assessment, laser metal deposition process, nanoparticle, respiratory region
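The reported unimodal distribution can be reconstructed as a lognormal with the given NMD and GSD; a small sketch below. The regional H/TB/A deposition itself was computed with the NRPB LUDEP software, which is not reproduced here.

```python
import numpy as np
from scipy.stats import lognorm

NMD, GSD = 142.0, 1.86          # number median diameter (nm), geometric std dev
dist = lognorm(s=np.log(GSD), scale=NMD)

# Fraction of particle number within the SMPS measurement window (11-372 nm).
frac_measured = dist.cdf(372) - dist.cdf(11)
print(f"fraction of distribution inside 11-372 nm: {frac_measured:.2%}")
```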
514 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models
Authors: Ethan James
Abstract:
Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNN) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model that analyzes these retinal images with a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture that was trained on a dataset of over 20,000 real-world OCT images, producing a robust model that utilizes residual neural networks with cyclic pooling. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment, which results in improved post-treatment outcomes.
Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina
513 Performance Analysis of Pumps-as-Turbine Under Cavitating Conditions
Authors: Calvin Stephen, Biswajit Basu, Aonghus McNabola
Abstract:
Market liberalization in the power sector has led to the emergence of micro-hydropower schemes that depend on the use of pumps-as-turbines in applications that were not considered potential hydropower sites in earlier years. These applications include energy recovery in water supply networks, sewage systems, irrigation systems, alcohol breweries, underground mining and desalination plants. As a result, there has been an accelerated adoption of pumps-as-turbine technology due to the economic advantages it presents in comparison to conventional turbines in the micro-hydropower space. The performance of these machines under cavitation conditions, however, is not well understood, as there is a deficiency of knowledge in the literature focused on their turbine mode of operation. In hydraulic machines, cavitation is a common occurrence that needs to be understood in order to safeguard them and prolong their operating life. The overall purpose of this study is to investigate the effects of cavitation on the performance of a pumps-as-turbine system over its entire operating range. At various operating speeds, the cavitating region is identified experimentally while monitoring the effects this has on the power produced by the machine. Initial results indicate the occurrence of cavitation at higher flow rates for lower operating speeds and at lower flow rates for higher operating speeds. This implies that for cavitation-free operation, low-speed pumps-as-turbines should be used for low flow rate conditions, whereas for sites with higher flow rates, high-speed turbines should be adopted. Such a complete understanding of pumps-as-turbine suction performance can help avoid cavitation-induced failures and hence improve the reliability of micro-hydropower plants.
Keywords: cavitation, micro-hydropower, pumps-as-turbine, system design
512 Applications of Evolutionary Optimization Methods in Reinforcement Learning
Authors: Rahul Paul, Kedar Nath Das
Abstract:
The paradigm of Reinforcement Learning (RL) has become prominent in training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize the policy of an agent in order to maximize the cumulative reward it receives throughout a given period. Nevertheless, the process of optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a supplementary approach to tackle these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs presents an opportunity for significant advancements in autonomous decision-making systems. The convergence of these two fields has the potential to have a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL. Taking advantage of evolutionary principles enables RL algorithms to effectively traverse extensive action spaces and discover optimal solutions within intricate environments. Moreover, this paper emphasizes the practical implementations of EOMs in the field of RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios. The article highlights the utilization of EOMs in facilitating RL agents to effectively adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods
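A minimal sketch of one EOM applied to RL: a natural-evolution-strategies style gradient estimate over perturbed policy parameters. The toy reward function stands in for episode returns from a real environment; it is an illustration of the technique, not a specific method from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_return(theta):
    # Toy stand-in environment: reward peaks when the linear policy
    # parameters match a hidden target vector.
    target = np.array([0.5, -1.2, 2.0])
    return -np.sum((theta - target) ** 2)

def evolve(pop_size=50, sigma=0.1, lr=0.02, iters=300):
    theta = np.zeros(3)
    for _ in range(iters):
        eps = rng.normal(size=(pop_size, theta.size))  # population of perturbations
        rewards = np.array([episode_return(theta + sigma * e) for e in eps])
        advantage = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # Natural-evolution-strategies style gradient estimate.
        theta += lr / (pop_size * sigma) * eps.T @ advantage
    return theta

print("learned parameters:", np.round(evolve(), 2))
```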
511 Energy Production with Closed Methods
Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani
Abstract:
In Kosovo, the problem with the electricity supply is huge, and supply does not meet the demands of consumers. Most of the energy is produced by older thermal power plants, which are regarded as big environmental polluters. Our experiment is based on producing electricity by a closed method that avoids environmental pollution, using as fuel waste that is otherwise considered a pollutant. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. In the experiment, a production line was designed to provide electricity and central heating at the same time. The results are the benefits of electricity as well as the release of heat for heating, with minimal expense and with the release of 0% of gases into the atmosphere. In the experiment, coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. The method utilized in the experiment allows the release of gas through pipes and filters during the top-to-bottom combustion of the raw material in the boiler, followed by filtration of the gas through wood-processing waste (sawdust). During this process the final product, gas, is obtained; it passes through the carburetor, which enables the gas combustion process, drives the internal combustion engine and the generator, and produces electricity without releasing gases into the atmosphere. The results obtained show that the system provides energy stability without environmental pollution from toxic substances and waste, and with low production costs. From the final results, it follows that in the case of coal fuel, more electricity and a higher heat release were obtained, followed by plastic waste, which also gave good results. The results obtained during these experiments prove that the current problems of a lack of electricity and heating can be met at a lower cost while keeping a clean environment and managing waste.
Keywords: energy, heating, atmosphere, waste, gasification
510 Analyzing the Influence of Hydrometeorological Extremes, Geological Setting, and Social Demographics on Public Health
Authors: Irfan Ahmad Afip
Abstract:
The main research objective is to accurately identify the possible severity of a leptospirosis outbreak in a certain area based on input features to a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by features such as social demographics and hydrometeorological extremes. If the occurrence of an outbreak is subject to these features, then the epidemic severity of an area will differ depending on its environmental setting, because the features influence the possibility and severity of an outbreak. Specifically, the research objectives were three-fold, namely: (a) to identify the relevant multivariate features and visualize the patterns in the data, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model with machine learning algorithms. Several secondary data features were collected for locations in the state of Negeri Sembilan, Malaysia, based on their likely relevance for determining outbreak severity. The relevant features then become inputs to a multivariate regression model; a linear regression model is a simple and quick solution for creating prognostic capabilities, and a multivariate regression model has proven to give more precise prognoses than univariate models. The expected outcome of this research is to establish a correlation between social demographic and hydrometeorological features and the Leptospira bacteria; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem. The relationship established can help the health department or urban planners to inspect and prepare for future outcomes in event detection and system health monitoring.
Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression
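A hedged sketch of the modelling step, assuming (as an illustration only) a binary outbreak label and a logistic variant of the proposed multivariate regression; the actual features, labels, and model form belong to the study.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for district-level features (hydrometeorological + demographic).
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "rainfall_mm": rng.gamma(5, 40, 300),
    "flood_days": rng.poisson(2, 300),
    "pop_density": rng.gamma(3, 500, 300),
})
outbreak = (0.004 * df["rainfall_mm"] + 0.3 * df["flood_days"]
            + rng.normal(0, 1, 300) > 2).astype(int)

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, df, outbreak, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean().round(3))
```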
509 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks
Authors: Adrian Ionita, Ana-Maria Ghimes
Abstract:
A lack of features and design, together with a failure to promote integrated booking applications, are some of the reasons why most online travel platforms only automate old booking processes, limiting themselves to integrating a small number of services without addressing the user experience. This paper presents a practical study of how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and their 'friends' in the social network context can be considered input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and to deliver more features to users. The paper aims to highlight a broader range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific contribution remains the technical implementation of the neural network solution. The choice of technologies is also motivated by the initiative of some online booking providers that have made public the fact that they use neural-network-related designs. These companies use similar big-data technologies to provide recommendations for hotels, restaurants, and cinemas, with a neural-network-based recommendation engine building a user 'DNA profile'. This implementation of the 'profile', a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.
Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling
508 Automation of Pneumatic Seed Planter for System of Rice Intensification
Authors: Tukur Daiyabu Abdulkadir, Wan Ishak Wan Ismail, Muhammad Saufi Mohd Kassim
Abstract:
Seed singulation and accuracy in seed spacing are the major challenges associated with the adoption of mechanical seeders for the system of rice intensification. In this research, the metering system of a pneumatic planter was modified and automated for increased precision to meet the demands of the system of rice intensification (SRI). The chain-and-sprocket mechanism of a conventional vacuum planter was replaced with an electromechanical system made up of a set of servo motors, a limit switch, a microcontroller and a wheel divided into 10 equal angles. The circumference of the planter wheel was determined, based on which the seed spacing was computed and mapped to the angles of the metering wheel. A program was then written and uploaded to an Arduino microcontroller, which automatically turns the seed plates for seeding upon covering the required distance. The servo motor was calibrated with the aid of LabVIEW. The machine was then calibrated using a grease belt, varying the servo speed through voltage variation from 37 rpm to 47 rpm until an optimum value of 40 rpm was obtained at a forward speed of 5 kilometers per hour. A pressure of 1.5 kPa was found to be optimum, under which no skips or doubles were recorded. Precision in spacing (coefficient of variation), miss index, multiple index, doubles and skips were investigated. No skips or doubles were recorded at either the laboratory or field level. The operational parameters under consideration were evaluated both in the laboratory and in the field. Even though there was little variation between the laboratory and field values of precision in spacing, multiple index and miss index, the difference is not significant, as both laboratory and field values fall within the acceptable range.
Keywords: automation, calibration, pneumatic seed planter, system of rice intensification
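The spacing-to-angle mapping described above is simple arithmetic: ground travel per metering step is the wheel circumference divided by the 10 angles. A sketch with an assumed wheel diameter and target spacing (neither value is given in the abstract):

```python
import math

WHEEL_DIAMETER_M = 0.60          # assumed ground-wheel diameter (not given above)
N_ANGLES = 10                    # metering wheel divided into 10 equal angles

circumference = math.pi * WHEEL_DIAMETER_M
distance_per_angle = circumference / N_ANGLES   # ground travel per metering step

# SRI typically plants on a square grid, e.g. 0.25 m spacing (assumed here).
target_spacing = 0.25
steps_per_seed = target_spacing / distance_per_angle
print(f"travel per metering step: {distance_per_angle:.3f} m;"
      f" steps per seed drop: {steps_per_seed:.2f}")
```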
507 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
Computer numerical control (CNC) machining plays a crucial role in the manufacturing process. CNC enables precise machinery control through computer programs, automating the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, so substantial time is needed to reconfigure production lines when producing different products, impacting overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. By employing this method, we enhance the configuration efficiency of CNC production lines and establish an adaptive capability that allows the production line to respond promptly to changes in demand. This minimizes the production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demands.
Keywords: evolutionary algorithms, multi-objective optimization, pareto optimality, layout optimization, operations sequence
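A minimal sketch of the Pareto-dominance test at the core of NSGA-II for the two objectives named above (demand completion time minimized, average machine utilization maximized). A full NSGA-II adds non-dominated sorting into fronts, crowding distance, and genetic operators; the candidate layouts below are invented.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Layout:
    name: str
    completion_time: float    # objective 1: minimize
    avg_utilization: float    # objective 2: maximize

def dominates(a: Layout, b: Layout) -> bool:
    # a dominates b if it is no worse in both objectives and better in one.
    no_worse = (a.completion_time <= b.completion_time and
                a.avg_utilization >= b.avg_utilization)
    better = (a.completion_time < b.completion_time or
              a.avg_utilization > b.avg_utilization)
    return no_worse and better

def pareto_front(pop: List[Layout]) -> List[Layout]:
    return [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]

candidates = [Layout("A", 120.0, 0.71), Layout("B", 135.0, 0.83),
              Layout("C", 118.0, 0.64), Layout("D", 140.0, 0.80)]
print([l.name for l in pareto_front(candidates)])   # non-dominated layouts
```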