Search results for: artificial intelligence in medicine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4084

2404 Micromechanical Compatibility Between Cells and Scaffold Mediates the Efficacy of Regenerative Medicine

Authors: Li Yang, Yang Song, Martin Y. M. Chiang

Abstract:

Objective: To experimentally substantiate that micromechanical compatibility between cells and scaffold, in the regenerative medicine approach for restoring bone volume, is essential for phenotypic transitions. Methods: Nanofibrous scaffolds were fabricated by electrospinning to host dental follicle stem cells (DFSCs). Blends (50:50) of polycaprolactone (PCL) and silk fibroin (SF), mixed with various contents of cellulose nanocrystals (CNC, up to 5% by weight), were electrospun to prepare nanofibrous scaffolds with a heterogeneous microstructure in terms of fiber size. Colloidal-probe atomic force microscopy (AFM) and conventional uniaxial tensile tests measured scaffold stiffness at the micro- and macro-scale, respectively. The cell elastic modulus and the cell-scaffold adhesive interaction (i.e., a chemical function) were examined through single-cell force spectroscopy using AFM. Quantitative reverse transcription-polymerase chain reaction (qRT-PCR) was used to determine whether mechanotransduction signals (i.e., Yap1, Wwr2, Rac1, MAPK8, Ptk2 and Wnt5a) are upregulated by scaffold stiffness at the micro-scale (cellular scale). Results: The presence of CNC produces fibrous scaffolds with a bimodal distribution of fiber diameter. This structural heterogeneity, which is CNC-composition dependent, markedly modulates the mechanical functionality of the scaffolds at the microscale and macroscale simultaneously, but not the chemical functionality (i.e., only a single material property is varied). In vitro, osteogenic differentiation and the expression of genes associated with mechano-sensitive cell markers correlate with the degree of micromechanical compatibility between DFSCs and the scaffold. Conclusion: Cells require compliant scaffolds to encourage energetically favorable interactions for mechanotransduction, which are converted into changes in cellular biochemistry that direct phenotypic evolution. Micromechanical compatibility is therefore important to the efficacy of regenerative medicine.

Keywords: phenotype transition, scaffold stiffness, electrospinning, cellulose nanocrystals, single-cell force spectroscopy

Procedia PDF Downloads 187
2403 CAM Use and Its Association with Quality of Life in a Sample of Lebanese Breast Cancer Patients: A Cross-Sectional Study

Authors: Farah Naja, Romy Abi Fadel, Yasmin Aridi, Aya Zarif, Dania Hariri, Mohammad Alameddine, Anas Mugharbel, Maya Khalil, Zeina Nahleh, Arafat Tfayli

Abstract:

The objective of this study is to assess the prevalence and determinants of CAM use among breast cancer patients in Beirut, Lebanon. A secondary objective is to evaluate the association between CAM use and quality of life (QOL). A cross-sectional survey was conducted on 180 breast cancer patients recruited from two major referral centers in Beirut. In a face-to-face interview, participants completed a questionnaire comprising three sections: socio-demographic and lifestyle characteristics, breast cancer condition, and CAM use. QOL was assessed using the Arabic version of the FACT-B. The prevalence of CAM use since diagnosis was 40%. CAM use was negatively associated with age and with treatment at a philanthropic hospital, and positively associated with having an advanced stage of disease. The most commonly used CAM was 'special food', followed by 'herbal teas'. Only 4% of CAM users cited health care professionals as influencing their choice of CAM. One in four patients disclosed CAM use to their treating physician. There was no significant association between CAM use and QOL. The use of CAM therapies among breast cancer patients is prevalent in Lebanon. Efforts should be dedicated to educating physicians to discuss CAM use with their patients and to advising patients to disclose their use to their physicians.

Keywords: breast cancer, complementary medicine, alternative medicine, Lebanon, quality of life

Procedia PDF Downloads 511
2402 GC-MS Analysis of Essential Oil from Satureja hispidula: A Medicinal Plant from Algeria

Authors: Habiba Rechek, Ammar Haouat, Ratiba Mekkiou, Diana C. G. A. Pinto, Artur M. S. Silva

Abstract:

Satureja hispidula is an aromatic and medicinal plant of the family Lamiaceae native to Algeria, related to mint and thyme. Although it is less known to the general public than these more famous relatives, the species has many therapeutic properties that have been exploited for centuries in the traditional medicine of some regions. For generations, Satureja hispidula has been used in traditional medicine to treat various ailments, including respiratory diseases and diabetes. Its aroma, often described as close to that of mint, gives it a special interest in aromatherapy. Given the growing interest in the beneficial properties of plant-derived essential oils, the aim of this study is to analyze the chemical composition of S. hispidula essential oil by gas chromatography coupled with mass spectrometry (GC-MS). Identifying the main constituents of the essential oil will allow a better understanding of its chemical nature and an exploration of its potential for culinary and therapeutic applications. The analysis reveals a composition rich in 83 compounds, with menthone, pulegone and piperitone as the main constituents. This GC-MS analysis provides valuable information about the chemical nature of the oil. However, more in-depth studies are needed to explore its potentially health-enhancing properties.

Keywords: satureja hispidula, GC-MS, essential oil, menthone, pulegone

Procedia PDF Downloads 27
2401 Phytochemical Investigation and Diuretic Activity of the Palestinian Crataegus aronia in Mice Using an Aqueous Extract

Authors: Belal Rahhal, Isra Taha, Insaf Najajreh, Waleed Basha, Hamzeh Alzabadeh, Ahed Zyoud

Abstract:

Purpose: Throughout history, various natural materials have been used as remedies for the treatment of disease, and a vastly growing, renewed interest in herbal medicine is being witnessed globally. In Palestinian folk medicine, Crataegus aronia is used as a diuretic and for the treatment of hypertension. This study aimed to assess the preliminary phytochemical properties and the diuretic effect of the aqueous extract of this plant in mice after intraperitoneal administration. Methods: In this experimental trial, mice (n=8, male, CD-1, weight range 25-30 g) were divided into two groups of four. The first group received the plant extract (500 mg/kg), and the second received normal saline as a negative control. Urine output and electrolyte content were then quantified for up to 6 hours and compared between groups. Results: Preliminary phytochemical screening revealed the presence of tannins, alkaloids and flavonoids as major phytoconstituents of the aqueous extract. Significant diuresis was noted in mice that received the aqueous extract of Crataegus aronia compared to controls (p < 0.05). Moreover, the aqueous extract had an acidic pH and produced a mild increase in electrolyte excretion (Na, K). Conclusions: Our results revealed that Crataegus aronia aqueous extract has a potential diuretic effect. Further studies are needed to evaluate this diuretic effect in the relief of diseases characterized by volume overload.

Keywords: medicinal plants, diuretic activity, mice, C. aronia, furosemide, phytochemical investigation

Procedia PDF Downloads 195
2400 The National Socialist and Communist Propaganda Activities in the Turkish Press during World War II

Authors: Asuman Tezcan Mirer

Abstract:

This paper discusses National Socialist and communist propaganda struggles in the Turkish press during World War II. It aspires to analyze how government agencies directed and organized the Turkish press to prevent the "5th column" from influencing public opinion. During the Second World War, one of the most emphasized issues was propaganda and how Turkish citizens could be protected from the effects of disinformation. Istanbul became a significant headquarters for the belligerent countries' intelligence services, which were involved in both gathering intelligence and disseminating propaganda. The main motive of National Socialist propaganda in Turkey was anti-communism. Subsidizing certain magazines, controlling German companies' advertisements and the paper trade, spreading rumors, printing propaganda brochures, and showing German propaganda films are some of the tactics the National Socialists applied before and during the war. The communists, on the other hand, targeted Turkish racist/ultra-nationalist groups and their publications, which were influenced by the Nazi regime. They were also involved in distributing Marxist publications, printing brochures, and broadcasting radio programs. This study consists of three parts. The first describes the National Socialist and communist propaganda activities in Turkey during the Second World War. The second addresses the debates over propaganda among selected newspapers representing different ideologies. The last analyzes the Turkish government's press policy and explains why the government allowed ideological debates in the press despite its authoritarian press policy and its stance of "active neutrality" in the international arena.

Keywords: propaganda, press, 5th column, World War II, Turkey

Procedia PDF Downloads 100
2399 Computational Model of Human Cardiopulmonary System

Authors: Julian Thrash, Douglas Folk, Michael Ciracy, Audrey C. Tseng, Kristen M. Stromsodt, Amber Younggren, Christopher Maciolek

Abstract:

The cardiopulmonary system comprises the heart, the lungs, and many dynamic feedback mechanisms that control its function based on a multitude of variables. The next generation of cardiopulmonary medical devices will involve adaptive control and smart pacing techniques. However, testing these smart devices on living systems may be unethical and exceedingly expensive. As a solution, a comprehensive computational model of the cardiopulmonary system was implemented in Simulink. The model contains over 240 state variables and over 100 equations previously described in a series of published articles. Simulink was chosen because of the ease with which machine learning elements can be introduced. Initial results indicate that physiologically correct waveforms of pressures and volumes were obtained in simulation. With the development of a comprehensive computational model, we hope to advance predictive medicine by applying our research to the initial stages of smart devices. After validation, we will introduce and train reinforcement learning agents using the cardiopulmonary model to assist in adaptive control system design. With our cardiopulmonary model, we will accelerate the design and testing of smart and adaptive medical devices to better serve those with cardiovascular disease.
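The lumped-parameter idea behind such models can be illustrated with a minimal sketch: a two-element Windkessel approximation of arterial pressure, integrated with explicit Euler. This is not the published 240-state model; all parameter values (heart rate, peripheral resistance, arterial compliance, stroke volume) are illustrative assumptions.

```python
import math

def windkessel_pressure(heart_rate=75.0, r_peripheral=1.0, c_arterial=1.2,
                        dt=0.001, n_beats=5):
    """Two-element Windkessel: C * dP/dt = Q_in(t) - P / R.

    Q_in is a half-sine ejection pulse during systole and zero in diastole.
    Parameter values are illustrative only (mmHg, s, mL units assumed).
    """
    period = 60.0 / heart_rate            # seconds per beat
    t_sys = 0.3 * period                  # assumed systolic ejection time
    stroke_volume = 70.0                  # mL per beat, assumed
    # Peak inflow chosen so the half-sine pulse integrates to the stroke volume.
    q_peak = stroke_volume * math.pi / (2.0 * t_sys)
    p = 80.0                              # initial arterial pressure, mmHg
    trace = []
    t = 0.0
    while t < n_beats * period:
        phase = t % period
        q_in = q_peak * math.sin(math.pi * phase / t_sys) if phase < t_sys else 0.0
        p += dt * (q_in - p / r_peripheral) / c_arterial  # explicit Euler step
        trace.append(p)
        t += dt
    return trace

trace = windkessel_pressure()
print(round(min(trace), 1), round(max(trace), 1))
```

With these assumed values the simulated pressure settles into a pulsatile waveform whose mean is set by flow times resistance and whose pulse amplitude is roughly stroke volume over compliance, which is the qualitative behavior a full cardiopulmonary model reproduces in far greater detail.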

Keywords: adaptive control, cardiopulmonary, computational model, machine learning, predictive medicine

Procedia PDF Downloads 178
2398 A Good Start for Digital Transformation of the Companies: A Literature and Experience-Based Predefined Roadmap

Authors: Batuhan Kocaoglu

Abstract:

Nowadays, digital transformation is a hot topic in both service and production businesses. Companies that want to stay alive in the coming years must change how they do business. Industry leaders have started to augment ERP (Enterprise Resource Planning) backbone technologies with digital advances such as analytics, mobility, sensor-embedded smart devices, AI (Artificial Intelligence) and more. Selecting the appropriate technology for the business problem at hand is likewise a hot topic. To operate in the modern environment and fulfill rapidly changing customer expectations, a digital transformation of the business is required, one that changes the way the business runs. Although the term digital transformation is trendy, the literature is limited and covers only the philosophy rather than a solid implementation plan. Current studies urge firms to start their digital transformation, but few tell them how. Huge investments, blurry definitions and vague concepts scare companies. The aim of this paper is to solidify the steps of digital transformation and offer a roadmap for companies and academicians. The proposed roadmap is developed from insights gathered through a literature review, semi-structured interviews, and expert views to explore and identify the crucial steps. We introduce the roadmap in the form of 8 main steps: Awareness; Planning; Operations; Implementation; Go-live; Optimization; Autonomation; Business Transformation; comprising a total of 11 sub-steps with examples. The study also emphasizes four dimensions of digital transformation: readiness assessment; building organizational infrastructure; building technical infrastructure; and maturity assessment. Finally, the roadmap maps the steps to the three main terms used in the digital transformation literature: Digitization; Digitalization; and Digital Transformation.
The resulting model shows that 'business process' and 'organizational issues' should be resolved before technology decisions and 'digitization'. Companies can start their journey with these solid steps, using the proposed roadmap to increase the success of their project implementations. The roadmap is also adaptable to related Industry 4.0 and enterprise application projects, and it will be useful for companies in persuading top management to invest. Our results can serve as a baseline for further research on readiness assessment and maturity assessment.

Keywords: digital transformation, digital business, ERP, roadmap

Procedia PDF Downloads 168
2397 Etiological Factors for Renal Cell Carcinoma: Five-Year Study at Mayo Hospital Lahore

Authors: Muhammad Umar Hassan

Abstract:

Renal cell carcinoma is a subset of kidney cancer that arises in the lining of the distal convoluted tubule (DCT) and presents in parenchymal tissue. Diagnosis is based on laboratory reports, including urinalysis, renal function tests (RFTs), and electrolyte balance, along with imaging techniques. Organ failure and other complications have commonly been observed in these cases. Because the presentation of patients varied over the years, the carcinomas were classified by site, shape, and consistency for detailed analysis. Lifestyle patterns and occupational history were inquired about and recorded. Methods: Data from 100 patients presenting to the oncology and nephrology departments of Mayo Hospital in the years 2015-2020 were included in this retrospective study on a random basis. The study focused on three risk factors: smoking, occupational exposures, and Hakim medicine taken by the patient for any cause. After procurement of the data, follow-up contact with these patients was established, allowing a detailed analysis of lifestyle. Conclusion: The inference drawn is a direct causal link between smoking, industrial workplace exposure, and Hakim medicine and the development of renal cell carcinoma. This link was observed in the majority of the patients and hence supported our hypothesis.

Keywords: renal cell carcinoma, kidney cancer, clear cell carcinoma

Procedia PDF Downloads 101
2396 Chemopreventive Potency of the Medicinal and Edible Plant Gromwell Seed in in Vitro and in Vivo Carcinogenesis Systems

Authors: Harukuni Tokuda, Xu FengHao, Nobutaka Suzuki

Abstract:

As part of our ongoing projects to investigate the anti-tumor-promoting (chemopreventive) properties of Gromwell seed, dry powder materials and their active compounds were evaluated in several useful test systems. Gromwell seed (Coix lachryma-jobi seed, GS) is a grass crop that has long played a role in traditional medicine as a nourishing food and in the treatment of various ailments, particularly cancer. A new screening procedure, which exploits the synergistic effect of short-chain fatty acids and phorbol esters, enables rapid and easy detection of naturally occurring substances (anti-tumor promoters, i.e., chemopreventive agents) that inhibit Epstein-Barr virus (EBV) activation in human lymphoblastoid cells. In addition, we have extended these investigations to a new tumorigenesis model in which tumors were initiated with DMBA and promoted with 1.7 nmol of TPA in the two-stage mouse skin test, among other models. These results provide a basis for the further development of these botanical supplements for human cancer chemoprevention, and the observations suggest that this material should be investigated more extensively as a candidate for complementary and alternative medicine.

Keywords: chemoprevention, medicinal plant, mouse, carcinogenesis systems

Procedia PDF Downloads 479
2395 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician-oriented to being technology-oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out using the standard evaluation metrics Precision (P), Recall (R), F1-score and Accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested on the breast cancer dataset. On the other hand, when the classifiers are tested on the cervical cancer dataset, the Ensemble (Bagged Tree) technique gives better accuracy (93.1%) than the other classifiers.
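The evaluation metrics named above can be made concrete with a small, self-contained sketch that computes Precision, Recall, F1-score and Accuracy from true vs. predicted labels. The labels here are toy values, not drawn from the paper's breast or cervical cancer datasets.

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute Precision, Recall, F1-score and Accuracy for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": correct / len(y_true)}

# Toy labels: 1 = malignant, 0 = benign (illustrative only).
m = classification_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                           [1, 1, 0, 0, 0, 1, 1, 0])
print(m)
```

In practice a library routine (e.g. scikit-learn's metric functions) would be used; the point of the sketch is the definitions the comparison in the abstract rests on.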

Keywords: artificial neural networks, breast cancer, classifiers, cervical cancer, f-score, machine learning, precision, recall

Procedia PDF Downloads 275
2394 The Classification Accuracy of Finance Data through Holder Functions

Authors: Yeliz Karaca, Carlo Cattani

Abstract:

This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset covering 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America and South America) have been examined. These countries are the ones most affected by the attributes related to financial development, over a period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian-motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes were identified; (b) the significant and self-similar attributes were applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the resulting classification accuracies were compared with respect to the attributes that affect the countries' financial development. This study reveals, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).
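As a rough illustration of the regularity measure involved (not the authors' multifractal implementation), the local Holder exponent of a function at a point can be estimated from the log-log slope of the oscillation |f(t0 + h) - f(t0)| against the increment h:

```python
import math

def local_holder_exponent(f, t0, h_values=None):
    """Estimate the local Holder exponent alpha of f at t0 from the scaling
    |f(t0 + h) - f(t0)| ~ C * |h|**alpha, via a least-squares slope in
    log-log coordinates. A didactic sketch; real estimators (e.g. wavelet
    leaders) are considerably more robust.
    """
    if h_values is None:
        h_values = [2.0 ** (-k) for k in range(4, 14)]
    xs, ys = [], []
    for h in h_values:
        osc = abs(f(t0 + h) - f(t0))
        if osc > 0:
            xs.append(math.log(h))
            ys.append(math.log(osc))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope of the least-squares regression line = estimated alpha
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# sqrt(|t|) has Holder exponent 1/2 at the origin
alpha = local_holder_exponent(lambda t: math.sqrt(abs(t)), 0.0)
print(round(alpha, 3))
```

A smooth (differentiable) function yields an exponent near 1 at generic points, while rougher series, like many financial time series, yield exponents below 1; the abstract's method classifies attributes by exactly this kind of regularity signature.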

Keywords: artificial neural networks, finance data, Holder regularity, multifractals

Procedia PDF Downloads 245
2393 A Hybrid Genetic Algorithm and Neural Network for Wind Profile Estimation

Authors: M. Saiful Islam, M. Mohandes, S. Rehman, S. Badran

Abstract:

The increasing necessity of wind power directs us to have precise knowledge of wind resources. Methodical investigation of potential locations is required for wind power deployment. High penetration of wind energy into the grid is leading to multi-megawatt installations with huge investment costs. This makes it important to determine appropriate places for wind farm operation. For accurate assessment, detailed examination of the wind speed profile, relative humidity, temperature and other geological or atmospheric parameters is required. Among all the uncertainty factors influencing wind power estimation, vertical extrapolation of wind speed is perhaps the most difficult and critical one. Different approaches have been used for the extrapolation of wind speed to hub height, mainly based on the log law, the power law and various modifications of the two. This paper proposes an Artificial Neural Network (ANN) and Genetic Algorithm (GA) based hybrid model, namely GA-NN, for vertical extrapolation of wind speed. The model is simple in the sense that it does not require any parametric estimates such as the wind shear coefficient, roughness length or atmospheric stability, and it is also reliable compared to other methods. It uses available measured wind speeds at 10 m, 20 m and 30 m heights to estimate wind speeds up to 100 m. Good agreement is found between measured and estimated wind speeds at 30 m and 40 m, with approximately 3% mean absolute percentage error. Comparisons with an ANN and the power law further prove the feasibility of the proposed method.
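The power-law baseline the abstract compares against can be sketched in a few lines: fit the shear exponent alpha from measurements at two heights, then extrapolate to hub height. The wind speeds below are illustrative values, not the study's measurements.

```python
import math

def fit_shear_exponent(z1, u1, z2, u2):
    """Fit the power-law wind shear exponent alpha from speeds at two heights:
    u2 / u1 = (z2 / z1) ** alpha  =>  alpha = ln(u2/u1) / ln(z2/z1).
    Heights in meters, speeds in m/s.
    """
    return math.log(u2 / u1) / math.log(z2 / z1)

def extrapolate_speed(z_ref, u_ref, z_target, alpha):
    """Power-law extrapolation of wind speed from z_ref to z_target."""
    return u_ref * (z_target / z_ref) ** alpha

# Illustrative mast measurements: 5.0 m/s at 10 m, 6.2 m/s at 30 m.
alpha = fit_shear_exponent(10.0, 5.0, 30.0, 6.2)
u100 = extrapolate_speed(10.0, 5.0, 100.0, alpha)
print(round(alpha, 3), round(u100, 2))
```

The GA-NN model proposed in the abstract avoids exactly this parametric step: alpha implicitly varies with stability and roughness, which is why a fixed-exponent power law degrades at greater heights.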

Keywords: wind profile, vertical extrapolation of wind, genetic algorithm, artificial neural network, hybrid machine learning

Procedia PDF Downloads 488
2392 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning

Authors: Madhawa Basnayaka, Jouni Paltakari

Abstract:

Fully passive backscattering chipless RFID tags are an emerging wireless technology offering low cost, longer reading distance, and fast automatic identification without human interference, unlike already available technologies such as optical barcodes. Design optimization of chipless RFID tags is crucial, as it requires replacing the integrated chips found in conventional RFID tags with printed geometric designs. These designs enable data encoding and decoding through backscattered electromagnetic (EM) signatures. Applications of chipless RFID tags have been limited by constraints on data encoding capacity and on the ability to design accurate yet efficient configurations. The traditional approach to finding design parameters for a desired EM response involves iteratively adjusting the parameters and simulating until the desired EM spectrum is achieved. However, traditional numerical simulation methods are inefficient for optimizing design parameters because of their speed and resource consumption. In this work, a deep learning neural network (DNN) is utilized to establish a correlation between the EM spectrum and the dimensional parameters of nested concentric rings, specifically square and octagonal. The proposed bi-directional DNN has two simultaneously running neural networks, one for spectrum prediction and one for design parameter prediction. First, the spectrum-prediction DNN was trained to minimize mean squared error (MSE). After training, it was able to accurately predict the EM spectrum from the input design parameters within a few seconds. The trained spectrum-prediction DNN was then connected to the design-parameter-prediction DNN and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN.
The model was evaluated using a randomly generated spectrum, and a tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. This approach significantly decreases the amount of iterative computer simulation; the highly efficient, ultrafast bi-directional DNN model thus allows rapid design of complicated chipless RFID tags.
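A drastically simplified stand-in for the spectrum-prediction stage shows the train-to-minimize-MSE idea: a linear model (in place of the DNN) maps two assumed ring dimensions to a three-point "spectrum", with a synthetic forward model standing in for the EM simulator. Everything here is illustrative, none of it is the authors' network or data.

```python
import random

def predict(w, x):
    """Linear 'spectrum' prediction: 3 spectrum points from 2 ring dimensions."""
    return [w[j][0] * x[0] + w[j][1] * x[1] + w[j][2] for j in range(3)]

def mse(w, samples):
    """Mean squared error of the predictor over (params, spectrum) pairs."""
    return sum(sum((p - t) ** 2 for p, t in zip(predict(w, x), y)) / 3.0
               for x, y in samples) / len(samples)

def train(samples, epochs=300, lr=0.1):
    """Per-sample gradient descent on MSE, starting from zero weights."""
    w = [[0.0, 0.0, 0.0] for _ in range(3)]
    for _ in range(epochs):
        for x, y in samples:
            pred = predict(w, x)
            for j in range(3):
                err = pred[j] - y[j]
                w[j][0] -= lr * err * x[0]
                w[j][1] -= lr * err * x[1]
                w[j][2] -= lr * err
    return w

def forward(x):
    """Synthetic forward model standing in for the EM simulator (assumed)."""
    return [1.2 * x[0] - 0.4 * x[1],
            0.5 * x[0] + 0.9 * x[1],
            -0.3 * x[0] + 0.7 * x[1] + 1.0]

random.seed(0)
params = [(random.random(), random.random()) for _ in range(40)]
samples = [(x, forward(x)) for x in params]
w = train(samples)
```

The real system replaces the linear map with a deep network and the synthetic forward model with full-wave EM simulation data, but the training objective, driving prediction error toward zero, is the same.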

Keywords: artificial intelligence, chipless RFID, deep learning, machine learning

Procedia PDF Downloads 49
2391 Sterols Regulate the Activity of Phospholipid Scramblase by Interacting through Putative Cholesterol Binding Motif

Authors: Muhasin Koyiloth, Sathyanarayana N. Gummadi

Abstract:

Biological membranes are ordered associations of lipids, proteins, and carbohydrates. Lipids, except sterols, are distributed asymmetrically across the bilayer. Eukaryotic membranes possess a group of lipid translocators called scramblases that disrupt phospholipid asymmetry; their action is implicated in cell activation during wound healing and in the phagocytic clearance of apoptotic cells. Cholesterol, one of the major membrane lipids, is distributed evenly between both leaflets and can directly influence membrane fluidity through its ordering effect. This fluidity affects the activity of several membrane proteins. Palmitoylated phospholipid scramblases localize to lipid rafts, which are characterized by a higher sterol content. Here we propose that cholesterol can interact with scramblases through a putative CRAC motif and can thereby modulate their activity. To test this, we reconstituted phospholipid scramblase 1 of C. elegans (SCRM-1) in proteoliposomes containing different amounts of cholesterol (liquid-ordered, Lo). We noted that the presence of cholesterol reduced the scramblase activity of wild-type SCRM-1. The interaction between SCRM-1 and cholesterol was confirmed by fluorescence spectroscopy using NBD-Chol. We also observed loss of this interaction when I273 in the CRAC motif was mutated to Asp. Interestingly, the point mutant partially retained scramblase activity in Lo vesicles. The current study elucidates an important interaction between cholesterol and SCRM-1 that fine-tunes its activity in artificial membranes.

Keywords: artificial membranes, CRAC motif, plasma membrane, PL scramblase

Procedia PDF Downloads 174
2390 Improved Technology Portfolio Management via Sustainability Analysis

Authors: Ali Al-Shehri, Abdulaziz Al-Qasim, Abdulkarim Sofi, Ali Yousef

Abstract:

The oil and gas industry has played a major role in improving the prosperity of mankind and driving the world economy. According to estimates from the International Energy Agency (IEA) and the U.S. Energy Information Administration (EIA), the world will continue to rely heavily on hydrocarbons for decades to come. This growing energy demand mandates taking sustainability measures to prolong the availability of reliable and affordable energy sources and to lower their environmental impact. Unlike those of many other industries, oil and gas upstream operations are energy-intensive and scattered over large zonal areas. These challenging conditions require unique sustainability solutions. In recent years, there has been a concerted effort by the oil and gas industry to develop and deploy innovative technologies to maximize efficiency, reduce carbon footprint and CO2 emissions, and optimize resource and material consumption. In the past, research and development (R&D) in the exploration and production sector was driven primarily by maximizing profit through higher hydrocarbon recovery and new discoveries. Environmentally friendly and sustainable technologies are increasingly being deployed to balance sustainability and profitability, and analysis of a technology's sustainability impact is increasingly used in corporate decision-making for improved portfolio management and for allocating valuable resources toward technology R&D. This paper articulates and discusses a novel workflow to identify strategic sustainable technologies for improved portfolio management by addressing existing and future upstream challenges. It uses a systematic approach that relies on sustainability key performance indicators (KPIs), including energy-efficiency quotient, carbon footprint, and CO2 emissions. The paper provides examples of various technologies, including CCS, water-cut reduction, automation, renewables, and energy efficiency.
The use of 4IR technologies such as Artificial Intelligence, Machine Learning, and Data Analytics is also discussed. Overlapping technologies, areas of collaboration and synergistic relationships are identified. The unique sustainability analyses support improved decision-making on technology portfolio management.
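One simple way to operationalize KPI-based screening is a weighted sustainability score per candidate technology, here using the three KPIs the abstract names. The weights and 0-10 KPI scores below are illustrative assumptions, not the authors' data.

```python
def rank_technologies(portfolio, weights):
    """Rank candidate technologies by a weighted sustainability score.

    portfolio: list of (name, {kpi: score 0-10, higher = more sustainable})
    weights: {kpi: weight}, weights summing to 1.
    Returns the portfolio sorted best-first.
    """
    def score(kpis):
        return sum(weights[k] * kpis[k] for k in weights)
    return sorted(portfolio, key=lambda item: score(item[1]), reverse=True)

# Illustrative weights over the abstract's KPIs.
weights = {"energy_efficiency": 0.40, "carbon_footprint": 0.35, "co2_reduction": 0.25}

# Illustrative candidate technologies with assumed KPI scores.
portfolio = [
    ("CCS",                 {"energy_efficiency": 5, "carbon_footprint": 9, "co2_reduction": 9}),
    ("automation",          {"energy_efficiency": 8, "carbon_footprint": 6, "co2_reduction": 5}),
    ("water-cut reduction", {"energy_efficiency": 7, "carbon_footprint": 5, "co2_reduction": 4}),
]

ranking = [name for name, _ in rank_technologies(portfolio, weights)]
print(ranking)
```

A real workflow would calibrate the weights against corporate strategy and validate KPI scores against measured data, but the ranking mechanics are the same.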

Keywords: sustainability, oil & gas, technology portfolio, key performance indicators

Procedia PDF Downloads 181
2389 Data Analytics in Energy Management

Authors: Sanjivrao Katakam, Thanumoorthi I., Antony Gerald, Ratan Kulkarni, Shaju Nair

Abstract:

With increasing energy costs and their impact on business, sustainability has evolved from a social expectation into an economic imperative. Finding methods to reduce cost has therefore become a critical directive for industry leaders, and effective energy management is the only way to cut costs. Energy management has been a challenge, however, because it requires a change in old habits and in legacy systems followed for decades. Today, exorbitant volumes of energy and operational data are being captured and stored by industries, but they are unable to convert these structured and unstructured data sets into meaningful business intelligence. For quick decisions, organizations must learn to cope with large volumes of operational data in different formats. Energy analytics not only helps in extracting inferences from these data sets but is also instrumental in the transformation from old approaches to energy management to new ones, which in turn assists in effective decision-making for implementation. Organizations need an established corporate strategy for reducing operational costs through visibility and optimization of energy usage, and energy analytics plays a key role in the optimization of operations. This paper describes how energy data analytics is extensively used today in scenarios such as reducing operational costs, predicting energy demand, optimizing network efficiency, asset maintenance, improving customer insights and device data insights. It also highlights how analytics helps transform insights obtained from energy data into sustainable solutions. The paper utilizes data from an array of segments such as the retail, transportation, and water sectors.
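One of the analytics tasks listed, predicting energy demand, can be sketched with simple exponential smoothing over a series of meter readings. The demand values below are synthetic and illustrative, not from the paper's retail, transportation, or water data.

```python
def exponential_smoothing_forecast(series, alpha=0.3):
    """One-step-ahead demand forecast via simple exponential smoothing:
    s_t = alpha * y_t + (1 - alpha) * s_{t-1}; the final level s_t is the
    forecast for the next period. alpha in (0, 1] weights recent readings.
    """
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Synthetic hourly energy demand in kWh (illustrative only).
demand = [120, 125, 123, 130, 128, 135, 133]
forecast = exponential_smoothing_forecast(demand, alpha=0.3)
print(round(forecast, 2))
```

Production demand forecasting would add trend and seasonality terms (or a learned model), but the smoothing recursion above is the usual baseline such systems are compared against.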

Keywords: energy analytics, energy management, operational data, business intelligence, optimization

Procedia PDF Downloads 363
2388 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass estimation, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure such as canopy height. However, LiDAR’s coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, have the ability to cover large forest areas with a high repeat rate, but they do not have height information. Hence, exploring the solution of integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, with a 10m ground sampling distance (GSD), for the years 2018 and 2021. Two 10m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained RFR and CNN models and 10m Sentinel-2 images of 2018 and 2021, respectively. The two 10m predicted CHMs from Sentinel-2 images are then compared with the two 10m airborne LiDAR-derived canopy height models for accuracy assessment. 
The validation results show that for year 2018 the mean absolute error (MAE) is 2.93m for the RFR model and 1.71m for the CNN model, while for year 2021 the MAE is 3.35m for the RFR model and 3.78m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10m spatial resolution and a high revisit rate.
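As a minimal sketch of the regression setup described above, the following fits a random forest to synthetic "Sentinel-2-like" band values, with synthetic LiDAR-derived heights as ground truth. The band count, sample sizes, and the toy height function are assumptions for illustration, not the authors' data or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 10 band reflectances per 10 m pixel.
X = rng.uniform(0.0, 0.4, size=(2000, 10))
# Pretend canopy height (m) depends nonlinearly on two bands, plus noise.
y = 25 * X[:, 6] + 10 * X[:, 3] ** 2 + rng.normal(0, 0.5, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rfr = RandomForestRegressor(n_estimators=100, random_state=0)
rfr.fit(X_train, y_train)  # LiDAR-derived heights act as labels

mae = mean_absolute_error(y_test, rfr.predict(X_test))
print(f"MAE on held-out pixels: {mae:.2f} m")
```

Once trained, the same `predict` call would be applied per-pixel to new imagery to produce a predicted canopy height map.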

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 90
2387 Artificial Intelligence in Ethiopian Higher Education: The Impact of Digital Readiness Support, Acceptance, Risk, and Trust on Adoption

Authors: Merih Welay Welesilassie

Abstract:

Understanding educators' readiness to incorporate AI tools into their teaching methods requires comprehensively examining the influencing factors. This understanding is crucial, given the potential of these technologies to personalise learning experiences, improve instructional effectiveness, and foster innovative pedagogical approaches. This study evaluated factors affecting teachers' adoption of AI tools in their English language instruction by extending the Technology Acceptance Model (TAM) to encompass digital readiness support, perceived risk, and trust. A cross-sectional quantitative survey was conducted with 128 English language teachers, supplemented by qualitative data collection from 15 English teachers. The structural model analysis indicated that implementing AI tools in Ethiopian higher education was notably influenced by digital readiness support, perceived ease of use, perceived usefulness, perceived risk, and trust. Digital readiness support positively impacted perceived ease of use, usefulness, and trust while reducing safety and privacy risks. Perceived ease of use positively correlated with perceived usefulness but negatively influenced trust. Furthermore, perceived usefulness strengthened trust in AI tools, while perceived safety and privacy risks significantly undermined trust. Trust was crucial in increasing educators' willingness to adopt AI technologies. The qualitative analysis revealed that the teachers exhibited strong content and pedagogical knowledge but lacked technology-related knowledge. Moreover, it was found that the teachers did not utilise digital tools to teach English. The study identified several obstacles to incorporating digital tools into English lessons, such as insufficient digital infrastructure, a shortage of educational resources, inadequate professional development opportunities, and challenging policies and governance. 
The findings provide valuable guidance for educators, inform policymakers about creating supportive digital environments, and offer a foundation for further investigation into technology adoption in educational settings in Ethiopia and similar contexts.

Keywords: digital readiness support, AI acceptance, perceived risk, AI trust

Procedia PDF Downloads 17
2386 Antimicrobial Effect of Essential Oil of the Plant Schinus molle on Some Bacterial Pathogens

Authors: Mehani Mouna, Ladjel segni

Abstract:

Humans have used plants for thousands of years to treat various ailments; in many developing countries, much of the population relies on traditional doctors and their collections of medicinal plants for cures. Essential oils have many therapeutic properties: in herbal medicine, they are used for their antiseptic properties against infectious diseases of fungal and bacterial origin, including dermatophytes. The aim of our study is to determine the antimicrobial effect of essential oils of the plant Schinus molle, a medicinal plant used in traditional therapy, on some pathogenic bacteria. The test adopted is based on the diffusion method on solid medium (antibiogram), which determines the susceptibility or resistance of an organism to the sample studied. Our study reveals that the essential oil of the plant Schinus molle has a different effect depending on the resistance of the germ: the Pseudomonas aeruginosa strain is moderately sensitive, with an inhibition zone of 10 mm, while Antirobactere, Escherichia coli, and Proteus are highly sensitive strains, with a zone of inhibition equal to 14.66 mm.

Keywords: essential oil, microorganism, antibiogram, Schinus molle

Procedia PDF Downloads 345
2385 Integration of an Evidence-Based Medicine Curriculum into Physician Assistant Education: Teaching for Today and the Future

Authors: Martina I. Reinhold, Theresa Bacon-Baguley

Abstract:

Background: Medical knowledge continuously evolves and, to help health care providers stay up to date, evidence-based medicine (EBM) has emerged as a model. The practice of EBM requires new skills of the health care provider, including directed literature searches, the critical evaluation of research studies, and the direct application of the findings to patient care. This paper describes the integration and evaluation of an evidence-based medicine course sequence into a Physician Assistant curriculum. This course sequence teaches students to manage and use the best clinical research evidence to competently practice medicine. A survey was developed to assess the outcomes of the EBM course sequence. Methodology: The cornerstone of the three-semester sequence of EBM is interactive small group discussions that are designed to introduce students to the most clinically applicable skills to identify, manage, and use the best clinical research evidence to improve the health of their patients. During the three-semester sequence, the students are assigned each semester to participate in small group discussions that are facilitated by faculty with varying backgrounds and expertise. Prior to the start of the first EBM course in the winter semester, PA students complete a knowledge-based survey that was developed by the authors to assess the effectiveness of the course series. The survey consists of 53 Likert scale questions that address the nine objectives for the course series. At the end of the three-semester course series, the same survey is given to all students in the program, and the results from before and after the EBM course sequence are compared. Specific attention is paid to overall performance of students in the nine course objectives. 
Results: We find that students from the Classes of 2016 and 2017 consistently improve (as measured by the percentage of correct responses on the survey tool) after the EBM course series (Class of 2016: pre 62%, post 75%; Class of 2017: pre 61%, post 70%). The biggest increase in knowledge was observed in the areas of finding and evaluating the evidence, namely asking concise clinical questions (Class of 2016: pre 61%, post 81%; Class of 2017: pre 61%, post 75%) and searching the medical database (Class of 2016: pre 24%, post 65%; Class of 2017: pre 35%, post 66%). Questions requiring students to analyze, evaluate, and report on the available clinical evidence regarding diagnosis showed improvement, but to a lesser extent (Class of 2016: pre 56%, post 77%; Class of 2017: pre 56%, post 61%). Conclusions: The outcomes show that students gained skills that will allow them to apply EBM principles. In addition, the outcomes of the knowledge-based survey allowed the faculty to focus on areas needing improvement, specifically the translation of best evidence into patient care. To address this area, the clinical faculty developed case scenarios that were incorporated into the lecture and discussion sessions, allowing students to better connect the research studies with patient care. Students commented that ‘class discussion and case examples’ contributed most to their learning and that ‘it was helpful to learn how to develop research questions and how to analyze studies and their significance to a potential client’. As evident from the outcomes, the EBM courses achieved their goals and were well received by the students. 

Keywords: evidence-based medicine, clinical education, assessment tool, physician assistant

Procedia PDF Downloads 124
2384 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. 
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
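A minimal sketch of the sample-generation step described above, assuming a Cholesky-based fractional Gaussian noise generator and an Euler scheme for the fOU dynamics; the authors' fast generation algorithm is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def fgn(n, hurst, rng):
    """Fractional Gaussian noise via Cholesky of its Toeplitz covariance."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return L @ rng.standard_normal(n)

def fou_path(n, dt, theta, sigma, hurst, rng):
    """Euler scheme for dX = -theta * X dt + sigma dB^H."""
    # fGn has unit variance per unit step; over dt it scales as dt^H.
    incs = sigma * dt ** hurst * fgn(n, hurst, rng)
    x = np.zeros(n + 1)
    for i in range(n):
        x[i + 1] = x[i] - theta * x[i] * dt + incs[i]
    return x

rng = np.random.default_rng(1)
# Rough regime: Hurst exponent below 0.5.
path = fou_path(n=500, dt=0.01, theta=2.0, sigma=0.5, hurst=0.3, rng=rng)
```

Batches of such paths, labeled with their generating parameters, would form the training set for a parameter-estimating network.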

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 117
2383 Prevalence and Correlates of Complementary and Alternative Medicine Use among Diabetic Patients in Lebanon: A Cross-Sectional Study

Authors: Farah Naja, Mohamad Alameddine

Abstract:

Background: The difficulty of compliance with the therapeutic and lifestyle management of type 2 diabetes mellitus (T2DM) encourages patients to use complementary and alternative medicine (CAM) therapies. Little is known about the prevalence and mode of CAM use among diabetics in the Eastern Mediterranean Region in general and Lebanon in particular. Objective: To assess the prevalence and modes of CAM use among patients with T2DM residing in Beirut, Lebanon. Methods: A cross-sectional survey of T2DM patients was conducted on patients recruited from two major referral centers: a public hospital and a private academic medical center in Beirut. In a face-to-face interview, participants completed a survey questionnaire comprising three sections: socio-demographics, diabetes characteristics, and types and modes of CAM use. Descriptive statistics and univariate and multivariate logistic regression analyses were utilized to assess the prevalence, mode, and correlates of CAM use in the study population. The main outcome in this study (CAM use) was defined as using CAM at least once since diagnosis with T2DM. Results: A total of 333 T2DM patients completed the survey (response rate: 94.6%). The prevalence of CAM use in the study population was 38%, 95% CI (33.1-43.5). After adjustment, CAM use was significantly associated with a married status, a longer duration of T2DM, the presence of disease complications, and a positive family history of the disease. Folk foods and herbs were the most commonly used CAM, followed by natural health products. One in five patients used CAM as an alternative to conventional treatment. Only 7% of CAM users disclosed their CAM use to their treating physician. Health care practitioners were the least cited (7%) influence on the choice of CAM among users. Conclusion: The use of CAM therapies among T2DM patients in Lebanon is prevalent. 
Decision makers and care providers must fully understand the potential risks and benefits of CAM therapies to appropriately advise their patients. Attention must be dedicated to educating T2DM patients on the importance of disclosing CAM use to their physicians especially patients with a family history of diabetes, and those using conventional therapy for a long time.
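The multivariate logistic regression step described above can be sketched on synthetic data. The covariates, effect sizes, and sample below are invented to mirror the reported associations, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 333
# Hypothetical covariates: married (0/1), diabetes duration (years),
# complications (0/1), family history (0/1).
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.uniform(0, 30, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
]).astype(float)

# Simulate CAM use with positive associations for each factor.
logit = -2.0 + 0.8 * X[:, 0] + 0.05 * X[:, 1] + 0.7 * X[:, 2] + 0.9 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
# OR > 1 indicates a factor associated with higher odds of CAM use.
odds_ratios = np.exp(model.coef_[0])
```

In a real analysis, confidence intervals and adjustment for confounders would accompany the odds ratios.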

Keywords: nutritional supplements, type 2 diabetes mellitus, complementary and alternative medicine (CAM), conventional therapy

Procedia PDF Downloads 349
2382 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area

Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya

Abstract:

In this paper, we propose an optimized brain-computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG acquisition module, based on a non-invasive headset with 14 electrodes; (ii) the preprocessing module, which removes noise and artifacts using the common average reference method; (iii) the feature extraction module, using the Wavelet Packet Transform (WPT); (iv) the classification module, based on a one-hidden-layer artificial neural network. The present study consists of comparing the recognition accuracy of 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the selection effect of the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction of the dataset size, to approximately 19% of the total dataset, with an accuracy rate of 67.5%. This reduction appears particularly important for the design of a low-cost, simple-to-use BCI trained for several words.
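A minimal numpy sketch of the classification module: a one-hidden-layer network trained with plain gradient descent on synthetic "WPT feature" vectors. The feature dimensions, class structure, and training settings are illustrative assumptions, not the study's EEG data or exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_feat = 5, 24          # e.g. 4 electrodes x 6 subbands (illustrative)
means = rng.normal(0, 2, (n_classes, n_feat))
X = np.vstack([m + rng.normal(0, 0.5, (40, n_feat)) for m in means])
y = np.repeat(np.arange(n_classes), 40)

# One hidden layer of sigmoid units, softmax output over the 5 words.
W1 = rng.normal(0, 0.1, (n_feat, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, n_classes)); b2 = np.zeros(n_classes)

def forward(X):
    z = np.clip(X @ W1 + b1, -30, 30)        # clip to avoid exp overflow
    h = 1 / (1 + np.exp(-z))
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

lr, Y = 0.5, np.eye(n_classes)[y]
for _ in range(300):                          # full-batch gradient descent
    h, p = forward(X)
    g = (p - Y) / len(X)                      # softmax cross-entropy gradient
    gh = (g @ W2.T) * h * (1 - h)             # backprop through the sigmoid
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

acc = (forward(X)[1].argmax(1) == y).mean()   # training accuracy
```

A real pipeline would of course evaluate on a held-out test set, as the study does.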

Keywords: brain-computer interface, speech recognition, artificial neural network, electroencephalography, EEG, Wernicke area

Procedia PDF Downloads 269
2381 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks

Authors: Tesfaye Mengistu

Abstract:

Power cables play a crucial role in the transmission and distribution of electrical energy. As electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables being widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables. Some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous wave (CW) terahertz (THz) imaging systems to detect internal defects in XLPE plates used for power cable insulation. In this study, we have developed models that employ a custom dataset collected locally to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques that distinguish defective power cables from non-defective ones. The implementation of our project utilized the Python programming language along with the TensorFlow package and a convolutional neural network (CNN). The CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend the utilization of similar or additional datasets to further enhance and refine our models. Additionally, we believe that our models could be used to develop methodologies for detecting power cable defects from live video feeds. 
We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.
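The core operations such a CNN applies to cable images can be sketched by hand: a convolution, a ReLU nonlinearity, and max pooling. The toy "scratch" image and edge-detecting kernel below are assumptions for illustration, not the authors' dataset or trained network.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    H, W = x.shape
    H2, W2 = H // size, W // size
    return x[:H2*size, :W2*size].reshape(H2, size, W2, size).max(axis=(1, 3))

# Toy "cable image": a bright vertical scratch on a dark background.
img = np.zeros((16, 16))
img[:, 7] = 1.0
vertical_edge = np.array([[-1., 0., 1.],
                          [-1., 0., 1.],
                          [-1., 0., 1.]])

# One conv -> ReLU -> pool stage; the scratch lights up in the feature map.
fmap = max_pool(relu(conv2d(img, vertical_edge)))
```

In a trained CNN the kernels are learned from labeled defective/non-defective images rather than hand-crafted, and several such stages feed a final classifier.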

Keywords: artificial intelligence, computer vision, defect detection, convolutional neural network

Procedia PDF Downloads 111
2380 The Human Process of Trust in Automated Decisions and Algorithmic Explainability as a Fundamental Right in the Exercise of Brazilian Citizenship

Authors: Paloma Mendes Saldanha

Abstract:

Access to information is a prerequisite for democracy while also guiding the material construction of fundamental rights. The exercise of citizenship requires knowing, understanding, questioning, advocating for, and securing rights and responsibilities. In other words, it goes beyond mere active electoral participation and materializes through awareness and the struggle for rights and responsibilities in the various spaces occupied by the population in their daily lives. In times of hyper-cultural connectivity, active citizenship is shaped through ethical trust processes, most often established between humans and algorithms. Automated decisions, so prevalent in various everyday situations, such as purchase preference predictions, virtual voice assistants, reduction of accidents in autonomous vehicles, content removal, resume selection, etc., have already found their place as a normalized discourse that sometimes does not reveal or make clear what violations of fundamental rights may occur when algorithmic explainability is lacking. In other words, technological and market development promotes a normalization for the use of automated decisions while silencing possible restrictions and/or breaches of rights through a culturally modeled, unethical, and unexplained trust process, which hinders the possibility of the right to a healthy, transparent, and complete exercise of citizenship. In this context, the article aims to identify the violations caused by the absence of algorithmic explainability in the exercise of citizenship through the construction of an unethical and silent trust process between humans and algorithms in automated decisions. 
As a result, it is expected to find violations of constitutionally protected rights such as privacy, data protection, and transparency, as well as the stipulation of algorithmic explainability as a fundamental right in the exercise of Brazilian citizenship in the era of virtualization, facing a threefold foundation called trust: culture, rules, and systems. To do so, the author will use a bibliographic review in the legal and information technology fields, as well as the analysis of legal and official documents, including national documents such as the Brazilian Federal Constitution, as well as international guidelines and resolutions that address the topic in a specific and necessary manner for appropriate regulation based on a sustainable trust process for a hyperconnected world.

Keywords: artificial intelligence, ethics, citizenship, trust

Procedia PDF Downloads 62
2379 Artificial Intelligence in Ethiopian Universities: The Influence of Technological Readiness, Acceptance, Perceived Risk, and Trust on Implementation - An Integrative Research Approach

Authors: Merih Welay Welesilassie

Abstract:

Understanding educators' readiness to incorporate AI tools into their teaching methods requires comprehensively examining the influencing factors. This understanding is crucial, given the potential of these technologies to personalise learning experiences, improve instructional effectiveness, and foster innovative pedagogical approaches. This study evaluated factors affecting teachers' adoption of AI tools in their English language instruction by extending the Technology Acceptance Model (TAM) to encompass digital readiness support, perceived risk, and trust. A cross-sectional quantitative survey was conducted with 128 English language teachers, supplemented by qualitative data collection from 15 English teachers. The structural model analysis indicated that implementing AI tools in Ethiopian higher education was notably influenced by digital readiness support, perceived ease of use, perceived usefulness, perceived risk, and trust. Digital readiness support positively impacted perceived ease of use, usefulness, and trust while reducing safety and privacy risks. Perceived ease of use positively correlated with perceived usefulness but negatively influenced trust. Furthermore, perceived usefulness strengthened trust in AI tools, while perceived safety and privacy risks significantly undermined trust. Trust was crucial in increasing educators' willingness to adopt AI technologies. The qualitative analysis revealed that the teachers exhibited strong content and pedagogical knowledge but lacked technology-related knowledge. Moreover, it was found that the teachers did not utilise digital tools to teach English. The study identified several obstacles to incorporating digital tools into English lessons, such as insufficient digital infrastructure, a shortage of educational resources, inadequate professional development opportunities, and challenging policies and governance. 
The findings provide valuable guidance for educators, inform policymakers about creating supportive digital environments, and offer a foundation for further investigation into technology adoption in educational settings in Ethiopia and similar contexts.

Keywords: digital readiness support, AI acceptance, risk, trust

Procedia PDF Downloads 14
2378 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order to perform, for example, question-and-answer tasks that parse real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains; wide-context cues remain elusive in parsing words and sentences; and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. 
In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
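The attractor picture sketched above can be illustrated with a classic Hopfield network: Hebbian weights make each pattern a stable equilibrium of the recurrent dynamics, so "remembering" is relaxation to an attractor rather than lookup in an explicit array. This is a generic textbook sketch, not one of the specific models the abstract discusses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))     # three "memories"

# Hebbian outer-product weights; each pattern becomes an equilibrium point.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Synchronous update until (hopefully) settled in an attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt a stored pattern by flipping 8 of 64 units, then let it settle.
probe = patterns[0].copy()
flip = rng.choice(n, size=8, replace=False)
probe[flip] *= -1
recovered = recall(probe)
overlap = (recovered == patterns[0]).mean()     # fraction of units restored
```

The recovered state is reconstructed by the dynamics, not read out of storage, which is the contrast the passage draws with memory-network arrays.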

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 271
2377 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset for the Nusselt number and pressure drop values in the following range of dimensionless parameters: plate corrugation angles (from 0° to 60°), Reynolds number (from 10000 to 40000), pitch-to-height ratio (from 1 to 4), and Prandtl number (from 0.7 to 200). Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes back-propagation with bias and weight adjustment, evaluation of the loss function for the training and validation datasets, and feed-forward propagation of the input parameters. The linear function was used as the activation function at the output layer, while the rectified linear unit activation function was utilized for the hidden layers. To accelerate the ANN training, the loss function was minimized with the adaptive moment estimation algorithm (Adam). The "MinMax" normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for ANN training, a cross-validation technique was applied to the network using the new data. This procedure was repeated until loss function convergence was achieved, or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source ANN libraries such as scikit-learn, TensorFlow, and Keras. Mean average percent errors of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved with the ANN model, a higher accuracy than the generalized correlations. 
The performance validation of the obtained model was based on a comparison of predicted data with the experimental results yielding excellent accuracy.
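A hedged sketch of the described setup using scikit-learn: MinMax scaling of the inputs, a {12-8-6} ReLU network with a linear output, and the Adam solver. The input ranges mirror the abstract, but the target function and data below are synthetic stand-ins, not the experimental dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the dimensionless inputs: corrugation angle,
# Reynolds number, pitch-to-height ratio, Prandtl number.
X = np.column_stack([
    rng.uniform(0, 60, 1500),
    rng.uniform(1e4, 4e4, 1500),
    rng.uniform(1, 4, 1500),
    rng.uniform(0.7, 200, 1500),
])
# Toy Nusselt-like target (Dittus-Boelter-style power law, illustrative only).
y = 0.023 * X[:, 1] ** 0.8 * np.clip(X[:, 3], 0.7, 10) ** 0.4

scaler = MinMaxScaler()                       # "MinMax" normalization
Xs = scaler.fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(Xs, y, random_state=0)

# Three hidden layers {12-8-6}, ReLU hidden units, linear output, Adam solver,
# batch size 200, up to 4000 epochs (early stopping on loss convergence).
ann = MLPRegressor(hidden_layer_sizes=(12, 8, 6), activation="relu",
                   solver="adam", batch_size=200, max_iter=4000,
                   random_state=0).fit(X_tr, y_tr)

# Mean average percent error on the held-out split.
mape = np.mean(np.abs(ann.predict(X_te) - y_te) / y_te) * 100
```

A bespoke TensorFlow/Keras model, as the study used, would follow the same layer and solver choices.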

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 86
2376 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine, and chassis type. If the combination of building blocks is unprecedented, it is infeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses specifications and operational environmental-condition information, such as road slopes and driver profiles, for around 40,000 vehicles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbors (KNN), and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements, and the algorithms are compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.
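The nested cross-validation comparison can be sketched as follows, with synthetic stand-ins for the vehicle features and an illustrative LR-vs-KNN pair (the ANN and the Friedman test are omitted for brevity; hyperparameters are tuned on inner folds, performance is scored on outer folds).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for vehicle-spec and operating-condition features.
X = rng.normal(size=(600, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=600)  # FC proxy

inner = KFold(5, shuffle=True, random_state=0)   # hyperparameter selection
outer = KFold(5, shuffle=True, random_state=1)   # performance estimation

models = {
    "LR": LinearRegression(),
    "KNN": GridSearchCV(KNeighborsRegressor(),
                        {"n_neighbors": [3, 5, 9]}, cv=inner),
}
# Outer-fold mean absolute error per model (lower is better).
scores = {name: -cross_val_score(m, X, y, cv=outer,
                                 scoring="neg_mean_absolute_error").mean()
          for name, m in models.items()}
```

The per-fold score vectors from the outer loop are what a Friedman (or similar) test would then compare across the candidate models.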

Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing

Procedia PDF Downloads 178
2375 Aromatic Medicinal Plant Classification Using Deep Learning

Authors: Tsega Asresa Mengistu, Getahun Tigistu

Abstract:

Computer vision is an artificial intelligence subfield that allows computers and systems to retrieve meaning from digital images. It is applied in various fields, such as self-driving cars, video surveillance, agriculture, quality control, health care, construction, the military, and everyday life. Aromatic and medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, and other natural health products for therapeutic and aromatic culinary purposes. Herbal industries depend on these special plants. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported as industrial raw materials that earn valuable foreign exchange. There is a lack of technologies for the classification and identification of aromatic and medicinal plants in Ethiopia. Manual identification of plants is a tedious, time-consuming, labor-intensive, and lengthy process. For farmers, industry personnel, academics, and pharmacists, it is still difficult to identify the parts and usage of plants before ingredient extraction. In order to solve this problem, the researcher uses a deep learning approach for the efficient identification of aromatic and medicinal plants by using a convolutional neural network. The objective of the proposed study is to identify aromatic and medicinal plant parts and usages using computer vision technology. Therefore, this research initiated a model for the automatic classification of aromatic and medicinal plants by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants, besides roots, flowers, fruits, latex, and bark. The study was conducted on aromatic and medicinal plants available in the Ethiopian Institute of Agricultural Research center. An experimental research design is proposed for this study.
The study is conducted using convolutional neural networks and transfer learning. The researcher employs a sigmoid activation in the last layer and rectified linear units in the hidden layers. Finally, the researcher obtained a classification accuracy of 66.4% with a plain convolutional neural network, 67.3% with MobileNet, and 64% with a VGG (Visual Geometry Group) network.
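The choice of a sigmoid output layer (rather than softmax) fits a task where one leaf image can carry several labels at once, e.g. both a plant part and a usage. A pure-Python sketch of the difference follows; the label names and logit values are hypothetical, for illustration only.

```python
import math

# Why a sigmoid output layer suits multi-label classification: each label
# gets an independent probability, so an image can score high for several
# labels at once. Softmax, by contrast, forces the scores to compete
# (they sum to 1). Labels and logits below are hypothetical.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    exps = [math.exp(z) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["leaf", "root", "culinary-use", "medicinal-use"]
logits = [2.0, -1.5, 1.0, 2.5]  # raw scores from the network's last layer

independent = [sigmoid(z) for z in logits]  # several labels can exceed 0.5
competing = softmax(logits)                 # probabilities sum to 1
predicted = [l for l, p in zip(labels, independent) if p > 0.5]
```

With independent sigmoid probabilities the model can simultaneously predict a part and a usage for the same image, which matches the stated objective of identifying both.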

Keywords: aromatic and medicinal plants, computer vision, deep convolutional neural network

Procedia PDF Downloads 438