Search results for: ambient intelligence

445 Efficiency and Reliability Analysis of SiC-Based and Si-Based DC-DC Buck Converters in Thin-Film PV Systems

Authors: Elaid Bouchetob, Bouchra Nadji

Abstract:

This research paper compares the efficiency and reliability, R(t), of SiC-based and Si-based DC-DC buck converters in thin-film PV systems with an AI-based MPPT controller. Using Simplorer/Simulink simulations, the study assesses their performance under varying conditions. Results show that the SiC-based converter outperforms the Si-based one in efficiency and cost-effectiveness, especially under high temperature and low irradiance. It also exhibits superior reliability, particularly at high temperature and voltage. The reliability function R(t) is analyzed to assess system performance over time, considering factors such as component failure rates and system lifetime; here, too, the SiC-based converter demonstrates better reliability. The research focuses on the buck converter's role in charging a lithium battery within the PV system. By combining the SiC-based converter with the AI-based MPPT controller, higher charging efficiency, improved reliability, and cost-effectiveness are achieved. The SiC-based converter proves superior under challenging conditions, emphasizing its potential for optimizing PV system charging. These findings provide insight into the efficiency and reliability of SiC-based and Si-based converters in PV systems. SiC technology's advantages, coupled with advanced control strategies, promote efficient and sustainable energy storage using lithium batteries. The research supports PV system design and optimization for reliable renewable energy utilization.
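
As a rough illustration of the reliability analysis mentioned above, the sketch below evaluates the common exponential, series-system reliability model R(t) = exp(-λt) from per-component failure rates. The component list and rate values are invented placeholders, not the paper's data.

```python
import numpy as np

# Illustrative MIL-HDBK-217-style failure rates (per hour); placeholders only.
FAILURE_RATES_PER_HOUR = {
    "switch_sic_mosfet": 0.05e-6,   # hypothetical rate for the SiC device
    "diode": 0.02e-6,
    "inductor": 0.01e-6,
    "capacitor": 0.03e-6,
}

def reliability(t_hours: float, rates: dict) -> float:
    """Series-system reliability: the converter survives only if every part survives."""
    lam_total = sum(rates.values())
    return float(np.exp(-lam_total * t_hours))

# Reliability after 10 years of continuous operation
print(reliability(10 * 8760, FAILURE_RATES_PER_HOUR))
```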

Keywords: efficiency, reliability, artificial intelligence, SiC device, thin film, buck converter

Procedia PDF Downloads 47
444 Effect of Phonological Complexity in Children with Specific Language Impairment

Authors: Irfana M., Priyandi Kabasi

Abstract:

Children with specific language impairment (SLI) have difficulty acquiring and using language despite possessing the cognitive prerequisites for language acquisition. These children have normal non-verbal intelligence, hearing, and oral-motor skills, with no history of social/emotional problems or significant neurological impairment. Nevertheless, their language acquisition lags behind that of their peers. Phonological complexity is considered a major factor underlying inaccurate speech production in this population, and treatment for SLI should draw on a range of phonologically complex stimuli to improve the prognosis for speech accuracy. Hence, there is a need to study levels of phonological complexity. The present study included 7 children diagnosed with SLI and 10 typically developing children. All were Hindi speakers of both genders, aged 4 to 5 years. There were 4 sets of stimuli: minimal contrast vs maximal contrast nonwords, minimal coarticulation vs maximal coarticulation nonwords, minimal contrast vs maximal contrast words, and minimal coarticulation vs maximal coarticulation words. Each set contained 10 stimuli, and participants were asked to repeat each stimulus. Results showed that production of maximal contrast stimuli was the most accurate, followed by minimal coarticulation, minimal contrast, and maximal coarticulation. A similar trend was found for both word and nonword stimuli. The phonological complexity effect was evident for each participant group. The present findings can be applied to the management of SLI, specifically to the selection of treatment stimuli.

Keywords: coarticulation, minimal contrast, phonological complexity, specific language impairment

Procedia PDF Downloads 128
443 Production of Bricks Using Mill Waste and Tyre Crumbs at a Low Temperature by Alkali-Activation

Authors: Zipeng Zhang, Yat C. Wong, Arul Arulrajah

Abstract:

Since automobiles became widely popular in the early 20th century, end-of-life tyres have been one of the major waste streams humans encounter. Every minute, considerable quantities of tyres are disposed of around the world, and most end-of-life tyres are simply landfilled or stockpiled rather than recycled. To address the potential issues caused by tyre waste, incorporating it into construction materials is one possibility. This research investigated the viability of manufacturing bricks from mill waste and tyre crumbs by alkali-activation at a relatively low temperature. The mill waste was sourced from a brick factory in Melbourne, Australia, and the tyre crumbs were supplied by a local recycling company. As the main precursor, the mill waste was activated by an alkaline solution comprising sodium hydroxide (8 M) and liquid sodium silicate. The dosage of alkaline solution (relative to the solid weight) and the weight ratio between sodium hydroxide and sodium silicate were fixed at 20 wt.% and 1:1, respectively. Tyre crumbs were introduced to substitute part of the mill waste at four ratios by weight, namely 0, 5, 10 and 15%. The mixture of mill waste and tyre crumbs was first dry-mixed for 2 min to ensure homogeneity, followed by 2.5 min of wet mixing after adding the solution. The fresh mixture was then press-moulded into blocks 109 mm long, 112.5 mm wide and 76 mm high. The blocks were cured at 50°C and 95% relative humidity for 2 days, followed by oven-curing at 110°C for 1 day. All samples were then kept under ambient conditions until testing at 7 and 28 days. A series of tests was conducted to evaluate the linear shrinkage, compressive strength and water absorption of the samples. In addition, the microstructure of the samples was examined via scanning electron microscopy (SEM). The results showed the highest compressive strength was 17.6 MPa, found in the 28-day group with 5 wt.% tyre crumbs; this strength satisfies the requirements of ASTM C67. However, increasing additions of tyre crumbs weakened the compressive strength of the samples. Apart from strength, the linear shrinkage and water absorption of all groups met the requirements of the standard. It is worth noting that the use of tyre crumbs tended to decrease shrinkage and even caused expansion at the highest tyre content (15 wt.%). The research also found a significant reduction in compressive strength for samples after the water absorption tests. In conclusion, tyre crumbs have the potential to be used as a filler material in brick manufacturing, but more research is needed to tackle the durability problem.
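
For readers who want to reproduce the proportioning, a minimal batch calculator following the stated ratios (20 wt.% activator relative to solids, 1:1 NaOH to sodium silicate, 0-15 wt.% tyre-crumb substitution) might look like this; the function and variable names are ours, not the authors'.

```python
# Sketch of the mix proportioning described above; masses in grams.
def brick_batch(total_solids_g: float, tyre_fraction: float) -> dict:
    assert 0.0 <= tyre_fraction <= 0.15, "study tested 0-15 wt.% substitution"
    tyre = total_solids_g * tyre_fraction
    activator = total_solids_g * 0.20          # 20 wt.% relative to solid weight
    return {
        "mill_waste_g": total_solids_g - tyre,
        "tyre_crumbs_g": tyre,
        "naoh_8M_g": activator / 2,            # 1:1 NaOH : sodium silicate
        "sodium_silicate_g": activator / 2,
    }

print(brick_batch(1000, 0.05))  # a 5 wt.% tyre-crumb batch
```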

Keywords: bricks, mill waste, tyre crumbs, waste recycling

Procedia PDF Downloads 115
442 Next-Gen Solutions: How Generative AI Will Reshape Businesses

Authors: Aishwarya Rai

Abstract:

This study explores the transformative influence of generative AI on startups, businesses, and industries. We explore how large businesses can benefit in customer operations, where AI-powered chatbots can improve self-service and agent effectiveness, greatly increasing efficiency. In marketing and sales, generative AI could transform businesses by automating content development, data utilization, and personalization, resulting in a substantial increase in marketing and sales productivity. In software engineering-focused startups, generative AI can streamline activities, significantly impacting coding processes and work experiences. It can be extremely useful in product R&D for market analysis, virtual design, simulations, and test preparation, altering old workflows and increasing efficiency. In the retail and CPG industry, findings suggest a 1-2% increase in annual revenues, equating to $400 billion to $660 billion. By automating customer service, marketing, sales, and supply chain management, generative AI can streamline operations, optimizing personalized offerings and presenting itself as a disruptive force. While celebrating this economic potential, we acknowledge challenges like external inference and adversarial attacks. Human involvement remains crucial for quality control and security in the era of generative AI-driven transformative innovation. This talk provides a comprehensive exploration of generative AI's pivotal role in reshaping businesses, recognizing its strategic impact on customer interactions, productivity, and operational efficiency.

Keywords: generative AI, digital transformation, LLM, artificial intelligence, startups, businesses

Procedia PDF Downloads 58
441 Assessment of Physical Activity and Sun Exposure of Saudi Patients with Type 2 Diabetes Mellitus in Ramadan and Non-Ramadan Periods

Authors: Abdullah S. Alghamdi, Khaled Alghamdi, Richard O. Jenkins, Parvez I. Haris

Abstract:

Background: Physical activity is an important factor in the treatment and prevention of type 2 diabetes mellitus (T2DM). Reductions in HbA1c level, an important diabetes biomarker, have been reported in patients who increased their daily physical activity. Although higher ambient temperatures have been linked to negative health impacts and increased incidence of diabetes, exposure to bright sunlight was recently found to be associated with enhanced insulin sensitivity and improved beta-cell function. How Ramadan alters physical activity, and especially sunlight exposure, has not been adequately investigated. Aim: This study aimed to assess the physical activity and sun exposure of Saudis with T2DM over different periods (before, during, and after Ramadan) and to relate these to HbA1c levels. Methods: This study recruited 82 Saudis with T2DM, who chose to fast during Ramadan, from the Endocrine and Diabetic Centre of Al Iman General Hospital, Riyadh, Saudi Arabia. Ethical approvals for this study were obtained from De Montfort University and the Saudi Ministry of Health. Physical activity and sun exposure were assessed by a self-administered questionnaire. Physical activity was estimated using the International Physical Activity Questionnaire (IPAQ), while sun exposure was assessed by asking the patients about their weekly hours of direct exposure to the sun and their daily hours spent outdoors. Blood samples were collected in each period for measuring HbA1c. Results: Low physical activity was observed in more than 60% of the patients, with no significant changes between periods. There were no significant differences between periods in the daily hours spent outdoors or in the total weekly hours of direct exposure to the sun. The majority of patients reported only a few hours of exposure to the sun (1 h or less per week) and little time spent outdoors (1 h or less per day). The mean HbA1c changed significantly between periods (P = 0.001), with the lowest level during Ramadan. There were significant differences in mean HbA1c between the physical activity groups (P < 0.001), with a significantly lower mean HbA1c in the higher-activity group. There were no significant differences in mean HbA1c between groups for daily hours spent outdoors. The mean HbA1c of patients who reported no weekly sun exposure was significantly lower than that of patients who reported 1 hour or less (P = 0.001). Conclusion: Physical inactivity was prevalent among the study population, with very little exposure to the sun or time spent outdoors. A higher level of physical activity was associated with lower mean HbA1c levels. Encouraging T2DM patients to achieve the recommended levels of physical activity may help them obtain greater benefits of Ramadan fasting, such as reduced HbA1c levels. The impact of low direct sun exposure and time spent outdoors needs further investigation in both healthy and diabetic individuals.

Keywords: diabetes, fasting, physical activity, sunlight, Ramadan

Procedia PDF Downloads 150
440 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Proper seismic evaluation of Non-Structural Components (NSCs) mandates an accurate estimation of floor seismic demands (i.e. acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting the higher-mode effect, the tuning effect, and the NSC damping effect, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e. design spectra for structural components). The study uses a database of 27 Reinforced Concrete (RC) buildings in which Ambient Vibration Measurements (AVM) were conducted. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building in both horizontal directions, considering 4 different NSC damping ratios (2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically: the NSC damping ratio, tuning of the NSC natural period with one of the natural periods of the supporting structure, higher modes of the supporting structure, and the location of the NSC. The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5% damped UHS, and procedures are proposed to generate roof FDS for NSCs with 5% damping directly from the 5% damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings which have to remain functional even after an earthquake and cannot tolerate any damage to NSCs.
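
The paper derives FRS from floor motions of instrumented buildings; as a rough sketch of the underlying computation, the routine below evaluates a pseudo-acceleration response spectrum of an acceleration record (floor or ground level) for the four NSC damping ratios named above. The record here is synthetic and the integration scheme (central difference, unit-mass SDOF) is our choice, not necessarily the authors'.

```python
import numpy as np

def pseudo_acceleration_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration spectrum Sa(T) of an acceleration record, via
    central-difference integration of a unit-mass SDOF oscillator.
    dt must stay well below the shortest period (dt < T/pi) for stability."""
    sa = np.empty(len(periods))
    for j, T in enumerate(periods):
        w = 2.0 * np.pi / T
        k, c = w * w, 2.0 * zeta * w                  # unit mass
        denom = 1.0 / dt**2 + c / (2.0 * dt)
        u_prev = u = umax = 0.0
        for p in -ag:                                 # p(t) = -ag(t), m = 1
            u_next = (p - (k - 2.0 / dt**2) * u
                      - (1.0 / dt**2 - c / (2.0 * dt)) * u_prev) / denom
            u_prev, u = u, u_next
            umax = max(umax, abs(u))
        sa[j] = w * w * umax                          # pseudo-acceleration
    return sa

# Synthetic floor record and the four NSC damping ratios used in the study
t = np.arange(0.0, 20.0, 0.005)
record = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.2 * t)
periods = np.linspace(0.05, 3.0, 60)
for z in (0.02, 0.05, 0.10, 0.20):
    print(z, round(pseudo_acceleration_spectrum(record, 0.005, periods, z).max(), 2))
```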

Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design

Procedia PDF Downloads 230
439 Impact of Alkaline Activator Composition and Precursor Types on Properties and Durability of Alkali-Activated Cement Mortars

Authors: Sebastiano Candamano, Antonio Iorfida, Patrizia Frontera, Anastasia Macario, Fortunato Crea

Abstract:

Alkali-activated materials are promising binders obtained by alkaline attack on fly ash, metakaolin, or blast furnace slag, among others. To guarantee the highest ecological and cost efficiency, a proper selection of precursors and alkaline activators has to be carried out. These choices deeply affect the microstructure, chemistry and performance of this class of materials. Even though much research in recent years has focused on mix designs and curing conditions, the lack of exhaustive activation models, standardized mix designs and curing conditions, together with insufficient investigation of shrinkage behavior, efflorescence, additives and durability, prevents these materials from being perceived as an effective and reliable alternative to Portland cement. The aim of this study is to develop alkali-activated cement mortars containing high amounts of industrial by-products and waste, such as ground granulated blast furnace slag (GGBFS) and ashes obtained from the combustion of forest biomass in thermal power plants. The experimental campaign was performed in two steps. In the first step, research focused on elucidating how the workability, mechanical properties and shrinkage behavior of the produced mortars are affected by the type and fraction of each precursor as well as by the composition of the activator solutions. To investigate the microstructures and reaction products, SEM and diffractometric analyses were carried out. In the second step, durability in harsh environments was evaluated. Mortars obtained using only GGBFS as the binder showed mechanical property development and shrinkage behavior strictly dependent on the SiO2/Na2O molar ratio of the activator solutions. Compressive strengths were in the range of 40-60 MPa after 28 days of curing at ambient temperature. Mortars obtained by partial replacement of GGBFS with metakaolin and forest biomass ash showed lower compressive strengths (≈35 MPa) and lower shrinkage values when higher amounts of ash were used. By varying the activator solutions and binder composition, compressive strengths up to 70 MPa associated with shrinkage values of about 4200 microstrains were measured. Durability tests were conducted to assess the acid and thermal resistance of the different mortars. They all showed good resistance in a 5 wt.% H2SO4 solution, even after 60 days of immersion, while they showed a 60-90% decrease in mechanical properties when exposed to thermal cycles up to 700°C.

Keywords: alkali activated cement, biomass ash, durability, shrinkage, slag

Procedia PDF Downloads 314
438 Deep Learning-Based Object Detection on Low Quality Images: A Case Study of Real-Time Traffic Monitoring

Authors: Jean-Francois Rajotte, Martin Sotir, Frank Gouineau

Abstract:

The installation and management of traffic monitoring devices can be costly from both a financial and a resource point of view. It is therefore important to take advantage of existing infrastructure to extract the most information. Here we show how low-quality urban road traffic images from cameras already available in many cities (such as Montreal, Vancouver, and Toronto) can be used to estimate traffic flow. To this end, we use a pre-trained neural network, developed for object detection, to count vehicles within images. We then compare the results with human annotations gathered through crowdsourcing campaigns. We use this comparison to assess performance and calibrate the neural network annotations. As a use case, we consider six months of continuous monitoring over hundreds of cameras installed in the city of Montreal. We compare the results with city-provided manual traffic counts performed in similar conditions at the same locations. The good performance of our system allows us to consider applications which can monitor traffic conditions in near real-time, making the counts usable for traffic-related services. Furthermore, the resulting annotations pave the way for building a historical vehicle counting dataset to be used for analysing the impact of road traffic on many city-related issues, such as urban planning, security, and pollution.
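
The abstract does not name the detector used; as one plausible sketch of the counting step, the snippet below runs an off-the-shelf torchvision Faster R-CNN on a single frame and counts detections in vehicle classes. The image path and the 0.5 confidence threshold are illustrative; in the paper's setting the threshold would be calibrated against the crowdsourced annotations.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]          # COCO class names
VEHICLES = {"car", "truck", "bus", "motorcycle"}

img = read_image("camera_frame.jpg")             # hypothetical low-quality frame
with torch.no_grad():
    out = model([weights.transforms()(img)])[0]

count = sum(
    1
    for label, score in zip(out["labels"].tolist(), out["scores"].tolist())
    if score > 0.5 and categories[label] in VEHICLES
)
print("vehicles in frame:", count)
```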

Keywords: traffic monitoring, deep learning, image annotation, vehicles, roads, artificial intelligence, real-time systems

Procedia PDF Downloads 183
437 Artificial Intelligence Assisted Sentiment Analysis of Hotel Reviews Using Topic Modeling

Authors: Sushma Ghogale

Abstract:

With the surge in user-generated content, feedback, and reviews on the internet, it has become possible and important to know consumers' opinions about products and services. This data is important for both potential customers and the businesses providing the services. Data from social media is attracting significant attention and has become the most prominent channel for expressing unregulated opinions. Prospective customers look for reviews from experienced customers before deciding to buy a product or service. Several websites provide a platform for users to post their feedback for providers and potential customers. However, the biggest challenge in analyzing such data is extracting latent features and providing term-level analysis. This paper proposes an approach that uses topic modeling to classify reviews into topics and sentiment analysis to mine the opinions. The approach identifies latent topics mentioned by reviewers on business sites, review sites, or social media and determines the importance of each topic; sentiment analysis then assesses the satisfaction level for each topic. A classification of hotel reviews is performed using multiple machine learning techniques, comparing different classifiers to mine the opinions in user reviews through sentiment analysis. The experiments show that the Multinomial Naïve Bayes classifier produces higher accuracy than the other classifiers.
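
A toy sketch of this pipeline, under our own choice of libraries (scikit-learn LDA for topics, Multinomial Naïve Bayes for sentiment) and with six invented reviews, could look like this:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

reviews = [
    "spotless room and very friendly staff",
    "the room was dirty and smelled bad",
    "breakfast buffet was delicious",
    "food was cold and the service slow",
    "great location near the beach",
    "location is noisy and far from downtown",
]
sentiment = [1, 0, 1, 0, 1, 0]                     # 1 = positive, 0 = negative

# Step 1: assign each review a dominant latent topic with LDA
counts = CountVectorizer().fit_transform(reviews)
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)
topic_of = topics.argmax(axis=1)

# Step 2: per-topic satisfaction via a Multinomial Naive Bayes sentiment model
tfidf = TfidfVectorizer()
clf = MultinomialNB().fit(tfidf.fit_transform(reviews), sentiment)
for t in range(3):
    idx = [i for i, k in enumerate(topic_of) if k == t]
    if idx:
        preds = clf.predict(tfidf.transform([reviews[i] for i in idx]))
        print(f"topic {t}: mean satisfaction {preds.mean():.2f}")
```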

Keywords: latent Dirichlet allocation, topic modeling, text classification, sentiment analysis

Procedia PDF Downloads 90
436 Artificial Intelligence-Based Approaches for Task Offloading, Resource Allocation and Service Placement of Internet of Things Applications: State of the Art

Authors: Fatima Z. Cherhabil, Mammar Sedrati, Sonia-Sabrina Bendib

Abstract:

To support the continued growth and strict latency requirements of IoT applications, and to overcome various limitations of traditional data centers, mobile edge computing (MEC) has emerged as a promising solution that extends cloud data-processing and decision-making to edge devices. By adopting a MEC structure, IoT applications can be executed locally, on an edge server, on different fog nodes, or in distant cloud data centers. However, one is often faced with optimizing conflicting criteria: minimizing the energy consumption of the limited local capabilities (in terms of CPU, RAM, storage, bandwidth) of mobile edge devices while maintaining high performance (reducing response time, increasing throughput and service availability). Achieving one goal may affect the other, making task offloading (TO), resource allocation (RA), and service placement (SP) complex processes. Studying the trade-off between conflicting criteria is a nontrivial multi-objective optimization problem. This paper provides a survey of recent multi-objective optimization (MOO) approaches to TO, SP, and RA in edge computing environments, particularly artificial intelligence (AI) ones, that aim to satisfy the various objectives, constraints, and dynamic conditions of IoT applications.
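
To make the energy/latency trade-off behind TO/RA/SP concrete, here is a toy weighted-sum scalarization (one classical way to reduce a multi-objective problem to a single score). All placement options and numbers are invented for illustration.

```python
placements = {
    #            energy (J), latency (ms)
    "local":        (4.0,  20.0),
    "edge_server":  (1.5,  35.0),
    "fog_node":     (2.0,  50.0),
    "cloud":        (0.5, 120.0),
}

def best_placement(w_energy: float, w_latency: float) -> str:
    # Normalize each criterion to [0, 1] before mixing the two objectives.
    e_max = max(e for e, _ in placements.values())
    l_max = max(l for _, l in placements.values())
    return min(
        placements,
        key=lambda p: w_energy * placements[p][0] / e_max
                    + w_latency * placements[p][1] / l_max,
    )

print(best_placement(0.8, 0.2))  # battery-constrained device -> offload far
print(best_placement(0.2, 0.8))  # latency-critical IoT app -> run closer
```

Changing the weights moves the chosen placement along the Pareto front, which is exactly why TO, RA, and SP must be treated as a joint multi-objective problem.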

Keywords: mobile edge computing, multi-objective optimization, artificial intelligence approaches, task offloading, resource allocation, service placement

Procedia PDF Downloads 99
435 Bridge Health Monitoring: A Review

Authors: Mohammad Bakhshandeh

Abstract:

Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.

Keywords: structural health monitoring (SHM), bridge health monitoring (BHM), sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis

Procedia PDF Downloads 75
434 Predicting the Compressive Strength of Geopolymer Concrete Using Machine Learning Algorithms: Impact of Chemical Composition and Curing Conditions

Authors: Aya Belal, Ahmed Maher Eltair, Maggie Ahmed Mashaly

Abstract:

Geopolymer concrete is gaining recognition as a sustainable alternative to conventional Portland cement concrete due to its environmentally friendly nature, a key goal of Smart City initiatives, and it has demonstrated its potential as a reliable material for the design of structural elements. However, the production of geopolymer concrete suffers from batch-to-batch variations, which presents a significant challenge to its widespread adoption. Machine learning has had a profound impact on various fields by enabling models to learn from large datasets and predict outputs accurately. This paper combines the current shift toward artificial intelligence with the composition of geopolymer mixtures to predict their mechanical properties. The study uses Python to develop a machine learning model, specifically decision trees. The oxide percentages and the chemical composition of the alkali solution, along with the curing conditions, serve as the input parameters, irrespective of the waste products used in the mixture, and the compressive strength of the mix is the output parameter. The results showed 90% agreement between predicted and actual values, with the ratio of sodium silicate to sodium hydroxide solution being the dominant parameter in the mixture.
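
A minimal sketch of this setup with scikit-learn follows. The feature names mirror the inputs mentioned (oxide percentages, activator chemistry, curing conditions), but the data is randomly generated for illustration, not the paper's dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(40, 60, n),    # SiO2 %
    rng.uniform(15, 30, n),    # Al2O3 %
    rng.uniform(1.5, 3.0, n),  # sodium silicate / NaOH ratio (dominant input)
    rng.uniform(8, 16, n),     # NaOH molarity
    rng.uniform(25, 90, n),    # curing temperature, deg C
])
# Synthetic compressive strength (MPa) driven mostly by the activator ratio
y = 10 + 12 * X[:, 2] + 0.2 * X[:, 4] + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("R2 on held-out mixes:", round(r2_score(y_te, model.predict(X_te)), 3))
print("feature importances:", model.feature_importances_.round(2))
```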

Keywords: decision trees, geopolymer concrete, machine learning, smart cities, sustainability

Procedia PDF Downloads 71
433 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In multi-echelon supply chain networks, optimal supplier selection depends significantly on the accuracy of supplier performance prediction. Different multi-criteria decision-making methods such as ANN, GA, fuzzy logic, AHP, etc., have previously been used to predict supplier performance, but the "black-box" characteristic of these methods remains a major concern. The primary objective of this paper is therefore to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A train-test approach is then applied to the ANN and GEP models separately. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the coefficient of determination (R²). The results of a case study conducted at Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with the ANN, gene expression programming is significantly preferable for predicting supplier performance, judging by the respective RMSE and R² values. Moreover, using GEP, a mathematical function was also derived, resolving the black-box limitation of the ANN in modeling performance prediction.
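
The evaluation protocol itself is easy to sketch: fit a model on a train split and score RMSE and R² on the test split. Below, a scikit-learn MLP stands in for the ANN on synthetic supplier data (not SAPCO's); a GEP model (e.g., from the geppy or gplearn libraries) would be scored with the same two lines.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (150, 5))        # criteria scores: quality, delivery, cost...
y = 3 * X[:, 0] + 2 * X[:, 1] - X[:, 2] + rng.normal(0, 0.1, 150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
pred = ann.fit(X_tr, y_tr).predict(X_te)

rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"ANN  RMSE={rmse:.3f}  R2={r2_score(y_te, pred):.3f}")
```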

Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO

Procedia PDF Downloads 408
432 Bias Prevention in Automated Diagnosis of Melanoma: Augmentation of a Convolutional Neural Network Classifier

Authors: Kemka Ihemelandu, Chukwuemeka Ihemelandu

Abstract:

Melanoma remains a public health crisis, with incidence rates increasing rapidly in the past decades. Improvements in diagnostic accuracy to decrease misdiagnosis using artificial intelligence (AI) continue to be documented. Unfortunately, unintended racially biased outcomes, a product of a lack of diversity in the datasets used, with a noted class imbalance favoring lighter vs. darker skin tones, have increasingly been recognized as a problem, limiting the accuracy of convolutional neural network (CNN) models. CNN models are prone to biased output due to biases in the dataset used to train them. Our aim in this study was the optimization of convolutional neural network algorithms to mitigate bias in the automated diagnosis of melanoma. We hypothesized that a training algorithm based on data augmentation, which optimizes the diagnostic accuracy of a CNN classifier by generating new training samples from the original ones, would reduce bias in the automated diagnosis of melanoma. We applied geometric transformations, including rotations, translations, scale changes, flipping, and shearing. The resulting CNN model was trained on modified input data, making for a model that could learn subtle racial features. Optimal selection of the momentum and batch-size hyperparameters increased our model's accuracy. We show that our augmented model reduces bias while maintaining accuracy in the automated diagnosis of melanoma.
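
The geometric-augmentation recipe listed above (rotation, translation, scaling, flipping, shearing) maps directly onto Keras, as sketched below. Parameter values and the directory layout are illustrative assumptions, not the authors' settings.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=30,          # rotations
    width_shift_range=0.1,      # translations
    height_shift_range=0.1,
    zoom_range=0.2,             # scale change
    shear_range=0.15,           # shearing
    horizontal_flip=True,       # flipping
    vertical_flip=True,
    rescale=1.0 / 255,
)

# Hypothetical folder with benign/ and melanoma/ subdirectories
train_flow = augmenter.flow_from_directory(
    "dermoscopy/train", target_size=(224, 224),
    batch_size=32, class_mode="binary",   # batch size tuned alongside momentum
)
# model.fit(train_flow, epochs=50)        # feed into any Keras CNN classifier
```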

Keywords: bias, augmentation, melanoma, convolutional neural network

Procedia PDF Downloads 198
431 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI's GPT-3.5 for Enhanced Reflective Thinking

Authors: Jonas Colin

Abstract:

Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigm shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and an individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection on and adaptation of its communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, where machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.
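
The abstract gives no implementation; as a minimal sketch of how a profile-conditioned GPT-3.5 dialogue loop could be wired with the OpenAI Python client, consider the following. The prompt text, profile string, and helper names are ours.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A distilled description of the user's style, kept in the system prompt
profile = "The user favours short sentences, concrete examples, and asks 'why' often."
history = [{"role": "system",
            "content": f"Mirror this user's persona when replying. Profile: {profile}"}]

def ask(user_text: str) -> str:
    """One dialogue turn: append the user message, query GPT-3.5, keep history."""
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Help me reflect on why I keep postponing my thesis."))
```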

Keywords: chatbot, GPT-3.5, metacognition, symbiosis

Procedia PDF Downloads 51
430 First Systematic Review on Aerosol Bound Water: Exploring the Existing Knowledge Domain Using the CiteSpace Software

Authors: Kamila Widziewicz-Rzonca

Abstract:

The presence of PM-bound water as an integral chemical component of suspended particulate matter (PM) has become one of the hottest issues in recent years. The UN climate summits (COP24) indicate that PM of anthropogenic origin (released mostly from coal combustion) is directly responsible for climate change. Chemical changes at the particle-water interface determine many phenomena occurring in the atmosphere, such as visibility, cloud formation and precipitation intensity. Since water-soluble particles such as nitrates, sulfates, or sea salt easily become cloud condensation nuclei, they affect the climate, for example, by increasing cloud droplet concentration. Aerosol water is a master component of atmospheric aerosols and the medium that enables all aqueous-phase reactions occurring in the atmosphere. Thanks to a thorough bibliometric analysis conducted using the CiteSpace software, it was possible to identify past trends and possible future directions in measuring aerosol-bound water. This work does not aim to review the existing literature on the topic; rather, it is an in-depth bibliometric analysis exploring existing gaps and new frontiers in the topic of PM-bound water. To assess the major scientific areas related to PM-bound water and clearly define which among them are the most active topics, we searched the Web of Science databases from 1996 to 2018. We answer the questions: which authors, countries, institutions and aerosol journals have most influenced PM-bound water research? The results indicate that the paper with the greatest citation burst was Tang I.N. and Munkelwitz H.R., 'Water activities, densities, and refractive indices of aqueous sulfates and sodium nitrate droplets of atmospheric importance', 1994. The largest number of articles in this field was published in Atmospheric Chemistry and Physics. The absolute leader in the number of publications among research institutions is the National Aeronautics and Space Administration (NASA). Meteorology and atmospheric sciences is the category with the most studies in this field. Very few studies on PM-bound water quantitatively measure its presence in ambient particles or its origin; most articles instead treat PM-bound water as an artifact in organic carbon and ion measurements, without any chemical analysis of its content. This scientometric study presents the most recent literature on particulate-bound water.

Keywords: systematic review, aerosol-bound water, PM-bound water, CiteSpace, knowledge domain

Procedia PDF Downloads 116
429 Counter-Terrorism Policies in the Wider Black Sea Region: Evaluating the Robustness of Constantza Port under Potential Terror Attacks

Authors: A. V. Popa, C. Barna, V. Mihalache

Abstract:

As the largest port on the Black Sea and a civil and military nodal point between Europe and Asia, Constantza Port has become a potential target on the international terrorist agenda. The authors use qualitative research based on both face-to-face and online semi-structured interviews with relevant stakeholders (top decision-makers in the Romanian Naval Authority, the Romanian Maritime Training Centre, the National Company "Maritime Ports Administration" and military staff) in order to detect potential vulnerabilities which might be exploited by terrorists in the case of Constantza Port. This also enables bringing together the experts' opinions on potential mitigation measures. Subsequently, the paper formulates various counter-terrorism policies to enhance the robustness of Constantza Port under potential terror attacks and connects them with the attributions in the field of critical infrastructure protection conferred by law on the lead national authority for preventing and countering terrorism, namely the Romanian Intelligence Service. Extending the national counter-terrorism efforts to an international level, the authors propose the establishment, among the experts of the NATO member states of the Wider Black Sea Region, of a platform for the exchange of know-how and best practices in the field of critical infrastructure protection.

Keywords: Constantza Port, counter-terrorism policies, critical infrastructure protection, security, Wider Black Sea Region

Procedia PDF Downloads 285
428 Anomaly Detection in Financial Markets Using Tucker Decomposition

Authors: Salma Krafessi

Abstract:

Financial markets are a multifaceted, intricate environment, and enormous volumes of data are produced every day. To find investment opportunities, possible fraudulent activity, and market oddities, accurate anomaly identification in this data is essential. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. To improve the identification of abnormalities in financial time series data, this study presents Tucker decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across a number of decades. The information is converted to a three-dimensional tensor format, which contains internal characteristics and temporal sequences in a sliding-window structure. The tensor is then broken down using Tucker decomposition into a core tensor and matching factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker decomposition is a possible sign of abnormalities: by setting a statistical threshold, we are able to identify large deviations that indicate unusual behavior. A thorough examination that contrasts the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door for more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis.
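
A compact sketch of this pipeline with the tensorly library follows: window a price series into a 3-way tensor, Tucker-decompose it, and flag windows whose reconstruction error exceeds a statistical threshold. Prices, features, and ranks here are our own illustrative choices, not the paper's.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 2000)) + 1000
prices[1500:1510] += 40                     # inject a synthetic anomaly

window = 30
feats = np.column_stack([prices, np.gradient(prices), np.abs(np.gradient(prices))])
windows = np.stack([feats[i:i + window] for i in range(len(feats) - window)])
tensor = tl.tensor(windows)                 # shape: (n_windows, window, n_features)

core, factors = tucker(tensor, rank=[20, 10, 2])
recon = tl.tucker_to_tensor((core, factors))
err = np.linalg.norm(tensor - recon, axis=(1, 2))   # per-window error

threshold = err.mean() + 3 * err.std()              # statistical threshold
print("anomalous windows start at:", np.where(err > threshold)[0])
```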

Keywords: tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models

Procedia PDF Downloads 47
427 The Effects of Stokes' Drag, Electrostatic Force and Charge on Penetration of Nanoparticles through N95 Respirators

Authors: Jacob Schwartz, Maxim Durach, Aniruddha Mitra, Abbas Rashidi, Glen Sage, Atin Adhikari

Abstract:

NIOSH (National Institute for Occupational Safety and Health) approved N95 respirators are commonly used by workers on construction sites where large amounts of dust, both electrostatically charged and not, are produced by sawing, grinding, blasting, welding, etc. A significant portion of the airborne particles on construction sites can be nanoparticles, generated alongside coarse particles. In field experiments relevant to the current study, we found that nanoparticles in the medium size range penetrate more frequently than smaller and larger nanoparticles. For example, penetration percentages of 11.5-27.4 nm nanoparticles into a sealed N95 respirator on a manikin head ranged from 0.59 to 6.59%, whereas those of 36.5-86.6 nm nanoparticles ranged from 7.34 to 16.04%. The possible causes of this increased penetration of mid-size nanoparticles through mask filters have not yet been explored. The objective of this study is to identify the causes of this unusual behavior. We have considered physical factors such as the Boltzmann distribution of the particles in thermal equilibrium with the air, the kinetic energy of the particles at impact on the mask, the Stokes drag force, and the electrostatic forces in the mask stopping the particles. When the particles collide with the mask, only those that have enough kinetic energy to overcome the energy loss due to the electrostatic forces and the Stokes drag in the mask can pass through. To understand this process, the following assumptions were made: (1) the effect of Stokes drag depends on the particle's velocity at entry into the mask; (2) the electrostatic force is proportional to the charge on the particle, which in turn is proportional to its surface area; (3) the general dependence on electrostatic charge and thickness means that the stronger the electrostatic resistance in the mask and the thicker the mask's fiber layers, the lower the particle penetration, which is a sensible conclusion. In sampling situations where a mask was soaked in alcohol, eliminating the electrostatic interaction, penetration in the mid-size range was much larger than for the same mask with electrostatic interaction. The smaller nanoparticles showed almost zero penetration, most likely because of their small kinetic energy, while the larger nanoparticles showed almost negligible penetration, most likely due to the drag force acting on them. Without the electrostatic force, the penetrating fraction grows for larger particles; with the electrostatic force, it drops, so the diminished penetration of larger particles should be due to increased electrostatic repulsion, possibly because of their larger surface area and therefore larger average charge. We have also explored the effect of ambient temperature on nanoparticle penetration and determined that the dependence of penetration on temperature is weak in the measured range of 37-42°C, since the corresponding factor changes only from 3.17×10⁻³ K⁻¹ to 3.22×10⁻³ K⁻¹.
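
A back-of-envelope sketch of the two competing quantities discussed above, the thermal (Boltzmann) speed of a particle and the Stokes drag at that speed, is shown below for several diameters. Particle density and air viscosity are textbook values we assume, and the Cunningham slip correction (important at these sizes) is deliberately ignored to keep the sketch minimal.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K
T = 310.0                  # K, lower end of the 37-42 C range (1/T ~ 3.22e-3 K^-1)
mu_air = 1.81e-5           # dynamic viscosity of air, Pa*s
rho_particle = 1500.0      # kg/m^3, assumed particle density

for d_nm in (10, 30, 60, 100):
    r = d_nm * 1e-9 / 2
    m = rho_particle * 4.0 / 3.0 * np.pi * r**3
    v_th = np.sqrt(3 * kB * T / m)            # RMS thermal speed
    f_drag = 6 * np.pi * mu_air * r * v_th    # Stokes drag at that entry speed
    print(f"{d_nm:4d} nm: v_th = {v_th:8.2f} m/s, drag = {f_drag:.2e} N")
```

The numbers make the qualitative argument visible: small particles are fast but light (low kinetic energy), large particles are slow and heavily damped, leaving the mid-size range as the penetration window.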

Keywords: respiratory protection, industrial hygiene, aerosol, electrostatic force

Procedia PDF Downloads 184
426 Python Implementation for S1000D Applicability Depended Processing Model - SALERNO

Authors: Theresia El Khoury, Georges Badr, Amir Hajjam El Hassani, Stéphane N’Guyen Van Ky

Abstract:

The widespread adoption of machine learning and artificial intelligence across different domains can be attributed to the digitization of data over several decades, resulting in vast amounts of data, types, and structures. Data processing and preparation are therefore a crucial stage. However, applying these techniques to S1000D standard-based data poses a challenge due to its complexity and the need to preserve logical information. This paper describes SALERNO, an S1000D AppLicability dEpended pRocessiNg mOdel. This Python-based model analyzes and converts XML S1000D-based files into an easier data format that can be used in machine learning techniques while preserving the logic and relationships in the files. The model parses the files in a given folder, filters them, and extracts the required information to be saved in appropriate data frames and Excel sheets. Its main idea is to group the extracted information by applicability. In addition, it extracts the full text by replacing internal and external references while maintaining the relationships between files, as well as the necessary requirements. The resulting files can then be saved in databases and used in different models. Documents in both English and French were tested, and special characters were decoded. Updates to the technical manuals were taken into consideration as well. The model was tested on different versions of S1000D, and the results demonstrated its ability to effectively handle applicability, requirements, references, and relationships across all files and at different levels.
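
The grouping-by-applicability idea can be sketched with the standard library alone. The tag and attribute names below (e.g., applicRefId) are simplified stand-ins for the real S1000D schema, and the folder name is hypothetical; this is not SALERNO's actual code.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict
from pathlib import Path

def group_by_applicability(folder: str) -> dict:
    """Walk a folder of XML data modules and bucket text by applicability."""
    groups = defaultdict(list)
    for path in Path(folder).glob("*.xml"):
        root = ET.parse(path).getroot()
        for node in root.iter():
            applic = node.get("applicRefId", "ALL")   # assumed attribute name
            text = (node.text or "").strip()
            if text:
                groups[applic].append((path.name, node.tag, text))
    return groups

for applic, rows in group_by_applicability("data_modules").items():
    print(applic, len(rows), "text fragments")
# Each bucket could then be written to its own data frame / Excel sheet.
```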

Keywords: aeronautics, big data, data processing, machine learning, S1000D

Procedia PDF Downloads 126
425 Results of Longitudinal Assessments of Very Low Birth Weight and Extremely Low Birth Weight Infants

Authors: Anett Nagy, Anna Maria Beke, Rozsa Graf, Magda Kalmar

Abstract:

Premature birth involves developmental risks: the earlier the baby is born and the lower its birth weight, the higher the risks. The developmental outcomes for immature, low birth weight infants are hard to predict. Our aim is to identify the factors influencing infant and preschool-age development in very low birth weight (VLBW) and extremely low birth weight (ELBW) preterms. Sixty-one subjects participated in our longitudinal study: thirty VLBW and thirty-one ELBW children. The psychomotor development of the infants was assessed using the Brunet-Lézine Developmental Scale at the corrected ages of one and two years; then, at three years of age, they were tested with the WPPSI-IV IQ test. Birth weight, gestational age, perinatal complications, gender, and maternal education were added to the data analysis as independent variables. According to our assessments, our subjects as a group scored in the average range on each subscale of the Brunet-Lézine Developmental Scale. Scores were lowest in language at both measurement points. The children's performance improved between one and two years of age, particularly in the domain of coordination. At three years of age, the mean IQ results, although still in the average range, were near its low end in each index. The ELBW preterms performed significantly more poorly on the Perceptual Reasoning Index. The developmental level at two years predicted IQ better than that at one year. None of the measures distinguished the genders.

Keywords: preterm, extremely low birth-weight, perinatal complication, psychomotor development, intelligence, follow-up

Procedia PDF Downloads 235
424 A Convolutional Neural Network-Based Model for Lassa Fever Virus Prediction Using Patient Blood Smear Images

Authors: A. M. John-Otumu, M. M. Rahman, M. C. Onuoha, E. P. Ojonugwa

Abstract:

A convolutional neural network (CNN) model for predicting Lassa fever was built using the Python 3.8.0 programming language, alongside the Keras 2.2.4 and TensorFlow 2.6.1 libraries as the development environment, in order to reduce the current high risk of Lassa fever in West Africa, particularly in Nigeria. The study was prompted by major flaws in the existing conventional laboratory equipment for diagnosing Lassa fever (RT-PCR), as well as flaws in the AI-based techniques for probing and prognosis of Lassa fever reported in the literature. A total of 15,679 blood smear microscopic images were collected. The proposed model was trained on 70% of the dataset and tested on 30% of the microscopic images to avoid overfitting. A 3x3x3 convolution filter was used to extract features from the microscopic images. The proposed CNN-based model had a recall of 96%, a precision of 93%, an F1 score of 95%, and an accuracy of 94% in predicting and accurately classifying the images into clean or infected samples. Based on empirical evidence from the literature consulted, the proposed model outperformed the other existing AI-based techniques evaluated. If properly deployed, the model will assist physicians, medical laboratory scientists, and patients in making accurate diagnoses of Lassa fever cases, allowing the mortality rate due to the Lassa fever virus to be reduced through sound decision-making.
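
A small Keras model in the spirit described above (3x3 kernels over 3-channel blood-smear images, binary clean/infected output) could be sketched as follows; the layer sizes and input resolution are illustrative, not the paper's exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),            # RGB microscopy image
    layers.Conv2D(32, (3, 3), activation="relu"), # 3x3 kernels over 3 channels
    layers.MaxPooling2D(),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),        # clean vs infected
])
model.compile(
    optimizer="adam", loss="binary_crossentropy",
    metrics=["accuracy", keras.metrics.Precision(), keras.metrics.Recall()],
)
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # 70/30 split as above
```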

Keywords: artificial intelligence, ANN, blood smear, CNN, deep learning, Lassa fever

Procedia PDF Downloads 100
423 A Flute Tracking System for Monitoring the Wear of Cutting Tools in Milling Operations

Authors: Hatim Laalej, Salvador Sumohano-Verdeja, Thomas McLeay

Abstract:

Monitoring of tool wear in milling operations is essential for achieving the desired dimensional accuracy and surface finish of a machined workpiece. Although numerous statistical models and artificial intelligence techniques are available for monitoring the wear of cutting tools, these techniques cannot pinpoint which cutting edge of the tool, or which insert in the case of indexable tooling, is worn or broken. Currently, the task of monitoring wear on the tool cutting edges is carried out by the operator, who performs a manual inspection, causing undesirable stoppages of machine tools and consequently costs incurred from lost productivity. The present study concerns the development of a flute tracking system to segment the signals related to each physical flute of a three-flute cutter used in an end milling operation. The purpose of the system is to monitor the cutting condition of each flute separately in order to determine its progressive wear rate and to predict imminent tool failure. The results of this study clearly show that the signals associated with each flute can be effectively segmented using the proposed flute tracking system. Furthermore, the results illustrate that by segmenting the sensor signal by flute it is possible to investigate the wear of each physical cutting edge of the cutting tool. These findings are significant in that they facilitate online condition monitoring of a cutting tool for each specific flute without the need for operators or engineers to perform manual inspections of the tool.
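
The core segmentation idea can be sketched simply: with the spindle speed and flute count known, each sensor sample is assigned to whichever flute is engaged at that instant. The signal, sampling rate, and spindle speed below are synthetic stand-ins, and a real system would also track the spindle phase.

```python
import numpy as np

def segment_by_flute(signal, fs, spindle_rpm, n_flutes=3, phase=0.0):
    """Return n_flutes arrays, one per physical cutting edge."""
    t = np.arange(len(signal)) / fs                 # sample times, s
    rev = spindle_rpm / 60.0 * t + phase            # spindle revolutions
    flute_idx = (np.floor(rev * n_flutes) % n_flutes).astype(int)
    return [signal[flute_idx == k] for k in range(n_flutes)]

fs, rpm = 20_000, 6_000
sig = np.random.default_rng(2).normal(size=fs)      # 1 s of force/vibration data
for k, s in enumerate(segment_by_flute(sig, fs, rpm)):
    print(f"flute {k}: {len(s)} samples, RMS = {np.sqrt((s**2).mean()):.3f}")
# A rising RMS trend on one flute relative to the others flags wear on that edge.
```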

Keywords: machining, milling operation, tool condition monitoring, tool wear prediction

Procedia PDF Downloads 292
422 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study

Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker

Abstract:

In Abu Dhabi, there are many different education curriculums, and the private schools and quality assurance sector supervises many private schools serving many nationalities. As these curriculums exist to meet expats' needs, they have different requirements for registration and success, and different age groups for starting education. In fact, each curriculum has a different number of years, assessment techniques, reassessment rules, and exam boards. Currently, students who transfer between curriculums are not being placed in the right year group, because the start and end dates of the academic year, and the date-of-birth boundaries for each year group, differ between curriculums. As a result, some students end up either younger or older than their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout the academic journey so that schools can track the student's learning process. In this paper, we propose to develop a computational framework applicable in multicultural countries, such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology integrated with artificial intelligence techniques of machine learning to aid in a smooth transition when assigning students to their year groups, and to provide levelling and differentiation information for students who relocate from one education curriculum to another, whilst also having the ability to store and access student data from anywhere throughout their academic journey.
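
A toy sketch of the placement problem the framework targets follows: the same child lands in different year groups because curriculums measure age against different cut-off dates. The cut-off dates below are invented for illustration, not actual Abu Dhabi regulations.

```python
from datetime import date

CUTOFFS = {
    "british":  (9, 1),    # age measured on 1 September (assumed)
    "american": (9, 30),   # age measured on 30 September (assumed)
    "indian":   (3, 31),   # age measured on 31 March (assumed)
}

def year_group(dob: date, curriculum: str, school_year_start: int) -> int:
    month, day = CUTOFFS[curriculum]
    cutoff = date(school_year_start, month, day)
    age = cutoff.year - dob.year - ((cutoff.month, cutoff.day) < (dob.month, dob.day))
    return age - 4            # assume Year 1 corresponds to age 5 at the cut-off

child = date(2015, 9, 15)
for c in CUTOFFS:
    print(c, "-> year group", year_group(child, c, 2023))
```

Running this shows the same child assigned to different year groups under each curriculum, which is precisely the gap the proposed ML-assisted levelling is meant to close.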

Keywords: admissions, algorithms, cloud computing, differentiation, fog computing, levelling, machine learning

Procedia PDF Downloads 131
421 Business and Psychological Principles Integrated into Automated Capital Investment Systems through Mathematical Algorithms

Authors: Cristian Pauna

Abstract:

With 2020 only a few steps away, investment in financial markets is a common activity nowadays. In the electronic trading environment, automated investment software has become a major part of the business intelligence systems of modern financial companies. Investment decisions are today assisted and/or made automatically by computers using mathematical algorithms. The complexity of these algorithms requires computer assistance in the investment process. This paper presents several investment strategies that can be automated with algorithmic trading for the Deutscher Aktienindex (DAX30). It was found that, based on several price-action mathematical models used for high-frequency trading, some investment strategies can be optimized and improved for automated investment, with good results. This paper presents the way to automate these investment decisions; automated signals are built using all of these strategies. Three major types of investment strategies were identified in this study, distinguished by the target length and by the exit strategy used. The exit decisions are also automated, and the paper presents the specifics of each investment type. A comparative study is also included in order to reveal the differences between the strategies. Based on these results, the profit and the capital exposure are compared and analyzed in order to qualify the investment methodologies presented and to compare them with any other investment system. In conclusion, several major investment strategies are revealed and compared in order to be considered for inclusion in any automated investment system.
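
The paper does not publish its DAX30 models; purely as a generic illustration of automating an entry signal and an exit rule, a moving-average crossover with a fixed take-profit/stop-loss could be coded as below, on a synthetic price series.

```python
import numpy as np

def signals(close, fast=10, slow=50):
    """Indices where the fast moving average crosses above the slow one."""
    f = np.convolve(close, np.ones(fast) / fast, mode="valid")
    s = np.convolve(close, np.ones(slow) / slow, mode="valid")
    f = f[-len(s):]                                  # align the two series
    buy = (f[1:] > s[1:]) & (f[:-1] <= s[:-1])
    return np.where(buy)[0] + slow                   # indices in the raw series

def exit_levels(entry_price, target_pts=50.0, stop_pts=25.0):
    """Automated exit rule: fixed profit target and protective stop."""
    return entry_price + target_pts, entry_price - stop_pts

rng = np.random.default_rng(3)
close = 12000 + np.cumsum(rng.normal(0, 10, 500))    # synthetic DAX-like series
for i in signals(close)[:3]:
    tp, sl = exit_levels(close[i])
    print(f"buy at bar {i} ({close[i]:.1f}), take profit {tp:.1f}, stop {sl:.1f}")
```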

Keywords: algorithmic trading, automated investment systems, limit conditions, trading principles, trading strategies

Procedia PDF Downloads 182
420 Review of Theories and Applications of Genetic Programming in Sediment Yield Modeling

Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo

Abstract:

Sediment yield can be considered the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better flood capacity in reservoirs and consequently help to control overbank flooding. Furthermore, as sediment accumulates in a reservoir, it gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better, in terms of both its problems and alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach for solving complex problems. This paper makes an extensive review of the literature on the theories and applications of evolutionary algorithms, and most especially genetic programming. Successful applications of genetic programming as a soft computing technique are reviewed in sediment modelling and other branches of knowledge. Some fundamental issues such as benchmarks, generalization ability, bloat and over-fitting, and other open issues relating to the working principles of GP, which need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers and the wider GP community sufficient research direction and a valuable guide, and to keep all stakeholders abreast of the issues which need attention during the next decade for the advancement of GP.
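
To make the GP idea concrete for readers new to it, here is a didactic toy: random expression trees over hypothetical rainfall/discharge inputs, scored against a made-up target relation. Real GP adds crossover, mutation, and selection; this sketch only shows the representation and fitness evaluation, and all data is invented.

```python
import random, operator

OPS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]

def random_tree(depth=3):
    """Grow a random expression tree over {rain, flow, constants}."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["rain", "flow", round(random.uniform(-2, 2), 2)])
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, rain, flow):
    if tree == "rain": return rain
    if tree == "flow": return flow
    if isinstance(tree, float): return tree
    (fn, _), lhs, rhs = tree
    return fn(evaluate(lhs, rain, flow), evaluate(rhs, rain, flow))

# Hypothetical target: sediment yield = 2 * rain * flow + 5
data = [(r, f, 2.0 * r * f + 5.0) for r in (1, 2, 3) for f in (0.5, 1.0, 2.0)]
random.seed(4)
best_err = float("inf")
for _ in range(5000):                      # random search stands in for evolution
    t = random_tree()
    err = sum((evaluate(t, r, f) - y) ** 2 for r, f, y in data)
    best_err = min(best_err, err)
print("best squared error found:", round(best_err, 3))
```

Tree size in such systems tends to grow without fitness gains, which is exactly the bloat and over-fitting problem the review highlights.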

Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield

Procedia PDF Downloads 432
419 Immobilization of Horseradish Peroxidase onto Bio-Linked Magnetic Particles with Allium Cepa Peel Water Extracts

Authors: Mirjana Petronijević, Sanja Panić, Aleksandra Cvetanović, Branko Kordić, Nenad Grba

Abstract:

Peroxidase enzymes are biological catalysts that play a major role in phenolic wastewater treatment and other environmental applications. The most studied species from the peroxidase family is horseradish peroxidase (HRP). In environmental processes, HRP can be used in its free or immobilized form. Enzyme immobilization onto a solid support is performed to improve the enzyme's properties, prolong its lifespan and operational stability, and allow its reuse in industrial applications. One enzyme support of the newer generation is magnetic particles (MPs). Fe₃O₄ MPs are the most widely pursued supports for enzyme immobilization owing to their remarkable biocompatibility and non-toxicity. Also, MPs can easily be separated and recovered from water by applying an external magnetic field. On the other hand, metals and metal oxides are not suitable for the covalent binding of enzymes, so their surface modification is necessary. Fe₃O₄ MP functionalization can be performed during synthesis if the synthesis takes place in the presence of plant extracts. Extracts of plant material, such as wild plants, herbs, and even waste materials of the food and agricultural industries (bark, shells, leaves, peel), are rich in various bioactive components such as polyphenols, flavonoids, sugars, etc. When the synthesis of magnetite is performed in the presence of plant extracts, bioactive components are incorporated into the surface of the magnetite, thereby functionalizing it. In this paper, the suitability of bio-magnetite as a solid support for the covalent immobilization of HRP via glutaraldehyde was examined. The activity of the immobilized HRP at different pH values (4-9) and temperatures (20-80°C), as well as its reusability, was examined. The bio-MPs were synthesized by the co-precipitation method from Fe(II) and Fe(III) sulfate salts in the presence of a water extract of Allium cepa peel. The water extract showed 81% antiradical potential (according to the DPPH assay), which is connected with its high polyphenol content. According to FTIR analysis, the bio-magnetite contains oxygen functional groups (-OH, -COOH, C=O) suitable for binding to glutaraldehyde, after which the enzyme is covalently immobilized. The immobilized enzyme showed high activity at ambient temperature and pH 7 (30 U/g) and retained ≥ 80% of its activity over a wide range of pH (5-8) and temperature (20-50°C). The HRP immobilized onto bio-MPs showed remarkable stability towards temperature and pH variations compared to the free enzyme. On the other hand, the immobilized HRP showed low reusability: after the first washing cycle the enzyme retained 50% of its activity, and after the third washing cycle only 22%.

Keywords: bio-magnetite, enzyme immobilization, water extracts, environmental protection

Procedia PDF Downloads 204
418 Composite Approach to Extremism and Terrorism Web Content Classification

Authors: Kolade Olawande Owoeye, George Weir

Abstract:

Terrorist and extremist activities on the internet are becoming some of the most significant threats to national security. In response to this challenge, law enforcement and security authorities are actively implementing comprehensive measures to counter the use of the internet for terrorism. These measures require intelligence gathering via the internet, including real-time monitoring of websites that extremist groups use for recruitment, information dissemination, and other operations. However, with billions of active webpages, real-time monitoring of all of them becomes practically impossible, so efficient webpage classification techniques are needed to narrow the search domain. This research proposes a new approach, the SentiPosit-based method, which combines features of the Posit-based method and the SentiStrength-based method for the classification of terrorism and extremism webpages. The experiment was carried out on 7,500 webpages obtained through the TENE web crawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes ('pro-extremist', 'anti-extremist', and 'neutral'), with 2,500 webpages in each category. A supervised learning algorithm was then applied to the labelled dataset to build the model. The results were compared with existing classification methods in terms of prediction accuracy and runtime. The proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.
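The supervised-learning stage described above can be sketched as follows. The actual Posit/SentiStrength feature sets are not public in this abstract, so plain TF-IDF features and a logistic-regression classifier stand in for them; the tiny corpus and all identifiers are placeholders, not the study's data. Assumes scikit-learn is installed.

```python
# Hedged sketch: text features + supervised classifier over the three labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus; the study used 7,500 manually labelled webpages.
pages = [
    "example page text promoting extremist recruitment",
    "example page text countering extremist narratives",
    "example page text with neutral everyday content",
]
labels = ["pro-extremist", "anti-extremist", "neutral"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # lexical stand-in for SentiPosit features
    LogisticRegression(max_iter=1000),     # any supervised classifier fits this slot
)
model.fit(pages, labels)

# Triage a new crawled page into one of the three classes.
print(model.predict(["another crawled page to triage"]))
```

In the paper's setting, the pipeline's feature step would be replaced by the SentiPosit feature extractor, and accuracy and runtime would be measured on the held-out portion of the 7,500-page dataset.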

Keywords: sentiposit, classification, extremism, terrorism

Procedia PDF Downloads 261
417 Light-Weight Network for Real-Time Pose Estimation

Authors: Jianghao Hu, Hongyu Wang

Abstract:

An effective and efficient human pose estimation algorithm is essential for real-time pose estimation on mobile devices. This paper proposes a light-weight human keypoint detection algorithm, the Light-Weight Network for Real-Time Pose Estimation (LWPE). LWPE uses a light-weight backbone network and depthwise separable convolutions to reduce parameters and lower latency. It uses a feature pyramid network (FPN) to fuse high-resolution, semantically weak features with low-resolution, semantically strong features. With multi-scale prediction, the prediction from each low-resolution feature map is stacked onto the adjacent higher-resolution feature map, providing intermediate supervision and continuously refining the results. In the last step, the keypoint coordinates predicted at the highest resolution are used as the final output of the network. For keypoints that are difficult to predict, LWPE adopts an online hard keypoint mining strategy to focus training on them. The proposed algorithm achieves excellent performance on the single-person dataset selected from the AI (artificial intelligence) challenge dataset. It maintains high-precision performance even though the model contains only 3.9M parameters, and it runs at 225 frames per second (FPS) on a generic graphics processing unit (GPU).
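A minimal PyTorch sketch of the depthwise separable convolution block that such light-weight backbones rely on is given below; the channel sizes are illustrative assumptions, not the authors' exact LWPE architecture.

```python
# Depthwise separable convolution: a 3x3 per-channel (depthwise) convolution
# followed by a 1x1 (pointwise) convolution that mixes channels.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 64, 64)        # N, C, H, W
block = DepthwiseSeparableConv(32, 64)
print(block(x).shape)                  # torch.Size([1, 64, 64, 64])
# Weight count vs. a standard 3x3 conv: 32*3*3 + 32*64 = 2,336
# against 32*64*3*3 = 18,432, which is how budgets like 3.9M become feasible.
```

Stacking such blocks in place of standard convolutions is the standard way light-weight backbones trade a small accuracy margin for a large reduction in parameters and latency.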

Keywords: depthwise separable convolutions, feature pyramid network, human pose estimation, light-weight backbone

Procedia PDF Downloads 142
416 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050

Authors: Ali Hashemifarzad, Jens Zum Hingst

Abstract:

The structure of the world's energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. According to the federal climate protection plan, Germany aims for complete decarbonization of the energy sector by 2050. One of the stipulations of the Renewable Energy Sources Act 2017 for the expansion of energy production from renewable sources in Germany is that renewables cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050 the energy supply system would have to be almost completely converted to renewable energy. An essential basis for developing such a sustainable energy supply from 100% renewable energies is a prediction of the energy requirement in 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for increased energy efficiency and demand reduction are set very ambitiously; to provide a basis for comparison, the second scenario gives results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as projected population development and economic growth, which have historically been significant drivers of rising energy demand. The potential for demand reduction and efficiency increases (on the demand side) was also investigated, in particular current and future technological developments in the energy-consuming sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate). In addition to the traditional electricity sector, heat and fuel-based consumption in the household, commercial, industrial, and transport sectors are taken into account, supporting the idea that, for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must become almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps, and approximately 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand rises to 1,682 TWh/a, of which almost 1,138 TWh/a is electricity. It has also been estimated that 50% of the electricity yield must be routed through storage to compensate for daily and annual fluctuations; given conversion and storage losses of about 50%, the electricity requirement for the very ambitious scenario would increase to 1,227 TWh/a.
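The quoted 1,227 TWh/a follows directly from the abstract's own assumptions, as the short check below shows: half of the 818 TWh/a electricity demand is served directly, while the other half must be generated twice over because only about 50% survives the storage round trip. The variable names are illustrative.

```python
# Back-of-the-envelope check of the storage-adjusted electricity requirement.
demand = 818.0            # TWh/a electricity demand, very ambitious scenario
stored_share = 0.5        # fraction of electricity routed through storage
round_trip_eff = 0.5      # ~50% conversion and storage losses

direct = demand * (1 - stored_share)                 # 409 TWh/a served directly
via_storage = demand * stored_share / round_trip_eff # 818 TWh/a generated for storage
print(direct + via_storage)                          # 1227.0 TWh/a, as in the abstract
```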

Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production

Procedia PDF Downloads 122