Search results for: Zahra Kazemi Saleh

17 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center

Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael

Abstract:

Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
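To make the AI-oriented KPIs mentioned above concrete, the following is a minimal illustrative sketch, not part of the study: the metric choices (PUE, forecast MAE, estimated emission reduction) and all input values are assumptions for demonstration only.

```python
# Illustrative sketch: computing a few AI-oriented data-center KPIs.
# All metric choices and numbers are hypothetical examples, not results from the study.

def power_usage_effectiveness(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Classic data-center efficiency KPI: total energy / IT energy (lower is better)."""
    return total_facility_kwh / it_equipment_kwh

def forecast_mae(actual_kwh: list[float], predicted_kwh: list[float]) -> float:
    """Predictive accuracy of an AI energy generation/consumption forecast."""
    return sum(abs(a - p) for a, p in zip(actual_kwh, predicted_kwh)) / len(actual_kwh)

def emission_reduction(baseline_kwh: float, optimized_kwh: float, grid_kg_co2_per_kwh: float) -> float:
    """Carbon emissions avoided (kg CO2) attributable to AI-optimized operations."""
    return (baseline_kwh - optimized_kwh) * grid_kg_co2_per_kwh

if __name__ == "__main__":
    print(power_usage_effectiveness(1_500_000, 1_000_000))   # e.g. PUE = 1.5
    print(forecast_mae([120, 130, 125], [118, 133, 124]))     # mean absolute error in kWh
    print(emission_reduction(1_500_000, 1_350_000, 0.4))      # kg CO2 avoided
```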

Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency

Procedia PDF Downloads 33
16 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems

Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia

Abstract:

The notion of cloud computing is rapidly gaining ground in the IT industry and is appealing mostly because it makes computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software as a service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases with their associated business processes. One approach for offering SaaS is basing the system’s architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance. Their requests and configurations might then differ according to specific requirements, met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server process and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, the transactions need to strictly adhere to the tenant’s known business processes. This paper argues that security in cloud databases should not be considered as an isolated issue; rather, it should be included in the initial phases of the database design and monitored continuously throughout the whole development process. This paper aims to identify a number of the most common security risks and threats specifically in the area of multi-tenant cloud systems. Issues and bottlenecks relating to security risks in cloud databases are surveyed. Some techniques which might be utilised to overcome them are then listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenancy-based SaaS. This would then assist cloud service providers to define, implement, and manage security policies as per tenant customisation requirements whilst assuring security for the customers’ data.
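To make the cross-tenant access-control requirement concrete, the sketch below shows one way a SaaS layer might enforce tenant isolation before touching shared tables. It is a simplified illustration under assumed names (Session, enforce_tenant_scope); the paper itself specifies requirements rather than an implementation.

```python
# Minimal sketch of tenant-scoped access control in a shared-table SaaS layer.
# Names and structure are illustrative assumptions, not the paper's design.
from dataclasses import dataclass

@dataclass(frozen=True)
class Session:
    user_id: str
    tenant_id: str          # tenant the authenticated user belongs to

class CrossTenantAccessError(Exception):
    pass

def enforce_tenant_scope(session: Session, row_tenant_id: str) -> None:
    """Reject any read/write whose target row belongs to another tenant."""
    if session.tenant_id != row_tenant_id:
        raise CrossTenantAccessError(
            f"user {session.user_id} (tenant {session.tenant_id}) "
            f"attempted to access data of tenant {row_tenant_id}"
        )

def fetch_invoice(session: Session, invoice: dict) -> dict:
    # Every data-access path funnels through the same check, so a missing
    # tenant filter cannot silently leak another tenant's data.
    enforce_tenant_scope(session, invoice["tenant_id"])
    return invoice

if __name__ == "__main__":
    s = Session(user_id="u1", tenant_id="acme")
    print(fetch_invoice(s, {"tenant_id": "acme", "amount": 120}))   # allowed
    # fetch_invoice(s, {"tenant_id": "rival", "amount": 99})        # would raise
```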

Keywords: cloud computing, data management, multi-tenancy, requirements, security

Procedia PDF Downloads 156
15 Zn-, Mg- and Ni-Al-NO₃ Layered Double Hydroxides Intercalated by Nitrate Anions for Treatment of Textile Wastewater

Authors: Fatima Zahra Mahjoubi, Abderrahim Khalidi, Mohamed Abdennouri, Omar Cherkaoui, Noureddine Barka

Abstract:

Industrial effluents are one of the major causes of environmental pollution, especially effluents discharged from various dyestuff, plastic, and paper-making industries. These effluents can give rise to certain hazards and environmental problems owing to their highly colored suspended organic solids. Dye effluents are not only aesthetic pollutants; coloration of water by the dyes may affect photochemical activities in aquatic systems by reducing light penetration. It has also been reported that several commonly used dyes are carcinogenic and mutagenic for aquatic organisms. Therefore, removing dyes from effluents is of significant importance. Many adsorbent materials have been prepared for the removal of dyes from wastewater, including anionic clays, or layered double hydroxides. The zinc/aluminium (Zn-AlNO₃), magnesium/aluminium (Mg-AlNO₃) and nickel/aluminium (Ni-AlNO₃) layered double hydroxides (LDHs) were successfully synthesized via the coprecipitation method. Samples were characterized by XRD, FTIR, TGA/DTA, TEM and pHPZC analysis. XRD patterns showed a basal spacing increase in the order Zn-AlNO₃ (8.85 Å) > Mg-AlNO₃ (7.95 Å) > Ni-AlNO₃ (7.82 Å). FTIR spectra confirmed the presence of nitrate anions in the LDH interlayer. The TEM images indicated that Zn-AlNO₃ presents circular-shaped particles with an average particle size of approximately 30 to 40 nm. Small plates assigned to sheets with hexagonal form were observed in the case of Mg-AlNO₃. Ni-AlNO₃ displays nanostructured spheres with diameters between 5 and 10 nm. The LDHs were used as adsorbents for the removal of methyl orange (MO) as a model dye and for the treatment of an effluent generated by a textile factory. Adsorption experiments for MO were carried out as a function of solution pH, contact time and initial dye concentration. Maximum adsorption occurred at acidic pH. Kinetic data were tested using pseudo-first-order and pseudo-second-order kinetic models. The best fit was obtained with the pseudo-second-order kinetic model. Equilibrium data were correlated to the Langmuir and Freundlich isotherm models. The best conditions for color and COD removal from the textile effluent sample were obtained at lower pH values. Total color removal was obtained with the Mg-AlNO₃ and Ni-AlNO₃ LDHs. Reduction of COD to the limits authorized by Moroccan standards was obtained with a 0.5 g/L LDH dose.
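For reference, the kinetic and isotherm models named above have the following standard (linearized) forms; the symbols follow common usage and are not notation taken from the paper:

```latex
% Pseudo-second-order kinetics (q_t: uptake at time t, q_e: equilibrium uptake, k_2: rate constant)
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}

% Langmuir isotherm (q_m: monolayer capacity, K_L: Langmuir constant, C_e: equilibrium concentration)
\frac{C_e}{q_e} = \frac{1}{q_m K_L} + \frac{C_e}{q_m}

% Freundlich isotherm (K_F, n: Freundlich constants)
q_e = K_F \, C_e^{1/n}
```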

Keywords: chemical oxygen demand, color removal, layered double hydroxides, textile wastewater treatment

Procedia PDF Downloads 354
14 Evaluation of Soil Erosion Risk and Prioritization for Implementation of Management Strategies in Morocco

Authors: Lahcen Daoudi, Fatima Zahra Omdi, Abldelali Gourfi

Abstract:

In Morocco, as in most Mediterranean countries, water scarcity is a common situation because of low and unevenly distributed rainfall. The expansion of irrigated lands, as well as the growth of urban and industrial areas and tourist resorts, contributes to an increase in water demand. Therefore, in the 1960s, Morocco embarked on an ambitious program to increase the number of dams in order to boost water retention capacity. However, the decrease in the capacity of these reservoirs caused by sedimentation is a major problem; it is estimated at 75 million m³/year. Dams and reservoirs become unusable for their intended purposes due to sedimentation in large rivers, which results from soil erosion. Soil erosion is an important driving force shaping the landscape and has become one of the most serious environmental problems, raising much interest throughout the world. Monitoring soil erosion risk is an important part of soil conservation practices, and estimating soil loss risk is the first step towards successful control of water erosion. The aim of this study is to estimate soil loss risk and its spatial distribution across the different regions of Morocco and to prioritize areas for soil conservation interventions. The approach followed is the Revised Universal Soil Loss Equation (RUSLE) applied with remote sensing and GIS, which is the most popular empirically based model used globally for erosion prediction and control. This model has been tested in many agricultural watersheds around the world, particularly for large-scale basins, due to the simplicity of its formulation and the easy availability of the dataset. The spatial distribution of annual soil loss was elaborated by combining several factors: rainfall erosivity, soil erodibility, topography, and land cover. The average annual soil loss estimated in several watersheds of Morocco varies from 0 to 50 t/ha/year. Watersheds characterized by high erosion vulnerability are located in the north (Rif Mountains) and, more particularly, in the central part of Morocco (High Atlas Mountains). This variation in vulnerability is highly correlated with slope variation, which indicates that the topographic factor is the main agent of soil erosion within these catchments. These results could be helpful for planning natural resources management and for implementing sustainable long-term management strategies, which are necessary for soil conservation and for extending the projected economic life of the dams already built.
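The RUSLE model referred to above combines the listed factors multiplicatively; in its standard form (factor symbols follow common usage, not the paper's notation):

```latex
% Revised Universal Soil Loss Equation (standard form)
A = R \cdot K \cdot LS \cdot C \cdot P
% A: mean annual soil loss (t/ha/yr), R: rainfall erosivity, K: soil erodibility,
% LS: slope length and steepness factor, C: cover-management factor, P: support-practice factor
```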

Keywords: soil loss, RUSLE, GIS-remote sensing, watershed, Morocco

Procedia PDF Downloads 461
13 Knowledge State of Medical Students in Morocco Regarding Metabolic Dysfunction-Associated Steatotic Liver Disease (MASLD)

Authors: Elidrissi Laila, El Rhaoussi Fatima-Zahra, Haddad Fouad, Tahiri Mohamed, Hliwa Wafaa, Bellabah Ahmed, Badre Wafaa

Abstract:

Introduction: Metabolic dysfunction-associated steatotic liver disease (MASLD), formerly known as non-alcoholic fatty liver disease (NAFLD), is the leading cause of chronic liver disease. The cardiometabolic risk factors associated with MASLD represent common health issues and significant public health challenges. Medical students, being active participants in the healthcare system and a young demographic, are particularly well placed to understand this entity in order to prevent its occurrence on a personal and collective level. The objective of our study is to assess the level of knowledge among medical students regarding MASLD, its risk factors, and its long-term consequences. Materials and Methods: We conducted a descriptive cross-sectional study using an anonymous questionnaire distributed through social media over a period of 2 weeks. Medical students from various faculties in Morocco answered 22 questions about MASLD, its etiological factors, diagnosis, complications, and principles of treatment. All responses were analyzed using the Jamovi software. Results: A total of 124 students voluntarily provided complete responses. 59% of the participants were in their 3rd year, with a median age of 21 years. Among the respondents, 27% were overweight, obese, or diabetic. 83% correctly answered more than half of the questions, and 77% believed they knew about MASLD. However, 84% of students were unaware that MASLD is the leading cause of chronic liver disease, and 12% even considered it a rare condition. Regarding etiological factors, overweight and obesity were mentioned in 93% of responses, and type 2 diabetes in 84%. 62% of participants believed that type 1 diabetes could not be implicated in MASLD. For 83 students, MASLD was considered a diagnosis of exclusion, while 41 students believed that a biopsy was mandatory for diagnosis. 12% believed that MASLD did not lead to long-term complications, and 44% were unaware that MASLD could progress to hepatocellular carcinoma. Regarding treatment, 85% included weight loss, and 19% did not consider diabetes management a therapeutic approach for MASLD. At the end of the questionnaire, 89% of the students expressed a desire to learn more about MASLD and were invited to access an informative sheet through a hyperlink. Conclusion: MASLD represents a significant public health concern due to the prevalence of its risk factors, notably the obesity pandemic, which is widespread among the young population. There is a need to raise awareness of the seriousness of this emerging and long-underestimated condition among young future physicians.

Keywords: MASLD, medical students, obesity, diabetes

Procedia PDF Downloads 74
12 Medial Temporal Tau Predicts Memory Decline in Cognitively Unimpaired Elderly

Authors: Angela T. H. Kwan, Saman Arfaie, Joseph Therriault, Zahra Azizi, Firoza Z. Lussier, Cecile Tissot, Mira Chamoun, Gleb Bezgin, Stijn Servaes, Jenna Stevenon, Nesrine Rahmouni, Vanessa Pallen, Serge Gauthier, Pedro Rosa-Neto

Abstract:

Alzheimer’s disease (AD) can be detected in living people using in vivo biomarkers of amyloid-β (Aβ) and tau, even in the absence of cognitive impairment during the preclinical phase. [¹⁸F]-MK-6240 is a high-affinity positron emission tomography (PET) tracer that quantifies tau neurofibrillary tangles, but its ability to predict cognitive changes associated with early AD symptoms, such as memory decline, is unclear. Here, we assess the prognostic accuracy of baseline [¹⁸F]-MK-6240 tau PET for predicting longitudinal memory decline in asymptomatic elderly individuals. In a longitudinal observational study, we evaluated a cohort of cognitively normal elderly participants (n = 111) from the Translational Biomarkers in Aging and Dementia (TRIAD) study (data collected between October 2017 and July 2020, with a follow-up period of 12 months). All participants underwent tau PET with [¹⁸F]-MK-6240 and Aβ PET with [¹⁸F]-AZD-4694. The exclusion criteria included the presence of head trauma, stroke, or other neurological disorders. The 111 eligible participants were chosen based on the availability of Aβ PET, tau PET, magnetic resonance imaging (MRI), and APOEε4 genotyping. Among these participants, the mean (SD) age was 70.1 (8.6) years; 20 (18%) were tau PET positive, and 71 of 111 (63.9%) were women. A significant association between baseline Braak I-II [¹⁸F]-MK-6240 SUVR positivity and change in composite memory score was observed at the 12-month follow-up, after correcting for age, sex, and years of education (Logical Memory and RAVLT; standardized beta = -0.52 (-0.82 to -0.21), p < 0.001, for dichotomized tau PET, and -1.22 (-1.84 to -0.61), p < 0.0001, for continuous tau PET). Moderate cognitive decline was observed for A+T+ over the follow-up period, whereas no significant change was observed for A-T+, A+T-, and A-T-, though it should be noted that the A-T+ group was small. Our results indicate that baseline tau neurofibrillary tangle pathology is associated with longitudinal changes in memory function, supporting the use of [¹⁸F]-MK-6240 PET to predict the likelihood of asymptomatic elderly individuals experiencing future memory decline. Overall, [¹⁸F]-MK-6240 PET is a promising tool for predicting memory decline in older adults without cognitive impairment at baseline. This is of critical relevance as the field shifts towards a biological model of AD defined by the aggregation of pathologic tau. Early detection of tau pathology using [¹⁸F]-MK-6240 PET therefore offers hope that patients with AD may be identified during the preclinical phase, before it is too late.
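The association reported above (change in composite memory score regressed on baseline tau SUVR, adjusting for age, sex, and education) corresponds to an ordinary covariate-adjusted linear model. The snippet below is a hedged sketch of such an analysis on synthetic data; the column names and values are assumptions for illustration, not the TRIAD dataset.

```python
# Sketch of the type of covariate-adjusted regression described in the abstract.
# Data and column names are synthetic/assumed; they are not the TRIAD data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 111
df = pd.DataFrame({
    "tau_suvr": rng.normal(1.1, 0.2, n),          # baseline Braak I-II tau PET SUVR
    "age": rng.normal(70, 8.6, n),
    "sex": rng.integers(0, 2, n),                  # 0 = male, 1 = female
    "education_years": rng.normal(15, 3, n),
})
# Synthetic outcome: higher tau -> larger 12-month memory decline
df["memory_change"] = -1.0 * (df["tau_suvr"] - 1.1) + rng.normal(0, 0.3, n)

model = smf.ols("memory_change ~ tau_suvr + age + sex + education_years", data=df).fit()
print(model.params["tau_suvr"], model.pvalues["tau_suvr"])
```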

Keywords: alzheimer’s disease, braak I-II, in vivo biomarkers, memory, PET, tau

Procedia PDF Downloads 76
11 An Analysis of Fundamentals and Factors of Positive Thinking and the Ways of Its Emergence in Islam and the New Testament

Authors: Zahra Mohagheghian, Fatema Agharebparast

Abstract:

The comparative study of religions is one of the ways that promotes peace and brings the believers of different religions closer together. Finding common notions could be a foundation for dialogue among the monotheistic religions and a background for eliminating misunderstandings and reaching common points of view. The cornerstone of all the common efforts of the believers of the religions is to reach an understanding for building a better world where true peace is established. The article therefore seeks to examine the notion of positive thinking in the religious sources of Islam and Christianity. In order to understand the foundations of the religious teachings and to foster a better understanding among believers, the article then tries to discover the common fundamentals and the points of divergence regarding positive thinking in these two religions. We first try to explain the notion of positive thinking in Islam and Christianity and then present the ways recommended in both religions to create and strengthen this way of thinking. As the different parts of the New Testament are not theologically homogeneous, this collection has been examined and explained in four parts: the three Gospels (Matthew, Mark and Luke), John's thought, the thought and ideas of Paul, and finally the Christian sects. The findings of the survey show that the notion of positive thinking in the monotheistic religions of Islam and Christianity can be traced through the keyword "hope". It is only hope that can ultimately create the spirit of positive attitude and thinking within humankind. This hope is accompanied by expectation and causes humankind to work hard to reach their goals. However, there are some points of divergence in these two religions about the basic foundation of this true hope. From the Quranic viewpoint, the main foundation of hope is God, and the human being is obliged to pursue his worldly goals in accordance with this foundation, together with faith in God and avoidance of committing sins. On the other hand, the basic foundation of hope in the three Gospels (Matthew, Mark and Luke) and the teachings of Paul is the promise of a coming Kingdom. Although there are some differing views about its meaning as well as the ways to attain this hope, this hope is generally related to the purpose of human life and the afterlife. In John's thought, Christ is the source of hope, and everyone who believes in God must also place hope in Jesus Christ. The effects and functions of such hope include strengthening the spirit of love and kindness towards others. Hence, in Christianity, hope and positive thinking about the future, along with good deeds, reflect different viewpoints. In the Quran, on the other hand, it is faith in God and fulfilling the Sharia precepts that ignite and strengthen this hope and way of thinking. This is the basis that continues nowadays with Vilāya and the love for Ahlulbeit in the Shiite view.

Keywords: God, new testament, positive thinking, Quran

Procedia PDF Downloads 453
10 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected global financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic capability of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This period, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. The algorithm integrates diverse data sources to construct a dynamic financial graph that reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insight into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate key macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, into our model. The GCN component learns the relational patterns among the financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encode relationships based on co-movement patterns and sentiment correlations, enabling the model to grasp the complex network of influences governing market movements. Complementing this, the LSTM component is trained on sequences of the spatio-temporal representation learned by the GCN, enriched with historical price and volume data, allowing it to capture and predict temporal market trends accurately. In a comprehensive evaluation across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price movements. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to improve investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
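To illustrate the GCN-LSTM fusion described above, here is a minimal PyTorch sketch: a single graph-convolution step (normalized-adjacency message passing) produces node embeddings per day, and these are fed as a sequence into an LSTM that predicts a next-day value for each asset. The layer sizes, the hand-built adjacency matrix, and the random inputs are all assumptions for demonstration, not the authors' architecture or data.

```python
# Minimal GCN -> LSTM fusion sketch (illustrative only; not the paper's exact model).
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))                 # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return torch.relu(norm_adj @ self.lin(x))

class GCNLSTM(nn.Module):
    def __init__(self, n_features: int, gcn_dim: int = 16, lstm_dim: int = 32):
        super().__init__()
        self.gcn = SimpleGCNLayer(n_features, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)                   # next-day prediction per asset

    def forward(self, seq: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # seq: (days, assets, features) -> GCN per day -> LSTM over days per asset
        emb = torch.stack([self.gcn(day, adj) for day in seq])   # (days, assets, gcn_dim)
        emb = emb.permute(1, 0, 2)                               # (assets, days, gcn_dim)
        out, _ = self.lstm(emb)
        return self.head(out[:, -1, :]).squeeze(-1)              # (assets,)

if __name__ == "__main__":
    days, assets, feats = 30, 4, 6          # e.g. OHLC + volume + one macro signal
    seq = torch.randn(days, assets, feats)
    adj = (torch.rand(assets, assets) > 0.5).float()             # assumed co-movement graph
    adj = ((adj + adj.T) > 0).float()
    print(GCNLSTM(feats)(seq, adj))
```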

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 65
9 Response Surface Methodology for the Optimization of Radioactive Wastewater Treatment with Chitosan-Argan Nutshell Beads

Authors: Fatima Zahra Falah, Touria El. Ghailassi, Samia Yousfi, Ahmed Moussaif, Hasna Hamdane, Mouna Latifa Bouamrani

Abstract:

The management and treatment of radioactive wastewater pose significant challenges to environmental safety and public health. This study presents an innovative approach to optimizing radioactive wastewater treatment using a novel biosorbent: chitosan-argan nutshell beads. By employing Response Surface Methodology (RSM), we aimed to determine the optimal conditions for maximum removal efficiency of radioactive contaminants. Chitosan, a biodegradable and non-toxic biopolymer, was combined with argan nutshell powder to create composite beads. The argan nutshell, a waste product from argan oil production, provides additional adsorption sites and mechanical stability to the biosorbent. The beads were characterized using Fourier Transform Infrared Spectroscopy (FTIR), Scanning Electron Microscopy (SEM), and X-ray Diffraction (XRD) to confirm their structure and composition. A three-factor, three-level Box-Behnken design was utilized to investigate the effects of pH (3-9), contact time (30-150 minutes), and adsorbent dosage (0.5-2.5 g/L) on the removal efficiency of radioactive isotopes, primarily focusing on cesium-137. Batch adsorption experiments were conducted using synthetic radioactive wastewater with known concentrations of these isotopes. The RSM analysis revealed that all three factors significantly influenced the adsorption process. A quadratic model was developed to describe the relationship between the factors and the removal efficiency. The model's adequacy was confirmed through analysis of variance (ANOVA) and various diagnostic plots. Optimal conditions for maximum removal efficiency were pH 6.8, a contact time of 120 minutes, and an adsorbent dosage of 0.8 g/L. Under these conditions, the experimental removal efficiency for cesium-137 was 94.7%, closely matching the model's predictions. Adsorption isotherms and kinetics were also investigated to elucidate the mechanism of the process. The Langmuir isotherm and pseudo-second-order kinetic model best described the adsorption behavior, indicating a monolayer adsorption process on a homogeneous surface. This study demonstrates the potential of chitosan-argan nutshell beads as an effective and sustainable biosorbent for radioactive wastewater treatment. The use of RSM allowed for the efficient optimization of the process parameters, potentially reducing the time and resources required for large-scale implementation. Future work will focus on testing the biosorbent's performance with real radioactive wastewater samples and investigating its regeneration and reusability for long-term applications.
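The quadratic model fitted through the Box-Behnken design has the standard second-order RSM form shown below (the symbols follow common usage; the paper's fitted coefficients are not reproduced here):

```latex
% Second-order response surface model for k factors (here k = 3: pH, contact time, adsorbent dosage)
y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i
    + \sum_{i=1}^{k} \beta_{ii} x_i^{2}
    + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon
```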

Keywords: adsorption, argan nutshell, beads, chitosan, mechanism, optimization, radioactive wastewater, response surface methodology

Procedia PDF Downloads 35
8 Improvement of Oxidative Stability of Edible Oil by Microencapsulation Using Plant Proteins

Authors: L. Le Priol, A. Nesterenko, K. El Kirat, K. Saleh

Abstract:

Introduction and objectives: Polyunsaturated fatty acids (PUFAs) omega-3 and omega-6 are widely recognized as being beneficial to health and normal growth. Unfortunately, due to their highly unsaturated nature, these molecules are sensitive to oxidation and thermal degradation, leading to the production of toxic compounds and unpleasant flavors and smells. Hence, it is necessary to find a suitable way to protect them. Microencapsulation by spray-drying is a low-cost encapsulation technology and the one most commonly used in the food industry. Many compounds can be used as wall materials, but there has been growing interest in the use of biopolymers, such as proteins and polysaccharides, over recent years. The objective of this study is to increase the oxidative stability of sunflower oil by microencapsulation in plant protein matrices using the spray-drying technique. Material and methods: Sunflower oil was used as a model substance for oxidizable food oils. Proteins from brown rice, hemp, pea, soy and sunflower seeds were used as emulsifiers and microencapsulation wall materials. First, the proteins were solubilized in distilled water. Then, the emulsions were pre-homogenized using a high-speed homogenizer (Ultra-Turrax) and stabilized using a high-pressure homogenizer (HPH). Drying of the emulsion was performed in a Mini Spray Dryer. The oxidative stability of the encapsulated oil was determined by performing accelerated oxidation tests with a Rancimat. The size of the microparticles was measured using a laser diffraction analyzer. The morphology of the spray-dried microparticles was acquired using environmental scanning electron microscopy (ESEM). Results: Pure sunflower oil was used as a reference material. Its induction time was 9.5 ± 0.1 h. The microencapsulation of sunflower oil in pea and soy protein matrices significantly improved its oxidative stability, with induction times of 21.3 ± 0.4 h and 12.5 ± 0.4 h respectively. Encapsulation with hemp proteins did not significantly change the oxidative stability of the encapsulated oil. Sunflower and brown rice proteins were ineffective materials for this application, with induction times of 7.2 ± 0.2 h and 7.0 ± 0.1 h respectively. The volume mean diameters of the microparticles formulated with soy and pea proteins were 8.9 ± 0.1 µm and 16.3 ± 1.2 µm respectively. The values for hemp, sunflower and brown rice proteins could not be obtained due to the agglomeration of the microparticles. ESEM images showed smooth and round microparticles with soy and pea proteins. The surfaces of the microparticles obtained with sunflower and hemp proteins were porous, and the surface was rough when brown rice proteins were used as the encapsulating agent. Conclusion: Soy and pea proteins appeared to be efficient wall materials for the microencapsulation of sunflower oil by spray-drying. These results were partly explained by the higher solubility of soy and pea proteins in water compared to hemp, sunflower, and brown rice proteins. Acknowledgment: This work has been performed, in partnership with the SAS PIVERT, within the frame of the French Institute for the Energy Transition (Institut pour la Transition Energétique (ITE)) P.I.V.E.R.T. (www.institut-pivert.com) selected as an Investments for the Future (Investissements d’Avenir). This work was supported, as part of the Investments for the Future, by the French Government under the reference ANR-001-01.
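As a quick reading of the Rancimat figures above, the relative gain in oxidative stability can be expressed as the ratio of induction times (an illustrative calculation from the reported values, not a metric defined in the abstract):

```latex
% Protection ratio = induction time of encapsulated oil / induction time of pure oil
\text{pea protein: } \frac{21.3\ \text{h}}{9.5\ \text{h}} \approx 2.2, \qquad
\text{soy protein: } \frac{12.5\ \text{h}}{9.5\ \text{h}} \approx 1.3
```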

Keywords: biopolymer, edible oil, microencapsulation, oxidative stability, release, spray-drying

Procedia PDF Downloads 137
7 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to develop the best method for detecting normal signals from abnormal ones. The data cover both genders, and the recording time varies from several seconds to several minutes. All data are also labeled normal or abnormal. Due to the limited accuracy and duration of the ECG recordings, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM were 0.893 and 0.947, respectively. Moreover, the results of the proposed algorithm indicated that greater use of nonlinear characteristics in classifying normal versus patient signals yielded better performance. Today, research aims to quantitatively analyze the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions, and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that its accuracy is limited in time and some of its information is hidden from the physician's viewpoint, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
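The processing chain described above (R-peak detection, R-R intervals, linear plus nonlinear HRV features, then a classifier evaluated by the area under the ROC curve) can be sketched as follows. The feature set and classifier settings are simplified assumptions for illustration, not the paper's exact pipeline (for example, R-R intervals are taken as given rather than detected with Pan-Tompkins).

```python
# Simplified HRV feature extraction + classification sketch (illustrative only).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def hrv_features(rr_intervals: np.ndarray) -> np.ndarray:
    """A few linear (statistical) and simple nonlinear HRV descriptors."""
    diff = np.diff(rr_intervals)
    sdnn = rr_intervals.std()                       # linear: overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))             # linear: short-term variability
    sd1 = np.sqrt(0.5) * diff.std()                 # nonlinear: Poincare (return-map) width
    sd2 = np.sqrt(max(2 * sdnn ** 2 - 0.5 * diff.std() ** 2, 1e-12))  # Poincare length
    return np.array([rr_intervals.mean(), sdnn, rmssd, sd1, sd2])

# Synthetic stand-in data: one row of features per recording, label 1 = abnormal.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
X = np.array([hrv_features(rng.normal(0.8 + 0.1 * y, 0.05 + 0.05 * y, 300)) for y in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(probability=True).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, scores))          # area under the ROC curve
```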

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 262
6 The Digital Transformation of Life Insurance Sales in Iran With the Emergence of Personal Financial Planning Robots; Opportunities and Challenges

Authors: Pedram Saadati, Zahra Nazari

Abstract:

Anticipating and identifying the future opportunities and challenges that industry players face with the emergence of new personal financial planning knowledge and technologies, and providing practical solutions, are among the goals of this research. For this purpose, a futures-research approach based on gathering opinions from the main players in the insurance industry was used. The research method consisted of four stages: (1) a survey of the specialist life insurance salesforce in order to identify the variables; (2) ranking of the variables by selected experts through a researcher-made questionnaire; (3) an expert panel aimed at understanding the mutual effects of the variables; and (4) statistical analysis of the cross-impact matrix in MICMAC software. The integrated analysis of the influencing variables was carried out with the structural analysis method, one of the efficient and innovative methods of futures research. A list of opportunities and challenges was identified through a survey of best-selling life insurance representatives, who were selected by snowball sampling. In order to prioritize and identify the most important issues, all the issues raised were sent to experts selected theoretically, through a researcher-made questionnaire. The respondents rated the importance of 36 variables through scoring, so that the opportunity and challenge variables could be prioritized. Eight of the variables identified in the first stage were removed by the selected experts, leaving 28 variables to be examined in the third stage. To facilitate the examination, these were divided into six categories: organization and management (11 variables), marketing and sales (7), social and cultural (6), technological (2), rebranding (1), and insurance (1). The reliability of the researcher-made questionnaire was confirmed with a Cronbach's alpha of 0.96. In the third stage, a panel of five insurance industry experts was formed, and the consensus of their opinions about the influence of the factors on each other and the ranking of the variables was entered into the matrix. The matrix of interrelationships among the 28 variables was then investigated using the structural analysis method. Analysis of the matrix in MICMAC software indicates that the categories of "correct training in the use of the software", "the weakness of insurance companies' technology in personalizing products", "adopting a customer-equipping approach", and "honesty in declaring that a customer does not need insurance" are the most influential challenges, while the categories of "the salesforce-equipping approach", "product personalization based on customer needs assessment", "the customer's pleasant experience of being advised by consulting robots", "business improvement of the insurance company due to the use of these tools", "increased efficiency of the issuance process", and "optimal customer purchase" were identified as the most influential opportunities.
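The structural-analysis step (stage 4) essentially reads influence and dependence out of the cross-impact matrix: the row sum of a variable's scores gives its influence (driving power) and the column sum gives its dependence. The snippet below is a small illustration on an invented 4-variable matrix; it mimics the kind of computation MICMAC performs rather than reproducing the software or the study's 28-variable matrix.

```python
# Toy cross-impact (structural analysis) calculation in the spirit of MICMAC.
# The matrix and variable names are invented for illustration.
import numpy as np

variables = ["training", "personalization", "customer equipping", "issuance efficiency"]
# impact[i, j] = influence of variable i on variable j (0 = none ... 3 = strong)
impact = np.array([
    [0, 2, 3, 1],
    [1, 0, 2, 3],
    [2, 1, 0, 2],
    [0, 3, 1, 0],
])

influence = impact.sum(axis=1)    # driving power (row sums)
dependence = impact.sum(axis=0)   # dependence (column sums)

for name, infl, dep in zip(variables, influence, dependence):
    print(f"{name:20s} influence={infl:2d} dependence={dep:2d}")
```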

Keywords: personal financial planning, wealth management, advisor robots, life insurance, digital transformation

Procedia PDF Downloads 46
5 A Brief Review on Doping in Sports and Performance-Enhancing Drugs

Authors: Zahra Mohajer, Afsaneh Soltani

Abstract:

Doping is a major issue in competitive sports and is resorted to by large groups of athletes. The desire to rank higher than others and to gain fame has caused many athletes to misuse drugs. Doping is defined as the use of prohibited substances and/or methods that enhance physical or mental performance, or both. It covers the illegal use of chemical substances or drugs, excessive amounts of physiological substances to increase performance in or out of competition, and even the use of inappropriate medications to treat an injury in order to be able to participate in a competition. The International Olympic Committee (IOC) and the World Anti-Doping Agency (WADA) have forbidden these substances to ensure fair and equal competition as well as the health of the competitors. Since 2004, WADA has published an international list of substances prohibited for doping, which is updated annually. In the course of the Genome Project, scientists gained the ability to treat numerous diseases by gene therapy, which may also increase bodily performance and therefore presents a potential opportunity for misuse by some athletes. Gene doping is defined as the non-therapeutic, direct and indirect genetic modification using genetic materials that can improve performance in sports events. Biosynthetic drugs are a form of indirect genetic engineering. The method can be performed in three ways: injecting the DNA directly into the muscle, inserting genetically engineered cells, or transferring the DNA using a virus as a vector. Erythropoietin is a hormone released mainly by the kidney and in small amounts by the liver. Its function is to stimulate erythropoiesis and therefore the production of more red blood cells (RBC), which causes an increase in hemoglobin (Hb). During this process, oxygen delivery to the muscles increases, which improves athletic performance and postpones exhaustion. There are several ways to increase the oxygen transferred to the muscles, such as blood transfusion, stimulating the production of red blood cells by using erythropoietin (EPO), and using allosteric effectors of hemoglobin. EPO can either be injected as a protein or inserted into cells as the gene that encodes it. Adeno-associated viruses have been employed to deliver the EPO gene to the cells. Employing genes that naturally exist in the human body, such as the EPO gene, can reduce the risk of detecting gene doping. The first research on blood doping was conducted in 1947. The study showed that an increase in hematocrit (HCT) of up to 55% following homologous transfusion made it easier for the body to perform exercise at altitude. Thereafter, athletes' attraction to blood infusion escalated. Another study demonstrated that, by reinfusing their own blood 4 weeks after it was drawn, three men showed a rise in Hb level, improved oxygen uptake, and delayed exhaustion. The list of performance-enhancing drugs published annually by WADA includes the following classes: anabolic agents, hormones, beta-2 agonists, beta-blockers, diuretics, stimulants, narcotics, cannabinoids, and corticosteroids.

Keywords: doping, PEDs, sports, WADA

Procedia PDF Downloads 106
4 Identifying Common Sports Injuries in Karate and Presenting a Model for Preventing Identified Injuries (A Case Study of East Azerbaijan, Iranian Karatekas)

Authors: Nadia Zahra Karimi Khiavi, Amir Ghiami Rad

Abstract:

Due to the high likelihood of injuries in karate, karatekas' injuries warrant special attention. This study explores the prevalence of karate injuries in East Azerbaijan, Iran, and provides a model for karatekas to use in the prevention of such injuries. The study employs a descriptive approach. Male and female karatekas with a brown belt or above, in either control or non-control styles, in East Azerbaijan province form the statistical population. A statistical sample size of 100 people was computed using the tools employed (SmartPLS), and the samples were drawn at random from all clubs in the province with the assistance of the Karate Board in order to provide a model for the prevention of karate injuries. Information was gathered by means of a survey based on the Standard Questionnaire for Australian Sports Medicine Injury Reports. The information is presented in the form of tables and samples, and descriptive statistics were used to organise and summarise the data. Independent t-tests between the control and non-control groups were conducted using SPSS version 20, and structural equation modelling (PLS) was utilised for injury prevention modelling at a 0.05 level of significance. The results showed that the most common areas of injury among the control kumite groups were the upper limbs (46.15%), lower limbs (34.61%), trunk (15.38%), and head and neck (3.84%). The most common types of injuries were broken bones (34.61%), sprain or strain (23.13%), bruising and contusions (23.13%), trauma to the face and mouth (11.53%), and damage to the nerves (69.69%). Non-control kumite practitioners are most likely to sustain injuries to the head and neck (33.33%), trunk (25.92%), upper limbs (22.22%), and lower limbs (18.51%). Their most common injuries were to the mouth and face (33.33%), followed by dislocations and fractures (22.22%), sprain or strain (22.22%), bruises and contusions (18.51%), and nerves (70%). Among those who practice control kata, injuries to the upper limb account for 45.83%, the lower limb for 41.666%, the trunk for 8.33%, and the head and neck for 4.166%. The most common types of injuries are dislocations and fractures (41.66%), sprain or strain (29.16%), bruising and contusions (16.66%), and nerves (12.5%). Injuries to the face and mouth were not reported among those practising control kata. The most common sites of injury for those practising non-control kata were the lower limb (43.74%), upper limb (39.13%), trunk (13.14%), and head and neck (4.34%). The most common types of injuries were dislocations and fractures (34.82%), sprain or strain (26.08%), bruises and contusions (21.73%), mouth and face (13.14%), and nerves. Teaching the concepts of cooling and warming (0.591) and enhancing the degree of safety in the sports environment (0.413) were shown to play the most essential roles in reducing sports injuries among karate practitioners of control and non-control styles, respectively. Other contributing factors included the use of common sports gear (0.390), modification of training programme principles (0.341), formulation of an effective diet plan for athletes (0.284), and evaluation of athletes' physical anatomy, physiology, chemistry, and physics (0.247).

Keywords: sports injuries, karate, prevention, cooling and warming

Procedia PDF Downloads 101
3 The Temporal Pattern of Bumble Bees in Plant Visiting

Authors: Zahra Shakoori, Farid Salmanpour

Abstract:

Pollination is a vital ecosystem service for maintaining environmental stability, and the decline of pollinators can disrupt the ecological balance by affecting components of biodiversity. Bumble bees are crucial pollinators, playing a vital role in maintaining plant diversity. This study investigated the temporal patterns of their visits to flowers in Kiasar National Park, Iran. Observations were conducted in June 2024, totaling 442 person-minutes. Five species of bumble bees were identified. The study revealed that they consistently visited an average of 12-15 flowers per minute, regardless of species. The findings highlight the importance of protecting natural habitats, where bumble bee populations are thriving in the absence of human-induced stressors. The study was conducted in Kiasar National Park, located in the southeast of Mazandaran, northern Iran. The surveyed area, at an altitude of 1800-2200 meters, includes both forest and pasture. Bumble bee surveys were carried out on sunny days in June 2024, starting at dawn and ending at sunset. To avoid double-counting, we systematically searched for foraging habitats on low-sloping ridges with high mud density, frequently moving between patches. We recorded bumble bee visits to flowers and plant species per minute using direct observation, a stopwatch, and a pre-prepared form. We used analysis of variance (ANOVA) with a confidence level of 95% to examine potential differences in foraging rates across the different bumble bee species and across the flowers, individual plants, and plant species visited. Bumble bee identification relied on morphological characters. A total of 442 person-minutes of bumble bee observations were recorded, and five species (Bombus fragrans, Bombus haematurus, Bombus lucorum, Bombus melanurus, Bombus terrestris) were identified during the study. The results showed that the visitation rates of the bumble bee species to floral resources did not differ from one another. In general, bumble bees visited an average of 12-15 flowers every 60 seconds; at the same time, they visited 3-5 individual plants and an average of 1 to 3 plant species per minute. While many taxa contribute to pollination, insects, especially bees, are crucial for maintaining plant diversity and ecosystem functions. As plant diversity increases, the stopping rate of pollinating insects rises, which reduces their foraging activity; bumble bees therefore stop more frequently in natural areas than in agricultural fields due to the higher plant diversity. Our findings emphasize the need to protect natural habitats like Kiasar National Park, where bumble bees thrive without human-induced stressors such as pesticides, livestock grazing, and pollution. With bumble bee populations declining globally, further research is essential to understand their behavior in different environments and to develop effective conservation strategies to protect them.

Keywords: bumble bees, pollination, pollinator, plant diversity, Iran

Procedia PDF Downloads 28
2 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan

Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad

Abstract:

Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 - 100 nm. This is a critical advantage, as growing larger crystals, as required by X-ray crystallography methods, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron beam sensitive, which means there is a limit on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based, fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively higher, and commonly specimen-damaging, electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem. The K3 is an electron counting detector optimized for low-dose applications (like structural biology cryo-EM), and Stela is also a counting electron detector but optimized for diffraction applications with high speed and high dynamic range. Lastly, data collection workflows, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, are another challenge of the 3DED technique. Traditionally, this has all been done manually or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher quality 3DED data enables structure determination with higher confidence, while automated workflows allow these experiments to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (from 3 Å to better than 1 Å) for protein and small molecule structure determination. We will also show how Latitude D software facilitates collecting such data in an integrated and fully automated user interface.

Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, DigitalMicrograph, proteins, small molecules

Procedia PDF Downloads 107
1 Pulmonary Complications of Chronic Liver Disease and the Challenges of Identifying and Managing Three Patients

Authors: Aidan Ryan, Nahima Miah, Sahaj Kaur, Imogen Sutherland, Mohamed Saleh

Abstract:

Pulmonary symptoms are a common presentation to the emergency department. Due to a lack of understanding of the underlying pathophysiology, chronic liver disease is not often considered a cause of dyspnoea. We present three patients who were admitted with significant respiratory distress secondary to hepatopulmonary syndrome, portopulmonary hypertension, and hepatic hydrothorax. The first is a 27-year-old male with a 6-month history of progressive dyspnoea. The patient developed a severe type 1 respiratory failure with a PaO₂ of 6.3 kPa and was escalated to critical care, where he was managed with non-invasive ventilation to maintain oxygen saturation. He had an agitated saline contrast echocardiogram, which showed the presence of a possible shunt. A CT angiogram revealed significant liver cirrhosis, portal hypertension, and large para-oesophageal varices. Ultrasound of the abdomen showed a coarse liver echo pattern and an enlarged spleen. Along with these imaging findings, his biochemistry demonstrated impaired synthetic liver function, with an elevated international normalized ratio (INR) of 1.4 and hypoalbuminaemia of 28 g/L. The patient was then transferred to a tertiary center for further management. Further investigations confirmed a shunt of 56%, and liver biopsy confirmed cirrhosis suggestive of alpha-1-antitrypsin deficiency. The findings were consistent with a diagnosis of hepatopulmonary syndrome, and the patient is awaiting a liver transplant. The second patient is a 56-year-old male with a 12-month history of worsening dyspnoea, jaundice, and confusion. His medical history included liver cirrhosis, portal hypertension, and grade 1 oesophageal varices secondary to significant alcohol excess. On admission, he developed type 1 respiratory failure with a PaO₂ of 6.8 kPa requiring 10 L of oxygen. A CT pulmonary angiogram was negative for pulmonary embolism but showed evidence of chronic pulmonary hypertension, liver cirrhosis, and portal hypertension. An echocardiogram revealed a grossly dilated right heart with reduced function, pulmonary and tricuspid regurgitation, and pulmonary artery pressures estimated at 78 mmHg. His biochemical markers showed impaired synthetic liver function, with an INR of 3.2 and albumin of 29 g/L, along with a raised bilirubin of 148 mg/dL. During his long admission, he was managed with diuretics with little improvement. After three weeks, he was diagnosed with portopulmonary hypertension and was commenced on terlipressin. He was subsequently weaned off oxygen and discharged home. The third patient is a 61-year-old male who presented to the local ambulatory care unit for therapeutic paracentesis on a background of decompensated liver cirrhosis. On presenting, he complained of a 2-day history of worsening dyspnoea and a productive cough. A chest X-ray showed a large pleural effusion, increasing in size over the previous eight months, and his abdomen was visibly distended with ascitic fluid. Unfortunately, the patient deteriorated, developing a larger effusion along with an increase in oxygen demand, and passed away. In the absence of underlying cardiorespiratory disease, and in the presence of a persistent pleural effusion with underlying decompensated cirrhosis, he was diagnosed with hepatic hydrothorax. While each presented with dyspnoea, the cause and underlying pathophysiology differ significantly from case to case. By describing these complications, we hope to improve awareness and aid prompt and accurate diagnosis, which is vital for improving outcomes.

Keywords: dyspnea, hepatic hydrothorax, hepatopulmonary syndrome, portopulmonary syndrome

Procedia PDF Downloads 121