Search results for: sensory processing sensitivity (SPS)
900 Immobilization of Superoxide Dismutase Enzyme on Layered Double Hydroxide Nanoparticles
Authors: Istvan Szilagyi, Marko Pavlovic, Paul Rouster
Abstract:
Antioxidant enzymes are the most efficient defense systems against reactive oxygen species, which cause severe damage in living organisms and industrial products. However, their supplementation is problematic due to their high sensitivity to environmental conditions. Immobilization on carrier nanoparticles is a promising research direction towards the improvement of their functional and colloidal stability. In that way, their applications in biomedical treatments and in manufacturing processes in the food, textile and cosmetic industries can be extended. The main goal of the present research was to prepare and formulate antioxidant bionanocomposites composed of superoxide dismutase (SOD) enzyme, anionic clay (layered double hydroxide, LDH) nanoparticles and heparin (HEP) polyelectrolyte. To characterize the structure and the colloidal stability of the obtained compounds in suspension and in the solid state, electrophoresis, dynamic light scattering, transmission electron microscopy, spectrophotometry, thermogravimetry, X-ray diffraction, and infrared and fluorescence spectroscopy were used as experimental techniques. The LDH-SOD composite was synthesized by enzyme immobilization on the clay particles via electrostatic and hydrophobic interactions, which resulted in strong adsorption of the SOD on the LDH surface, i.e., no enzyme leakage was observed once the material was suspended in aqueous solutions. However, the LDH-SOD showed only limited resistance against salt-induced aggregation, and large, irregularly shaped clusters formed within short time intervals even at lower ionic strengths. Since sufficiently high colloidal stability is a key requirement in most of the applications mentioned above, the nanocomposite was coated with HEP polyelectrolyte to develop highly stable suspensions of primary LDH-SOD-HEP particles. HEP is a natural anticoagulant with one of the highest negative line charge densities among known macromolecules.
The experimental results indicated that it strongly adsorbed on the oppositely charged LDH-SOD surface, leading to charge inversion and to the formation of negatively charged LDH-SOD-HEP. The obtained hybrid materials formed stable suspensions even under extreme conditions, where classical colloid chemistry theories predict rapid aggregation of the particles and unstable suspensions. Such a stabilization effect originated from electrostatic repulsion between particles of the same sign of charge, as well as from steric repulsion due to the osmotic pressure that arises during the overlap of the polyelectrolyte chains adsorbed on the surface. In addition, the SOD enzyme kept its structural and functional integrity during the immobilization and coating processes, and hence the LDH-SOD-HEP bionanocomposite possessed excellent activity in decomposing superoxide radical anions, as revealed in biochemical test reactions. In conclusion, due to the improved colloidal stability and the good efficiency in scavenging superoxide radical anions, the developed enzymatic system is a promising antioxidant candidate for biomedical or other manufacturing processes wherever the aim is to decompose reactive oxygen species in suspension.
Keywords: clay, enzyme, polyelectrolyte, formulation
Procedia PDF Downloads 268
899 Tool for Maxillary Sinus Quantification in Computed Tomography Exams
Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina
Abstract:
The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, contribute to thermoregulation, impart resonance to the voice, among other roles; thus, the real function of the MS is still uncertain. Furthermore, MS anatomy is complex and varies from person to person, and many diseases may affect the development of the sinuses. The incidence of rhinosinusitis and other pathologies in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables quantitative analysis. However, this is not always possible in the clinical routine, and when it is, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic, and manual methods present inter- and intra-individual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, and shape.
The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. From the statistical analyses, the linear regression showed a strong association and low dispersion between variables. The Bland-Altman analyses showed no significant differences between the methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
Keywords: maxillary sinus, support vector machine, region growing, volume quantification
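The pipeline described above (seeded region growing, then a Jaccard comparison against a manual reference) can be sketched in miniature. This is our toy illustration, not the authors' Matlab® implementation: the 5×5 "slice" and intensity thresholds are invented stand-ins for CT data.

```python
# Toy sketch of seeded region growing on a 2D "CT slice", plus the Jaccard
# similarity coefficient used to compare automatic and manual segmentations.
from collections import deque

def region_grow(image, seed, lo, hi):
    """Grow a region from `seed`, accepting 4-connected pixels whose
    intensity lies in [lo, hi]. Returns the set of (row, col) pixels."""
    rows, cols = len(image), len(image[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if lo <= image[r][c] <= hi:
            region.add((r, c))
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

def jaccard(a, b):
    """Jaccard similarity |A∩B| / |A∪B| between two pixel sets."""
    return len(a & b) / len(a | b) if a or b else 1.0

# The air-filled sinus has low intensity values; the 3x3 low-value patch
# below stands in for one (all numbers hypothetical).
slice_ = [
    [900, 900, 900, 900, 900],
    [900, -950, -940, -960, 900],
    [900, -945, -955, -950, 900],
    [900, -940, -950, -945, 900],
    [900, 900, 900, 900, 900],
]
auto = region_grow(slice_, seed=(2, 2), lo=-1000, hi=-900)
manual = {(r, c) for r in (1, 2, 3) for c in (1, 2, 3)}
print(len(auto), jaccard(auto, manual))  # 9 1.0
```

Summing such per-slice pixel counts over all slices, times the voxel volume, yields the MS volume the paper reports.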
Procedia PDF Downloads 504
898 Intrinsically Dual-Doped Conductive Polymer System for Electromagnetic Shielding Applications
Authors: S. Koul, Joshua Adedamola
Abstract:
Currently, a global concern is that electromagnetic pollution (EMP) not only adversely affects human health but also causes malfunctioning of sensitive equipment at both local and global levels. The market offers many incumbent technologies to address these issues, but a processable, sustainable material solution with acceptable limits for GHG emissions remains at an exploratory stage. The present work offers a sustainable material solution with a wide range of processability in terms of the polymeric resin matrix and shielding operational efficiency across the electromagnetic spectrum, covering both ionizing and non-ionizing electromagnetic radiation. The present work offers an in-situ synthesized conducting polyaniline (PANI) in the presence of a hybrid dual dopant system with tuned conductivity and high shielding efficiency between 89 and 92 decibels, depending upon the EMI frequency range. The conductive polymer synthesized in the presence of the hybrid dual dopant system via the in-situ emulsion polymerization method offers a surface resistance of 1.0 ohm/cm with thermal stability up to 245 °C in powder form. This conductive polymer with the hybrid dual dopant system was used as a filler material with different thermoplastic resin systems for the preparation of conductive composites. Intrinsically conductive polymer (ICP) composites based on the hybrid dual dopant system were prepared using melt blending, extrusion, and finally compression molding. ICP composites with the hybrid dual dopant system offered good mechanical, thermal, structural, weathering, and stable surface resistivity properties over time. The preliminary shielding behavior of the ICP composites between 10 GHz and 24 GHz showed a shielding efficiency of more than 90 dB.
Keywords: ICP, dopant, EMI, shielding
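To put the reported 89-92 dB figures in perspective: shielding effectiveness in decibels relates incident to transmitted power as SE = 10·log10(P_in/P_out), so the transmitted fraction is 10^(−SE/10). A quick sketch (ours, not from the paper):

```python
# Convert a shielding effectiveness in dB into the fraction of incident
# power that still passes through the shield.
import math

def transmitted_fraction(se_db):
    """Fraction of incident power transmitted through a shield of SE dB."""
    return 10 ** (-se_db / 10)

for se in (89, 90, 92):
    print(f"SE = {se} dB -> transmitted fraction = {transmitted_fraction(se):.1e}")
```

At 90 dB, only one part in 10^9 of the incident power gets through, which is why values above 90 dB are considered high shielding efficiency.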
Procedia PDF Downloads 82
897 Modeling in the Middle School: Eighth-Grade Students’ Construction of the Summer Job Problem
Authors: Neslihan Sahin Celik, Ali Eraslan
Abstract:
Mathematical models and modeling are among the topics that have been intensively discussed in recent years. In line with the results of the PISA studies, researchers in many countries have begun to question how well students in the school education system are prepared to solve the real-world problems they will encounter in their future professional lives. As a result, many mathematics educators have begun to emphasize the importance of new skills and understanding, such as constructing, hypothesizing, describing, manipulating, and predicting, and of working together on complex and multifaceted problems, for success beyond school. As students increasingly face such situations in their daily lives, it is important to ensure that they have enough experience to work together and interpret mathematical situations that enable them to think in different ways and share their ideas with their peers. Thus, model-eliciting activities are one of the main tools that help students gain experience and the new skills required. This research study was carried out in the town center of a big city located in the Black Sea region of Turkey. The participants were eighth-grade students in a middle school. After a six-week preliminary study, three students in an eighth-grade classroom were selected using a criterion sampling technique and placed in a focus group. The focus group was videotaped as the students worked on a model-eliciting activity, the Summer Job Problem. The conversation of the group was transcribed, examined together with the students’ written work, and then qualitatively analyzed through the lens of Blum’s (1996) modeling process cycle. The results showed that eighth-grade students can successfully work with the model-eliciting activity, develop a model based on two parameters, and review the whole process.
On the other hand, they had difficulty relating the parameters to each other and taking all of the parameters into account when establishing the model.
Keywords: middle school, modeling, mathematical modeling, summer job problem
Procedia PDF Downloads 337
896 Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia
Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman
Abstract:
Through progress in pavement design, a method titled the Mechanistic-Empirical Pavement Design Guide (MEPDG) was developed. Nowadays, the road and highway network in Saudi Arabia is evolving as a result of increasing traffic volume, and the MEPDG is currently implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementation of the MEPDG for local pavement design requires calibration of its distress models under local conditions (traffic, climate, and materials). This paper aims to prepare data for calibration of the MEPDG in central Saudi Arabia. Thus, the first goal is data collection for the design of flexible pavement under the local conditions of the Riyadh region. Since the collected data must be converted into design inputs, the main goal of this paper is the analysis of the collected data. The data analysis includes processing of: truck classification, traffic growth factor, Annual Average Daily Truck Traffic (AADTT), Monthly Adjustment Factors (MAFi), Vehicle Class Distribution (VCD), truck hourly distribution factors, Axle Load Distribution Factors (ALDF), number of axle types (single, tandem, and tridem) per truck class, cloud cover percent, and the road sections selected for the local calibration. Detailed descriptions of the input parameters are given in this paper, providing an approach for successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on the findings in this paper.
Keywords: mechanistic-empirical pavement design guide (MEPDG), traffic characteristics, materials properties, climate, Riyadh
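Two of the traffic inputs listed above can be sketched numerically. This is a hedged illustration of the kind of processing involved: the daily counts and the 4% growth rate below are hypothetical, not from the Riyadh data set.

```python
# Illustrative computation of AADTT and a compound traffic growth factor,
# two of the MEPDG traffic inputs mentioned in the abstract.
def aadtt(daily_truck_counts):
    """Annual Average Daily Truck Traffic: mean of the daily truck counts."""
    return sum(daily_truck_counts) / len(daily_truck_counts)

def grown_volume(base_aadtt, annual_growth_rate, years):
    """Truck volume after `years` of compound growth."""
    return base_aadtt * (1 + annual_growth_rate) ** years

counts = [1200, 1350, 1280, 1170]   # hypothetical daily truck counts
base = aadtt(counts)                # 1250.0 trucks/day
print(base, round(grown_volume(base, 0.04, 10), 1))
```

In a real calibration, the AADTT would be computed from a full year of classified counts and then distributed across vehicle classes, months, and hours using the MAFi, VCD, and hourly distribution factors named above.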
Procedia PDF Downloads 226
895 Vitamin Content of Swordfish (Xhiphias gladius) Affected by Salting and Frying
Authors: L. Piñeiro, N. Cobas, L. Gómez-Limia, S. Martínez, I. Franco
Abstract:
The swordfish (Xiphias gladius) is a large oceanic fish of high commercial value, widely distributed in the world’s oceans. It is considered an important source of high-quality proteins, vitamins and essential fatty acids, although only half of the population follows the nutritionists’ recommendation to consume fish at least twice a week. Swordfish is consumed worldwide because of its low fat content and high protein content, and is generally sold fresh, frozen, and as pieces or slices. The aim of this study was to evaluate the effect of salting and frying on the content of the water-soluble vitamins (B2, B3, B9 and B12) and fat-soluble vitamins (A, D, and E) of swordfish. Three loins of swordfish from the Pacific Ocean were analyzed. All the fish weighed between 50 and 70 kg and were transported to the laboratory frozen (-18 °C). Before processing, they were defrosted at 4 °C. Each loin was sliced and salted in brine. After cleaning, the slices were divided into portions (10×2 cm) and fried in olive oil. The identification and quantification of the vitamins were carried out by high-performance liquid chromatography (HPLC), using methanol and 0.010% trifluoroacetic acid as mobile phases at a flow rate of 0.7 mL min⁻¹. A UV-Vis detector was used for the water-soluble vitamins and the fat-soluble vitamins A and D, and a fluorescence detector for vitamin E. During salting, the water- and fat-soluble vitamin contents remained essentially constant, although an evident decrease in vitamin B2 was observed; the diffusion of salt into the interior of the pieces and the loss of constitution water that occur during this stage would be related to this significant decrease. In general, after frying, the water- and fat-soluble vitamins showed retention percentages of 50–100%.
Vitamin B3 exhibited the highest retention, with values close to 100%, whereas vitamin B9 presented the greatest losses, with a retention of less than 20%.
Keywords: frying, HPLC, salting, swordfish, vitamins
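The retention percentage used throughout the abstract is simply the cooked content relative to the raw content. A minimal sketch with hypothetical vitamin contents (the mg values below are ours, not the paper's measurements):

```python
# Retention percentage: share of a vitamin surviving processing.
def retention(raw_mg, cooked_mg):
    """Percent of a vitamin retained after processing."""
    return 100.0 * cooked_mg / raw_mg

print(round(retention(0.050, 0.048), 1))  # high-retention case, like B3: 96.0
print(round(retention(0.020, 0.003), 1))  # high-loss case, like B9: 15.0
```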
Procedia PDF Downloads 126
894 Interactive IoT-Blockchain System for Big Data Processing
Authors: Abdallah Al-ZoubI, Mamoun Dmour
Abstract:
The spectrum of IoT devices is becoming widely diversified, entering almost all possible fields and finding applications in industry, health, finance, logistics, and education, to name a few. Active IoT endpoint sensors and devices exceeded the 12 billion mark in 2021 and are expected to reach 27 billion in 2025, with over $34 billion in total market value. This sheer rise in the number and use of IoT devices brings with it considerable concerns regarding data storage, analysis, manipulation and protection. IoT blockchain-based systems have recently been proposed as a decentralized solution for large-scale data storage and protection. COVID-19 has actually accelerated the desire to utilize IoT devices, as it impacted both demand and supply and significantly affected several regions for logistic reasons such as supply chain interruptions, shortage of shipping containers and port congestion. An IoT-blockchain system is proposed to handle big data generated by a distributed network of sensors and controllers in an interactive manner. The system is designed using the Ethereum platform, which utilizes smart contracts, programmed in Solidity, to execute and manage data generated by IoT sensors and devices such as the Raspberry Pi 4 (running Raspbian) with add-on hardware security modules. The proposed system runs a number of applications hosted by a local machine used to validate transactions. It then sends data to the rest of the network through the InterPlanetary File System (IPFS) and Ethereum Swarm, forming a closed, controlled IoT ecosystem run by blockchain in which a number of distributed IoT devices can communicate and interact. A prototype has been deployed with three IoT handling units distributed over a wide geographical area in order to examine its feasibility, performance and costs.
Initial results indicated that big IoT data retrieval and storage is feasible and that interactivity is possible, provided that certain conditions of cost, speed and throughput are met.
Keywords: IoT devices, blockchain, Ethereum, big data
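The integrity property that the blockchain layer contributes can be shown in miniature. This is our simplified stand-in, not the authors' Ethereum/IPFS stack: each block of sensor readings stores the hash of its predecessor, so tampering with stored IoT data breaks the chain.

```python
# Minimal hash-chained ledger of sensor readings, illustrating why stored
# IoT data on a blockchain is tamper-evident.
import hashlib
import json

def make_block(prev_hash, readings):
    """Bundle sensor readings with the previous block's hash."""
    body = json.dumps({"prev": prev_hash, "data": readings}, sort_keys=True)
    return {"prev": prev_hash, "data": readings,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash and check each link to its predecessor."""
    for i, block in enumerate(chain):
        body = json.dumps({"prev": block["prev"], "data": block["data"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, {"temp": 21.5})]
chain.append(make_block(chain[-1]["hash"], {"temp": 22.1}))
print(verify(chain))  # True
chain[0]["data"]["temp"] = 99.9  # tamper with a stored reading
print(verify(chain))  # False
```

A real Ethereum deployment adds consensus, smart-contract logic, and off-chain storage (IPFS/Swarm) on top of this basic chaining idea.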
Procedia PDF Downloads 150
893 Keynote Talk: The Role of Internet of Things in the Smart Cities Power System
Authors: Abdul-Rahman Al-Ali
Abstract:
As the number of mobile devices grows exponentially, about 50 billion devices are estimated to be connected to the Internet by the year 2020, and by the end of this decade an average of eight connected devices per person worldwide is expected. These 50 billion devices are not only mobile phones and data browsing gadgets, but also machine-to-machine and man-to-machine devices. With such growing numbers of devices, the Internet of Things (IoT) has become one of the key emerging technologies. Within smart grid technologies, smart home appliances, Intelligent Electronic Devices (IED) and Distributed Energy Resources (DER) are major IoT objects that can be addressed using IPv6. These objects are called the smart grid Internet of Things (SG-IoT). The SG-IoT generates big data that requires high-speed computing infrastructure, widespread computer networks, big data storage, software, and platform services. A utility company’s control and data centers cannot handle such a large number of devices, high-speed processing, and massive data storage. Building large data center infrastructure takes a long time; it also requires widespread communication networks and huge capital investment. Maintaining and upgrading control and data center infrastructure and communication networks, as well as updating and renewing software licenses, collectively requires additional cost. This can be overcome by utilizing emerging computing paradigms such as cloud computing, which can serve as a smart grid enabler replacing legacy utility data centers. The talk will highlight the role of IoT, cloud computing services and their development models within smart grid technologies.
Keywords: intelligent electronic devices (IED), distributed energy resources (DER), internet, smart home appliances
Procedia PDF Downloads 324
892 A Thermo-mechanical Finite Element Model to Predict Thermal Cycles and Residual Stresses in Directed Energy Deposition Technology
Authors: Edison A. Bonifaz
Abstract:
In this work, a numerical procedure is proposed to design dense multi-material structures using the Directed Energy Deposition (DED) process. A thermo-mechanical finite element model to predict thermal cycles and residual stresses is presented. A numerical layer build-up procedure coupled with a moving heat flux was constructed to minimize the strains and residual stresses that result from the multi-layer deposition of an AISI 316 austenitic steel on an AISI 304 austenitic steel substrate. To simulate the DED process, the automated interface of the ABAQUS AM module was used to define element activation and heat input event data as a function of time and position. In this manner, the construction of ABAQUS user-defined subroutines was not necessary. Thermal cycles and thermally induced stresses created during melt pool crystallization in multi-layer metal AM deposition were predicted and validated. Results were analyzed in three independent metal layers of three different experiments. The one-way heat and material deposition toolpath used in the analysis was created with a MATLAB path script. An optimal combination of feedstock and heat input printing parameters suitable for fabricating multi-material dense structures in the DED metal AM process was established. At constant power, it can be concluded that the lower the heat input, the lower the peak temperatures and residual stresses. This means that, from a design point of view, the one-way heat and material deposition toolpath with the higher welding speed should be selected.
Keywords: event series, thermal cycles, residual stresses, multi-pass welding, abaqus am modeler
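The thermal side of the model (a moving heat flux producing a heating/cooling cycle at each material point) can be caricatured in one dimension. This is our explicit finite-difference toy, not the paper's ABAQUS model; every parameter value below is an invented placeholder chosen only for numerical stability (the diffusion number α·Δt/Δx² = 0.1 ≤ 0.5).

```python
# 1D explicit (FTCS) heat conduction with a moving point heat source,
# illustrating the "thermal cycle" each point sees as the torch passes.
def simulate(n=50, dx=1e-3, alpha=1e-5, dt=0.01, steps=400,
             speed=0.0125, q=50.0, t0=25.0):
    """Return the temperature history of the mid node of an n-node bar
    while a heat source sweeps from one end to the other."""
    T = [t0] * n
    history = []
    for step in range(steps):
        src = min(n - 1, int(speed * step * dt / dx))  # torch node index
        Tn = T[:]
        for i in range(1, n - 1):  # interior diffusion update
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
        Tn[src] += q * dt          # energy deposited at the torch position
        T = Tn
        history.append(T[n // 2])
    return history

hist = simulate()
print(round(max(hist), 1), hist[-1] < max(hist))
```

The mid node heats as the source approaches, peaks, and then cools as the source moves on: exactly the thermal-cycle shape whose peak temperature drops when heat input is reduced at constant travel power.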
Procedia PDF Downloads 69
891 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components
Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea
Abstract:
Students' textual feedback can hold unique patterns and useful information about the learning process: information about the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key point for institutions’ decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using a part-of-speech (PoS) feature with four machine learning algorithms: support vector machines, decision tree, random forest, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment related) and the other containing the non-assessment-related data; second, to use the same algorithms to build an optimal model for the whole data set and the new data subsets to automatically detect their sentiment. The significance of this paper is to compare the performance of the above four algorithms using the part-of-speech feature with the performance of the same algorithms using an n-grams feature. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models, which comprises understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithm, interpreting mined patterns, and consolidating the discovered knowledge. The experimental results show that models using either feature performed very well on the first task. Regarding the second task, however, models that used the part-of-speech feature underperformed in comparison with models that used unigram and bigram features.
Keywords: assessment, part of speech, sentiment analysis, student feedback
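The PoS feature representation described above can be illustrated with a toy example. This is our sketch, not the paper's pipeline: the tiny hard-coded tag lexicon stands in for a real PoS tagger, and the resulting count vector is what would be fed to classifiers such as SVM, decision tree, random forest, or naive Bayes.

```python
# Toy PoS feature extraction for a short piece of student feedback.
# A real system would use a trained tagger; this lexicon is illustrative.
TAGS = {"exam": "NOUN", "feedback": "NOUN", "lectures": "NOUN",
        "was": "VERB", "helped": "VERB", "were": "VERB",
        "fair": "ADJ", "useful": "ADJ", "confusing": "ADJ",
        "the": "DET", "very": "ADV"}

def pos_features(text):
    """Count PoS tags in `text`; unknown words fall back to NOUN."""
    counts = {"NOUN": 0, "VERB": 0, "ADJ": 0, "DET": 0, "ADV": 0}
    for word in text.lower().split():
        counts[TAGS.get(word.strip(".,!?"), "NOUN")] += 1
    return counts

feats = pos_features("The exam was very fair.")
print(feats)  # {'NOUN': 1, 'VERB': 1, 'ADJ': 1, 'DET': 1, 'ADV': 1}
```

Unlike an n-gram vector, whose dimensionality grows with the vocabulary, this representation stays fixed-size, which is part of the trade-off the paper's comparison explores.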
Procedia PDF Downloads 142
890 Olive-Mill Wastewater and Organo-Mineral Fertlizers Application for the Control of Parasitic Weed Phelipanche ramosa L. Pomel in Tomato
Authors: Grazia Disciglio, Francesco Lops, Annalisa Tarantino, Emanuele Tarantino
Abstract:
The parasitic weed species Phelipanche ramosa (L.) Pomel is one of the major constraints on the tomato crop in the Apulia region (southern Italy). A field experiment was carried out during the 2016 growing season on a naturally infested tomato field to investigate the effect of six organic products (olive mill wastewater, Allil isothiocyanate®, Alfa plus K®, Radicon®, Rizosum Max®, Kendal Nem®). A randomized block design with three replicates was adopted. Tomato seedlings were transplanted on 19 May 2016. During the growing cycle of the tomato, at 74, 81, 93 and 103 days after transplantation (DAT), the number of parasitic shoots (branched plants) that had emerged in each plot was determined. At harvest, on 13 September 2016, the major quanti-qualitative yield parameters were determined, including marketable yield, mean fruit weight, dry matter, soluble solids, fruit colour, pH and titratable acidity. The results show that none of the treatments provided complete control of P. ramosa. However, among the products tested, olive mill wastewater, Alfa plus K®, Rizosum Max® and Kendal Nem® applied to the soil gave numbers of emerged shoots significantly lower than the Radicon® and especially the Allil isothiocyanate® treatments and the untreated control. Regarding the effect of the different treatments on the tomato productive parameters, the marketable yield was significantly higher in the treatments that gave the lowest P. ramosa infestation. No significant differences in the other fruit characteristics were observed.
Keywords: processing tomato crop, Phelipanche ramosa, olive-mill wastewater, organic fertilizers
Procedia PDF Downloads 325
889 Integrated Decision Support for Energy/Water Planning in Zayandeh Rud River Basin in Iran
Authors: Safieh Javadinejad
Abstract:
In order to make well-informed decisions regarding long-term system planning, resource managers and policymakers need to understand the interconnections between energy and water use and production, that is, the energy-water nexus. Planning and assessment issues include the development of strategies for reducing the vulnerability of water and energy systems to climate change while also decreasing greenhouse gas emissions. In order to deliver useful decision support for climate adaptation policy and planning, it is essential to understand the regionally specific features of the energy-water nexus and the history and likely future of the water and energy supply systems. This helps decision makers understand current water-energy system conditions and the capacity for future adaptation plans. This research presents an integrated hydrology/energy modeling platform that extends water-energy analyses based on a detailed representation of local conditions. The modeling links the Water Evaluation and Planning (WEAP) system and the Long Range Energy Alternatives Planning (LEAP) system to create a full picture of water-energy processes. This allows water managers and policy makers to readily understand the links between energy system improvements and hydrological processes, and to see how future climate change will affect water-energy systems. The Zayandeh Rud river basin in Iran is selected as a case study to show the results and application of the analysis. This region is known for the close integration of its electric power and water sectors. The linkages between water, energy and climate change and possible adaptation strategies are described, along with early insights from applications of the integrated modeling system.
Keywords: climate impacts, hydrology, water systems, adaptation planning, electricity, integrated modeling
Procedia PDF Downloads 292
888 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is essential for data-driven enterprises, and Master Data Management (MDM) plays a crucial role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency; both quantitative and qualitative analyses are used to assess this. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI’s predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making.
These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
Keywords: artificial intelligence, master data management, data governance, data quality
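As a concrete illustration of one MDM task the abstract describes (duplicate-record detection), the sketch below flags likely duplicates with a plain string-similarity threshold. This is a hedged stand-in, not the study's method: production systems typically use trained matching models, and the 0.85 threshold and sample records are invented.

```python
# Illustrative rule-based duplicate detection for master data records.
# Real MDM pipelines use learned similarity models; this fixed-threshold
# version only demonstrates the shape of the task.
from difflib import SequenceMatcher

def normalize(record):
    """Lowercase and collapse whitespace so trivial variants compare equal."""
    return " ".join(record.lower().split())

def find_duplicates(records, threshold=0.85):
    """Return index pairs whose normalized similarity exceeds threshold."""
    norm = [normalize(r) for r in records]
    pairs = []
    for i in range(len(norm)):
        for j in range(i + 1, len(norm)):
            if SequenceMatcher(None, norm[i], norm[j]).ratio() >= threshold:
                pairs.append((i, j))
    return pairs

customers = ["Acme Corp., 12 High St", "ACME Corp, 12 High St", "Beta LLC, 9 Low Rd"]
dupes = find_duplicates(customers)
```

The pairwise loop is quadratic; at master-data scale one would block records by a cheap key first, which is exactly the kind of step the paper reports AI automating.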
Procedia PDF Downloads 19
887 Effect of Microstructure and Texture of Magnesium Alloy Due to Addition of Pb
Authors: Yebeen Ji, Jimin Yun, Kwonhoo Kim
Abstract:
The industrial application of magnesium alloys has been limited by their restricted slip systems and high plastic anisotropy. It is known that specific textures form during processing (rolling, etc.), and these textures cause poor formability. To solve these problems, many researchers have studied controlling texture by adding rare-earth elements; however, their high cost limits their use, so alternatives are needed. Although Pb addition does not directly improve the properties of magnesium, it is known to suppress the diffusion of other alloying elements and to reduce grain boundary energy. These characteristics are similar to those of rare-earth additions, and similar texture behavior is expected as well; however, research on this point is insufficient. Therefore, this study investigates the development of texture and microstructure after adding Pb to magnesium. AZ61 alloy and Mg-15wt%Pb alloy were compared and analyzed to determine the effect of adding solute elements. The alloys were hot rolled and annealed to form a single phase and an initial texture. Afterward, specimens were set to contract normal to the rolling plane and elongate along the rolling direction, and were then subjected to high-temperature plane strain compression at 723 K and a strain rate of 0.05/s. Microstructural analysis and texture measurements were performed by SEM-EBSD. The peak stress in the true stress-strain curve after compression was higher in AZ61, but the shape of the flow curve was similar for both alloys. For both alloys, continuous dynamic recrystallization was confirmed to occur during compression. The basal texture developed parallel to the compression plane, and the pole density was lower in the Mg-15wt%Pb alloy.
This change in behavior is attributed to the orientation distribution of the recrystallized grains being more random, relative to the parent grains, when Pb is added.
Keywords: Mg, texture, Pb, DRX
Procedia PDF Downloads 49
886 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar, and one of the nucleic bases (A, T, C, and G). DNA barcodes provide a good source of the information needed to classify living species, an intuition that has been confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes, and this task has to be supported by reliable methods and algorithms. To analyze specific regions or entire genomes, sequence-similarity methods become necessary. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete; to make this type of analysis feasible, heuristics such as progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs short regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable; this approach avoids the complex problem of form and structure across different classes of organisms. The method is evaluated on empirical data, and its classification performance is compared with that of other methods. Our system consists of three phases. The first, transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing.
The second, approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of the DNA barcodes, realized by applying a hierarchical classification algorithm.
Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)
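The transformation phase described above (EIIP coding, Fourier transform, power spectrum) can be sketched in a few lines. The EIIP values below are the published electron-ion interaction pseudopotentials for the four bases; the example barcode string is an arbitrary placeholder, not data from the study.

```python
# Sketch of the transformation phase: EIIP numerical coding of a DNA
# barcode followed by a one-sided Fourier power spectrum.
import numpy as np

# Published EIIP values for the four nucleic bases
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(barcode):
    """Map a barcode string to its EIIP signal and return |FFT|^2."""
    signal = np.array([EIIP[b] for b in barcode.upper()])
    signal = signal - signal.mean()          # remove the DC component
    spectrum = np.abs(np.fft.fft(signal)) ** 2
    return spectrum[: len(signal) // 2]      # one-sided spectrum

ps = power_spectrum("ATGCGATACGCTTGA")       # placeholder sequence
```

In the full system this spectrum, rather than the raw sequence, is what the wavelet-network approximation phase consumes, which sidesteps alignment entirely.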
Procedia PDF Downloads 318
885 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites
Authors: S. D. El Wakil, M. Pladsen
Abstract:
Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, components or products made of FRP composites are seldom ready for use as molded. Secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs at both the entrance and exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen was a woven graphite-fiber-reinforced/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. The latter quantity was determined for each combination of feed rate and cutting speed, and a matrix comprising those values was established, where columns correspond to varying feed rates and rows to varying cutting speeds. Next, analysis of variance (ANOVA) was employed using Minitab software in order to obtain the combination that would minimize the drilling-induced damage.
Experimental results showed that low feed rates coupled with low cutting speeds yielded the best results.
Keywords: drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites
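The damage metric and the feed-rate by cutting-speed matrix described above can be sketched as follows. All diameters, feeds, and speeds below are invented placeholders, not the study's measurements; they are arranged so the minimum falls at low feed and low speed, consistent with the reported finding.

```python
# Sketch of the drilling-damage metric and the feed x speed matrix the
# authors analysed with ANOVA. Every numeric value here is an invented
# placeholder for illustration only.

def damage_percent(measured_mm, nominal_mm):
    """Drilling-induced damage as % deviation from the nominal diameter."""
    return abs(measured_mm - nominal_mm) / nominal_mm * 100.0

feed_rates = [0.05, 0.10, 0.20]           # mm/rev (assumed)
speeds = [1000, 2000, 3000]               # rpm (assumed)
nominal = 6.0                             # mm (assumed hole diameter)
measured = [[6.02, 6.05, 6.09],           # rows: speeds, cols: feed rates
            [6.04, 6.08, 6.15],
            [6.07, 6.12, 6.21]]

damage = [[damage_percent(d, nominal) for d in row] for row in measured]
best = min((damage[i][j], speeds[i], feed_rates[j])
           for i in range(3) for j in range(3))
```

A real analysis would feed this matrix into ANOVA (as the authors did in Minitab) rather than simply taking the minimum cell, so that the feed and speed effects can be separated from noise.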
Procedia PDF Downloads 390
884 Optimization of Perfusion Distribution in Custom Vascular Stent-Grafts Through Patient-Specific CFD Models
Authors: Scott M. Black, Craig Maclean, Pauline Hall Barrientos, Konstantinos Ritos, Asimina Kazakidi
Abstract:
Aortic aneurysms and dissections are leading causes of death in cardiovascular disease. Both inevitably lead to hemodynamic instability without surgical intervention in the form of vascular stent-graft deployment. An accurate description of the aortic geometry and blood flow in patient-specific cases is vital for treatment planning and long-term success of such grafts, as they must generate physiological branch perfusion and in-stent hemodynamics. The aim of this study was to create patient-specific computational fluid dynamics (CFD) models through a multi-modality, multi-dimensional approach with boundary condition optimization to predict branch flow rates and in-stent hemodynamics in custom stent-graft configurations. Three-dimensional (3D) thoracoabdominal aortae were reconstructed from four-dimensional flow-magnetic resonance imaging (4D Flow-MRI) and computed tomography (CT) medical images. The former employed a novel approach to generate and enhance vessel lumen contrast via through-plane velocity at discrete, user defined cardiac time steps post-hoc. To produce patient-specific boundary conditions (BCs), the aortic geometry was reduced to a one-dimensional (1D) model. Thereafter, a zero-dimensional (0D) 3-Element Windkessel model (3EWM) was coupled to each terminal branch to represent the distal vasculature. In this coupled 0D-1D model, the 3EWM parameters were optimized to yield branch flow waveforms which are representative of the 4D Flow-MRI-derived in-vivo data. Thereafter, a 0D-3D CFD model was created, utilizing the optimized 3EWM BCs and a 4D Flow-MRI-obtained inlet velocity profile. A sensitivity analysis on the effects of stent-graft configuration and BC parameters was then undertaken using multiple stent-graft configurations and a range of distal vasculature conditions. 4D Flow-MRI granted unparalleled visualization of blood flow throughout the cardiac cycle in both the pre- and postsurgical states. 
Segmentation and reconstruction of healthy and stented regions from retrospective 4D Flow-MRI images also generated 3D models whose geometries were successfully validated against their CT-derived counterparts. The 0D-1D coupling efficiently captured branch flow and pressure waveforms, while the 0D-3D models additionally enabled 3D flow visualization and quantification of clinically relevant hemodynamic parameters for in-stent thrombosis and graft limb occlusion. It was apparent that changes in the 3EWM boundary condition parameters had a pronounced effect on perfusion distribution and near-wall hemodynamics. The results show that the 3EWM parameters could be iteratively changed to simulate a range of graft limb diameters and distal vasculature conditions for a given stent-graft, in order to determine the optimal configuration prior to surgery. To conclude, this study outlined a methodology to aid in the prediction of post-surgical branch perfusion and in-stent hemodynamics in patient-specific cases for the implementation of custom stent-grafts.
Keywords: 4D flow-MRI, computational fluid dynamics, vascular stent-grafts, windkessel
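The 3-Element Windkessel outlet model referred to above couples a proximal resistance in series with a parallel distal resistance and compliance. A minimal forward-Euler sketch, with illustrative (not patient-derived) parameter values and an arbitrary half-sine inflow pulse:

```python
# Minimal sketch of a 3-element Windkessel (3EWM) outlet boundary
# condition: proximal resistance Rp in series with a parallel distal
# resistance Rd and compliance C. All parameter values are illustrative.
import math

def windkessel_pressure(flow, dt, Rp=0.05, Rd=1.0, C=1.5, p0=80.0):
    """Integrate C*dPd/dt = Q - Pd/Rd (venous pressure taken as 0) with
    forward Euler; the inlet pressure is P = Pd + Q*Rp."""
    pd = p0
    pressures = []
    for q in flow:
        pd += dt * (q - pd / Rd) / C
        pressures.append(pd + q * Rp)
    return pressures

# one cardiac cycle of a half-sine flow pulse (arbitrary units)
dt, n = 0.01, 100
flow = [max(0.0, 100.0 * math.sin(2 * math.pi * t * dt)) for t in range(n)]
p = windkessel_pressure(flow, dt)
```

In the paper's workflow these three parameters (Rp, Rd, C) are what get optimized per branch until the simulated waveforms match the 4D Flow-MRI-derived flows; the Euler step above stands in for the 0D side of that coupling.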
Procedia PDF Downloads 181
883 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries
Authors: Gaurav Kumar Sinha
Abstract:
In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. 
The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.
Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency
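As a hedged illustration of the anomaly-detection component such a pipeline might run (on AWS managed services or anywhere else), the sketch below flags readings that deviate strongly from a trailing-window baseline. The window size, threshold, and data are invented; the study's actual models are not specified at this level of detail.

```python
# Rolling z-score anomaly detection over hourly energy readings: a plain
# stand-in for the managed ML anomaly detectors the study alludes to.
import statistics

def flag_anomalies(readings, window=24, z_thresh=3.0):
    """Flag readings more than z_thresh standard deviations from the
    trailing-window mean; returns the list of flagged indices."""
    flagged = []
    for i in range(window, len(readings)):
        past = readings[i - window:i]
        mu = statistics.fmean(past)
        sd = statistics.pstdev(past)
        if sd > 0 and abs(readings[i] - mu) / sd > z_thresh:
            flagged.append(i)
    return flagged

baseline = [50.0 + (i % 24) for i in range(72)]  # synthetic daily cycle
baseline[60] = 500.0                             # injected consumption spike
anomalies = flag_anomalies(baseline)
```

A trailing window keeps the detector causal (usable in real time); the trade-off is that a flagged spike inflates the baseline statistics for the next full window, which is why production systems usually exclude or down-weight already-flagged points.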
Procedia PDF Downloads 64
882 Influence of Glenohumeral Joint Approximation Technique on the Cardiovascular System in the Acute Phase after Stroke
Authors: Iva Hereitova, Miroslav Svatek, Vit Novacek
Abstract:
Background and Aim: Autonomic imbalance is one of the complications in immobilized patients in the acute stage after a stroke. The predominance of sympathetic activity significantly increases cardiac activity. The technique of glenohumeral joint approximation may contribute, in a non-pharmacological way, to the regulation of blood pressure and heart rate in this at-risk group. The aim of the study was to evaluate the effect of glenohumeral joint approximation on changes in heart rate and blood pressure in immobilized patients in the acute phase after a stroke. Methods: The experimental study bilaterally evaluated heart rate and systolic and diastolic pressure values before and after glenohumeral joint approximation in 40 immobilized participants (72.6 ± 10.2 years) in the acute phase after stroke. The experimental group was compared with 40 healthy participants in a control group (68.6 ± 14.2 years). An SpO2 vital signs monitor and a validated Microlife WatchBP Office blood pressure monitor were used for the measurements. Statistical processing and evaluation were performed in MATLAB R2019 (The MathWorks, Inc., Natick, MA, USA). Results: Approximation of the glenohumeral joint resulted in a statistically significant decrease in systolic and diastolic pressure. The average decrease in systolic pressure for the individual groups ranged from 8.2 to 11.3 mmHg (p < 0.001). For diastolic pressure, the average decrease ranged from 5.0 to 14.2 mmHg (p < 0.001). A statistically significant reduction in heart rate (p < 0.01) was observed only in patients after ischemic stroke in the inferior cerebral artery territory; the average decrease was 3.9 beats per minute (median 4 beats per minute). Conclusion: Approximation of the glenohumeral joint leads to a statistically significant decrease in systolic and diastolic pressure in immobilized patients in the acute phase after stroke.
Keywords: approximation technique, cardiovascular system, glenohumeral joint, stroke
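The before/after comparison reported above is, in essence, a paired test on each pressure variable. A minimal sketch of the paired t-statistic, using invented systolic readings rather than the study's data:

```python
# Paired t-statistic for a before/after design, as used for the systolic
# and diastolic comparisons. The readings below are invented placeholders.
import math
import statistics

def paired_t(before, after):
    """Return the paired t-statistic for the mean of (before - after)."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)        # sample std dev of the differences
    return mean_d / (sd_d / math.sqrt(n))

systolic_before = [152, 148, 160, 155, 149, 158, 151, 157]  # mmHg, invented
systolic_after  = [143, 140, 149, 146, 139, 147, 142, 148]
t = paired_t(systolic_before, systolic_after)
```

With n − 1 degrees of freedom, a t-value this far above the critical threshold corresponds to the kind of p < 0.001 result the abstract reports; a nonparametric alternative (Wilcoxon signed-rank) would be the usual fallback if the differences were clearly non-normal.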
Procedia PDF Downloads 216
881 The Effect of Restaurant Residuals on Performance of Japanese Quail
Authors: A. A. Saki, Y. Karimi, H. J. Najafabadi, P. Zamani, Z. Mostafaie
Abstract:
Restaurant residuals are important for reasons such as the competition between human and animal consumption of cereals, increasing environmental pollution, and the high cost of producing livestock products. Restaurant residuals have a high nutritional value (high protein and energy), so they can potentially replace part of poultry diets, especially for Japanese quail. Today, modern industry faces challenges in the processing and utilization of these residuals: increasing costs and the pressures and problems associated with waste disposal reinforce the need to re-evaluate and utilize waste as livestock and poultry feed. This study aimed to investigate the effects of different levels of restaurant residuals on the performance of 300 layer Japanese quails. The experiment included 5 treatments with 4 replicates of 15 quails each, from 10 to 18 weeks of age, in a completely randomized design (CRD). The treatments consisted of a basal diet of corn and soybean meal (without restaurant residuals), while treatments 2, 3, 4 and 5 included the basal diet containing 5, 10, 15 and 20% restaurant residuals, respectively. There was no significant effect of restaurant residual level on body weight (BW), feed conversion ratio (FCR), egg production percentage (EP), or egg mass (EM) between treatments (P > 0.05). However, feed intake (FI) in the 5% restaurant residual treatment was significantly higher than in the 20% treatment (P < 0.05). Egg weight (EW) was also higher with 20% restaurant residuals than with 10% (P < 0.05). Yolk weight (YW) in the treatments containing 10 and 20% restaurant residuals was significantly higher than in the control (P < 0.05). Egg white weight (EWW) in the 20 and 5% restaurant residual treatments was significantly increased compared with 10% (P < 0.05).
Furthermore, EW, egg weight to shell surface area, and egg surface area in the 20% treatment were significantly higher than in the control and the 10% treatment (P < 0.05). The overall results of this study show that restaurant residuals can replace part of the laying quail ration at levels of 10 and 15 percent without any adverse effect.
Keywords: by-product, laying quail, performance, restaurant residuals
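The completely randomized design above (5 treatments, 4 replicates) is typically analysed with a one-way ANOVA per response variable. The sketch below computes the F statistic for such a layout; the feed-intake values are invented purely to illustrate the calculation, not the study's data.

```python
# One-way ANOVA F statistic for a CRD with 5 treatments x 4 replicates.
# All feed-intake values are invented placeholders.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over the given groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)      # treatment mean square
    ms_within = ss_within / (n - k)        # error mean square
    return ms_between / ms_within

# replicate feed intake (g/bird/day) per treatment level, assumed values
feed_intake = [[28.1, 27.9, 28.4, 28.0],   # control
               [29.0, 29.3, 28.8, 29.1],   # 5% residuals
               [28.5, 28.2, 28.6, 28.4],   # 10%
               [28.0, 27.8, 28.1, 27.9],   # 15%
               [27.2, 27.0, 27.4, 27.1]]   # 20%
f_stat = one_way_anova_f(feed_intake)
```

An F exceeding the critical value F(4, 15) ≈ 3.06 at the 0.05 level indicates a treatment effect, after which pairwise comparisons (such as the 5% vs. 20% feed-intake contrast reported above) are examined.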
Procedia PDF Downloads 166
880 The Web of Injustice: Untangling Violations of Personality Rights in European International Private Law
Authors: Sara Vora (Hoxha)
Abstract:
Defamation, invasion of privacy, and cyberbullying have all increased in tandem with the growth of the internet. European international private law may struggle to deal with such transgressions when they span many jurisdictions. The current study examines how effectively the legal system of European international private law addresses abuses of personality rights in cyberspace. The study starts by discussing how established legal frameworks are being threatened by online personality rights abuses. The article then looks into the rules and regulations of European international private law that are in place to handle cross-border lawsuits. It examines the different elements that courts evaluate when deciding which law to apply in a particular case, focusing on the concepts of jurisdiction, choice of law, and recognition and enforcement of foreign judgements. Next, the research analyses the function of the European Union in preventing and punishing online personality rights abuses. Key pieces of legislation that control the collection and processing of personal data on the internet, including the General Data Protection Regulation (GDPR) and the e-Commerce Directive, are discussed. In addition, the article investigates how the ECtHR handles cases involving the infringement of personal freedoms, including privacy and speech. The article finishes with an assessment of how well the legal framework of European international private law protects individuals' right to privacy online. It draws attention to problems with the present legal structure, such as the inability to enforce international judgements, the inconsistency between national laws, and the necessity for stronger measures to safeguard people's rights online.
This paper concludes that while European international private law provides a useful framework for dealing with violations of personality rights online, further harmonisation and stronger enforcement mechanisms are necessary to effectively protect individuals' rights in the digital age.
Keywords: European international private law, personality rights, internet, jurisdiction, cross-border disputes, data protection
Procedia PDF Downloads 75
879 The Predictive Implication of Executive Function and Language in Theory of Mind Development in Preschool Age Children
Authors: Michael Luc Andre, Célia Maintenant
Abstract:
Theory of mind is a milestone in child development that allows children to understand that others can hold mental states different from their own. Understanding the developmental stages of theory of mind in children has led researchers to two connected research problems: on one hand, the link between executive function and theory of mind, and on the other, the relationship between theory of mind and syntax processing. These two lines of research have produced a substantial literature full of important results, despite some disagreement between researchers. For a long time, these two research perspectives grew separately, despite conclusions suggesting that the three variables should be implicated in the same developmental period. Indeed, our goal was to study the relation between theory of mind, executive function, and language via a single research question: whether, between executive function and language, one of the two variables plays a critical role in the relationship between theory of mind and the other variable. Thus, 112 children aged between three and six years were recruited to complete a receptive and an expressive vocabulary task, a syntax understanding task, a theory of mind task, and three executive function tasks (inhibition, cognitive flexibility, and working memory). The results showed significant correlations between performance on the theory of mind task and performance on the executive function tasks, except for the cognitive flexibility task. We also found significant correlations between success on the theory of mind task and performance on all language tasks. Multiple regression analysis identified only syntax and general language abilities as possible predictors of theory of mind performance in our sample of preschool-age children. The results are discussed from the perspective of a major role of language abilities in theory of mind development.
We also discussed possible reasons that could explain the non-significance of executive domains in predicting theory of mind performance, and the meaning of our results for the literature.
Keywords: child development, executive function, general language, syntax, theory of mind
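The multiple-regression step described above (predicting theory of mind scores from language measures) can be sketched with ordinary least squares. The data below are synthetic, with assumed effect sizes; only the sample size (112) is taken from the abstract.

```python
# OLS sketch of predicting a theory of mind score from syntax and
# vocabulary scores. All data are synthetic; effect sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 112                                    # sample size as in the study
syntax = rng.normal(50, 10, n)             # synthetic syntax scores
vocab = rng.normal(50, 10, n)              # synthetic vocabulary scores
tom = 0.4 * syntax + 0.1 * vocab + rng.normal(0, 5, n)   # assumed effects

X = np.column_stack([np.ones(n), syntax, vocab])   # intercept + predictors
beta, *_ = np.linalg.lstsq(X, tom, rcond=None)     # OLS fit
predicted = X @ beta
r2 = 1 - np.sum((tom - predicted) ** 2) / np.sum((tom - tom.mean()) ** 2)
```

The fitted coefficients recover the assumed effects up to sampling noise; in the actual study the analogous coefficients and their significance tests are what identified syntax and general language ability as the viable predictors.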
Procedia PDF Downloads 64
878 Physicochemical Characterization of Asphalt Ridge Froth Bitumen
Authors: Nader Nciri, Suil Song, Namho Kim, Namjun Cho
Abstract:
Properties and compositions of bitumen and bitumen-derived liquids have significant influences on the selection of recovery, upgrading and refining processes, and optimal process conditions can often be directly related to these properties. The end uses of bitumen and bitumen products are thus related to their compositions. Because it is not possible to conduct a complete analysis of the molecular structure of bitumen, characterization must be made in other terms. The present paper focuses on the physico-chemical analysis of two different bitumens, chosen on the basis of the original crude source (oil sand versus crude petroleum) and the mode of processing. The aim of this study is to determine both the effect of manufacturing on the chemical species present and the chemical organization as a function of the type of bitumen sample. To obtain information on bitumen chemistry, elemental analysis (C, H, N, S, and O), heavy metal (Ni, V) concentrations, IATROSCAN chromatography (thin layer chromatography-flame ionization detection), FTIR spectroscopy, and 1H NMR spectroscopy were all used. The characterization includes information about the major compound types (saturates, aromatics, resins and asphaltenes), which can be compared with similar data for other bitumens and, more importantly, correlated with data from petroleum samples for which refining characteristics are known. Examination of Asphalt Ridge froth bitumen showed that it differed significantly from representative petroleum pitches, principally in its nonhydrocarbon content, heavy metal content and aromatic compounds. When possible, properties and composition were related to recovery and refining processes.
This information is important because of the effects that composition has on recovery and processing reactions.
Keywords: froth bitumen, oil sand, asphalt ridge, petroleum pitch, thin layer chromatography-flame ionization detection, infrared spectroscopy, 1H nuclear magnetic resonance spectroscopy
Procedia PDF Downloads 427
877 Influence of Magnetic Field on Microstructure and Properties of Copper-Silver Composites
Authors: Engang Wang
Abstract:
Cu-alloy composites are a kind of high-strength, high-conductivity Cu-based material with excellent mechanical and electrical properties, widely used in the electronic, electrical, and machinery industries. However, the solidification microstructure of the composites, such as the primary or secondary dendrite arm spacing, plays an important role in their tensile strength and conductivity, and is affected by the fabrication method. In this paper, two directional solidification methods, the exothermic powder (EP) method and the liquid metal cooling (LMC) method, were used to fabricate Cu-alloy composites under different applied magnetic fields in order to investigate their influence on the solidification microstructure; the fabricated composites were then drawn into wires to investigate the influence of the fabrication method and magnetic fields on the drawn microstructure of the fiber-reinforced Cu-alloy composites and on their properties. Experiments on Cu-Ag alloy under directional solidification and horizontal magnetic fields with different processing parameters show that: 1) For the Cu-Ag alloy prepared by the EP method, the dendrites grow directionally in the cooled copper mould, and the solidification microstructure is effectively refined by applying horizontal magnetic fields. 2) For the Cu-Ag alloy prepared by the LMC method, the primary dendrite arm spacing decreases and the Ag content in the dendrites increases with increasing withdrawal velocity. 3) The dendrites are refined and the Ag content in the dendrites increases with increasing magnetic flux intensity; meanwhile, the growth direction of the dendrites is also affected by the magnetic field. The results for the Cu-Ag in situ composites after the drawing deformation process show that the micro-hardness of the alloy is higher at smaller dendrite arm spacing. When the dendrite growth orientation is consistent with the sample axis, the conductivity of the composites increases as the secondary dendrite arm spacing increases. However, the conductivity is reduced under applied magnetic fields owing to the disruption of the dendrite growth orientation.
Keywords: Cu-Ag composite, magnetic field, microstructure, solidification
Procedia PDF Downloads 214
876 Bias Minimization in Construction Project Dispute Resolution
Authors: Keyao Li, Sai On Cheung
Abstract:
Incorporation of alternative dispute resolution (ADR) mechanisms has been the main feature of the current trend in construction project dispute resolution (CPDR). ADR approaches have been identified as efficient mechanisms and suitable alternatives to litigation and arbitration. However, the use of ADR in a multi-tiered dispute resolution process often leads to repeated evaluations of the same dispute, and multi-tiered CPDR may thereby become a breeding ground for cognitive biases. When complete knowledge is not available at an early tier of construction dispute resolution, disputing parties may form preconceptions about the dispute matter or the counterpart. These preconceptions then influence their information processing in subsequent tiers: disputing parties tend to search for and interpret further information in a self-defensive way that confirms their early positions. Their imbalanced information collection boosts their confidence in the assessments they hold, and their attitudes harden and become difficult to compromise. The occurrence of cognitive bias therefore impedes efficient dispute settlement. This study aims to explore ways to minimize bias in CPDR. Based on a comprehensive literature review, three types of bias-minimizing approaches were collected: strategy-based, attitude-based, and process-based. These approaches were further operationalized into bias-minimizing measures. To verify the usefulness and practicability of these measures, semi-structured interviews were conducted with ten CPDR third-party neutral professionals, all of whom have at least twenty years of experience in facilitating the settlement of construction disputes. The usefulness of the bias-minimizing measures, as well as their implications, was validated by these experts. There are few studies on cognitive bias in construction management in general, and in CPDR in particular.
This study would be the first of its type to enhance the efficiency of construction dispute resolution by highlighting strategies to minimize the biases therein.
Keywords: bias, construction project dispute resolution, minimization, multi-tiered, semi-structured interview
Procedia PDF Downloads 186
875 Increasing Prevalence of Multi-Allergen Sensitivities in Patients with Allergic Rhinitis and Asthma in Eastern India
Authors: Sujoy Khan
Abstract:
There is rising concern about increasing allergies affecting both adults and children in rural and urban India. A recent report on adults in a densely populated North Indian city showed sensitization rates for house dust mite, parthenium, and cockroach at 60%, 40%, and 18.75%, now comparable to allergy prevalence in cities in the United States. Data from patients residing in eastern India are scarce. A retrospective study (over 2 years) was conducted on patients with allergic rhinitis and asthma in whom allergen-specific IgE levels were measured, to determine the aeroallergen sensitization pattern in a large metropolitan city of East India. Total IgE and allergen-specific IgE levels were measured using ImmunoCAP (Phadia 100, Thermo Fisher Scientific, Sweden) with region-specific aeroallergens: Dermatophagoides pteronyssinus (d1); Dermatophagoides farinae (d2); cockroach (i206); grass pollen mix (gx2) consisting of Cynodon dactylon, Lolium perenne, Phleum pratense, Poa pratensis, Sorghum halepense, Paspalum notatum; tree pollen mix (tx3) consisting of Juniperus sabinoides, Quercus alba, Ulmus americana, Populus deltoides, Prosopis juliflora; food mix 1 (fx1) consisting of Peanut, Hazel nut, Brazil nut, Almond, Coconut; mould mix (mx1) consisting of Penicillium chrysogenum, Cladosporium herbarum, Aspergillus fumigatus, Alternaria alternate; animal dander mix (ex1) consisting of cat, dog, cow and horse dander; and weed mix (wx1) consisting of Ambrosia elatior, Artemisia vulgaris, Plantago lanceolata, Chenopodium album, Salsola kali, following the manufacturer's instructions. As the IgE levels were not uniformly distributed, median values were used to represent the data. Ninety-two patients with allergic rhinitis and asthma (united airways disease) were studied over 2 years, including 21 children (age < 12 years) in whom total IgE and allergen-specific IgE levels were measured.
The median IgE level was higher in 2016 than in 2015, with 60% of patients (adults and children) sensitized to house dust mite (dual positivity for Dermatophagoides pteronyssinus and farinae). Of 11 children in 2015, whose total IgE ranged from 16.5 to >5000 kU/L, 36% were polysensitized (≥4 allergens) and 55% were sensitized to dust mites. Of 10 children in 2016, whose total IgE levels ranged from 37.5 to 2628 kU/L, 20% were polysensitized and 60% were sensitized to dust mites. Mould sensitivity was 10% among the children in both years. A consistent finding was that ragweed sensitization (which has molecular homology to Parthenium hysterophorus) appeared to be increasing across all age groups and throughout the year, as we reported previously, with 25% of patients sensitized. In the study sample overall, sensitizations to dust mite, cockroach, and parthenium were important risks in our patients with moderate to severe asthma, which reinforces the importance of controlling indoor exposure to these allergens. Sensitizations to dust mite, cockroach, and parthenium allergens are important predictors of asthma morbidity not only among children but also among adults in Eastern India.
Keywords: aeroallergens, asthma, dust mite, parthenium, rhinitis
Procedia PDF Downloads 200
874 Mastopexy with the "Dermoglandular Autoaugmentation" Method. Increased Stability of the Result. Personalized Technique
Authors: Maksim Barsakov
Abstract:
Introduction. In modern plastic surgery, there is a large number of breast lift techniques. Owing to spreading information about the "side effects" of silicone implants, interest in implant-free mastopexy is increasing year after year. However, despite the variety of techniques, patients sometimes do not obtain full satisfaction from the results of mastopexy because of insufficient filling of the upper pole, extended anchor-shaped postoperative scars, and sometimes because of an aesthetically unattractive breast shape. The stability of the result after mastopexy depends on many factors, including postoperative rehabilitation, stability of weight and hormonal status, and tissue stretchability. The high recurrence rate of ptosis and the short-term aesthetic effect of mastopexy indicate the urgency of improving surgical techniques and increasing the stabilization of breast tissue. Purpose of the study. To develop and introduce into practice a technique of mastopexy based on the use of a modified Ribeiro flap, together with elements of tissue movement and fixation designed to increase the stability of postoperative mastopexy, and to define indications for the application of this surgical technique. Materials and Methods. From 2019 to 2023, 103 patients aged 18 to 53 years were operated on according to the reported method. These were patients undergoing primary mastopexy, secondary mastopexy, and implant removal with one-stage mastopexy. The patients were followed up for 12 months to assess the stability of the result. Results and their discussion. Observing the patients, we noted greater stability of the breast shape and upper pole filling compared with conventional classical methods. We did not have to resort to anchor scars: in 90 percent of cases an inverted T-shaped scar was used, and in 10 percent a J-scar was used.
The quantitative distribution of complications identified among the operated patients is as follows: worsened healing of the junction of the vertical and horizontal sutures at 1-1.5 months after surgery - 15 patients (with ointment treatment, healing was observed within 7-30 days); permanent loss of NAC sensitivity - 0 patients; vascular disorders in the NAC area / areola necrosis - 0 patients; marginal necrosis of the areola - 2 patients (independent healing within 3-4 weeks without aesthetic defects); aesthetically unacceptable mature scars - 3 patients; partial liponecrosis of the autoflap unilaterally - 1 patient; recurrence of ptosis - 1 patient (after weight loss of 12 kg). In the late postoperative period, 2 patients became pregnant and gave birth, and no lactation problems were observed. Conclusion. Thus, breast lift methods in plastic surgery continue to improve, which is especially relevant today given the increased attention to this operation. The author's proposed method of mastopexy with a glandular autoflap allows, in most cases, a stable result and a fuller breast shape, avoids extended anchor scars, and preserves the possibility of lactation. The author of this article has obtained a patent for this method of mastopexy.
Keywords: mastopexy, mammoplasty, autoflap, personal technique
Procedia PDF Downloads 39
873 Sustainable Crop Mechanization among Small Scale Rural Farmers in Nigeria: The Hurdles
Authors: Charles Iledun Oyewole
Abstract:
The daunting challenges that the 'man with the hoe' is going to face in the coming decades will be complex and interwoven. With the global population already above 7 billion people, it has been estimated that food (crop) production must more than double by 2050 to meet the world's food requirements. Nigeria's population is also expected to reach over 240 million people by 2050 at the current annual population growth rate of 2.61 per cent. The country's farming population is estimated at over 65 per cent, yet the country still depends on food importation to complement production. The small scale farmer, who depends on simple hand tools (hoes and cutlasses), remains the centre of agricultural production, accounting for 90 per cent of total agricultural output and 80 per cent of the market flow. While the hoe may have been a tool for sustainable development at one time in human history, this role has been smothered by population growth, which has brought too many mouths to feed (over 170 million) as well as many industries to supply with raw materials. It may then be argued that the hoe is unfortunately not a tool for the coming challenges and that agricultural mechanization should be the focus. However, agriculture as an enterprise is a 'complete wheel' which does not work when broken, particularly with respect to mechanization. Generally, mechanization will prompt increased production where land is readily available; increased production will require post-harvest handling mechanisms, crop processing, and subsequent storage. An important aspect of this is readily available and favourable markets for such produce, fuelled by good agricultural policies. A break in this wheel will lead to the process of mechanization crashing back to subsistence production, and probably a reversal to the hoe. The focus of any agricultural policy should be to chart a course for sustainable mechanization that is environmentally friendly and that may ameliorate Nigeria's food and raw material gaps.
This is the focal point of this article.
Keywords: crop production, farmer, hoes, mechanization, policy framework, population growth, rural areas
Procedia PDF Downloads 222
872 Experimenting the Influence of Input Modality on Involvement Load Hypothesis
Authors: Mohammad Hassanzadeh
Abstract:
As far as incidental vocabulary learning is concerned, the basic contention of the Involvement Load Hypothesis (ILH) is that retention of unfamiliar words is generally conditional upon the degree of involvement in processing them. This study examined input modality and incidental vocabulary uptake in a task-induced setting whereby three variously loaded task types (marginal glosses, fill-in task, and sentence writing) were alternately assigned to one group of students at Allameh Tabataba'i University (n=21) during six classroom sessions. While one round of exposure comprised the audiovisual medium (TV talk shows), the second round consisted of textual materials with approximately similar subject matter (reading texts). In both conditions, however, the tasks were equivalent to one another. Taken together, the study pursued dual objectives: first, establishing a litmus test for the ILH and its proposed values of 'need', 'search', and 'evaluation'; second, examining the superiority of exposure to audiovisual input versus written input as far as the incorporation of tasks is concerned. At the end of each treatment session, a vocabulary active recall test was administered to measure incidental gains. A one-way analysis of variance revealed that the audiovisual intervention yielded higher gains than the written version even when differing tasks were included. Meanwhile, task three (sentence writing) turned out to be the most efficient in tapping learners' active recall of the target vocabulary items. In addition to shedding light on the superiority of audiovisual input over written input when circumstances are relatively held constant, this study, for the most part, supported the underlying tenets of the ILH.
Keywords: evaluation, incidental vocabulary learning, input mode, Involvement Load Hypothesis, need, search
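The one-way analysis of variance used to compare gains across conditions can be sketched from first principles. This is a minimal illustration with hypothetical recall scores (the study's actual data are not reproduced here): the F statistic is the ratio of between-group to within-group mean squares.

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA over several groups of scores."""
    k = len(groups)                          # number of conditions
    n = sum(len(g) for g in groups)          # total observations
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares: group means vs. grand mean
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: scores vs. their own group mean
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical recall scores for the three task types
glosses  = [4, 5, 3, 6, 5]
fill_in  = [6, 7, 5, 6, 7]
sentence = [8, 9, 7, 9, 8]

print(f"F = {one_way_anova_f(glosses, fill_in, sentence):.2f}")
```

A large F relative to the critical value for (k-1, n-k) degrees of freedom indicates that at least one condition's mean gain differs reliably from the others; in practice a library routine such as a standard statistics package would also report the p-value.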
Procedia PDF Downloads 279
871 The Multiple Sclerosis Condition and the Role of Varicella-Zoster Virus in Its Progression
Authors: Sina Mahdavi, Mahdi Asghari Ozma
Abstract:
Multiple sclerosis (MS) is the most common inflammatory autoimmune disease of the central nervous system (CNS), affecting the myelination process. Complex interactions of various environmental or infectious factors may act as triggers in autoimmunity and disease progression. The association between viral infections, especially human varicella-zoster virus (VZV), and MS is one potential cause that is not well understood. This study aims to summarize the available data on VZV infection in MS disease progression. For this study, the keywords "multiple sclerosis", "human varicella-zoster virus", and "central nervous system" were searched in the PubMed, Google Scholar, SID, and MagIran databases between 2016 and 2022, and 14 articles were chosen, studied, and analyzed. Analysis of the amino acid sequences of HNRNPA1 against VZV proteins has shown a 62% amino acid sequence similarity between VZV gE and the PrLD/M9 epitope region (TNPO1 binding domain) of mutant HNRNPA1. The heterogeneous nuclear ribonucleoprotein (hnRNP) produced by HNRNPA1 is involved in the processing and transfer of mRNA and pre-mRNA. Mutant HNRNPA1 mimics the gE of VZV as an antigen, leading to autoantibody production. Mutant HNRNPA1 translocates to the cytoplasm and, after aggregation, is presented by MHC class I and recognized by CD8+ cells. Antibodies and immune cells against the gE epitopes of VZV persist owing to the memory immune response, causing neurodegeneration and the development of MS in genetically predisposed individuals. VZV expression during the course of MS is present in genetically predisposed individuals with the HNRNPA1 mutation, suggesting a link between VZV and MS and that this virus may play a role in the development of MS by inducing an inflammatory state.
Therefore, measures to modulate VZV expression may be effective in reducing inflammatory processes in demyelinated areas in genetically predisposed MS patients.
Keywords: multiple sclerosis, varicella-zoster virus, central nervous system, autoimmunity
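The reported 62% sequence similarity between VZV gE and the mutant HNRNPA1 epitope rests on comparing aligned amino acid positions. A minimal sketch of percent identity over a pre-aligned pair is shown below; the two sequences are hypothetical placeholders, not the real gE or HNRNPA1 epitopes, and real analyses use full alignment tools and similarity (not just identity) scoring.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of positions with identical residues in two
    pre-aligned, equal-length amino acid sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical placeholder epitopes (NOT the real gE / HNRNPA1 sequences)
epitope_viral = "SYNNFGGSRG"
epitope_host  = "SYGNFGGSRS"
print(f"{percent_identity(epitope_viral, epitope_host):.0f}% identity")  # 80% identity
```

High identity or similarity between a pathogen epitope and a self-protein region is the computational signature of the molecular mimicry mechanism the abstract describes.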
Procedia PDF Downloads 76