Search results for: variations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1461

441 Employing Artificial Intelligence Tools in Making Clothing Designs Inspired by the Najdi Art of Sadu

Authors: Basma Abdel Mohsen Al-Sheikh

Abstract:

This study aimed to create textile designs inspired by Najdi Al-Sadu art, with the objective of highlighting Saudi identity and heritage. The research proposed clothing designs for women and children, utilizing textiles inspired by Najdi Al-Sadu art, and incorporated artificial intelligence techniques in the design process. The study employed a descriptive-analytical approach to describe Najdi Al-Sadu, and an experimental method involving the creation of textile designs inspired by Al-Sadu. The study sample consisted of 33 participants, including experts in the fashion and textile industry, fashion designers, lecturers, professors, and postgraduate students from King Abdulaziz University. A questionnaire was used as a tool to gather opinions regarding the proposed designs. The results demonstrated a clear acceptance of the designs inspired by Najdi Al-Sadu and incorporating artificial intelligence, with approval rates ranging from 22% to 81% across different designs. The study concluded that artificial intelligence applications have a significant impact on fashion design, particularly in the integration of Al-Sadu art. The findings also indicated a positive reception of the designs in terms of their aesthetic and functional aspects, although individual preferences led to some variations in opinions. The results highlighted a demand for designs that combine heritage and modern fashion, striking a balance between authenticity and contemporary style. The study recommended that designers continue to explore ways to integrate cultural heritage, such as Al-Sadu art, with contemporary design elements to achieve this balance. Furthermore, it emphasized the importance of enhancing the aesthetic and functional aspects of designs, taking into consideration the preferences of the target market and customer expectations. 
The effective utilization of artificial intelligence was also emphasized to improve design processes, expand creative possibilities, and foster innovation and authenticity.

Keywords: Najdi Al-Sadu art, artificial intelligence, women's and children's fashion, clothing designs

Procedia PDF Downloads 51
440 The Impact of CYP2C9 Gene Polymorphisms on Warfarin Dosing

Authors: Weaam Aldeeban, Majd Aljamali, Lama A. Youssef

Abstract:

Background & Objective: Warfarin is considered a problematic drug due to its narrow therapeutic window and wide inter-individual response variations, which are attributed to demographic, environmental, and genetic factors, particularly single nucleotide polymorphisms (SNPs) in the genes encoding VKORC1 and CYP2C9, which are involved in warfarin's mechanism of action and metabolism, respectively. The CYP2C9*2 (rs1799853) and CYP2C9*3 (rs1057910) alleles are linked to reduced enzyme activity; carriers of either or both alleles are classified as intermediate or slow metabolizers and therefore exhibit higher sensitivity to warfarin compared with the wild type (CYP2C9*1/*1). Our study aimed to assess the frequency of the *1, *2, and *3 alleles of the CYP2C9 gene in a cohort of Syrian patients receiving a maintenance dose of warfarin for different indications, the impact of genotype on warfarin dosing, and the frequency of adverse effects (i.e., bleeding). Subjects & Methods: This retrospective cohort study encompassed 94 patients treated with warfarin. Patients' genotypes were identified by sequencing gene-specific polymerase chain reaction (PCR) products of CYP2C9, and the effects on warfarin therapeutic outcomes were investigated. Results: Sequencing revealed that 43.6% of the study population carried the *2 and/or *3 SNPs. The mean weekly maintenance dose of warfarin was 37.42 ± 15.5 mg for patients with the wild-type genotype (CYP2C9*1/*1), whereas patients with one or both variants (*2 and/or *3) required a significantly lower dose (28.59 ± 11.58 mg) of warfarin (P = 0.015). A higher percentage (40.7%) of patients with allele *2 and/or *3 experienced hemorrhagic accidents, compared with only 17.9% of patients with the wild type *1/*1 (P = 0.04). Conclusions: Our study demonstrates an association between the *2 and *3 alleles and both higher sensitivity to warfarin and a tendency to bleed, which necessitates lowering the dose.
These findings emphasize the significance of CYP2C9 genotyping prior to commencing warfarin therapy in order to achieve optimal and faster dose control and to ensure effectiveness and safety.
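As a hedged illustration, the reported dose difference can be checked with a two-sample Welch's t statistic computed from the published summary statistics. The group sizes below are approximations derived from the stated cohort of 94 patients and the 43.6% variant-carrier frequency; they are not figures given in the abstract.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two independent samples with unequal variances."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# Reported summary statistics (mg/week); group sizes are approximated
# from the reported cohort of 94 patients (43.6% carrying *2 and/or *3).
n_variant = round(0.436 * 94)   # approx. 41 carriers of *2 and/or *3
n_wild = 94 - n_variant         # approx. 53 wild-type (*1/*1) patients

t = welch_t(37.42, 15.5, n_wild, 28.59, 11.58, n_variant)
print(f"Welch t = {t:.2f}")  # a |t| of this size is consistent with p < 0.05
```

The statistic agrees in direction with the paper's conclusion; the exact P value would depend on the true group sizes and the test the authors used.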

Keywords: warfarin, CYP2C9, polymorphisms, Syrian, hemorrhage

Procedia PDF Downloads 132
439 Optimizing Perennial Plants Image Classification by Fine-Tuning Deep Neural Networks

Authors: Khairani Binti Supyan, Fatimah Khalid, Mas Rina Mustaffa, Azreen Bin Azman, Amirul Azuani Romle

Abstract:

Perennial plant classification plays a significant role in various agricultural and environmental applications, assisting in plant identification, disease detection, and biodiversity monitoring. Nevertheless, attaining high accuracy in perennial plant image classification remains challenging due to the complex variations in plant appearance, the diverse range of environmental conditions under which images are captured, and the inherent variability in image quality stemming from factors such as lighting conditions, camera settings, and focus. This paper proposes an adaptation approach to optimize perennial plant image classification by fine-tuning pre-trained DNN models. It explores the efficacy of fine-tuning prevalent architectures, namely VGG16, ResNet50, and InceptionV3, leveraging transfer learning to tailor the models to the specific characteristics of perennial plant datasets. A subset of the MYLPHerbs dataset, comprising 6 perennial plant species and 13,481 images captured under various environmental conditions, was used in the experiments. Different fine-tuning strategies, including adjusting learning rates, training set sizes, data augmentation, and architectural modifications, were investigated. The experimental outcomes underscore the effectiveness of fine-tuning deep neural networks for perennial plant image classification, with ResNet50 showcasing the highest accuracy of 99.78%. Despite ResNet50's superior performance, both VGG16 and InceptionV3 achieved commendable accuracies of 99.67% and 99.37%, respectively. The overall outcomes reaffirm the robustness of the fine-tuning approach across different deep neural network architectures, offering insights into strategies for optimizing model performance in the domain of perennial plant image classification.
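The freeze-the-base, train-the-head idea behind this kind of fine-tuning can be sketched without a deep-learning framework. In the toy below, a fixed linear "feature extractor" stands in for the ImageNet-pretrained convolutional layers, and only a logistic head is trained; the weights and the four-dimensional toy data are invented for illustration and are not the study's pipeline.

```python
import math

# Frozen "pretrained" feature extractor: stands in for the pretrained
# convolutional base of VGG16/ResNet50/InceptionV3 (values are arbitrary).
W_FROZEN = [[0.6, -0.4, 0.2, 0.1],
            [-0.3, 0.5, -0.1, 0.7]]

def features(x):
    # No updates are ever applied to W_FROZEN: the base is "frozen".
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_FROZEN]

# Trainable classification head: logistic regression on the frozen features.
w_head, b_head = [0.0, 0.0], 0.0

def predict(x):
    z = sum(w * f for w, f in zip(w_head, features(x))) + b_head
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy dataset: two well-separated classes of 4-d "images".
data = [([1.0, 1.0, 0.0, 0.0], 1), ([0.9, 1.1, 0.0, 0.0], 1),
        ([0.0, 0.0, 1.0, 1.0], 0), ([0.0, 0.0, 1.1, 0.9], 0)]

def loss():
    return -sum(y * math.log(predict(x)) + (1 - y) * math.log(1 - predict(x))
                for x, y in data) / len(data)

initial_loss = loss()
LR = 0.5                       # learning rate: one of the fine-tuning knobs studied
for _ in range(200):           # train only the head; the base stays untouched
    for x, y in data:
        err = predict(x) - y   # gradient of the log-loss w.r.t. the logit
        f = features(x)
        for j in range(len(w_head)):
            w_head[j] -= LR * err * f[j]
        b_head -= LR * err

final_loss = loss()
print(f"head-only training: loss {initial_loss:.3f} -> {final_loss:.3f}")
```

In a real pipeline the same structure appears at scale: the pretrained base is marked non-trainable, a new classification head is attached, and strategies such as learning rate, augmentation, and partial unfreezing are varied.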

Keywords: perennial plants, image classification, deep neural networks, fine-tuning, transfer learning, VGG16, ResNet50, InceptionV3

Procedia PDF Downloads 42
438 Use of Numerical Tools Dedicated to Fire Safety Engineering for the Rolling Stock

Authors: Guillaume Craveur

Abstract:

This study shows the opportunity to use numerical tools dedicated to fire safety engineering for rolling stock. Indeed, some regulatory requirements can now be demonstrated by using numerical tools. The first part of this study presents the use of an evacuation modelling tool to satisfy the evacuation-time criteria for rolling stock. The buildingEXODUS software is used to model and simulate the evacuation of rolling stock. Firstly, in order to demonstrate the reliability of this tool for calculating the complete evacuation time, a comparative study was carried out between a real test and simulations performed with buildingEXODUS. Multiple simulations are performed to capture the stochastic variations in egress times. Then, a new study is carried out to calculate the complete evacuation time of a train with the same geometry but a different interior architecture. The second part of this study shows some applications of computational fluid dynamics. This work presents a multi-scale validation approach for numerical simulations of standardized tests with the Fire Dynamics Simulator (FDS) software developed by the National Institute of Standards and Technology (NIST). The first step addresses the cone calorimeter test, described in the standard ISO 5660, which characterizes the fire reaction of materials. The aim of this step is to readjust measurement results from the cone calorimeter test in order to create a data set usable at the seat scale. In the second step, the modelling concerns the fire seat test described in the standard EN 45545-2: the data set obtained through the validation of the cone calorimeter test was applied to the fire seat test. In the third step, after verifying the data obtained for the seat from the cone calorimeter test, a larger-scale simulation of a real section of a train is carried out.

Keywords: fire safety engineering, numerical tools, rolling stock, multi-scales validation

Procedia PDF Downloads 295
437 Monitoring Spatial Distribution of Blue-Green Algae Blooms with Underwater Drones

Authors: R. L. P. De Lima, F. C. B. Boogaard, R. E. De Graaf-Van Dinther

Abstract:

Blue-green algae blooms (cyanobacteria) are currently a relevant ecological problem being addressed by most water authorities in the Netherlands. These blooms can affect recreation areas by producing unpleasant smells and toxins that can poison humans and animals (e.g., fish, ducks, dogs). Contamination events usually take place during the summer months, and their frequency is increasing with climate change. Traditional monitoring of these bacteria is expensive, labor-intensive, and provides only limited (point-sampling) information about the spatial distribution of algae concentrations. Recently, a novel handheld sensor has allowed water authorities to speed up their algae surveying and alarm systems. This study converted the aforementioned algae sensor into a mobile platform by combining it with an underwater remotely operated vehicle (also equipped with other sensors and cameras). This provides a spatial visualization (mapping) of variations in algae concentrations within the area covered by the drone, including over depth. Measurements took place at different locations in the Netherlands: i) a lake with thick silt layers at the bottom, a very eutrophic former seabed, and a frequent and intense mowing regime; ii) the outlet of waste water into a large reservoir; iii) an urban canal system. The results made it possible to identify the probable dominant causes of blooms (i), provide recommendations for the placement of an outlet and observe day-night differences in algae behavior (ii), and pinpoint areas of higher algae concentration (iii). Although further research is still needed to fully characterize these processes and to optimize the measuring tool (underwater drone developments and improvements), the method presented here can already provide valuable information about algae behavior and spatial and temporal variability, and it shows potential as an efficient monitoring system.

Keywords: blue-green algae, cyanobacteria, underwater drones / ROV / AUV, water quality monitoring

Procedia PDF Downloads 186
436 Investigation of Fumaric Acid Radiolysis Using Gamma Irradiation

Authors: Wafa Jahouach-Rabai, Khouloud Ouerghi, Zohra Azzouz-Berriche, Faouzi Hosni

Abstract:

Widely used organic products from the pharmaceutical industry, essentially carboxylic acids, have been detected in environmental systems. To this end, the degradation efficiency of these contaminants was evaluated using an advanced oxidation process (AOP), namely an ionization process, as an alternative to conventional water treatment technologies. This process permits the generation of radical reactions that directly degrade organic pollutants in wastewater. In fact, gamma irradiation of aqueous solutions produces several reactive radicals, essentially the hydroxyl radical (•OH), which destroy recalcitrant pollutants. Different concentrations of aqueous solutions of fumaric acid (FA) were considered in this study (0.1-1 mmol/L); they were treated with irradiation doses from 1 to 15 kGy at a rate of 6.1 kGy/h in a pilot-scale ionizing system (⁶⁰Co irradiator). Variations of the main parameters influencing degradation efficiency versus absorbed dose were examined with the aim of optimizing the total mineralization of the considered pollutants. A preliminary degradation pathway, down to complete mineralization into CO₂, has been suggested based on the detection of residual degradation derivatives using different techniques, namely high-performance liquid chromatography (HPLC) and electron paramagnetic resonance (EPR) spectroscopy. The results revealed total destruction of the treated compound, which demonstrates the efficiency of this process in water remediation. We investigated the reactivity of hydroxyl radicals generated by irradiation on the dicarboxylic acid (FA) in aqueous solutions, leading to its degradation into other, smaller molecules. In fact, gamma irradiation of FA leads to the formation of hydroxylated intermediates, such as the hydroxycarbonyl radical, which were identified by EPR spectroscopy. Finally, pilot-plant irradiation facilities improve the applicability of radiation technology on a large scale.
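Degradation under increasing absorbed dose is commonly summarized by a first-order (exponential) dose-response model. The sketch below uses an invented dose constant k purely for illustration over the study's 1-15 kGy range; it is not a value measured in this work.

```python
import math

def residual_fraction(dose_kGy, k=0.35):
    """First-order dose-response model C/C0 = exp(-k * D).
    k (in kGy^-1) is an illustrative dose constant, not a measured value."""
    return math.exp(-k * dose_kGy)

# Fraction of the pollutant remaining over the study's dose range (1-15 kGy).
for d in (1, 5, 10, 15):
    print(f"{d:2d} kGy -> {residual_fraction(d):.3f} of initial concentration")
```

Fitting such a model to measured concentrations at each dose would give the dose needed for a target mineralization level.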

Keywords: AOP, radiolysis, fumaric acid, gamma irradiation, hydroxyl radical, EPR, HPLC

Procedia PDF Downloads 151
435 Molecular Genetic Purity Test Using SSR Markers in Pigeon Pea

Authors: Rakesh C. Mathad, G. Y. Lokesh, Basavegowda

Abstract:

In agriculture, using quality seeds of improved varieties is very important to ensure higher productivity, and thereby food security and sustainability. To ensure good productivity, seeds should have the characters described by the breeder. To verify whether a variety expresses the characters described by the breeder, i.e., its genuineness or genetic purity, a field grow-out test (GOT) is done. In pigeon pea, which is a long-duration crop, conducting a GOT can take a very long time and is also expensive. Since the flower character is the trait that most clearly distinguishes pigeon pea from contaminants, conducting a field grow-out test requires 120-130 days, or until flower emergence, which increases the cost of storage and seed production. This also delays the distribution of seed inventory to the pigeon-pea-growing areas. In this view, during 2014-15, with the financial support of the Govt. of Karnataka, India, a project to develop a molecular genetic test for the newly developed pigeon pea variety cv. TS3R was commissioned at the Seed Unit, UAS, Raichur. A molecular test was developed with the help of SSR markers to distinguish the pure variety from possible off-types in the newly released pigeon pea variety TS3R. In the investigation, 44 primer pairs were screened to identify the specific marker associated with this variety. Pigeon pea cv. TS3R could be clearly identified by using the primer CCM 293, based on the banding pattern resolved by gel electrophoresis following PCR. In addition, markers such as AHSSR 46, CCM 82, and CCM 57 can be used to test other popular varieties in the region, namely Asha, GRG-811, and Maruti, respectively. Further, to develop this into a lab test, the seed sample size was standardized to 200 seeds and a grow-out matrix was developed. This matrix was used to sample 12-day-old leaves for DNA extraction. The lab test results were validated against actual field GOT results, and the variations were found to be within the acceptable limit of 1%. 
This molecular method can now be employed to test genetic purity in pigeon pea cv. TS3R, reducing the time required, and can be a cheaper alternative to the field GOT.
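The purity score behind the 200-seed sample and the 1% tolerance reduces to a simple match count over banding patterns. The sketch below uses a hypothetical string encoding of SSR banding patterns and invented counts; it only illustrates the arithmetic of the lab test.

```python
def genetic_purity(observed_patterns, reference_pattern):
    """Percent of seedlings whose SSR banding pattern matches the reference.
    Patterns are encoded here as plain strings (a hypothetical encoding)."""
    matches = sum(1 for p in observed_patterns if p == reference_pattern)
    return 100.0 * matches / len(observed_patterns)

# Illustrative 200-seedling grow-out matrix: 198 true-to-type, 2 off-types.
sample = ["CCM293-A"] * 198 + ["CCM293-B"] * 2
purity = genetic_purity(sample, "CCM293-A")
off_types = 100.0 - purity

print(f"purity = {purity:.1f}%, off-types = {off_types:.1f}%")
```

An off-type level at or below the 1% limit would be read as agreement with the field GOT result.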

Keywords: genuineness, grow-out matrix, molecular genetic purity, SSR markers

Procedia PDF Downloads 263
434 Surprising Behaviour of Kaolinitic Soils under Alkaline Environment

Authors: P. Hari Prasad Reddy, Shimna Paulose, V. Sai Kumar, C. H. Rama Vara Prasad

Abstract:

The soil environment gets contaminated due to rapid industrialisation, the application of agricultural chemicals, and the improper disposal of waste generated by society. Unexpected volume changes can occur in soil in the presence of certain contaminants, usually after a long duration of interaction. Alkali is one of the major soil contaminants that has a considerable effect on the behaviour of soils and is capable of inducing swelling potential in soil. Chemical heaving of clayey soils occurs when they are wetted by aqueous solutions of alkalis. The mineralogical composition of the soil is one of the main factors influencing soil-alkali interaction. In the present work, studies are carried out to understand the swell potential of soils due to soil-alkali interaction with different concentrations of NaOH solution. A locally available soil, namely red earth containing kaolinite, which is of a non-swelling nature, was selected for the study. In addition, two commercially available clayey soils, namely ball clay and china clay, consisting mainly of kaolinite, were selected to understand the effect of alkali interaction on various kaolinitic soils. The non-swelling red earth shows maximum swell at a lower concentration of alkali solution (0.1 N) and a slightly decreasing trend of swelling with further increase in concentration (1 N, 4 N, and 8 N). The marginal decrease in swell potential with increasing concentration indicates that the additional alkali exists as free solution in the case of red earth. China clay and ball clay, both falling under the kaolinite group of clay minerals, show swelling with alkaline solution. At lower concentrations of alkali solution, both soils show similar swell behaviour, but at higher concentrations ball clay shows a higher swell potential than china clay, which may be due to the lack of well-ordered crystallinity in ball clay compared with china clay. 
The variations in the results obtained were corroborated by carrying out XRD and SEM studies.

Keywords: alkali, kaolinite, swell potential, XRD, SEM

Procedia PDF Downloads 480
433 Improvement in Blast Furnace Performance Using a Softening-Melting Zone Profile Prediction Model at G Blast Furnace, Tata Steel Jamshedpur

Authors: Shoumodip Roy, Ankit Singhania, K. R. K. Rao, Ravi Shankar, M. K. Agarwal, R. V. Ramna, Uttam Singh

Abstract:

The productivity of a blast furnace and the quality of the hot metal produced are significantly dependent on the smoothness and stability of furnace operation. The permeability of the furnace bed, as well as the gas flow pattern, influences the steady control of process parameters. The softening-melting zone that forms inside the furnace contributes largely to the distribution of gas flow and the bed permeability. A better-shaped softening-melting zone enhances the performance of the blast furnace, thereby reducing fuel rates and improving furnace life. Therefore, a predictive model of the softening-melting zone profile can be utilized to control and improve furnace operation. The shape of the softening-melting zone depends upon the physical and chemical properties of the agglomerates and iron ore charged into the furnace. Variations in the agglomerate proportion of the burden at G Blast Furnace disturbed furnace stability. During such circumstances, analysis showed that a W-shaped softening-melting zone profile had formed inside the furnace. The formation of the W-shaped zone resulted in poor bed permeability and non-uniform gas flow. There was a significant increase in heat loss in the lower zone of the furnace, fuel demand increased, and a huge production loss was incurred. Therefore, visibility of the softening-melting zone profile was necessary in order to proactively optimize the process parameters and thereby operate the furnace smoothly. Using stave temperatures, a model was developed that predicts the shape of the softening-melting zone inside the furnace. It was observed that the furnace operated smoothly when the zone had an inverse-V shape and poorly when it had a W shape. This model helped to control heat loss, optimize burden distribution, and lower the fuel rate at G Blast Furnace, TSL Jamshedpur. As a result of furnace stabilization, productivity increased by 10% and the fuel rate was reduced by 80 kg/thm. 
Details of the process have been discussed in this paper.
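The paper does not publish the model's inputs or logic, but the W versus inverse-V distinction can be illustrated with a toy heuristic over a stave-temperature profile: an inverse-V profile has a single central peak and no interior dip, while a W profile shows two interior dips. Everything below, including the temperature values, is hypothetical.

```python
def zone_shape(stave_temps):
    """Toy shape label for a softening-melting zone, taken from a list of
    stave temperatures sampled across the furnace wall (hypothetical input;
    the actual prediction model at G Blast Furnace is not published here)."""
    dips = sum(
        1
        for i in range(1, len(stave_temps) - 1)
        if stave_temps[i] < stave_temps[i - 1] and stave_temps[i] < stave_temps[i + 1]
    )
    if dips == 0:
        return "inverse-V (smooth operation)"
    if dips >= 2:
        return "W (poor permeability, high heat loss)"
    return "intermediate"

print(zone_shape([310, 340, 385, 420, 385, 340, 310]))  # single central peak
print(zone_shape([420, 350, 395, 350, 420]))            # two dips around a peak
```

A production model would of course use many stave readings over time and a calibrated mapping rather than a dip count, but the classification target is the same.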

Keywords: agglomerate, blast furnace, permeability, softening-melting

Procedia PDF Downloads 236
432 The Contribution of Corpora to the Investigation of Cross-Linguistic Equivalence in Phraseology: A Contrastive Analysis of Russian and Italian Idioms

Authors: Federica Floridi

Abstract:

The long tradition of contrastive idiom research has essentially focused on three domains: the comparison of structural types of idioms (e.g., verbal idioms, idioms with a noun-phrase structure, etc.), the description of idioms belonging to the same thematic groups (Sachgruppen), and the identification of different types of cross-linguistic equivalents (i.e., full equivalents, partial equivalents, phraseological parallels, and non-equivalents). The diastratic, diachronic, and diatopic aspects of the compared idioms, as well as their syntactic, pragmatic, and semantic properties, have been rather neglected. Corpora (both monolingual and parallel) offer the opportunity to investigate the actual use of correlating idioms in authentic L1 and L2 texts. By adopting a corpus-based approach, it is possible to draw attention to idioms' frequency of occurrence, their syntactic embedding, their potential syntactic transformations (e.g., nominalization, passivization, relativization), their combinatorial possibilities, the variations of their lexical structure, and their connotations in terms of stylistic markedness or register. This paper presents the results of a contrastive analysis of Russian and Italian idioms referring to the concepts of ‘beginning’ and ‘end’, carried out by using the Russian National Corpus and the ‘La Repubblica’ corpus. Beyond the digital corpora, bilingual dictionaries, such as Skvorcova-Majzel’, Dobrovol’skaja, Kovalev, and Čerdanceva, as well as monolingual resources, have been consulted. The study has shown that many of the idioms traditionally indicated as cross-linguistic equivalents in bilingual dictionaries cannot be considered correspondents. The findings demonstrate that even those idioms that are formally identical in Russian and Italian and are presumably derived from the same source (e.g., conceptual metaphor, the Bible, classical mythology, world literature) exhibit differences in usage. 
The ultimate purpose of this article is to highlight the need to review and improve existing bilingual dictionaries in light of the empirical data collected from corpora. The materials gathered in this research can contribute to this end.
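One building block of such corpus comparisons is normalized frequency, since raw hit counts are not comparable across corpora of different sizes. The sketch below normalizes to occurrences per million tokens; the hit counts and corpus sizes are invented for illustration, not figures from the Russian National Corpus or the ‘La Repubblica’ corpus.

```python
def per_million(count, corpus_size):
    """Normalized frequency: occurrences per million tokens."""
    return 1_000_000 * count / corpus_size

# Invented hit counts and corpus sizes for one formally identical idiom pair.
hits = {"Russian corpus": 412, "Italian corpus": 37}
tokens = {"Russian corpus": 300_000_000, "Italian corpus": 380_000_000}

freqs = {name: per_million(hits[name], tokens[name]) for name in hits}
for name, f in freqs.items():
    print(f"{name}: {f:.3f} per million tokens")
# A large gap in normalized frequency is one signal that a dictionary
# "equivalent" pair does not behave equivalently in actual use.
```

Syntactic embedding, transformations, and register would then be compared over the concordance lines behind these counts.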

Keywords: corpora, cross-linguistic equivalence, idioms, Italian, Russian

Procedia PDF Downloads 128
431 GIS-Based Water Pollution Assessment of Buriganga River, Bangladesh

Authors: Nur-E-Jannat Tinu

Abstract:

Water is absolutely vital not only for the survival of human beings but also for plants, animals, and all other living organisms. Water bodies such as lakes, rivers, ponds, and estuaries are the sources of water supply for domestic, industrial, agricultural, and aquaculture purposes. The Buriganga River flows through the south and west of Dhaka city. The water quality of this river has become a matter of concern due to the anthropogenic input of pollutants such as industrial effluents, urban sewage, and solid wastes in this area. The Buriganga River is at risk of contamination from untreated municipal wastes, industrial discharges, runoff of organic and inorganic fertilizers, pesticides, insecticides, and oil emissions around the river. The residential and commercial establishments along the river discharge wastewater either directly into the river or through drains and canals. Although several regulatory measures and policies have been enforced by the Government to protect the Buriganga River from pollution, in most cases they have had no effect. The water quality assessment reveals that the water is not appropriate for irrigation purposes. The physicochemical parameters (pH, TDS, EC, temperature, DO, COD, BOD) indicated that the water is too poor to be usable for agricultural, drinking, or other purposes. Chemical concentrations showed significant seasonal variations, with high concentrations during the monsoon season, presumably due to extreme seasonal surface runoff. A comparative study of electrical conductivity (EC) and total dissolved solids (TDS) indicated a considerable increase over the last five years. A change in trend was observed from June-July 2020, probably due to the monsoon and post-monsoon seasons. EC values decreased from 775 to 665 mmho/cm during this period. DO increased significantly from the mid- and post-monsoon months to the early monsoon period. The pH of the river water ranges between 6.5 and 7.79, i.e., from near-neutral to slightly alkaline. 
This suggests that organic compounds cause the water to become alkaline during the monsoon and post-monsoon seasons. As the water pollution level is very high, an effective remediation and pollution control plan should be considered.

Keywords: precipitation, spatial distribution, effluent, remediation

Procedia PDF Downloads 129
430 Use of Giant Magnetoresistance Sensors to Detect Micron to Submicron Biologic Objects

Authors: Manon Giraud, Francois-Damien Delapierre, Guenaelle Jasmin-Lebras, Cecile Feraudet-Tarisse, Stephanie Simon, Claude Fermon

Abstract:

Early diagnosis, or the detection of harmful substances at low levels, is a growing field of high interest. The ideal test should be cheap, easy to use, quick, reliable, specific, and have a very low detection limit. By combining the high specificity of antibody-functionalized magnetic beads used to immuno-capture biologic objects with the high sensitivity of GMR-based sensors, it is possible to detect these biologic objects one by one, be it a cancerous cell, a bacterium, or a disease biomarker. The simplicity of the detection process makes it usable even by untrained staff. Giant magnetoresistance (GMR) is an effect consisting of a modification of the electrical resistance of certain conductive layers when they are exposed to a magnetic field. This effect allows the detection of very small variations of magnetic field (typically a few tens of nanotesla). Magnetic nanobeads coated with antibodies targeting the analytes are mixed with a biological sample (blood, saliva) and incubated for 45 min. The mixture is then injected into a very simple microfluidic chip and circulates above a GMR sensor that detects changes in the surrounding magnetic field. Individual magnetic particles do not create a field sufficient to be detected. Therefore, only the biological objects surrounded by several antibody-functionalized magnetic beads (those captured by the complementary antigens) are detected when they move above the sensor. A proof of concept has been carried out on NS1 mouse cancerous cells diluted in PBS and bound to 200 nm magnetic particles. Signals were detected in cell-containing samples, while none were recorded for negative controls. A binary response was hence assessed for this first biological model. The precise quantification of the analytes and their detection in highly diluted solutions is the step now in progress.

Keywords: early diagnosis, giant magnetoresistance, lab-on-a-chip, submicron particle

Procedia PDF Downloads 234
429 Experimental Study of Reflective Roof as a Passive Cooling Method in Homes Under the Paradigm of Appropriate Technology

Authors: Javier Ascanio Villabona, Brayan Eduardo Tarazona Romero, Camilo Leonardo Sandoval Rodriguez, Arly Dario Rincon, Omar Lengerke Perez

Abstract:

Efficient energy consumption in the housing sector in relation to refrigeration is a concern in the construction and rehabilitation of houses in tropical areas. Thermal comfort is aggravated by heat gain through the roof surface. Thus, among passive cooling techniques, the solar-control practices and technologies that improve comfort conditions include thermal insulation and geometric changes to the roofs. On the other hand, reflection- and radiation-based methods are used to decrease heat gain by facilitating the removal of excess heat from inside a building to maintain a comfortable environment. Since the potential of these techniques varies across climatic zones, their application in different zones should be examined. This research is based on the experimental study of a prototype roof radiator as a passive cooling method for homes. It was developed through an experimental research methodology, taking measurements in a prototype built under the paradigm of appropriate technology, with the aim of establishing the initial behavior of the internal temperature resulting from the external environmental climate. As a starting point, a selection matrix was made to identify the typologies of passive cooling systems, in order to model the system and subsequently implement it, establishing its constructive characteristics. This was followed by the measurement of climatic variables (outside the prototype) and microclimatic variables (inside the prototype) to obtain a database for analysis. As a final result, a decrease in the temperature inside the chamber with respect to the outside temperature was evidenced, as well as a linear relationship between its behavior and the variations in the climatic variables.

Keywords: appropriate technology, enveloping, energy efficiency, passive cooling

Procedia PDF Downloads 78
428 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be released in considerable quantities into the near/far field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given the substantial variations across international waste streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial experimental data to identify potentially appropriate model parameters. The results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily falling within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
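A simple baseline for the viscosity-temperature relationship is an Arrhenius-type form, log10(viscosity) = A + B/T, which the study's ensemble models generalize by also taking composition into account. The sketch below fits A and B by least squares on synthetic data generated from known values; it is an illustration of the functional form, not the study's model or data.

```python
def fit_arrhenius(temps_K, log_visc):
    """Least-squares fit of log10(viscosity) = A + B / T (Arrhenius-type).
    A two-parameter baseline; the study used ensemble ML with composition too."""
    x = [1.0 / t for t in temps_K]
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(log_visc) / n
    B = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, log_visc)) / \
        sum((xi - mean_x) ** 2 for xi in x)
    A = mean_y - B * mean_x
    return A, B

# Synthetic data generated from known A = -2.5, B = 9000 K (illustrative values).
temps = [1173, 1223, 1273, 1323, 1373]
ys = [-2.5 + 9000 / t for t in temps]
A, B = fit_arrhenius(temps, ys)
print(f"A = {A:.2f}, B = {B:.0f} K")
```

With noise-free synthetic data, the fit recovers the generating parameters, which is a useful sanity check before fitting measured viscosities.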

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 102
427 Real-Time Hybrid Simulation for a Tuned Liquid Column Damper Implementation

Authors: Carlos Riascos, Peter Thomson

Abstract:

Real-time hybrid simulation (RTHS) is a modern cyber-physical technique for the experimental evaluation of complex systems that treats the components with predictable behavior as a numerical substructure and the components that are difficult to model as an experimental substructure. It is therefore an attractive method for evaluating the response of civil structures under earthquake, wind and anthropic loads. Another practical application of RTHS is the evaluation of control systems, as these devices are often nonlinear and their characterization is an important step in the design of controllers with the desired performance. In this paper, the response of a three-story shear frame controlled by a tuned liquid column damper (TLCD) and subject to base excitation is considered. Both passive and semi-active control strategies were implemented and compared. While the passive TLCD achieved a 50% reduction in the acceleration response of the main structure compared with the uncontrolled structure, the semi-active TLCD achieved a 70% reduction and was robust to variations in the dynamic properties of the main structure. In addition, an RTHS was implemented with the main structure modeled as a linear, time-invariant (LTI) system through a state-space representation, and the TLCD, under both control strategies, was evaluated on a shake table that reproduced the displacement of the virtual structure. Current assessment measures for RTHS, such as generalized amplitude, the equivalent time delay between the target and measured displacement of the shake table, and the energy error computed from the measured force, were used to quantify performance and show that the RTHS described in this paper is an accurate method for the experimental evaluation of structural control systems.
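One of the RTHS assessment measures mentioned, the equivalent time delay between target and measured shake-table displacement, can be estimated from the peak of the cross-correlation of the two signals. A minimal sketch with synthetic signals (the 8 Hz excitation and 12 ms actuator lag are illustrative assumptions, not values from the paper):

```python
import numpy as np

dt = 0.001                                       # sample period, s (assumed)
t = np.arange(0, 2, dt)
target = np.sin(2 * np.pi * 8 * t)               # target displacement
measured = np.sin(2 * np.pi * 8 * (t - 0.012))   # actuator lags by 12 ms

# The lag at the cross-correlation peak estimates the equivalent delay
# (to within a sample or two, given the finite window).
xcorr = np.correlate(measured, target, mode="full")
lag = int(np.argmax(xcorr)) - (len(t) - 1)
delay = lag * dt
```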

Keywords: structural control, hybrid simulation, tuned liquid column damper, semi-active control strategy

Procedia PDF Downloads 284
426 Organic Geochemical Characteristics of Cenozoic Mudstones, NE Bengal Basin, Bangladesh

Authors: H. M. Zakir Hossain

Abstract:

Cenozoic mudstone samples, obtained from drill cores and outcrops in the northeastern Bengal Basin of Bangladesh, were analyzed organic-geochemically to identify vertical variations in organic facies, thermal maturity, hydrocarbon potential and depositional environments. Total organic carbon (TOC) content ranges from 0.11 to 1.56 wt% with an average of 0.43 wt%, indicating a good source rock potential. Total sulphur content is variable, with values ranging from ~0.001 to 1.75 wt% and an average of 0.065 wt%. Rock-Eval S1 and S2 yields range from 0.03 to 0.14 mg HC/g rock and 0.01 to 0.66 mg HC/g rock, respectively. The hydrogen index values range from 2.71 to 56.09 mg HC/g TOC. These results reveal that the samples are dominated by type III kerogen. Tmax values of 426 to 453 °C and vitrinite reflectance of 0.51 to 0.66% indicate the organic matter is immature to mature. Saturated hydrocarbon biomarkers, such as pristane, phytane, steranes, and hopanes, indicate mostly terrigenous organic matter with a small influence of marine organic matter. Based on the integration of biomarker proxies, organic matter in the succession accumulated under three different environmental conditions. First phase (late Eocene to early Miocene): deposition occurred entirely in seawater-dominated oxic conditions, with high inputs of land-plant organic matter including angiosperms. Second phase (middle to late Miocene): deposition occurred in freshwater-dominated anoxic conditions, with phytoplanktonic organic matter and a small influence of land plants. Third phase (late Miocene to Pleistocene): deposition occurred in oxygen-poor freshwater conditions, with abundant input of planktonic organic matter and a high influx of angiosperms. The lower part (middle Eocene to early Miocene) of the succession, with moderate TOC contents and primarily terrestrial organic matter, could have generated some condensates and oils in and around the study area.

Keywords: Bangladesh, geochemistry, hydrocarbon potential, mudstone

Procedia PDF Downloads 407
425 Root System Architecture Analysis of Sorghum Genotypes and Its Effect on Drought Adaptation

Authors: Hailemariam Solomon, Taye Tadesse, Daniel Nadew, Firezer Girma

Abstract:

Sorghum is an important crop in semi-arid regions and has shown resilience to drought stress; however, recurrent drought is affecting its productivity, and it is therefore necessary to explore genes that contribute to drought stress adaptation in order to increase sorghum productivity. The aim of this study is to evaluate and determine the effect of root system traits, specifically root angle, on drought stress adaptation and grain yield performance in sorghum genotypes. A total of 428 sorghum genotypes from the Ethiopian breeding program were evaluated in three drought-stress environments. Field trials were conducted using a row-column design with three replications. Root system traits were phenotyped using a high-throughput phenotyping platform and analyzed using a row-column design with two replications. Data analysis was performed using R software and regression analysis. The study found significant variations in root system architecture among the sorghum genotypes. Non-stay-green genotypes had grain yields ranging from 1.63 to 3.1 tons/ha, while stay-green genotypes had grain yields ranging from 2.4 to 2.9 tons/ha. The analysis of root angle showed that non-stay-green genotypes had angles ranging from 8.0 to 30.5 degrees, while stay-green genotypes had angles ranging from 12.0 to 29.0 degrees; improved varieties exhibited angles between 14.04 and 19.50 degrees. Positive and significant correlations were observed between leaf area and shoot dry weight, as well as between leaf width and shoot dry weight. Negative correlations were observed between root angle and leaf area, as well as between root angle and root length. This research highlights the importance of root system architecture, particularly root angle traits, in enhancing grain yield production in drought-stressed conditions. It also establishes an association between root angle and grain yield traits for maximizing sorghum productivity.
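The reported associations (for instance, the negative correlation between root angle and leaf area) reduce to simple correlation/regression analysis; a sketch with synthetic stand-in data, written here in Python although the study itself used R:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic stand-ins: 60 genotypes with root angles in the reported range.
root_angle = rng.uniform(8.0, 30.5, 60)                       # degrees
leaf_area = 400.0 - 5.0 * root_angle + rng.normal(0, 15, 60)  # cm^2 (synthetic)

# Pearson correlation between root angle and leaf area.
r, p = stats.pearsonr(root_angle, leaf_area)
```

A negative `r` with a small p-value would mirror the study's finding that wider root angles go with smaller leaf areas.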

Keywords: root system architecture, root angle, narrow root angle, wider root angle, drought

Procedia PDF Downloads 56
424 A Theoretical Study on Pain Assessment through Human Facial Expression

Authors: Mrinal Kanti Bhowmik, Debanjana Debnath Jr., Debotosh Bhattacharjee

Abstract:

Facial expression is an integral part of human behaviour. It is a significant channel for human communication and can be used to extract emotional features accurately. People in pain often show variations in facial expression that are readily observable to others, and a core set of facial actions is likely to occur, or to increase in intensity, when people are in pain. To describe such changes in facial appearance, the Facial Action Coding System (FACS) was pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a small set of these actions carries the bulk of the information about pain, and from it the Prkachin and Solomon pain intensity (PSPI) metric is defined. It is therefore important to note that facial expressions, as a behavioral source in communication media, provide an important opening into the issues of non-verbal communication of pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study their properties. For the past several years, studies have concentrated on the properties of one specific form of pain behavior, i.e., facial expression. This paper presents a comprehensive study on pain assessment that can model and estimate the intensity of pain a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from psychological viewpoints and a pain intensity score using the PSPI metric in pain estimation. This paper provides an in-depth analysis of the different approaches used in pain estimation and presents the observations found for each technique. It also offers a brief study of the distinguishing features of real and fake pain.
The necessity of the study therefore lies in the emerging field of painful-face assessment in clinical settings.
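The PSPI metric referenced above is, as defined by Prkachin and Solomon, a simple combination of FACS action-unit (AU) intensities: pain = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43. A minimal sketch (the example intensities are illustrative):

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """Prkachin-Solomon pain intensity from FACS action-unit intensities.

    AU intensities are coded 0-5; AU43 (eye closure) is binary (0/1),
    giving a total score range of 0-16.
    """
    return au4 + max(au6, au7) + max(au9, au10) + au43

# Example: brow lowering (AU4=3), cheek raise (AU6=2), nose wrinkle
# (AU9=1), eyes open (AU43=0).
score = pspi(au4=3, au6=2, au7=0, au9=1, au10=0, au43=0)
```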

Keywords: facial action coding system (FACS), pain, pain behavior, Prkachin and Solomon pain intensity (PSPI)

Procedia PDF Downloads 326
423 Ponticuli of Atlas Vertebra: A Study in South Coastal Region of Andhra Pradesh

Authors: Hema Lattupalli

Abstract:

Introduction: A bony bridge extends from the lateral mass of the atlas to postero medial margin of vertebral artery groove, termed as a posterior bridge of atlas or posterior ponticulus. The foramen formed by the bridge is called as arcuate foramen or retroarticulare superior. Another bony bridge sometimes extends laterally from lateral mass to posterior root of transverse foramen forming and additional groove for vertebral artery, above and behind foramen transversarium called Lateral bridge or ponticulus lateralis. When both posterior and lateral are present together it is called as Posterolateral ponticuli. Aim and Objectives: The aim of the present study is to detect the presence of such Bridge or Ponticuli called as Lateral, Posterior and Posterolateral reported by earlier investigators in atlas vertebrae. Material and Methods: The study was done on 100 Atlas vertebrae from the Department of Anatomy Narayana Medical College Nellore, and also from SVIMS Tirupati was collected over a period of 2 years. The parameters that were studied include the presence of ponticuli, complete and incomplete and right and left side ponticuli. They were observed for all these parameters and the results were documented and photographed. Results: Ponticuli were observed in 25 (25%) of atlas vertebrae. Posterior ponticuli were found in 16 (16%), Lateral in 01 (01%) and Posterolateral in 08(08%) of the atlas vertebrae. Complete ponticuli were present in 09 (09%) and incomplete ponticuli in 16 (16%) of the atlas vertebrae. Bilateral ponticuli were seen in 10 (10%) and unilateral ponticuli were seen in 15 (15%) of the atlas vertebrae. Right side ponticuli were seen in 04 (04%) and Left side ponticuli in 05 (05%) of the atlas vertebrae respectively. Interpretation and Conclusion: In the present study posterior complete ponticuli were said to be more than the lateral complete ponticuli. The presence of Bilateral Incomplete Posterior ponticuli is higher and also Atlantic ponticuli. 
The present study is to say that knowledge of normal anatomy and variations in the atlas vertebra is very much essential to the neurosurgeons giving a message that utmost care is needed to perform surgeries related to craniovertebral regions. This is additional information to the Anatomists, Neurosurgeons and Radiologist. This adds an extra page to the literature.

Keywords: atlas vertebra, ponticuli, posterior arch, arcuate foramen

Procedia PDF Downloads 355
422 Portuguese Guitar Strings Characterization and Comparison

Authors: P. Serrão, E. Costa, A. Ribeiro, V. Infante

Abstract:

The characteristic sonority of the Portuguese guitar is in great part what makes Fado so distinguishable from other traditional song styles. The Portuguese guitar is a pear-shaped plucked chordophone with six courses of double strings. This study compares the two types of plain strings available for the Portuguese guitar and used by musicians. One is stainless steel spring wire; the other is high-carbon spring steel (music wire). Some musicians mention noticeable differences in sound quality between these two string materials, such as a little more brightness and sustain in the steel strings. Experimental tests were performed to characterize string tension at pitch; mechanical strength and tuning stability, using a universal testing machine; and dimensional control and chemical composition, using a scanning electron microscope. The string dynamical behaviour characterization experiments, covering frequency response, inharmonicity, transient response and damping phenomena, were made in a monochord test set-up designed and built in-house. The damping factor was first determined for the fundamental frequency. As musicians are able to detect very small damping differences, an accurate characterization of the damping phenomena for all harmonics was necessary. For that purpose, another, improved monochord was set up and a new system identification methodology applied. Due to the complexity of this task, several adjustments were necessary before good experimental data were obtained. In a few cases, dynamical tests were repeated to detect any evolution in damping parameters after the break-in period, when, according to players' experience, a new string sounds gradually less dull until reaching its typically brilliant timbre. Finally, each set of strings was played on one guitar by a distinguished player and recorded. The recordings, which include individual notes, scales, chords and a study piece, will be analysed to characterize potential timbre variations.
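One common route from a measured amplitude decay to a damping factor, of the kind determined above for the fundamental, is the logarithmic decrement; a sketch on a synthetic exponentially decaying partial (the damping ratio of 0.002 is an assumed value, not measured string data):

```python
import math

zeta_true = 0.002   # assumed damping ratio of the partial
# Peak amplitude once per cycle: each cycle the envelope shrinks by exp(-2*pi*zeta).
peaks = [math.exp(-2 * math.pi * zeta_true * n) for n in range(50)]

# Logarithmic decrement over n cycles: delta = ln(A0 / An) / n,
# then zeta = delta / sqrt((2*pi)^2 + delta^2).
n = len(peaks) - 1
delta = math.log(peaks[0] / peaks[-1]) / n
zeta = delta / math.sqrt((2 * math.pi) ** 2 + delta ** 2)
```

In practice the peak amplitudes would come from the envelope of a band-pass-filtered monochord recording, one band per harmonic.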

Keywords: damping factor, music wire, portuguese guitar, string dynamics

Procedia PDF Downloads 536
421 Education-Based Graphical User Interface Design for Analyzing Phase Winding Inter-Turn Faults in Permanent Magnet Synchronous Motors

Authors: Emir Alaca, Hasbi Apaydin, Rohullah Rahmatullah, Necibe Fusun Oyman Serteller

Abstract:

In recent years, Permanent Magnet Synchronous Motors (PMSMs) have found extensive applications in various industrial sectors, including electric vehicles, wind turbines, and robotics, due to their high performance and low losses. Accurate mathematical modeling of PMSMs is crucial for advanced studies in electric machines. To enhance the effectiveness of graduate-level education, incorporating virtual or real experiments becomes essential to reinforce acquired knowledge. Virtual laboratories have gained popularity as cost-effective alternatives to physical testing, mitigating the risks associated with electrical machine experiments. This study presents a MATLAB-based Graphical User Interface (GUI) for PMSMs. The GUI offers a visual interface that allows users to observe variations in motor outputs corresponding to different input parameters. It enables users to explore healthy motor conditions and the effects of inter-turn short-circuit faults in one phase winding. Additionally, the interface includes menus through which users can access equivalent circuits of the motor and gain hands-on experience with the mathematical equations used in synchronous motor calculations. The primary objective of this paper is to enhance the learning experience of graduate and doctoral students by providing a GUI-based approach for laboratory studies. This interactive platform empowers students to examine and analyze motor outputs by manipulating input parameters, facilitating a deeper understanding of PMSM operation and control.
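Among the synchronous-motor equations such a GUI would expose, a central one is the dq-frame electromagnetic torque, Te = (3/2)·p·(λpm·iq + (Ld − Lq)·id·iq). A sketch in Python (the paper's tool itself is MATLAB-based, and all parameter values here are illustrative):

```python
def pmsm_torque(p, lam_pm, Ld, Lq, id_, iq):
    """Electromagnetic torque of a PMSM in the dq frame (N*m).

    p: pole pairs; lam_pm: magnet flux linkage (Wb); Ld, Lq: dq-axis
    inductances (H); id_, iq: dq-axis currents (A).
    """
    return 1.5 * p * (lam_pm * iq + (Ld - Lq) * id_ * iq)

# Surface-mounted PMSM (Ld == Lq): only the magnet term contributes.
te_spm = pmsm_torque(p=4, lam_pm=0.1, Ld=0.002, Lq=0.002, id_=0.0, iq=10.0)
# Salient machine (Ld != Lq): the reluctance term adds or subtracts.
te_salient = pmsm_torque(p=4, lam_pm=0.1, Ld=0.004, Lq=0.002, id_=-5.0, iq=10.0)
```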

Keywords: permanent magnet synchronous motor, mathematical modelling, education tools, winding inter-turn fault

Procedia PDF Downloads 38
420 Characteristics of Aerosol Properties over Different Desert-Influenced AERONET Sites

Authors: Abou Bakr Merdji, Alaa Mhawish, Xiaofeng Xu, Chunsong Lu

Abstract:

The characteristics of the optical and microphysical properties of aerosols near deserts are analyzed using 11 AErosol RObotic NETwork (AERONET) sites located in 6 major desert areas (the Sahara, Arabia, Thar, Karakum, Taklamakan, and Gobi) between 1998 and 2021. The regional means of Aerosol Optical Depth (AOD) and coarse-mode AOD (CAOD) are 0.44 (0.187), 0.38 (0.26), 0.35 (0.24), 0.23 (0.11), 0.20 (0.14), and 0.10 (0.05) in the Thar, Arabian, Sahara, Karakum, Taklamakan and Gobi Deserts, respectively, while the opposite ordering holds for the Angstrom Exponent (AE) and Fine Mode Fraction (FMF). Higher extinctions are associated with larger particles (dust) over all the main desert regions, as shown by the almost inversely proportional variations of AOD and CAOD compared with AE and FMF. Coarse particles contribute the most to the total AOD over the Sahara Desert compared to the other deserts all year round. Related to the seasonality of dust events, the maximum AOD (CAOD) generally appears in summer and spring, while the minimum occurs in winter. The mean values of absorbing AOD (AAOD), absorbing AE (AAE), and Single Scattering Albedo (SSA) for all sites ranged from 0.017 to 0.037, from 1.16 to 2.81, and from 0.844 to 0.944, respectively. Generally, the highest absorbing aerosol load is observed over the Thar, followed by the Karakum, the Sahara, the Gobi, and then the Taklamakan Deserts, while the largest absorbing particles are observed in the Sahara, followed by Arabia, Thar, Karakum, and Gobi, with the smallest over the Taklamakan Desert. Similar absorption qualities are observed over the Sahara, Arabia, Thar, and Karakum Deserts, with SSA values varying between 0.90 and 0.91, whereas the most and least absorbing particles are observed over the Taklamakan and the Gobi Deserts, respectively.
The seasonal AAODs differ distinctly across the deserts: parts of the Sahara and Arabia and the Dalanzadgad site experience the maximum in summer; the Southern Sahara, Western Arabia, Jaipur, and Dushanbe in winter; and Eastern Arabia and Muztagh Ata in autumn. AAOD and SSA spectra are consistent with the dust-dominated conditions that resulted from aerosol typing (dust and polluted dust) at most deserts, with a possible presence of absorbing particles other than dust at the Arabian, Taklamakan, and Gobi Desert sites.
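The Angstrom exponent used above to distinguish coarse dust from fine particles follows from AOD at two wavelengths: AE = −ln(τ1/τ2)/ln(λ1/λ2). A sketch with hypothetical AOD values (AERONET commonly reports AE from the 440–870 nm pair):

```python
import math

def angstrom_exponent(aod1, aod2, lam1_nm, lam2_nm):
    # AE = -ln(aod1/aod2) / ln(lam1/lam2)
    return -math.log(aod1 / aod2) / math.log(lam1_nm / lam2_nm)

# Coarse dust: AOD nearly flat with wavelength -> small AE.
ae_dust = angstrom_exponent(0.40, 0.38, 440.0, 870.0)
# Fine pollution: AOD falls quickly with wavelength -> AE above 1.
ae_fine = angstrom_exponent(0.40, 0.15, 440.0, 870.0)
```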

Keywords: sahara, AERONET, desert, dust belt, aerosols, optical properties

Procedia PDF Downloads 72
419 Microbial Phylogenetic Divergence between Surface-Water and Sedimentary Ecosystems Drove the Resistome Profiles

Authors: Okugbe Ebiotubo Ohore, Jingli Zhang, Binessi Edouard Ifon, Mathieu Nsenga Kumwimba, Xiaoying Mu, Dai Kuang, Zhen Wang, Ji-Dong Gu, Guojing Yang

Abstract:

Antibiotic pollution and the evolution of antibiotic resistance genes (ARGs) are increasingly viewed as major threats to both ecosystem security and human health and have drawn growing attention. This study investigated the fate of antibiotics in aqueous and sedimentary substrates and the impact of ecosystem shifts between water and sedimentary phases on resistome profiles. The findings indicated notable variations in the concentration and distribution patterns of antibiotics across the environmental phases. Based on the partition coefficient (Kd), the total antibiotic concentration was significantly greater in the surface water (1405.45 ng/L; 49.5%) than in the suspended particulate matter (Kd = 0.64; 892.59 ng/g; 31.4%) and sediment (Kd = 0.4; 542.64 ng/g; 19.1%). However, the relative abundance of ARGs in surface water and sediment was disproportionate to the antibiotic concentrations, and sediments were the predominant ARG reservoirs. Phylogenetic divergence of the microbial communities between the surface-water and sedimentary ecosystems potentially played an important role in driving the ARG profiles of the two distinct ecosystems. ARGs of clinical importance, including blaGES, MCR-7.1, ermB, tet(34), tet36, tetG-01, and sul2, were significantly enriched in the surface water, while blaCTX-M-01, blaTEM, blaOXA10-01, blaVIM, tet(W/N/W), tetM02, and ermX were amplified in the sediments. cfxA was an endemic ARG of surface-water ecosystems, while the endemic ARGs of the sedimentary ecosystems included aacC4, aadA9-02, blaCTX-M-04, blaIMP-01, blaIMP-02, bla-L1, penA, erm(36), ermC, ermT-01, msrA-01, pikR2, vgb-01, mexA, oprD, ttgB, and aac. These findings offer valuable information for identifying ARG-specific high-risk reservoirs.
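The partition coefficients above appear to be the ratio of the particle-bound concentration (ng/g) to the dissolved concentration (ng/L); under that assumption, the reported totals reproduce the stated Kd values:

```python
def partition_coefficient(c_solid_ng_g, c_water_ng_l):
    # Kd as the solid-phase / aqueous-phase concentration ratio.
    return c_solid_ng_g / c_water_ng_l

kd_spm = partition_coefficient(892.59, 1405.45)       # suspended particulate matter
kd_sediment = partition_coefficient(542.64, 1405.45)  # sediment
```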

Keywords: antibiotic resistance genes, microbial diversity, suspended particulate matter, sediment, surface water

Procedia PDF Downloads 13
418 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17% alongside a median area under the receiver operating characteristic curve (AUC) of 68% and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
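The feature-selection step described, recursive feature elimination with cross-validation around a linear SVM, can be sketched with scikit-learn on synthetic data standing in for the epigenetic features:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC

# Synthetic stand-in: 200 samples, 20 features, 5 of them informative.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

# Recursively drop the weakest feature (by SVM weight), keeping the
# subset that maximizes 5-fold cross-validated accuracy.
selector = RFECV(SVC(kernel="linear"), step=1, cv=5).fit(X, y)
n_selected = selector.n_features_
```

The retained columns (`selector.support_`) would then feed the six downstream classifiers.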

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 38
417 Improved Clothing Durability as a Lifespan Extension Strategy: A Framework for Measuring Clothing Durability

Authors: Kate E Morris, Mark Sumner, Mark Taylor, Amanda Joynes, Yue Guo

Abstract:

Garment durability, which encompasses physical and emotional factors, has been identified as a critical ingredient in producing clothing with increased lifespans, battling overconsumption, and subsequently tackling the catastrophic effects of climate change. The Eco-design for Sustainable Products Regulation (ESPR) and Extended Producer Responsibility (EPR) schemes have been proposed and will be implemented across Europe and the UK, and they might require brands to declare a garment's durability credentials in order to sell in those markets. There is currently no consistent method of measuring the overall durability of a garment. Measuring the physical durability of garments is difficult, and current assessment methods either lack objectivity and reliability or do not reflect the complex nature of durability for different garment categories. This study presents a novel and reproducible methodology for testing and ranking the absolute durability of 5 commercially available garment types: formal trousers, casual trousers, denim jeans, casual leggings and underwear. A total of 112 garments from 21 UK brands were assessed. Due to variations in end use, different factors were considered across the garment categories when evaluating durability. A physical testing protocol, tailored to each category, was created to dictate the test results needed to measure the absolute durability of the garments. Multiple durability factors were used to modulate the ranking, as opposed to previous studies, which reported only single factors to evaluate durability. The garments in this study were donated by the signatories of the Waste and Resources Action Programme's (WRAP) Textile 2030 initiative as part of their strategy to reduce the environmental impact of UK fashion. This methodology presents a consistent system for brands and policymakers to follow to measure and rank the physical durability of various garment types.
Furthermore, with such a methodology, the durability of garments can be measured and new standards for improving durability can be created to enhance utilisation and improve the sustainability of the clothing on the market.
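The multi-factor ranking idea can be sketched as combining normalized test scores per garment; the factor names, equal weighting, and scores below are hypothetical, not the study's protocol:

```python
# Hypothetical normalized durability scores (0 = worst, 1 = best) for
# three garments across three illustrative test factors.
results = {
    "garment_A": {"abrasion": 0.8, "seam_strength": 0.6, "colourfastness": 0.9},
    "garment_B": {"abrasion": 0.5, "seam_strength": 0.9, "colourfastness": 0.7},
    "garment_C": {"abrasion": 0.9, "seam_strength": 0.8, "colourfastness": 0.7},
}

def overall(scores):
    # Equal-weight mean of the normalized factor scores.
    return sum(scores.values()) / len(scores)

# Rank garments from most to least durable overall.
ranking = sorted(results, key=lambda g: overall(results[g]), reverse=True)
```

A real protocol would weight factors differently per garment category, since the relevant failure modes differ between, say, denim jeans and underwear.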

Keywords: circularity, durability, garment testing, ranking

Procedia PDF Downloads 12
416 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area

Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma

Abstract:

Environmental pollution by microplastics is well recognized. Microplastics have been detected in various matrices from distinct environmental compartments worldwide, including remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, sediment samples from the ocean bottom. To determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density-separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), allowing identification of the chemical composition of the microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely coastal areas, and regulation is defined from the known relevance and trends of this type of pollution. This work discusses the assessment of contamination trends in a 700 km² oceanic area, accounting for contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of the collected samples. The methodology developed consists of objectively identifying meaningful variations in microplastic contamination through Monte Carlo simulation of all uncertainty sources. This work allowed us to conclude unequivocally that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major polymer type. The comparison of contamination levels was performed at a 99% confidence level. The know-how developed is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
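The Monte Carlo comparison can be sketched as propagating each year's combined uncertainty and checking whether the 99% interval of the simulated year-to-year difference excludes zero; all numbers below are hypothetical stand-ins for the survey data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# Hypothetical mean contamination (items/kg) with combined standard
# uncertainties covering heterogeneity, sampling, and analysis.
c_2018 = rng.normal(120.0, 25.0, n)
c_2019 = rng.normal(130.0, 25.0, n)
diff = c_2019 - c_2018

# 99% interval of the simulated difference; a change is significant
# only if the interval does not contain zero.
lo, hi = np.percentile(diff, [0.5, 99.5])
significant = not (lo <= 0.0 <= hi)
```

With these stand-in values the interval straddles zero, mirroring the paper's finding of no significant change between 2018 and 2019.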

Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty

Procedia PDF Downloads 69
415 Variations in Heat and Cold Waves over Southern India

Authors: Amit G. Dhorde

Abstract:

It is now well established that global surface air temperatures have increased significantly during the period that followed the industrial revolution. One of the main predictions of climate change is that the occurrence of extreme weather events will increase in the future. In many regions of the world, high-temperature extremes have already started occurring with rising frequency. The main objective of the present study is to understand spatial and temporal changes in days with heat and cold wave conditions over southern India. The study area includes the region of India that lies to the south of the Tropic of Cancer. To fulfill this objective, daily maximum and minimum temperature data for 80 stations were collected for the period 1969-2006 from the National Data Centre of the India Meteorological Department. After assessing the homogeneity of the data, 62 stations were finally selected for the study. Heat and cold waves were classified as slight, moderate and severe based on the criteria given by the India Meteorological Department. For every year, the number of days experiencing heat and cold wave conditions was computed. These data were analyzed with linear regression to detect any existing trends. Further, the time period was divided into four decades to investigate the decadal frequency of occurrence of heat and cold waves. The results revealed that the average annual temperature over southern India shows an increasing trend, which signifies warming over this area. Further, slight cold waves during the winter season have been decreasing at the majority of stations, and moderate cold waves show a similar pattern at the majority of stations, an indication of warming winters over the region. Besides this analysis, other extreme indices were analyzed, such as extremely hot days, hot days, very cold nights and cold nights. This analysis revealed that nights are becoming warmer, and days are getting warmer over some regions too.
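The per-station trend test described above reduces to an ordinary least-squares fit of annual counts against year; a sketch on a synthetic 1969-2006 series (the underlying 0.1 days/year trend is an assumed value):

```python
import numpy as np
from scipy import stats

years = np.arange(1969, 2007)
rng = np.random.default_rng(1)
# Synthetic annual heat-wave-day counts: weak upward trend plus noise.
days = 5.0 + 0.1 * (years - 1969) + rng.normal(0, 1.0, years.size)

# Linear regression of count on year; slope sign and p-value give
# the direction and significance of the trend.
res = stats.linregress(years, days)
increasing = res.slope > 0 and res.pvalue < 0.05
```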

Keywords: heat wave, cold wave, southern India, decadal frequency

Procedia PDF Downloads 119
414 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
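The ensemble step described in the abstract, ranking candidate models on held-out data and averaging the predictions of the three best, can be sketched in a few lines. This is a minimal illustration only: the stand-in models, the probe-based validation score, and all numbers are hypothetical, not the study's actual features, data, or tuning.

```python
def rank_models(models, validate):
    """Score each model on held-out data and return them best-first."""
    return sorted(models, key=validate, reverse=True)

def ensemble_predict(models, x, top_k=3):
    """Average the predictions of the top_k best models for input x."""
    preds = [m(x) for m in models[:top_k]]
    return sum(preds) / len(preds)

# Hypothetical stand-ins for trained models (e.g. logistic regression,
# random forest, neural network), each mapping a feature value to an
# air-quality estimate.
model_a = lambda x: 0.9 * x
model_b = lambda x: 1.1 * x
model_c = lambda x: 1.0 * x
model_d = lambda x: 2.0 * x  # a weak model the ranking should drop

# Validation score: negative error against a known probe point (y = x).
validate = lambda m: -abs(m(10.0) - 10.0)

ranked = rank_models([model_a, model_b, model_c, model_d], validate)
print(ensemble_predict(ranked, 10.0))  # averages the three closest models
```

Because the individual errors partially cancel, the averaged prediction is closer to the truth than any single weak model, which is the intuition behind the framework's combined model.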

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 103
413 Assessing Future Offshore Wind Farms in the Gulf of Roses: Insights from Weather Research and Forecasting Model Version 4.2

Authors: Kurias George, Ildefonso Cuesta Romeo, Clara Salueña Pérez, Jordi Sole Olle

Abstract:

With the growing prevalence of wind energy, there is a need for modeling techniques to evaluate the impact of wind farms on meteorology and oceanography. This study presents an approach that utilizes the WRF (Weather Research and Forecasting) model with a Wind Farm Parametrization to simulate the dynamics around the Parc Tramuntana project, an offshore wind farm to be located near the Gulf of Roses off the coast of Barcelona, Catalonia. The model incorporates parameterizations for wind turbines, enabling a representation of the wind field and how it interacts with the infrastructure of the wind farm. Current results demonstrate that the model effectively captures variations in temperature, pressure, and both wind speed and direction over time, along with their resulting effects on the power output of the wind farm. These findings are crucial for optimizing turbine placement and operation, thus improving the efficiency and sustainability of the wind farm. In addition to focusing on atmospheric interactions, this study delves into the wake effects among the turbines in the farm. A range of meteorological parameters was also considered to offer a comprehensive understanding of the farm's microclimate. The model was tested under different horizontal resolutions and farm layouts to scrutinize the wind farm's effects more closely. These experimental configurations allow for a nuanced understanding of how turbine wakes interact with each other and with the broader atmospheric and oceanic conditions. This modified approach serves as a potent tool for stakeholders in renewable energy, environmental protection, and marine spatial planning, providing a range of information regarding the environmental and socio-economic impacts of offshore wind energy projects.
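The wake effects discussed above can be illustrated with the classic Jensen (Park) wake model, a far simpler tool than the WRF Wind Farm Parametrization used in the study but useful for intuition: the velocity deficit behind a turbine decays with downstream distance as the wake expands. All parameter values below (rotor diameter, thrust coefficient, decay constant) are illustrative assumptions, not values from the study.

```python
import math

def jensen_wake_speed(u_inf, x, rotor_d=150.0, ct=0.8, k=0.04):
    """Wind speed at distance x downstream of a turbine (Jensen wake model).

    u_inf   : free-stream wind speed (m/s)
    x       : downstream distance (m)
    rotor_d : rotor diameter (m), illustrative value
    ct      : thrust coefficient, illustrative value
    k       : wake decay constant (roughly 0.04 offshore)
    """
    deficit = (1.0 - math.sqrt(1.0 - ct)) * (rotor_d / (rotor_d + 2.0 * k * x)) ** 2
    return u_inf * (1.0 - deficit)

# The deficit shrinks with distance: the wake gradually recovers, which is
# why downstream turbine spacing matters for farm-level power output.
for x in (500.0, 1000.0, 2000.0):
    print(f"{x:6.0f} m downstream: {jensen_wake_speed(10.0, x):.2f} m/s")
```

Interacting wakes across a farm layout, the effect the WRF-based simulations resolve in full, are what make turbine placement an optimization problem rather than a simple spacing rule.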

Keywords: weather research and forecasting, wind turbine wake effects, environmental impact, wind farm parametrization, sustainability analysis

Procedia PDF Downloads 58
412 The Control of Wall Thickness Tolerance during Pipe Purchase Stage Based on Reliability Approach

Authors: Weichao Yu, Kai Wen, Weihe Huang, Yang Yang, Jing Gong

Abstract:

Metal-loss corrosion is a major threat to the safety and integrity of gas pipelines, as it may result in burst failures with severe consequences, including enormous economic losses and personnel casualties. Therefore, it is important to ensure the integrity and efficiency of a corroding pipeline, considering the value of the wall thickness, which plays an important role in the failure probability of the corroding pipeline. In practice, the wall thickness is controlled during the pipe purchase stage. For example, the API_SPEC_5L standard regulates the allowable tolerance of the wall thickness from the specified value during pipe purchase. The allowable wall thickness tolerance determines the wall thickness distribution characteristics, such as the mean value, standard deviation, and distribution type. Taking the uncertainties of the input variables in the burst limit-state function into account, the reliability approach, rather than the deterministic approach, is used to evaluate the failure probability. Moreover, the cost of pipe purchase is influenced by the allowable wall thickness tolerance: stricter control of the wall thickness usually corresponds to a higher pipe purchase cost. Therefore, changing the wall thickness tolerance varies both the probability of a burst failure and the cost of the pipe. This paper describes an approach to optimize the wall thickness tolerance considering both the safety and economy of corroding pipelines. In this paper, the corrosion burst limit-state function in Annex O of CSA Z662-7 is employed to evaluate the failure probability using the Monte Carlo simulation technique. By changing the allowable wall thickness tolerance, the parameters of the wall thickness distribution in the limit-state function are changed. Using the reliability approach, the corresponding variations in the burst failure probability are shown.
On the other hand, changing the wall thickness tolerance leads to a change in the pipe purchase cost. Using the variations in failure probability and pipe cost caused by changing the wall thickness tolerance specification, the optimal allowable tolerance can be obtained and used to define pipe purchase specifications.
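The core of the reliability approach, Monte Carlo sampling of a burst limit state under wall thickness uncertainty, can be sketched as follows. This uses a deliberately simplified limit state (burst pressure proportional to remaining wall thickness), NOT the Annex O model from CSA Z662 employed in the paper, and all dimensions, strengths, and distributions are hypothetical assumptions chosen for illustration.

```python
import random

def burst_failure_prob(tol, n=50_000, seed=42):
    """Monte Carlo estimate of burst probability for a corroding pipe segment.

    Simplified limit state: burst pressure p_b = 2 * (t - d) * sigma_f / D,
    where t is wall thickness and d is corrosion defect depth; failure
    occurs when p_b falls below the operating pressure p_op.

    tol : allowable wall thickness tolerance (mm); modeled here as the
          half-width of the manufacturing scatter, so sd = tol / 3.
    """
    rng = random.Random(seed)
    t_nom, sigma_f, D, p_op = 10.0, 450.0, 500.0, 12.0  # mm, MPa, mm, MPa (illustrative)
    failures = 0
    for _ in range(n):
        t = rng.gauss(t_nom, tol / 3.0)   # wall thickness scatter from the tolerance
        d = abs(rng.gauss(2.0, 0.8))      # corrosion defect depth (mm)
        p_b = 2.0 * max(t - d, 0.0) * sigma_f / D
        if p_b < p_op:
            failures += 1
    return failures / n

# A tighter tolerance means less thickness scatter and a lower failure
# probability, which is the safety side of the safety-cost trade-off.
for tol in (0.5, 1.0, 1.5):
    print(f"tolerance +/- {tol} mm: Pf ~ {burst_failure_prob(tol):.4f}")
```

Pairing each candidate tolerance's failure probability with its purchase cost then yields the trade-off curve from which the optimal allowable tolerance is selected.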

Keywords: allowable tolerance, corroding pipeline segment, operation cost, production cost, reliability approach

Procedia PDF Downloads 378