Search results for: Dykstra-Parsons permeability variations (V)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2086

466 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We have implemented a recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17% and outstanding metrics such as area under the receiver operating characteristics curve (AUC) with a median of 68%, alongside median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend future investigations to incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
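
As a rough illustration of the feature-selection and classification pipeline described in this abstract, the sketch below combines scikit-learn's recursive feature elimination with cross-validation (RFE-CV) using a linear SVM and a Gradient Boosting classifier. The data arrays, fold counts and scoring choices are placeholders, not the authors' dataset or settings.

```python
# Minimal sketch (not the authors' code) of RFE-CV feature selection with a
# linear SVM followed by Gradient Boosting evaluation; X and y are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFECV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))          # placeholder epigenetic features
y = rng.integers(0, 2, size=200)         # placeholder diabetes trait labels

# RFE-CV with a linear SVM ranks features and keeps the best-performing subset.
selector = RFECV(SVC(kernel="linear"), step=5, cv=StratifiedKFold(5), scoring="recall")
X_sel = selector.fit_transform(X, y)
print("features retained:", selector.n_features_)

# Evaluate a Gradient Boosting classifier on the selected features.
gb = GradientBoostingClassifier(random_state=0)
print("median recall:", np.median(cross_val_score(gb, X_sel, y, cv=5, scoring="recall")))
print("median AUC:   ", np.median(cross_val_score(gb, X_sel, y, cv=5, scoring="roc_auc")))
```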

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 55
465 Improved Clothing Durability as a Lifespan Extension Strategy: A Framework for Measuring Clothing Durability

Authors: Kate E Morris, Mark Sumner, Mark Taylor, Amanda Joynes, Yue Guo

Abstract:

Garment durability, which encompasses physical and emotional factors, has been identified as a critical ingredient in producing clothing with increased lifespans, battling overconsumption, and subsequently tackling the catastrophic effects of climate change. The Eco-design for Sustainable Products Regulation (ESPR) and Extended Producer Responsibility (EPR) schemes have been proposed and will be implemented across Europe and the UK, which may require brands to declare a garment’s durability credentials in order to sell in those markets. There is currently no consistent method of measuring the overall durability of a garment. Measuring the physical durability of garments is difficult, and current assessment methods lack objectivity and reliability or do not reflect the complex nature of durability for different garment categories. This study presents a novel and reproducible methodology for testing and ranking the absolute durability of five commercially available garment types: Formal Trousers, Casual Trousers, Denim Jeans, Casual Leggings and Underwear. A total of 112 garments from 21 UK brands were assessed. Due to variations in end use, different factors were considered across the garment categories when evaluating durability. A physical testing protocol, tailored to each category, was created to dictate the test results needed to measure the absolute durability of the garments. Multiple durability factors were used to modulate the ranking, as opposed to previous studies, which reported only single factors to evaluate durability. The garments in this study were donated by the signatories of the Waste Resource Action Programme’s (WRAP) Textiles 2030 initiative as part of their strategy to reduce the environmental impact of UK fashion. This methodology presents a consistent system for brands and policymakers to follow in measuring and ranking the physical durability of various garment types. Furthermore, with such a methodology, the durability of garments can be measured, and new standards for improving durability can be created to enhance utilisation and improve the sustainability of the clothing on the market.

Keywords: circularity, durability, garment testing, ranking

Procedia PDF Downloads 35
464 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area

Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma

Abstract:

The environmental pollution by microplastics is well recognized. Microplastics have already been detected in various matrices from distinct environmental compartments worldwide, some from remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, sediment samples from the ocean bottom. In order to determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), allowing the identification of the chemical composition of the microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely the coastal areas. Regulation is defined from the known relevance and trends of the pollution type. This work discusses the assessment of contamination trends of a 700 km² oceanic area affected by contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of the collected samples. The methodology developed consists of objectively identifying meaningful variations of microplastic contamination by Monte Carlo simulation of all uncertainty sources. This work allowed us to unequivocally conclude that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major type of polymer. The comparison of contamination levels was performed for a 99% confidence level. The developed know-how is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
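
A minimal sketch of the kind of Monte Carlo comparison described above is given below: contamination means for two years are perturbed by sampling and analytical uncertainty, and the years are judged different only if the 99% interval of their difference excludes zero. All numerical inputs are assumed for illustration, not the study's data.

```python
# Illustrative Monte Carlo comparison of two years' contamination levels
# at a 99% confidence level (assumed site means and analytical uncertainty).
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def simulate_year(sample_means, analytical_rsd):
    """Draw Monte Carlo realisations of the area-mean contamination (items/kg)."""
    # sampling uncertainty: resample the site means with replacement
    idx = rng.integers(0, len(sample_means), size=(N, len(sample_means)))
    means = np.asarray(sample_means)[idx].mean(axis=1)
    # analytical uncertainty: relative standard deviation of the laboratory analysis
    return means * rng.normal(1.0, analytical_rsd, size=N)

year_2018 = simulate_year([120, 95, 160, 140, 80], analytical_rsd=0.15)  # assumed values
year_2019 = simulate_year([130, 110, 150, 125, 90], analytical_rsd=0.15)

diff = year_2019 - year_2018
lo, hi = np.percentile(diff, [0.5, 99.5])          # 99% interval of the difference
print(f"99% CI of difference: [{lo:.1f}, {hi:.1f}]")
print("significant change" if (lo > 0 or hi < 0) else "no significant change at 99% level")
```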

Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty

Procedia PDF Downloads 89
463 Variations in Heat and Cold Waves over Southern India

Authors: Amit G. Dhorde

Abstract:

It is now well established that global surface air temperatures have increased significantly during the period that followed the industrial revolution. One of the main predictions of climate change is that the occurrence of extreme weather events will increase in the future. In many regions of the world, high-temperature extremes have already started occurring with rising frequency. The main objective of the present study is to understand spatial and temporal changes in days with heat and cold wave conditions over southern India. The study area includes the region of India that lies to the south of the Tropic of Cancer. To fulfill the objective, daily maximum and minimum temperature data for 80 stations were collected for the period 1969-2006 from the National Data Centre of the India Meteorological Department. After assessing the homogeneity of the data, 62 stations were finally selected for the study. Heat and cold waves were classified as slight, moderate and severe based on the criteria given by the India Meteorological Department. For every year, the number of days experiencing heat and cold wave conditions was computed. These data were analyzed with linear regression to find any existing trend. Further, the time period was divided into four decades to investigate the decadal frequency of the occurrence of heat and cold waves. The results revealed that the average annual temperature over southern India shows an increasing trend, which signifies warming over this area. Further, slight cold waves during the winter season have been decreasing at the majority of the stations. The moderate cold waves also show a similar pattern at the majority of the stations. This is an indication of warming winters over the region. Besides this analysis, other extreme indices were also analyzed, such as extremely hot days, hot days, very cold nights, cold nights, etc. This analysis revealed that nights are becoming warmer and days are getting warmer over some regions too.
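
The trend analysis described here reduces, for each station, to regressing the annual count of heat- or cold-wave days against the year. A small sketch with synthetic counts (not station data) is shown below.

```python
# Simple linear-regression trend on annual heat-wave day counts (illustrative data).
import numpy as np
from scipy.stats import linregress

years = np.arange(1969, 2007)
rng = np.random.default_rng(2)
heatwave_days = 5 + 0.1 * (years - 1969) + rng.poisson(2, size=years.size)  # illustrative counts

fit = linregress(years, heatwave_days)
print(f"trend: {fit.slope:+.2f} days/year, p-value: {fit.pvalue:.3f}")
# A significantly positive slope would indicate an increasing heat-wave frequency.
```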

Keywords: heat wave, cold wave, southern India, decadal frequency

Procedia PDF Downloads 128
462 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
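
A hedged sketch of the model-averaging step described in this abstract is shown below: three classifiers (logistic regression, random forest, neural network) are trained and their predicted probabilities averaged. The feature and label arrays are placeholders, not the study's Los Angeles dataset.

```python
# Averaging the predictions of three top-performing classifiers (illustrative data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 8))            # placeholder weather/pollutant features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder "unhealthy air quality" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=200, random_state=0),
          MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)]

# Average the class-1 probabilities of the three models and threshold at 0.5.
probs = np.mean([m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in models], axis=0)
accuracy = np.mean((probs > 0.5) == y_te)
print(f"combined-model accuracy: {accuracy:.2%}")
```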

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 127
461 Characterising Indigenous Chicken (Gallus gallus domesticus) Ecotypes of Tigray, Ethiopia: A Combined Approach Using Ecological Niche Modelling and Phenotypic Distribution Modelling

Authors: Gebreslassie Gebru, Gurja Belay, Minister Birhanie, Mulalem Zenebe, Tadelle Dessie, Adriana Vallejo-Trujillo, Olivier Hanotte

Abstract:

Livestock must adapt to changing environmental conditions, which can result in either phenotypic plasticity or irreversible phenotypic change. In this study, we combine Ecological Niche Modelling (ENM) and Phenotypic Distribution Modelling (PDM) to provide a comprehensive framework for understanding the ecological and phenotypic characteristics of indigenous chicken (Gallus gallus domesticus) ecotypes. This approach helped us to classify these ecotypes, differentiate their phenotypic traits, and identify associations between environmental variables and adaptive traits. We measured 297 adult indigenous chickens from various agro-ecologies, including 208 females and 89 males. A subset of the 22 measured traits was selected using stepwise selection, resulting in seven traits for each sex. Using ENM, we identified four agro-ecologies potentially harbouring distinct phenotypes of indigenous Tigray chickens. However, PDM classified these chickens into three phenotypical ecotypes. Chickens grouped in ecotype-1 and ecotype-3 exhibited superior adaptive traits compared to those in ecotype-2, with significant variance observed. This high variance suggests a broader range of trait expression within these ecotypes, indicating greater adaptation capacity and potentially more diverse genetic characteristics. Several environmental variables, such as soil clay content, forest cover, and mean temperature of the wettest quarter, were strongly associated with most phenotypic traits. This suggests that these environmental factors play a role in shaping the observed phenotypic variations. By integrating ENM and PDM, this study enhances our understanding of indigenous chickens' ecological and phenotypic diversity. It also provides valuable insights into their conservation and management in response to environmental changes.

Keywords: adaptive traits, agro-ecology, appendage, climate, environment, imagej, morphology, phenotypic variation

Procedia PDF Downloads 32
460 Assessing Future Offshore Wind Farms in the Gulf of Roses: Insights from Weather Research and Forecasting Model Version 4.2

Authors: Kurias George, Ildefonso Cuesta Romeo, Clara Salueña Pérez, Jordi Sole Olle

Abstract:

With the growing prevalence of wind energy, there is a need for modeling techniques to evaluate the impact of wind farms on meteorology and oceanography. This study presents an approach that utilizes the Weather Research and Forecasting (WRF) model, including a Wind Farm Parametrization, to simulate the dynamics around the Parc Tramuntana project, an offshore wind farm to be located near the Gulf of Roses off the coast of Barcelona, Catalonia. The model incorporates parameterizations for wind turbines, enabling a representation of the wind field and how it interacts with the infrastructure of the wind farm. Current results demonstrate that the model effectively captures variations in temperature, pressure, and in both wind speed and direction over time, along with their resulting effects on the power output of the wind farm. These findings are crucial for optimizing turbine placement and operation, thus improving the efficiency and sustainability of the wind farm. In addition to focusing on atmospheric interactions, this study delves into the wake effects within the turbines in the farm. A range of meteorological parameters was also considered to offer a comprehensive understanding of the farm's microclimate. The model was tested under different horizontal resolutions and farm layouts to scrutinize the wind farm's effects more closely. These experimental configurations allow for a nuanced understanding of how turbine wakes interact with each other and with the broader atmospheric and oceanic conditions. This modified approach serves as a potent tool for stakeholders in renewable energy, environmental protection, and marine spatial planning. It provides a range of information regarding the environmental and socio-economic impacts of offshore wind energy projects.

Keywords: weather research and forecasting, wind turbine wake effects, environmental impact, wind farm parametrization, sustainability analysis

Procedia PDF Downloads 72
459 Design Approach to Incorporate Unique Performance Characteristics of Special Concrete

Authors: Devendra Kumar Pandey, Debabrata Chakraborty

Abstract:

The advancement in concrete ingredients such as plasticizers, additives and fibers has enabled concrete technologists to develop many viable varieties of special concrete in recent decades. These varieties offer significant enhancements in the green as well as hardened properties of concrete. A prudent selection of the appropriate type of concrete can resolve many design and application issues in construction projects. This paper focuses on the usage of self-compacting concrete, high early strength concrete, structural lightweight concrete, fiber reinforced concrete, high performance concrete and ultra-high strength concrete in structures. The modified properties of strength at various ages, flowability, porosity, equilibrium density, flexural strength, elasticity, permeability, etc. need to be carefully studied and incorporated into the design of the structures. The paper demonstrates various mixture combinations and the concrete properties that can be leveraged. The selection of such products based on the end use of structures has been proposed in order to efficiently utilize the modified characteristics of these concrete varieties. The study involves mapping the characteristics with benefits and savings for the structure from a design perspective. Self-compacting concrete in the structure is characterized by high shuttering loads, better finish, and the feasibility of closer reinforcement spacing. The structural design procedures can be modified to specify higher formwork strength, greater height of vertical members, reduced cover and increased ductility. The transverse reinforcement can be spaced at closer intervals compared to regular structural concrete. Structural lightweight concrete allows structures to be designed for reduced dead load and increased insulation properties. Member dimensions and steel requirements can be reduced in proportion to the 25 to 35 percent reduction in dead load due to the self-weight of concrete. Steel fiber reinforced concrete can be used to design grade slabs without primary reinforcement because of its 70 to 100 percent higher tensile strength. The design procedures incorporate reductions in thickness and joint spacing. High performance concrete increases the life of structures by improving paste characteristics and durability through the incorporation of supplementary cementitious materials. Often, these concretes are also designed for slower heat generation in the initial phase of hydration. The structural designer can incorporate the slow development of strength in the design and specify a 56- or 90-day strength requirement. For designing high rise building structures, the creep and elasticity properties of such concrete also need to be considered. Lastly, certain structures require performance under loading conditions much earlier than the final maturity of concrete. High early strength concrete has been designed to cater to a variety of usages at ages as early as 8 to 12 hours. Therefore, an understanding of concrete performance specifications for special concrete is a definite door towards a superior structural design approach.

Keywords: high performance concrete, special concrete, structural design, structural lightweight concrete

Procedia PDF Downloads 305
458 The Control of Wall Thickness Tolerance during Pipe Purchase Stage Based on Reliability Approach

Authors: Weichao Yu, Kai Wen, Weihe Huang, Yang Yang, Jing Gong

Abstract:

Metal-loss corrosion is a major threat to the safety and integrity of gas pipelines, as it may result in burst failures which can cause severe consequences, including enormous economic losses as well as personnel casualties. It is therefore important to ensure the integrity and efficiency of corroding pipelines, considering the value of the wall thickness, which plays an important role in the failure probability of a corroding pipeline. In practice, the wall thickness is controlled during the pipe purchase stage. For example, the API SPEC 5L standard regulates the allowable tolerance of the wall thickness from the specified value during pipe purchase. The allowable wall thickness tolerance is used to determine the wall thickness distribution characteristics, such as the mean value, standard deviation and distribution type. Taking the uncertainties of the input variables in the burst limit-state function into account, the reliability approach rather than the deterministic approach is used to evaluate the failure probability. Moreover, the cost of pipe purchase is influenced by the allowable wall thickness tolerance: stricter control of the wall thickness usually corresponds to a higher pipe purchase cost. Therefore, changing the wall thickness tolerance will vary both the probability of a burst failure and the cost of the pipe. This paper describes an approach to optimize the wall thickness tolerance considering both the safety and economy of corroding pipelines. The corrosion burst limit-state function in Annex O of CSA Z662-7 is employed to evaluate the failure probability using the Monte Carlo simulation technique. By changing the allowable wall thickness tolerance, the parameters of the wall thickness distribution in the limit-state function will be changed. Using the reliability approach, the corresponding variations in the burst failure probability are shown. On the other hand, changing the wall thickness tolerance will lead to a change in the pipe purchase cost. Using the variation of the failure probability and pipe cost caused by changing the wall thickness tolerance specification, the optimal allowable tolerance can be obtained and used to define pipe purchase specifications.
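
To illustrate the reliability calculation described above, the sketch below evaluates a simplified burst limit-state by Monte Carlo simulation. The limit-state model, diameter, pressure, tolerance and all statistical parameters are assumed for illustration; the actual study uses the CSA Z662 Annex O formulation, which is not reproduced here.

```python
# Illustrative Monte Carlo reliability sketch (not the CSA Z662 Annex O model):
# a simplified burst limit-state g = p_burst - p_op is evaluated for random
# draws of wall thickness, defect depth and material strength. The wall
# thickness standard deviation is derived from an assumed purchase tolerance.
import numpy as np

rng = np.random.default_rng(4)
N = 1_000_000

D = 0.610                 # pipe outside diameter, m (assumed)
p_op = 8.0e6              # operating pressure, Pa (assumed)
tol = 0.125               # allowable wall-thickness tolerance, fraction of nominal (assumed)

t_nom = 0.0095                                        # nominal wall thickness, m
t = rng.normal(t_nom, tol * t_nom / 3, size=N)        # tolerance treated as ~3 sigma
d = rng.lognormal(np.log(0.25 * t_nom), 0.4, size=N)  # corrosion defect depth, m
sigma_u = rng.normal(530e6, 25e6, size=N)             # ultimate tensile strength, Pa

# Simplified burst-pressure model for a corroded section (illustrative only).
p_burst = (2.0 * sigma_u * t / D) * (1.0 - d / t)

pof = np.mean(p_burst - p_op < 0.0)   # probability of burst failure
print(f"estimated failure probability: {pof:.2e}")
# Repeating this calculation for a range of tolerances, together with the
# corresponding pipe purchase cost, gives the safety/economy trade-off described above.
```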

Keywords: allowable tolerance, corroding pipeline segment, operation cost, production cost, reliability approach

Procedia PDF Downloads 396
457 Glaucoma with Normal IOP: Is It True Normal Tension Glaucoma or Something Else?

Authors: Sushma Tejwani, Shoruba Dinakaran, Kushal Kacha, K. Bhujang Shetty

Abstract:

Introduction and aim: It is not unusual to find patients with glaucomatous damage and normal intraocular pressure, and to label a patient as having normal tension glaucoma (NTG), the majority of clinicians depend on office intraocular pressure (IOP) recordings; hence, the concern is whether we are missing the late night or early morning spikes in this group of patients. Also, ischemia to the optic nerve is one of the presumed causes of damage in these patients; however, demonstrating the same has been a challenge. The aim of this study was to evaluate IOP variations and patterns in a series of patients with open angles and glaucomatous discs or fields but normal office IOP, and in addition to identify ischemic factors in true NTG patients. Materials and methods: This was an observational cross-sectional study from a tertiary care centre. The patients that underwent full-day diurnal variation of tension (DVT) testing from January 2012 to April 2014 were studied. All patients underwent IOP measurement with Goldmann applanation tonometry every 3 hours for 24 hours along with a recording of blood pressure (BP). Further, patients with normal IOP throughout the 24-hour period were evaluated by a cardiologist with echocardiography and carotid Doppler. Results: There were 47 patients, and the maximum number of patients studied was in the age group of 50-70 years. A biphasic IOP peak was noted for almost all the patients. Out of the 47 patients, 2 were excluded from analysis as they were on treatment. 20 patients (42%) were diagnosed on DVT to have an IOP spike and were then diagnosed as open angle glaucoma, and another 25 (55%) were diagnosed to have normal tension glaucoma and were subsequently advised a carotid Doppler and a cardiologist's consult. Another interesting finding was that 9 patients had a nocturnal dip in their BP, and 3 were found to have carotid artery stenosis. Conclusion: Continuous 24-hour monitoring of the IOP and BP is a very useful, albeit mildly cumbersome, tool which provides a wealth of information in cases of glaucoma presenting with normal office pressures. It is of great value in differentiating between normal tension glaucoma patients and open angle glaucoma patients. It also helps in timely diagnosis and possible intervention through referral to a cardiologist in cases of carotid artery stenosis.

Keywords: carotid artery disease in NTG, diurnal variation of IOP, ischemia in glaucoma, normal tension glaucoma

Procedia PDF Downloads 285
456 The Potential of Potato and Maize Based Snacks as Fire Accelerants

Authors: E. Duffin, L. Brownlow

Abstract:

Arson is a crime which can present exceptional problems to forensic specialists. Its destructive nature makes evidence much harder to find, especially when it is used to cover up another crime. There is a consistent potential threat of arsonists seeking new and easier ways to set fires. Existing research in this field primarily focuses on the use of accelerants such as petrol, with less attention to other more accessible and harder to detect materials. This includes the growing speculation that potato and maize-based snacks could be used as fire accelerants. It was hypothesized that all ‘crisp-type’ snacks in foil packaging had the potential to act as accelerants and would burn readily in the various experiments. To test this hypothesis, a series of small lab-based experiments were undertaken, igniting samples of the snacks. Factors such as ingredients, shape, packaging and calorific value were all taken into consideration. The time (in seconds) spent on fire by the individual snacks was recorded. It was found that all of the snacks tested burnt for statistically similar amounts of time, with a p-value of 0.0157. This was followed by a large mock real-life scenario using burning packets of crisps and car seats to investigate the possibility of these snacks being viable tools for the arsonist. Here, three full packets of crisps were selected based on variations in burning during the lab experiments. They were each lit with a lighter to initiate burning, then placed onto a car seat to be timed and observed with video cameras. In all three cases, the fire was significant and sustained by the 200-second mark. On the basis of these data, it was concluded that potato and maize-based snacks were viable accelerants of fire. They remain an effective method of starting fires whilst being cheap, accessible, non-suspicious and non-detectable. The results supported the hypothesis that all of the tested ‘crisp-type’ snacks in foil packaging had the potential to act as accelerants and would burn readily in the various experiments. This study serves to raise awareness and provide a basis for research into and prevention of arson involving maize and potato-based snacks as fire accelerants.

Keywords: arson, crisps, fires, food

Procedia PDF Downloads 121
455 Unlocking New Room of Production in Brown Field: Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, Ras Budran Field, East Central Gulf of Suez, Egypt

Authors: Nader Mohamed

Abstract:

The Late Cretaceous deposits are well developed throughout Egypt. This is due to a transgression phase associated with the subsidence caused by the neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposits in the Gulf of Suez. The Late Cretaceous Nezzazat Group represents the Cenomanian, Turonian and Lower Senonian clastic sediments. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequence in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study emphasizes the Matulla Formation located in the eastern part of the Gulf of Suez. The three stratigraphic surface sections (Wadi Sudr, Wadi Matulla and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used for correlating the Matulla sediments of the Ras Budran field. Cutting descriptions, petrographic examination, log behaviour and biostratigraphy, together with the outcrops, are used to identify the reservoir characteristics, lithology and facies environment and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, where it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modeling is an effective technique that assists in reservoir management, as decisions concerning the development and depletion of hydrocarbon reserves depend on it, so it was essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves and to determine the most effective way of recovering as much of the petroleum as possible economically. All available data on the Matulla Formation are used to build the reservoir structure model and the lithofacies, porosity, permeability and water saturation models, which are the main parameters that describe the reservoir and provide information for an effective evaluation of the need to develop its oil potential. This study has shown the effectiveness of: 1) the integration of geological data to evaluate and subdivide the Matulla Formation into three units; 2) lithology and facies environment interpretation, which helped in defining the nature of deposition of the Matulla Formation; 3) 3D reservoir modeling technology as a tool for adequate understanding of the spatial distribution of properties and, in addition, for evaluating the unlocked new reservoir areas of the Matulla Formation which have to be drilled to investigate and exploit the un-drained oil; 4) this study led to adding a new room of production and additional reserves to the Ras Budran field.

Keywords: geology, oil and gas, geoscience, sequence stratigraphy

Procedia PDF Downloads 106
454 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses

Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh

Abstract:

Real-time non-invasive brain computer interfaces have a significant progressive role in restoring or maintaining a quality life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities, like facial expressions, speech, text, gestures, and human physiological responses, have also been discussed. Focus has been placed on exploring the ability of EEG (electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol to design an EEG-based real-time brain computer interface system for the analysis and classification of human emotions elicited by external audio/visual stimuli has been proposed. The front-end hardware includes a cost-effective and portable Emotiv EEG neuroheadset unit, a personal computer and a set of external stimulators. Primary signal analysis and processing of the real-time acquired EEG shall be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This shall be followed by the development of a MATLAB-based self-defined algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools such as an artificial neural network. The final system would result in an inexpensive, portable and more intuitive brain computer interface for real-time scenarios to control prosthetic devices by translating different brain states into operative control signals.
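
The proposed workflow (spectral feature extraction followed by neural-network classification) can be sketched in a few lines; the version below uses Python with Welch band powers and a multilayer perceptron rather than the authors' MATLAB/EEGLab toolchain, and the epoch shapes, bands and labels are placeholders.

```python
# Sketch (assumed data shapes, not the authors' code) of band-power feature
# extraction from epoched EEG followed by neural-network emotion classification.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

fs = 128                                      # headset sampling rate, Hz (assumed)
rng = np.random.default_rng(5)
epochs = rng.normal(size=(120, 14, 4 * fs))   # placeholder: 120 epochs, 14 channels, 4 s
labels = rng.integers(0, 2, size=120)         # placeholder emotion labels

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Mean spectral power per channel in each frequency band."""
    f, psd = welch(epoch, fs=fs, nperseg=fs)           # psd shape: (channels, freqs)
    return np.concatenate([psd[:, (f >= lo) & (f < hi)].mean(axis=1)
                           for lo, hi in bands.values()])

X = np.array([band_powers(e) for e in epochs])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```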

Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, emotive, emotions, interval features, spectral features, artificial neural network, control applications

Procedia PDF Downloads 317
453 Effect of Geometric Imperfections on the Vibration Response of Hexagonal Lattices

Authors: P. Caimmi, E. Bele, A. Abolfathi

Abstract:

Lattice materials are cellular structures composed of a periodic network of beams. They offer high weight-specific mechanical properties and lend themselves to numerous weight-sensitive applications. The periodic internal structure responds to external vibrations through characteristic frequency bandgaps, making these materials suitable for the reduction of noise and vibration. However, the deviation from architectural homogeneity, due to, e.g., manufacturing imperfections, has a strong influence on the mechanical properties and vibration response of these materials. In this work, we present results on the influence of geometric imperfections on the vibration response of hexagonal lattices. Three classes of geometrical variables are used: the characteristics of the architecture (relative density, ligament length/cell size ratio), imperfection type (degree of non-periodicity, cracks, hard inclusions) and defect morphology (size, distribution). Test specimens with controlled size and distribution of imperfections are manufactured through selective laser sintering. The Frequency Response Functions (FRFs) in the form of accelerance are measured, and the modal shapes are captured through a high-speed camera. The finite element method is used to provide insights on the extension of these results to semi-infinite lattices. An updating procedure is conducted to increase the reliability of numerical simulation results compared to experimental measurements. This is achieved by updating the boundary conditions and material stiffness. Variations in FRFs of periodic structures due to changes in the relative density of the constituent unit cell are analysed. The effects of geometric imperfections on the dynamic response of periodic structures are investigated. The findings can be used to open up the opportunity for tailoring these lattice materials to achieve optimal amplitude attenuations at specific frequency ranges.
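
For reference, an accelerance FRF of the kind measured here is commonly estimated from the excitation force and acceleration signals with the H1 estimator (cross-spectrum divided by the input auto-spectrum). The sketch below uses synthetic stand-in signals, not the test data.

```python
# Minimal H1 estimate of an accelerance FRF from force and acceleration signals.
import numpy as np
from scipy.signal import csd, welch

fs = 5000                                  # sampling rate, Hz (assumed)
rng = np.random.default_rng(6)
force = rng.normal(size=20 * fs)           # placeholder shaker/hammer force signal
accel = np.convolve(force, np.exp(-np.arange(200) / 50) * np.sin(0.3 * np.arange(200)),
                    mode="same") + 0.05 * rng.normal(size=force.size)  # placeholder response

f, Pxy = csd(force, accel, fs=fs, nperseg=4096)   # cross-spectral density
_, Pxx = welch(force, fs=fs, nperseg=4096)        # input auto-spectral density
H1 = Pxy / Pxx                                    # accelerance FRF, (m/s^2)/N

peak = f[np.argmax(np.abs(H1))]
print(f"dominant resonance near {peak:.1f} Hz")
```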

Keywords: lattice architectures, geometric imperfections, vibration attenuation, experimental modal analysis

Procedia PDF Downloads 122
452 Nonstationary Modeling of Extreme Precipitation in the Wei River Basin, China

Authors: Yiyuan Tao

Abstract:

Under the impact of global warming, together with the intensification of human activities, hydrological regimes may be altered, and the traditional stationarity assumption is no longer satisfied. However, most current design standards for water infrastructure are still based on the hypothesis of stationarity, which may inevitably result in severe biases. Many critical impacts of climate on ecosystems, society, and the economy are controlled by extreme events rather than mean values. Therefore, it is of great significance to identify the non-stationarity of precipitation extremes and to model the precipitation extremes in a nonstationary framework. The Wei River Basin (WRB), located in a continental monsoon climate zone in China, is selected as a case study. Six extreme precipitation indices were employed to investigate the changing patterns and stationarity of precipitation extremes in the WRB. To identify whether precipitation extremes are stationary, the Mann-Kendall trend test and the Pettitt test, which is used to examine the occurrence of abrupt changes, are adopted in this study. The extreme precipitation index series are fitted with non-stationary distributions selected from six widely used distribution functions (the Gumbel, lognormal, Weibull, gamma, generalized gamma and exponential distributions) by means of the time-varying moments model implemented in generalized additive models for location, scale and shape (GAMLSS), where the distribution parameters are defined as functions of time. The results indicate that: (1) the trends were not significant for the whole WRB, but significant positive/negative trends were still observed at some stations, abrupt changes in consecutive wet days (CWD) mainly occurred in 1985, and the assumption of stationarity is invalid for some stations; (2) for the nonstationary extreme precipitation index series with significant positive/negative trends, the GAMLSS models are able to capture the temporal variations of the indices well and perform better than the stationary model. Finally, the differences between the quantiles of the nonstationary and stationary models are analyzed, which highlights the importance of nonstationary modeling of precipitation extremes in the WRB.
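
The core idea of the time-varying moments approach can be illustrated outside GAMLSS with a simplified stand-in: a Gumbel distribution whose location parameter varies linearly in time, fitted by maximum likelihood. The annual-maximum series below is synthetic, not WRB data.

```python
# Simplified nonstationary Gumbel fit (location linear in time) by maximum likelihood.
import numpy as np
from scipy.stats import gumbel_r
from scipy.optimize import minimize

rng = np.random.default_rng(7)
years = np.arange(1960, 2020)
t = (years - years[0]) / 10.0                              # decades since start
annual_max = gumbel_r.rvs(loc=40 + 2.0 * t, scale=8, random_state=7)  # synthetic series

def neg_log_lik(params):
    mu0, mu1, log_scale = params
    return -gumbel_r.logpdf(annual_max, loc=mu0 + mu1 * t,
                            scale=np.exp(log_scale)).sum()

res = minimize(neg_log_lik, x0=[annual_max.mean(), 0.0, np.log(annual_max.std())],
               method="Nelder-Mead")
mu0, mu1, log_scale = res.x
print(f"location trend: {mu1:.2f} mm per decade, scale: {np.exp(log_scale):.2f} mm")
# Comparing the maximised likelihood with that of a stationary fit (mu1 fixed at
# zero) indicates whether the nonstationary model is warranted.
```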

Keywords: extreme precipitation, GAMLSS, non-stationary, Wei River Basin

Procedia PDF Downloads 124
451 Power Performance Improvement of 500W Vertical Axis Wind Turbine with Salient Design Parameters

Authors: Young-Tae Lee, Hee-Chang Lim

Abstract:

This paper presents the performance characteristics of a Darrieus-type vertical axis wind turbine (VAWT) with NACA airfoil blades. The performance of a Darrieus-type VAWT can be characterized by torque and power. There are various parameters affecting the performance, such as chord length, helical angle, pitch angle and rotor diameter. To estimate the optimum shape of a Darrieus-type wind turbine in accordance with various design parameters, we examined the aerodynamic characteristics and the separated flow occurring in the vicinity of the blade, the interaction between the flow and the blade, and the torque and power characteristics derived from them. For the flow analysis, flow variations were investigated based on the unsteady RANS (Reynolds-averaged Navier-Stokes) equations. A sliding mesh algorithm was employed in order to consider the rotational effect of the blade. To obtain more realistic results, we conducted experiments and numerical analyses at the same time for the three-dimensional shape. In addition, several parameters (chord length, rotor diameter, pitch angle, and helical angle) were considered to find the optimum shape design and the characteristics of the interaction with the ambient flow. Since the NACA airfoil used in this study showed significant changes in the magnitude of lift and drag depending on the angle of attack, the rotor with low drag, long chord length and short diameter shows a high power coefficient in the low tip speed ratio (TSR) range. On the contrary, in the high TSR range, drag becomes high; hence, the short-chord and long-diameter rotor produces a high power coefficient. When the pitch angle at which the airfoil points inward equals -2° and the helical angle equals 0°, the Darrieus-type VAWT generates maximum power.
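
The two quantities used to characterize the rotor, tip speed ratio (TSR) and power coefficient, follow directly from the definitions TSR = ωR/U and Cp = P/(0.5 ρ A U³). A quick numerical illustration with assumed values (not the study's rotor) is given below.

```python
# Illustrative TSR and power-coefficient calculation for a straight-bladed VAWT.
rho = 1.225          # air density, kg/m^3
U = 8.0              # free-stream wind speed, m/s (assumed)
R = 1.0              # rotor radius, m (assumed)
H = 1.2              # blade height, m (assumed)
omega = 30.0         # rotor speed, rad/s (assumed)
torque = 12.0        # rotor torque, N*m (assumed)

A = 2.0 * R * H                       # swept area of a straight-bladed VAWT
tsr = omega * R / U                   # tip speed ratio
P = torque * omega                    # mechanical power, W
Cp = P / (0.5 * rho * A * U**3)       # power coefficient

print(f"TSR = {tsr:.2f}, Cp = {Cp:.3f}")
```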

Keywords: darrieus wind turbine, VAWT, NACA airfoil, performance

Procedia PDF Downloads 373
450 Assessment of Spatial and Temporal Variations of Some Biological Water Quality Parameters in Mat River, Albania

Authors: Etleva Hamzaraj, Eva Kica, Anila Paparisto, Pranvera Lazo

Abstract:

Worldwide demographic developments of recent decades have been associated with negative environmental consequences. For this reason, there is a growing interest in assessing the state of natural ecosystems and the human impact on them. In this respect, this study aims to evaluate the change in water quality of the Mat River over a period of about ten years in order to highlight the human impact. During the one-year study period, several biological and environmental parameters were determined to evaluate river water quality, and the data collected were compared with those of a similar study from 2007. Samples were collected every month at five stations evenly distributed along the river. Total coliform bacteria, the number of heterotrophic bacteria in water, and benthic macroinvertebrates were used as biological parameters of water quality. The most probable number (MPN) index was used for the evaluation of total coliform bacteria in water, while the number of heterotrophic bacteria was determined by counting colonies on plates with Plate Count Agar, cultivated with 0.1 ml of sample after serial dilutions. Benthic macroinvertebrates were analyzed by the number of individuals per taxon, the biotic index value, the EPT richness index value and the tolerance value. Environmental parameters like pH, temperature, and electrical conductivity were measured on site. As expected, the bacterial load was higher near urban areas, and the pollution increased along the course of the river. The maximum concentration of fecal coliforms, 1100 MPN/100 ml, was recorded in summer and near the most urbanized area of the river. The data collected during this study show that, after about ten years, there is a change in the water quality of the Mat River. According to a similar study carried out in 2007, the water of the Mat River was of ‘excellent’ quality. But, according to this study, the water was classified as of ‘excellent’ quality at only one sampling site, near the river source, while at all other stations it was of ‘good’ quality. This result is based on the biological and environmental parameters measured. The human impact on the quality of the water of the Mat River is more than evident.

Keywords: water quality, coliform bacteria, MPN index, benthic macroinvertebrates, biotic index

Procedia PDF Downloads 118
449 Engineering of Reagentless Fluorescence Biosensors Based on Single-Chain Antibody Fragments

Authors: Christian Fercher, Jiaul Islam, Simon R. Corrie

Abstract:

Fluorescence-based immunodiagnostics are an emerging field in biosensor development and exhibit several advantages over traditional detection methods. While various affinity biosensors have been developed to generate a fluorescence signal upon sensing varying concentrations of analytes, reagentless, reversible, and continuous monitoring of complex biological samples remains challenging. Here, we aimed to genetically engineer biosensors based on single-chain antibody fragments (scFv) that are site-specifically labeled with environmentally sensitive fluorescent unnatural amino acids (UAA). A rational design approach resulted in quantifiable analyte-dependent changes in peak fluorescence emission wavelength and enabled antigen detection in vitro. Incorporation of a polarity indicator within the topological neighborhood of the antigen-binding interface generated a titratable wavelength blueshift with nanomolar detection limits. In order to ensure continuous analyte monitoring, scFv candidates with fast binding and dissociation kinetics were selected from a genetic library employing a high-throughput phage display and affinity screening approach. Initial rankings were further refined towards rapid dissociation kinetics using bio-layer interferometry (BLI) and surface plasmon resonance (SPR). The most promising candidates were expressed, purified to homogeneity, and tested for their potential to detect biomarkers in a continuous microfluidic-based assay. Variations of dissociation kinetics within an order of magnitude were achieved without compromising the specificity of the antibody fragments. This approach is generally applicable to numerous antibody/antigen combinations and currently awaits integration in a wide range of assay platforms for one-step protein quantification.

Keywords: antibody engineering, biosensor, phage display, unnatural amino acids

Procedia PDF Downloads 146
448 Sliding Mode Power System Stabilizer for Synchronous Generator Stability Improvement

Authors: J. Ritonja, R. Brezovnik, M. Petrun, B. Polajžer

Abstract:

Many modern synchronous generators in power systems are extremely weakly damped. The reasons are cost optimization in machine construction and the introduction of additional control equipment into power systems. Oscillations of synchronous generators and the related stability problems of power systems are harmful and can lead to failures in operation and to damage. The only useful solution to increase the damping of the unwanted oscillations is the implementation of power system stabilizers. Power system stabilizers generate an additional control signal which changes the synchronous generator field excitation voltage. Modern power system stabilizers are integrated into the static excitation systems of synchronous generators. Available commercial power system stabilizers are based on linear control theory. Due to the nonlinear dynamics of the synchronous generator, current stabilizers do not assure optimal damping of the synchronous generator’s oscillations over the entire operating range. For that reason, the use of robust power system stabilizers which are suitable for the entire operating range is reasonable. There are numerous robust techniques applicable to power system stabilizers. In this paper, the use of sliding mode control for synchronous generator stability improvement is studied. On the basis of sliding mode theory, a robust power system stabilizer was developed. The main advantages of the sliding mode controller are the simple realization of the control algorithm, robustness to parameter variations and elimination of disturbances. The advantage of the proposed sliding mode controller over a conventional linear controller was tested for damping of the synchronous generator oscillations over the entire operating range. The obtained results show improved damping over the entire operating range of the synchronous generator and an increase in power system stability. The proposed study contributes to progress in the development of advanced stabilizers, which will replace conventional linear stabilizers and improve the damping of synchronous generators.
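
A conceptual sketch of a sliding mode stabilizing signal, not the authors' design, is shown below for a single-machine swing-equation model: the sliding surface combines the rotor-angle and speed deviations, and a switching term (smoothed with tanh to limit chattering) drives the state toward the surface. All parameter values are assumed for illustration.

```python
# Sliding mode damping signal applied to a simple swing-equation model (illustrative).
import numpy as np

M, D, Pmax, Pm = 7.0, 0.8, 1.8, 0.9        # inertia, damping, max power, mech. power (p.u., assumed)
delta_eq = np.arcsin(Pm / Pmax)            # equilibrium rotor angle
c, k = 2.0, 0.6                            # sliding-surface slope and switching gain (assumed)

dt, T = 0.001, 30.0
delta, dom = delta_eq + 0.4, 0.0           # initial disturbance: +0.4 rad angle offset
log = []
for _ in range(int(T / dt)):
    e = delta - delta_eq
    s = c * e + dom                        # sliding surface on angle and speed deviations
    u = -k * np.tanh(s / 0.05)             # smoothed sliding-mode stabilizing signal
    ddom = (Pm - Pmax * np.sin(delta) - D * dom + u) / M
    delta += dom * dt                      # forward-Euler integration of the swing equation
    dom += ddom * dt
    log.append(e)

print(f"angle deviation after {T:.0f} s: {log[-1]:.4f} rad")
```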

Keywords: control theory, power system stabilizer, robust control, sliding mode control, stability, synchronous generator

Procedia PDF Downloads 224
447 Evaluation of Weather Risk Insurance for Agricultural Products Using a 3-Factor Pricing Model

Authors: O. Benabdeljelil, A. Karioun, S. Amami, R. Rouger, M. Hamidine

Abstract:

A model for preventing the risks related to climate conditions in the agricultural sector is presented. It determines the yearly optimum premium to be paid by a producer in order to reach his required turnover. The model is based on both climatic stability and the 'soft' responses of usually grown species to average climate variations at the same place and inside a safety ball which can be determined from past meteorological data. This allows the use of a linear regression expression for the dependence of the production result on the driving meteorological parameters, the main ones being daily average sunlight, rainfall and temperature. By a simple best-parameter fit from the expert table drawn up with professionals, an optimal representation of yearly production is determined from records of previous years, and the yearly payback is evaluated from the minimum yearly produced turnover. The model also requires accurate pricing of the commodity at N+1. Therefore, a pricing model is developed using three state variables, namely the spot price, the difference between the mean-term and the long-term forward price, and the long-term structure of the model. The use of historical data enables calibration of the parameters of the state variables and allows the pricing of the commodity. Application to beet sugar underlines the precision of the pricer: the accuracy between the computed result and the real-world value is 99.5%. The optimal premium is then deduced and gives the producer a useful bound for negotiating an offer by insurance companies to effectively protect his harvest. The application to beet production in the French Oise department illustrates the reliability of the present model, with as low as a 6% difference between predicted and real data. The model can be adapted to almost any agricultural field by changing the state parameters and calibrating their associated coefficients.
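
The linear-regression step described above amounts to expressing yearly production as a linear function of average sunlight, rainfall and temperature and fitting it by least squares. The sketch below uses illustrative numbers, not the Oise dataset or the paper's calibrated coefficients.

```python
# Least-squares fit of yearly production against meteorological drivers (illustrative data).
import numpy as np

# columns: daily average sunlight (h), rainfall (mm), temperature (°C)
weather = np.array([[5.1, 620., 11.2],
                    [4.8, 700., 10.8],
                    [5.5, 540., 12.1],
                    [5.0, 660., 11.0],
                    [5.3, 580., 11.7]])
production = np.array([82., 76., 90., 80., 86.])   # t/ha of beet, illustrative

X = np.column_stack([np.ones(len(weather)), weather])     # add intercept column
coef, *_ = np.linalg.lstsq(X, production, rcond=None)
print("intercept and sensitivities:", np.round(coef, 3))

forecast_weather = np.array([1.0, 5.2, 600., 11.5])        # next-year scenario (with intercept)
print("predicted production:", round(float(forecast_weather @ coef), 1), "t/ha")
```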

Keywords: agriculture, production model, optimal price, meteorological factors, 3-factor model, parameter calibration, forward price

Procedia PDF Downloads 376
446 Effects of Four Dietary Oils on Cholesterol and Fatty Acid Composition of Egg Yolk in Layers

Authors: A. F. Agboola, B. R. O. Omidiwura, A. Oyeyemi, E. A. Iyayi, A. S. Adelani

Abstract:

Dietary cholesterol has elicited the most public interest as it relates to coronary heart disease. Thus, humans have been paying more attention to health, thereby reducing the consumption of cholesterol-enriched food. The egg is considered one of the major sources of human dietary cholesterol. However, an alternative way to reduce the potential cholesterolemic effect of eggs is to modify the fatty acid composition of the yolk. The effects of palm oil (PO), soybean oil (SO), sesame seed oil (SSO) and fish oil (FO) supplementation in the diets of layers on egg yolk fatty acid composition, cholesterol, egg production and egg quality parameters were evaluated in a 42-day feeding trial. One hundred and five Isa Brown laying hens of 34 weeks of age were randomly distributed into seven groups of five replicates with three birds per replicate in a completely randomized design. Seven corn-soybean basal diets (BD) were formulated: BD + no oil (T1), BD + 1.5% PO (T2), BD + 1.5% SO (T3), BD + 1.5% SSO (T4), BD + 1.5% FO (T5), BD + 0.75% SO + 0.75% FO (T6) and BD + 0.75% SSO + 0.75% FO (T7). Five eggs were randomly sampled at day 42 from each replicate to assay the cholesterol and fatty acid profile of the egg yolk and for egg quality assessment. Results showed that there were no significant (P>0.05) differences in production performance, egg cholesterol and egg quality parameters except for yolk height, albumen height, yolk index, egg shape index, Haugh unit, and yolk colour. There were no significant (P>0.05) differences in the total cholesterol, high density lipoprotein and low density lipoprotein levels of egg yolk across the treatments. However, diet had an effect (P<0.05) on the TAG (triacylglycerol) and VLDL (very low density lipoprotein) of the egg yolk. The highest TAG (603.78 mg/dl) and VLDL (120.76 mg/dl) values were recorded in eggs of hens on T4 (1.5% sesame seed oil) and were similar to those on T3 (1.5% soybean oil), T5 (1.5% fish oil) and T6 (0.75% soybean oil + 0.75% fish oil). The results also revealed significant (P<0.05) variations in the eggs' total polyunsaturated fatty acid (PUFA) content. In conclusion, it is suggested that dietary oils could be included in layers’ diets to produce designer eggs low in cholesterol and high in PUFA, especially omega-3 fatty acids.

Keywords: dietary oils, egg cholesterol, egg fatty acid profile, egg quality parameters

Procedia PDF Downloads 308
445 Evaluation and Risk Assessment of Heavy Metals Pollution Using Edible Crabs, Based on Food Intended for Human Consumption

Authors: Nayab Kanwal, Noor Us Saher

Abstract:

The management and utilization of food resources are becoming a big issue due to rapid urbanization, wastage and the non-sustainable use of food, especially in developing countries. Therefore, the use of seafood as an alternative source is strongly promoted worldwide. Marine pollution strongly affects marine organisms, which ultimately decreases their export quality. The monitoring of contamination in marine organisms is a good indicator of environmental quality as well as seafood quality. Monitoring the accumulation of chemical elements within various tissues of organisms has become a useful tool to survey current or chronic levels of heavy metal exposure within an environment. In this perspective, this study was carried out to compare the previous and current levels (years 2012 and 2014) of heavy metals (Cd, Pb, Cr, Cu and Zn) in crabs marketed in Karachi and to estimate the toxicological risk associated with their intake. The accumulation of metals in marine organisms, both essential (Cu and Zn) and toxic (Pb, Cd and Cr), natural and anthropogenic, is an actual food safety issue. Significant (p>0.05) variations in metal concentrations were found in all crab species between the two years, with most of the metals showing higher accumulation in 2012. For the toxicological risk assessment, the estimated weekly intake (EWI), target hazard quotient (THQ) and cancer risk (CR) were also assessed; the EWI and non-cancer risk (THQ < 1) results showed that there is no serious threat associated with the consumption of shellfish species from the Karachi coast. The cancer risk assessment showed the highest risk from Cd and Pb pollution if these species are consumed in excess. We summarize key environmental health research on health effects associated with exposure to contaminated seafood. It can be concluded that, considering the Pakistan coast, these edible species may be sensitive and vulnerable to the adverse effects of environmental contaminants; more attention should be paid to Pb and Cd bioaccumulation and to the toxicological risks to seafood and consumers.
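
For orientation, the exposure indices used above reduce to simple ratios: the estimated daily intake is the tissue concentration times the consumption rate divided by body weight, the EWI is seven times that, and the THQ is the daily intake divided by an oral reference dose. The worked illustration below uses assumed concentrations, consumption rates and reference doses, not the study's measurements.

```python
# Worked EWI and THQ illustration for metals in crab tissue (all values assumed).
metal_conc = {"Cd": 0.35, "Pb": 0.60, "Cr": 0.80}   # mg/kg wet weight, illustrative
rfd = {"Cd": 0.001, "Pb": 0.004, "Cr": 0.003}       # oral reference doses, mg/kg/day (assumed)

body_weight = 60.0       # kg, average consumer (assumed)
daily_intake = 0.030     # kg of crab meat per day (assumed consumption rate)

for metal, c in metal_conc.items():
    edi = c * daily_intake / body_weight             # estimated daily intake, mg/kg bw/day
    ewi = edi * 7                                    # estimated weekly intake, mg/kg bw/week
    thq = edi / rfd[metal]                           # target hazard quotient
    flag = "potential risk" if thq >= 1 else "no non-carcinogenic risk"
    print(f"{metal}: EWI = {ewi:.4f} mg/kg bw/week, THQ = {thq:.2f} ({flag})")
```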

Keywords: cancer risk, edible crabs, heavy metals pollution, risk assessment

Procedia PDF Downloads 378
444 Advanced Electron Microscopy Study of Fission Products in a TRISO Coated Particle Neutron Irradiated to 3.6 × 10²¹ n/cm² Fast Fluence at 1040 °C

Authors: Haiming Wen, Isabella J. Van Rooyen

Abstract:

Tristructural isotropic (TRISO)-coated fuel particles are designed as nuclear fuel for high-temperature gas reactors. The TRISO coating consists of layers of carbon buffer, inner pyrolytic carbon (IPyC), SiC, and outer pyrolytic carbon. The TRISO coating, especially the SiC layer, acts as a containment system for fission products produced in the kernel. However, release of certain metallic fission products across intact TRISO coatings has been observed for decades. Despite numerous studies, the mechanisms by which fission products migrate across the coating layers remain poorly understood. In this study, scanning transmission electron microscopy (STEM), energy dispersive X-ray spectroscopy (EDS), high-resolution transmission electron microscopy (HRTEM) and electron energy loss spectroscopy (EELS) were used to examine the distribution, composition and structure of fission products in a TRISO coated particle neutron irradiated to 3.6 × 10²¹ n/cm² fast fluence at 1040 °C. Precession electron diffraction was used to investigate the character of grain boundaries where specific fission product precipitates are located. The retention fraction of ¹¹⁰ᵐAg in the investigated TRISO particle was estimated to be 0.19. A high density of nanoscale fission product precipitates was observed in the SiC layer close to the SiC-IPyC interface, most of which are rich in Pd, while Ag was not identified. Some Pd-rich precipitates contain U. Precipitates tend to have complex structures and compositions. Although a precipitate may appear to have uniform contrast in STEM, EDS indicated that there may be composition variations throughout the precipitate, and HRTEM suggested that the precipitate may have several parts differing in crystal structure or orientation. Attempts were made to measure the charge states of precipitates using EELS and to study their possible effect on precipitate transport.

Keywords: TRISO particle, fission product, nuclear fuel, electron microscopy, neutron irradiation

Procedia PDF Downloads 265
443 Relationship Between Wildfire and Plant Species in Arasbaran Forest, Iran

Authors: Zhila Hemati, Seyed Sajjad Hosseni, Sohrab Zamzami

Abstract:

In nature, forests serve a multitude of functions: they stabilize and nourish soil, store carbon, clean the air and water, and support biodiverse ecosystems. Wildfire is a natural disaster that can affect forests and ecosystems locally or globally. Iran experiences annual forest fires that affect roughly 6,000 hectares, with the Arasbaran forest being the most affected. These fires may be started by human activity in the forests or occur naturally as a result of climate change. Wildfires now pose a major natural hazard and cause significant losses of property and human life in ecosystems globally. The rise in fire activity in various Iranian regions in recent decades has raised concerns about its immediate and long-term effects. Pasture and forest fires affect the abundance, quality and health of natural ecosystems, and monitoring is the first line of defense for preventing and controlling forest fires. This study was carried out to estimate the areas affected by fires and to determine the spatial-temporal variation of these events across the vegetation regions of Arasbaran. The findings indicated that fires in Arasbaran's vegetation areas are most extensive from July through September, affecting over 130,000 hectares. A significant portion of the nation's forests caught fire in 2024, particularly in the northwest of the Arasbaran vegetation area. Conversely, January through March sees the fewest fire locations in the Arasbaran vegetation areas; the Arasbaran forest experiences its greatest number of forest fires during the hot, dry months of the year. The linear association between the burned areas and the active fire regions in the Arasbaran forest indicates a substantial relationship between fire occurrence and plant species abundance, and this link demonstrates that some of the active forest fire centers coincide with the burned regions in Arasbaran's vegetation areas.
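
A minimal, hypothetical sketch of the kind of linear-association check described above, relating monthly burned area to the number of active fire detections in the Arasbaran vegetation zone; the monthly values are illustrative placeholders, not the study's remote-sensing data.

    import numpy as np
    from scipy import stats

    # hypothetical monthly counts of active fire detections and burned area (ha)
    active_fires = np.array([2, 1, 3, 5, 9, 20, 55, 60, 48, 15, 6, 3])
    burned_ha    = np.array([50, 30, 80, 200, 600, 2500, 9000, 9800, 7600, 1200, 300, 90])

    res = stats.linregress(active_fires, burned_ha)
    print(f"slope = {res.slope:.1f} ha per detection, r = {res.rvalue:.2f}, p = {res.pvalue:.3f}")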

Keywords: wildfire, vegetation, plant species, forest

Procedia PDF Downloads 44
442 The Fabric of Culture: Deciphering the Discourse of Permitted and Prohibited Raw Materials for Clothing in Hadith Literature

Authors: Hadas Hirsch

Abstract:

Clothing serves to conceal and reveal the body, to protect it, and to make religious, political, and social statements. The material and symbolic meanings of clothing and its raw materials are evaluated in the context of their social, cultural, and religious systems. The raw materials for clothing that were common and familiar in the 7th-century Arabian Peninsula were wool, leather, cotton, and some kinds of silk. The spread of the Muslim empire and its intersections with other religions and cultures enabled the gradual introduction of new raw materials that had been unknown to Muslims or not yet accepted. The sources for this research are hadith collections that discuss in detail various kinds of textiles and their origins, together with the legal reasoning that permits or prohibits their use. The paper describes and analyzes this discussion by contextualizing it in a social, religious, and cultural reality that creates a structure of socio-religious dependency. The aim is not to identify, catalogue, and technically analyze fabrics, but to reveal their role in Muslim life as a means of creating dependency within the community and setting boundaries inside and outside it. The analysis is built upon a scale that starts with the most recommended raw materials, continues with the permitted ones and ends with the prohibited raw materials. This mapping provides insight into the ways textiles, as a cultural medium, help to shape and redefine identities and, at the same time, allow a sphere for creative expression within socio-cultural and religious limits and contexts. To sum up, hadith literature plays the main role in characterizing Muslim clothing, from garments to textiles and colors, including multiple variations and contradictory aspects. The Muslim style of clothing, and of textiles in particular, is a manifestation of the socio-religious structure of dependency that creates a differentiated Muslim identity together with a subdivision of gendered groups. Other aspects include the tension between authenticity and imitation and the jurists' pragmatic and practical attitude, which allows an individual sphere of expression within the limits of jurisprudence.

Keywords: Hadith, jurisprudence, medieval Islam, material culture

Procedia PDF Downloads 95
441 Developing Family-Based Eco-Citizenship with Social Media: A Mixed Methods Collective Case Study of Families Looking to Adopt Ecologically Responsible Actions Using Facebook

Authors: Michel T. Leger, Shawn Martin

Abstract:

Leading an ecologically responsible lifestyle represents a difficult challenge. Though research in environmental education does point to an increase in the intention to act more responsibly towards the environment, this intent does not seem to translate into concrete ecological action. This mixed methods collective case study explores the adoption of ecological actions in the family, a context of socio-ecological transformation rarely examined in the scientific literature. More specifically, it takes into account the popular use of social media to explore the potential role of social media, namely Facebook, in promoting environmental action. In other words, for families intent on adopting an ecologically friendly lifestyle, could the use of Facebook positively affect the way family members relate to the environment and bring about real change in their daily household actions? To answer this question, twenty-one families living in an urban setting were recruited and then divided into two distinct groups. The first group of families attempted to lower their household electricity bill as part of a private Facebook group, while the other group aimed to do the same without the directed use of social media. For both groups, we recorded the number of kilowatt-hours used during the project as well as the amount used during the same months of the previous year, adjusting for temperature variations. Exit interviews were also conducted with each family in order to understand the processes of eco-citizenship development in the family context. Results seem to suggest that both virtual social networks and one-on-one support can help to increase environmental awareness in participating families. Interestingly, families from the Facebook group seemed to demonstrate a higher degree of environmental engagement, and younger family members in this group were more active in the processes of collective behavioral change.
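
A minimal sketch of one way to compare year-over-year household electricity use while "adjusting for temperature variations", here by normalizing each period's consumption by its heating degree days (HDD). The abstract does not specify the authors' adjustment method, so this is an assumed approach, and all numbers below are hypothetical.

    def heating_degree_days(daily_mean_temps_c, base_c=18.0):
        """Sum of (base - T) over days colder than the base temperature."""
        return sum(max(base_c - t, 0.0) for t in daily_mean_temps_c)

    def temp_adjusted_change(kwh_prev, hdd_prev, kwh_now, hdd_now):
        """Percent change in consumption per heating degree day, previous year vs. project period."""
        prev = kwh_prev / hdd_prev
        now = kwh_now / hdd_now
        return 100.0 * (now - prev) / prev

    # e.g. 950 kWh over 620 HDD last year vs. 880 kWh over 640 HDD during the project (hypothetical)
    print(f"{temp_adjusted_change(950, 620, 880, 640):.1f}% change per HDD")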

Keywords: environmental education, family-based eco-citizenship, social media, case study

Procedia PDF Downloads 150
440 Topography Effects on Wind Turbines Wake Flow

Authors: H. Daaou Nedjari, O. Guerri, M. Saighi

Abstract:

A numerical study was conducted to optimize the positioning of wind turbines over complex terrain. A two-dimensional disc model was used to calculate the flow velocity deficit in wind farms for both flat and complex configurations. The wind turbine wake was assessed using a hybrid method that combines CFD (computational fluid dynamics) with the actuator disc model. The wind turbine rotor was represented by a thrust force coupled with the Navier-Stokes equations, which were solved by an open-source computational code (Code_Saturne V3.0, developed by EDF). The simulations were conducted under atmospheric boundary layer conditions for a two-dimensional region located in the north of Algeria at 36.74°N latitude, 02.97°E longitude. The topography elevation values were collected along a longitudinal direction extending 1 km downwind, and the wind turbine sited over this topography was simulated for different elevation variations. The main aim of this study is to determine the effect of topography on the behavior of wind farm wake flow. For this, the wake model applied in complex terrain first needs to isolate the singularity effects of topography on the vertical wind flow without the rotor disc. This step allows the existence of mixing scales and friction force zones near the ground to be determined. According to the ground relief, the wind flow was disturbed by turbulence and significant speed variations; the singularities of the velocity field were therefore carefully collected, and the thrust coefficient Ct was calculated using the specific speed. In addition, to evaluate the effect of the terrain on the wake shape, the flow field was also simulated for different rotor hub heights: the distance between the ground and the turbine hub (Hhub) was tested over flat terrain for Hhub = 1.125D, Hhub = 1.5D and Hhub = 2D (where D is the rotor diameter), with a roughness length of z0 = 0.01 m. This study demonstrates that complex topography has a significant effect on wind turbine wakes compared to flat terrain.
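
A minimal sketch of the one-dimensional actuator-disc relations behind the hybrid CFD/actuator-disc approach described above: the thrust coefficient Ct sets the axial force added to the momentum equations and determines the far-wake velocity deficit. The operating point and rotor diameter below are hypothetical placeholders, not values from the simulations.

    import numpy as np

    def axial_induction(ct):
        """Axial induction factor a from 1-D momentum theory, Ct = 4a(1 - a)."""
        return 0.5 * (1.0 - np.sqrt(1.0 - ct))

    def disc_thrust(ct, u_ref, rho=1.225, diameter=80.0):
        """Thrust force (N) applied over the rotor disc area for a reference wind speed."""
        area = np.pi * (diameter / 2.0) ** 2
        return 0.5 * rho * area * ct * u_ref ** 2

    ct, u_ref = 0.75, 8.0                      # hypothetical operating point
    a = axial_induction(ct)
    print(f"a = {a:.3f}, far-wake deficit ~ {2 * a * u_ref:.2f} m/s, "
          f"thrust = {disc_thrust(ct, u_ref) / 1e3:.1f} kN")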

Keywords: CFD, wind turbine wake, k-epsilon model, turbulence, complex topography

Procedia PDF Downloads 563
439 'Pacta Sunt Servanda': Which Form of Contract to Use in the Construction Industry

Authors: Ahmed Stifi, Sascha Gentes

Abstract:

The contract, in its simplest definition, is an agreement involving parties and a number of documents; it may be as small as a marriage contract involving two parties or as large as a contract for the construction and operation of a nuclear power plant involving companies and stakeholders and hundreds or even thousands of documents. All parties in the construction industry, not only contract experts, agree that the success of a project is linked primarily to the form of contract regulating the relationship between the project's stakeholders. It is therefore essential for the construction industry to continuously study, analyze and improve its contract forms. Different contract forms have been developed to suit the evolution of construction in terms of machinery, materials and construction processes. There are similarities in some clauses and variations in many of these forms, depending upon the type of project, the kind of client and, more importantly, the laws and regulations governing the transaction in the country where the project is carried out. This paper discusses the most important forms of construction contract, starting at the national level with the contract forms used in Germany and moving on to the international level, introducing the FIDIC contracts and their different forms as well as some newly developed contract forms, namely the integrated form of agreement, the new engineering contract and the project alliance agreement. The result of the study shows that many of the contract paragraphs are similar and that the main difference lies in the approach to the relationship between the parties: whether it is based on cooperation and mutual trust or, in some cases, on loading responsibility onto a particular party, which increases the problems and disputes that negatively affect the success of the project. Thus, the form of the contract plays an essential role in the approach to project management, which is ultimately the key factor for the success of the project. We therefore advise using a form of contract that enhances mutual trust between the project parties, supports cooperation between them, distributes responsibility and risks on an equitable basis and builds on the "win-win" principle. In addition to its conventional role, the contract should integrate all parties into one team in order to achieve the target value of the project.

Keywords: contract, FIDIC, integrated form of agreement, new engineering contract, project alliance agreement

Procedia PDF Downloads 373
438 Voice Quality in Italian-Speaking Children with Autism

Authors: Patrizia Bonaventura, Magda Di Renzo

Abstract:

This project aims to measure and assess voice quality in children with autism. Few previous studies have analyzed the voice quality of individuals with autism; abnormal voice characteristics have been reported, such as high pitch, a wide pitch range and a sing-song quality. Existing studies did not focus specifically on Italian-speaking children's voices and analyzed only a few acoustic parameters. The present study aimed to gather more data and to perform acoustic analysis of the voices of children with autism in order to identify patterns of abnormal voice features that might shed some light on the causes of the dysphonia and possibly be used to create a pediatric assessment tool for the early identification of autism. The participants were five native Italian-speaking boys with autism between the ages of 4 and 10 years (mean 6.8 ± SD 1.4). The children had a diagnosis of autism, were verbal and had no other comorbid conditions (such as Down syndrome or ADHD). The voices of the children were recorded during the production of the sustained vowels [ah] and [ih] and of sentences from the Italian version of the CAPE-V voice assessment test. The following voice parameters, representative of voice quality, were analyzed by acoustic spectrography in Praat: speaking fundamental frequency, F0 range, average intensity and dynamic range. The results showed that the pitch parameters (speaking fundamental frequency and F0 range), as well as the intensity parameters (average intensity and dynamic range), were significantly different from the corresponding normative reference thresholds. Variability among children was also found, confirming the tendency toward individual variation in these aspects of voice quality reported in previous studies. The results indicate a general pattern of abnormal voice quality characterized by high pitch and large variations in pitch and intensity. These acoustic voice characteristics found in Italian-speaking autistic children match those found in children speaking other languages, indicating that autism symptoms affecting voice quality might be independent of the children's native language.
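
A minimal sketch of extracting the four reported parameters (speaking fundamental frequency, F0 range, average intensity and dynamic range) from one recording. The abstract names Praat only; driving Praat from Python via the parselmouth interface is an assumption here, and "vowel_ah.wav" is a hypothetical file name.

    import numpy as np
    import parselmouth

    snd = parselmouth.Sound("vowel_ah.wav")

    pitch = snd.to_pitch(time_step=0.01, pitch_floor=75.0, pitch_ceiling=600.0)
    f0 = pitch.selected_array['frequency']
    f0 = f0[f0 > 0]                         # keep voiced frames only

    intensity = snd.to_intensity()
    db = intensity.values.flatten()

    print(f"SFF            : {np.median(f0):.1f} Hz")
    print(f"F0 range       : {f0.min():.1f}-{f0.max():.1f} Hz")
    print(f"mean intensity : {db.mean():.1f} dB")
    print(f"dynamic range  : {db.max() - db.min():.1f} dB")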

Keywords: autism, voice disorders, speech science, acoustic analysis of voice

Procedia PDF Downloads 71
437 Effect of Helical Flow on Separation Delay in the Aortic Arch for Different Mechanical Heart Valve Prostheses by Time-Resolved Particle Image Velocimetry

Authors: Qianhui Li, Christoph H. Bruecker

Abstract:

Atherosclerotic plaques are typically found where flow separation and variations in shear stress occur. Although helical flow patterns and flow separations have both been recorded in the aorta, their relationship has not been clearly established, especially in the presence of artificial heart valve prostheses. Therefore, an experimental study is performed to investigate the hemodynamic performance of different mechanical heart valves (MHVs), i.e. the SJM Regent bileaflet mechanical heart valve (BMHV) and the Lapeyre-Triflo FURTIVA trileaflet mechanical heart valve (TMHV), in a transparent model of the human aorta under a physiological pulsatile right-hand helical flow condition. A typical systolic flow profile is applied in the pulse duplicator to generate a physiological pulsatile flow, which then passes an axial turbine blade structure to imitate the right-hand helical flow induced in the left ventricle. High-speed particle image velocimetry (PIV) measurements are used to map the flow evolution. As a reference configuration, a circular open-orifice nozzle is initially inserted in the valve plane in place of the valve under investigation in order to understand the hemodynamic effects of the incoming helical flow structure on the flow evolution in the aortic arch. Flow field analysis of the open-orifice nozzle configuration shows that the helical flow effectively delays flow separation at the inner-radius wall of the aortic arch. The comparison of the flow evolution for the different MHVs shows that the BMHV acts like a flow straightener, reconfiguring the helical flow pattern into three parallel jets (two side-orifice jets and the central-orifice jet), while the TMHV preserves the helical flow structure and therefore prevents flow separation at the inner-radius wall of the aortic arch. The TMHV thus offers better hemodynamic performance and reduces the pressure loss.
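
A minimal sketch of the core PIV step implied above: estimating the particle-pattern displacement in one interrogation window pair by FFT-based cross-correlation, from which velocity follows as displacement times (pixel scale / inter-frame time). This is an illustrative routine under those standard assumptions, not the processing chain used in the experiments.

    import numpy as np

    def window_displacement(win_a, win_b):
        """Integer-pixel displacement of the cross-correlation peak between two windows."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
        corr = np.fft.fftshift(corr)
        peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
        cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
        return peak_x - cx, peak_y - cy     # (dx, dy) in pixels

    # synthetic check: two 32x32 windows from a random image with a known shift (dy=2, dx=3)
    rng = np.random.default_rng(1)
    frame = rng.random((64, 64))
    win_a = frame[16:48, 16:48]
    win_b = np.roll(frame, shift=(2, 3), axis=(0, 1))[16:48, 16:48]
    print(window_displacement(win_a, win_b))   # -> (3, 2)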

Keywords: flow separation, helical aortic flow, mechanical heart valve, particle image velocimetry

Procedia PDF Downloads 174