Search results for: organisational performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13022

722 The Application of Raman Spectroscopy in Olive Oil Analysis

Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli

Abstract:

Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular disease, and macular degeneration in humans. EVOO composition in terms of these constituents is generally the result of a complex combination of genetic, agronomic, and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in its biochemical composition needs to be investigated. By selecting fruits from four different cultivars grown and harvested under similar conditions, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, can discriminate the cultivars, also as a function of harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, up to 94.4% of samples were correctly classified according to cultivar and maturation stage. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that predicted the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature, and amount of precipitation can strongly influence the composition of both major and minor compounds in EVOO. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed.
In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOOs according to their regional geographical origin was obtained. Raman spectra were acquired with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Analyses of stable isotope content ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes the assessment of specific samples’ content possible and allows oils to be classified according to their geographical and varietal origin.

Keywords: authentication, chemometrics, olive oil, raman spectroscopy

Procedia PDF Downloads 332
721 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics

Authors: Maria Arechavaleta, Mark Halpin

Abstract:

In the United States, the cost of solar energy systems has declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is how much unserved (or curtailed) energy is acceptable. Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases the total cost and provides a benefit that is difficult to quantify accurately. This paper presents an approach to quantifying the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point on a curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper.
These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (typically a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and consistent with their available funds.
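
The indices described above can be computed directly from the minute-level curves. The sketch below is our own illustration, not the paper's exact procedure: it simulates a battery against one week of synthetic one-minute curves and reports a loss-of-energy probability (LOEP) and expected unserved energy (EUE). The curve shapes and the 6 kWh capacity are invented.

```python
# Illustrative LOEP/EUE calculation from one-minute consumption/production data.
import numpy as np

rng = np.random.default_rng(1)
minutes = 7 * 24 * 60
t = np.arange(minutes)

# Synthetic curves in kW: a daytime solar bell and a steady load with an evening peak.
production = np.clip(np.sin((t % 1440 - 360) / 720 * np.pi), 0, None) * 4.0
consumption = 0.6 + 0.8 * np.exp(-((t % 1440 - 1140) / 120.0) ** 2)

capacity_kwh = 6.0          # battery size under evaluation (assumed)
soc = capacity_kwh          # state of charge, start full
unserved = np.zeros(minutes)
for i in range(minutes):
    net = (production[i] - consumption[i]) / 60.0   # kWh exchanged this minute
    soc = min(capacity_kwh, soc + net)              # clamp at full charge
    if soc < 0.0:                                   # demand exceeded storage: curtail
        unserved[i] = -soc
        soc = 0.0

loep = np.mean(unserved > 0)   # fraction of minutes with curtailment
eue = unserved.sum()           # kWh not served over the week
print(f"LOEP = {loep:.3f}, EUE = {eue:.2f} kWh/week")
```

Repeating the loop for increasing capacities gives the incremental benefit per added kWh of storage, which is the decision information the paper describes.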

Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems

Procedia PDF Downloads 234
720 Argos System: Improvements and Future of the Constellation

Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard

Abstract:

Argos is the main satellite telemetry system used by the wildlife research community since its creation in 1978 for animal tracking and scientific data collection all around the world, to analyze and understand animal migrations and behavior. Marine mammal biology is one of the major disciplines that has benefited from Argos telemetry, and conversely, the marine mammal biologists’ community has contributed greatly to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payloads on NOAA 15 and NOAA 18, Argos 3 payloads on NOAA 19, SARAL, METOP A, and METOP B), is being extended in the following years with an Argos 3 payload on METOP C (launch in October 2018) and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP SG B1 in December 2022, and METOP SG B2 in 2029. Argos 4 will allow more frequency bands (600 kHz for Argos4NG instead of 110 kHz for Argos 3), a new modulation dedicated to animal (e.g., sea turtle) tracking allowing very-low-power transmitters (50 to 100 mW) with very low data rates (124 bps), and enhancements of high data rates (1200-4800 bps) and downlink performance, all contributing to an enhanced system capacity (50,000 active beacons per month instead of 20,000 today). In parallel with this ‘institutional Argos’ constellation, in the context of a miniaturization trend in the space industry aimed at reducing costs and multiplying satellites to serve more and more societal needs, the French Space Agency CNES, which designs the Argos payloads, is innovating and launching the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype with an Argos NEO instrument (30 cm x 30 cm x 20 cm) that will be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos for Next Generations (Argos4NG), is on track and will be operational in 2022.
Based on Argos 4 and benefitting from the feedback of the ANGELS project, this constellation will allow revisit times of less than 20 minutes on average between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will give an overview of the Argos system, present and future, and the new capacities coming with it. On top of that, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, allowing the recovery of Argos beacons at sea or on the ground within a 100 km radius horizon-free circle around the beacon location, and the new Argos 4 chipset called ‘Artic’, already available and tested by several manufacturers.

Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services

Procedia PDF Downloads 182
719 Efficacy of Pooled Sera in Comparison with Commercially Acquired Quality Control Sample for Internal Quality Control at the Nkwen District Hospital Laboratory

Authors: Diom Loreen Ndum, Omarine Njimanted

Abstract:

With increasing automation in clinical laboratories, the requirements for quality control materials have greatly increased in order to monitor daily performance. The constant use of commercial control material is not economically feasible in many developing countries because of non-availability or the high cost of the materials. Therefore, the preparation and use of in-house quality control serum is a very cost-effective measure with respect to laboratory needs. The objective of this study was to determine the efficacy of in-house prepared pooled sera with respect to a commercially acquired control sample for routine internal quality control at the Nkwen District Hospital Laboratory. This was an analytical study. Serum was taken from leftover serum samples of 5 healthy adult blood donors at the blood bank of Nkwen District Hospital, which had been screened negative for human immunodeficiency virus (HIV), hepatitis C virus (HCV), and hepatitis B surface antigen (HBsAg), and pooled in a sterile container. From the pooled sera, sixty aliquots of 150 µL each were prepared. Forty aliquots of 150 µL each of the commercially acquired sample were prepared after reconstitution and stored in a deep freezer at −20°C until required for analysis. The study ran from 9 June to 12 August 2022. Every day, alongside the commercial control sample, one aliquot of pooled sera was removed from the deep freezer and allowed to thaw before being analyzed for the following parameters: blood urea, serum creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), potassium, and sodium. After obtaining the first 20 values for each parameter of the pooled sera, the mean, standard deviation, and coefficient of variation were calculated, and a Levey-Jennings (L-J) chart was established. The mean and standard deviation for the commercially acquired control sample were provided by the manufacturer.
The following results were observed: pooled sera had a smaller standard deviation for creatinine, urea, and AST than the commercially acquired control samples. There was a statistically significant difference (p<0.05) between the mean values of creatinine, urea, and AST for the in-house quality control compared with the commercial control. The coefficients of variation of the parameters for both the commercial and in-house control samples were less than 30%, which is acceptable. The L-J charts revealed shifts and trends (warning signs), so troubleshooting and corrective measures were taken. In conclusion, an in-house quality control sample prepared from pooled serum can be a good control sample for routine internal quality control.
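
The workflow of establishing an L-J chart from the first 20 results can be sketched in a few lines. The baseline values below are made up for illustration (not the study's data), and the flag function implements only the simple 1-2s/1-3s limits, not the full Westgard rule set.

```python
# Hedged sketch: mean/SD/CV from the first 20 control results, then flagging
# later runs against Levey-Jennings limits.
import statistics

baseline = [88, 91, 90, 87, 92, 89, 90, 88, 91, 93,
            89, 90, 87, 92, 90, 88, 91, 89, 90, 92]   # invented values
mean = statistics.mean(baseline)
sd = statistics.pstdev(baseline)
cv = 100 * sd / mean
print(f"mean={mean:.1f}, SD={sd:.2f}, CV={cv:.1f}%")

def levey_jennings_flag(value, mean, sd):
    """Return a simple flag per the 1-2s / 1-3s style limits."""
    z = (value - mean) / sd
    if abs(z) > 3:
        return "reject (1-3s)"
    if abs(z) > 2:
        return "warning (1-2s)"
    return "in control"

print(levey_jennings_flag(94, mean, sd))
```

Plotting each day's result against the mean ± 1, 2, and 3 SD lines gives the chart on which the shifts and trends mentioned above are read off.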

Keywords: internal quality control, levey-jennings chart, pooled sera, shifts, trends, westgard rules

Procedia PDF Downloads 78
718 Ultrasonic Studies of Polyurea Elastomer Composites with Inorganic Nanoparticles

Authors: V. Samulionis, J. Banys, A. Sánchez-Ferrer

Abstract:

Inorganic nanoparticles are used in the fabrication of various composites based on polymer materials because they exhibit good homogeneity and solubility in the composite material. Multifunctional materials based on composites of a polymer containing inorganic nanotubes are expected to have a great impact on industrial applications in the future. An emerging family of such composites are polyurea elastomers with inorganic MoS2 nanotubes or MoSI nanowires. Polyurea elastomers are a new kind of material with higher performance than polyurethanes. The improvement of mechanical, chemical, and thermal properties is due to the presence of hydrogen bonds between the urea motifs, which can be erased at high temperature, softening the elastomeric network. Such materials are the combination of amorphous polymers above the glass transition and crosslinkers which keep the chains in a single macromolecule. Polyurea exhibits a phase-separated structure with rigid urea domains (hard domains) embedded in a matrix of flexible polymer chains (soft domains). The elastic properties of polyurea can be tuned over a broad range by varying the molecular weight of the components, the relative amount of hard and soft domains, and the concentration of nanoparticles. Ultrasonic methods, as non-destructive techniques, can be used for elastomer composite characterization. In this manner, we have studied the temperature dependencies of the longitudinal ultrasonic velocity and ultrasonic attenuation of these new polyurea elastomers and composites with inorganic nanoparticles. It was shown that in these polyurea elastomers a large ultrasonic attenuation peak and the corresponding velocity dispersion exist at 10 MHz frequency below room temperature, and this behaviour is related to the glass transition Tg of the soft segments in the polymer matrix.
The relaxation parameters and Tg depend on the segmental molecular weight of the polymer chains between crosslinking points, the nature of the crosslinkers in the network, and the content of MoS2 nanotubes or MoSI nanowires. An increase of ultrasonic velocity in composites modified by nanoparticles has been observed, showing the reinforcement of the elastomer. In semicrystalline polyurea elastomer matrices, above the glass transition, a first-order phase transition from the quasi-crystalline to the amorphous state has been observed. In this case, sharp ultrasonic velocity and attenuation anomalies were observed near the transition temperature TC. The ultrasonic attenuation maximum related to the glass transition was reduced in quasi-crystalline polyureas, indicating less influence of soft domains below TC. The first-order phase transition in semicrystalline polyurea elastomer samples has a large temperature hysteresis (> 10 K). The addition of inorganic MoS2 nanotubes resulted in a decrease of the first-order phase transition temperature in semicrystalline composites.

Keywords: inorganic nanotubes, polyurea elastomer composites, ultrasonic velocity, ultrasonic attenuation

Procedia PDF Downloads 301
717 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures

Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar

Abstract:

In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young’s modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic methodology to find the design of 4D-printed structures is developed, where a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of the consideration of geometric nonlinearity in the finite element modeling. The efficacy of topology optimization to design 4D-printed structures is explored by applying the methodology to a variety of design problems, both in 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of the fused deposition modeling (FDM) technology. 
The acrylonitrile butadiene styrene (ABS) polymer undergoes the glass-transition transformation, while the thermoplastic polyurethane (TPU) polymer is prestressed by means of the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating are performed to calibrate the prestress and validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs have been generated through topology optimization, and samples have been printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.
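
The two ingredients the abstract combines, a density-based (SIMP-style) material interpolation and a temperature-dependent stiffness for the glass-transition phase, can be sketched as follows. This is our own illustration under assumed parameter values (moduli, Tg, transition width), not the paper's implementation.

```python
# Minimal sketch: smooth modulus drop across Tg for the ABS-like phase,
# interpolated with a prestressed TPU-like phase via a SIMP-style density.
import numpy as np

def glassy_modulus(T, E_glassy=2400.0, E_rubbery=10.0, T_g=105.0, width=5.0):
    """Assumed sigmoid drop of Young's modulus (MPa) across the glass transition."""
    return E_rubbery + (E_glassy - E_rubbery) / (1.0 + np.exp((T - T_g) / width))

def simp_modulus(rho, T, E_prestressed=26.0, penal=3.0):
    """rho=1 -> glass-transition phase; rho=0 -> prestressed elastomer phase."""
    return rho ** penal * glassy_modulus(T) + (1.0 - rho ** penal) * E_prestressed

for T in (25.0, 130.0):
    print(T, simp_modulus(rho=1.0, T=T))
```

Below Tg the stiff phase constrains the prestressed one; above Tg its modulus collapses and the stored prestress drives the programmed deformation, which is what the finite element model evaluates per element.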

Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization

Procedia PDF Downloads 210
716 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of structures such as bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation.
The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a trained feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with an accuracy and an F1 score greater than 96% with the proposed method.
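
To make the evaluation setup concrete, the sketch below trains a classic one-class classifier on "standard condition" fundamental frequencies and scores it on normal and anomalous points with the F1 metric. It is an illustrative baseline (not OCCNN2 itself), and the frequency values are synthetic stand-ins for the Z-24 features.

```python
# Illustrative OCC baseline for frequency-based anomaly detection.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import f1_score

rng = np.random.default_rng(2)
# Four fundamental frequencies (Hz); damage shifts them downward.
normal_train = rng.normal([3.9, 5.0, 9.8, 10.3], 0.05, size=(200, 4))
normal_test = rng.normal([3.9, 5.0, 9.8, 10.3], 0.05, size=(100, 4))
damaged = rng.normal([3.7, 4.8, 9.5, 10.0], 0.05, size=(100, 4))

occ = OneClassSVM(nu=0.05, gamma="scale").fit(normal_train)   # trained on normal data only
X = np.vstack([normal_test, damaged])
y_true = np.r_[np.zeros(100), np.ones(100)]          # 1 = anomaly
y_pred = (occ.predict(X) == -1).astype(int)          # -1 = flagged outlier
f1 = f1_score(y_true, y_pred)
print(f"F1 = {f1:.2f}")
```

The paper's two-step OCCNN2 replaces this single boundary estimate with a coarse OCC boundary refined by a feedforward network, but the training-on-normal / testing-on-both protocol is the same.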

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 124
715 Evaluation of Different Cropping Systems under Organic, Inorganic and Integrated Production Systems

Authors: Sidramappa Gaddnakeri, Lokanath Malligawad

Abstract:

Research on the production technology of individual crops, commodities, or breeds alone has not brought sustainability or stability to crop production. The sustainability of a system over the years depends on the maintenance of soil health. An organic production system, which includes the use of organic manures, biofertilizers, and green manuring for nutrient supply and biopesticides for plant protection, helps sustain productivity even under adverse climatic conditions. This study was initiated to evaluate the performance of different cropping systems under organic, inorganic, and integrated production systems at the Institute of Organic Farming, University of Agricultural Sciences, Dharwad (Karnataka, India) under the ICAR Network Project on Organic Farming. The trial was conducted for four years (2013-14 to 2016-17) on a fixed site. Five cropping systems, viz., sequence cropping of cowpea-safflower, greengram-rabi sorghum, and maize-bengalgram, sole cropping of pigeonpea, and intercropping of groundnut + cotton, were evaluated under six nutrient management practices. The nutrient management practices were NM1 (100% organic farming: organic manures equivalent to 100% N (cereals/cotton) or 100% P2O5 (legumes)), NM2 (75% organic farming: organic manures equivalent to 75% N (cereals/cotton) or 100% P2O5 (legumes) + cow urine and vermiwash application), NM3 (integrated farming: 50% organic + 50% inorganic nutrients), NM4 (integrated farming: 75% organic + 25% inorganic nutrients), NM5 (100% inorganic farming: recommended dose of inorganic fertilizers), and NM6 (recommended dose of inorganic fertilizers + recommended rate of farmyard manure (FYM)).
Among the cropping systems evaluated under the different production systems, the groundnut + hybrid cotton (2:1) intercropping system was found more remunerative than the sole pigeonpea, greengram-sorghum, maize-chickpea, and cowpea-safflower systems, irrespective of the production system. Production practices involving the application of recommended rates of fertilizers + recommended rates of organic manures (farmyard manure) produced higher net monetary returns and a higher B:C ratio than the integrated production systems involving 50% organic + 50% inorganic or 75% organic + 25% inorganic nutrients and the purely organic production systems. The two organic production systems, viz., the 100% organic production system (organic manures equivalent to 100% N (cereals/cotton) or 100% P2O5 (legumes)) and the 75% organic production system (organic manures equivalent to 75% N (cereals) or 100% P2O5 (legumes) + cow urine and vermiwash application), were found to be on par. Further, the integrated production system involving the application of organic manures and inorganic fertilizers was found more beneficial than the organic production systems.

Keywords: cropping systems, production systems, cowpea, safflower, greengram, pigeonpea, groundnut, cotton

Procedia PDF Downloads 201
714 First-Trimester Screening of Preeclampsia in a Routine Care

Authors: Tamar Grdzelishvili, Zaza Sinauridze

Abstract:

Introduction: Preeclampsia is a complication of the second half of pregnancy characterized by high morbidity and multiorgan damage. Many complex pathogenic mechanisms are now implicated as responsible for this disease (1). Preeclampsia is one of the leading causes of maternal mortality worldwide. The statistics are enough to convey the seriousness of this pathology: about 100,000 women die of preeclampsia every year. It occurs in 3-14% of pregnant women (varying significantly with racial origin, ethnicity, and geographical region), in 75% of cases in a mild form and in 25% in a severe form. In severe preeclampsia-eclampsia, perinatal mortality increases 5-fold and stillbirth 9.6-fold. Considering that the only way to treat the disease is to end the pregnancy, timely diagnosis and prevention are essential. Identifying pregnant women at high risk for PE and giving prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has been proven effective, with screening performance superior to the traditional risk-factor-based approach for the prediction of PE (2). Methods: Retrospective single-center screening study. The study population consisted of women from the Tbilisi maternity hospital “Pineo medical ecosystem” who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and answering questions. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia was performed between 11 and 13 weeks of pregnancy.
The following were evaluated: anamnesis, dopplerography of the uterine artery, mean arterial blood pressure, and a biochemical parameter, pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with the Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited, and through the study, 51 women were diagnosed with preeclampsia (34.5% among pregnant women at high risk, 6.5% among pregnant women at low risk; P<0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein A is useful for predicting PE in a routine care setting. More patient studies are needed for final conclusions. The research is still ongoing.
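
The Bayes-theorem combination of a prior risk with marker results can be illustrated with a simplified odds-update sketch. This is not the FMF or Fast Screen 3.0 algorithm; the prior, the likelihood ratios, and the independence assumption are all invented for illustration.

```python
# Simplified Bayes-style risk update: prior odds times marker likelihood ratios.
def posterior_risk(prior_risk, likelihood_ratios):
    """Update prior odds by each marker's likelihood ratio (assumed independent)."""
    odds = prior_risk / (1.0 - prior_risk)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

prior = 0.02                   # hypothetical background risk from maternal factors
lrs = [3.0, 1.8, 0.9]          # hypothetical LRs, e.g. for MAP, uterine artery PI, PAPP-A
risk = posterior_risk(prior, lrs)
print(f"posterior risk = {risk:.3f}")
```

A screening program would then compare the posterior risk against a cut-off to assign women to the high- or low-risk group reported in the results.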

Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein

Procedia PDF Downloads 77
713 Co-pyrolysis of Sludge and Kaolin/Zeolite to Stabilize Heavy Metals

Authors: Qian Li, Zhaoping Zhong

Abstract:

Sewage sludge, a typical solid waste, has inevitably been produced in enormous quantities in China. Worse still, the amount of sewage sludge produced has been increasing due to rapid economic development and urbanization. Compared to conventional methods of treating sewage sludge, pyrolysis has been considered an economical and ecological technology because it can significantly reduce the sludge volume, completely kill pathogens, and produce valuable solid, gas, and liquid products. However, large-scale utilization of sludge biochar has been limited due to the considerable risk posed by heavy metals in the sludge. Heavy metals enriched in pyrolytic biochar can be divided into exchangeable, reducible, oxidizable, and residual forms. The residual form of heavy metals is the most stable and cannot be taken up by organisms. Kaolin and zeolite are environmentally friendly inorganic minerals with high surface area and heat resistance, so they exhibit enormous potential to immobilize heavy metals. In order to reduce the risk of leaching of heavy metals from the pyrolysis biochar, this study pyrolyzed sewage sludge mixed with kaolin/zeolite in a small rotary kiln. The influences of the additives and the pyrolysis temperature on the leaching concentration and morphological transformation of heavy metals in the pyrolysis biochar were investigated. The potential mechanism of heavy metal stabilization in the co-pyrolysis of sludge blended with kaolin/zeolite was explored by scanning electron microscopy, X-ray diffraction, and specific surface area and porosity analysis. The European Community Bureau of Reference sequential extraction procedure was applied to analyze the forms of heavy metals in the sludge and pyrolysis biochar. All heavy metal concentrations were determined by flame atomic absorption spectrophotometry.
Compared with the proportions of heavy metals associated with the residual (F4) fraction in pyrolytic carbon prepared without additives, those in carbon obtained by co-pyrolysis of sludge and kaolin/zeolite increased. Increasing the additive dosage improved the proportions of the stable fraction of the various heavy metals in the biochar. Kaolin exhibited a better heavy-metal-stabilizing effect than zeolite. Aluminosilicate additives with excellent adsorption performance can capture more of the heavy metals released during sludge pyrolysis. The heavy metal ions then react with the oxygen ions of the additives to form silicates and aluminates, causing the conversion of heavy metals from unstable fractions (sulfate, chloride, etc.) to stable fractions (silicate, aluminate, etc.). This study reveals that the efficiency of heavy metal stabilization depends on the formation of stable mineral compounds containing heavy metals in the pyrolysis biochar.

Keywords: co-pyrolysis, heavy metals, immobilization mechanism, sewage sludge

Procedia PDF Downloads 67
712 Improved Functions for Runoff Coefficients and Smart Design of Ditches and Biofilters for Effective Flow Detention

Authors: Thomas Larm, Anna Wahlsten

Abstract:

An international literature study has been carried out to compare commonly used methods for the dimensioning of transport systems and stormwater facilities for flow detention. The focus of the literature study regarding the calculation of design flow and detention has been the widely used Rational Method and its underlying parameters. The impact of chosen design parameters such as return time, rain intensity, runoff coefficient, and climate factor has been studied. The parameters used in the calculations have been analyzed with regard to how they can be calculated and within what limits they can be used. Data used in different countries have been specified, e.g., recommended rainfall return times, estimated runoff times, and climate factors used for different cases and time periods. The literature study concluded that the runoff coefficient is the most uncertain parameter and the one that most affects the calculated flow and required detention volume. Proposals have been developed for new runoff coefficients, including a new method with equations for calculating runoff coefficients as functions of return time (years) and rain intensity (l/s/ha), respectively. It is suggested, contrary to what many design manuals recommend, that the use of the Rational Method need not be limited to a specific catchment size. The proposed relationships between return time or rain intensity and runoff coefficients need further investigation, including quantification of uncertainties. Examples of parameters that have not yet been considered, and will be investigated further, are the influence on the runoff coefficients of different design rain durations and of the degree of water saturation of green areas.
The influence of climate effects and design rain on the dimensioning of the stormwater facilities grassed ditches and biofilters (bioretention systems) has been studied, focusing on flow detention capacity. We have investigated how the calculated runoff coefficients, the climate factor, and an increased return time affect the inflow to and dimensioning of the stormwater facilities. We have developed a smart design of ditches and biofilters that achieves both high treatment and high flow detention effects and compared it with the effect of dry and wet ponds. Earlier studies of biofilters have generally focused on the treatment of pollutants; their effect on flow volume, and how their flow detention capability can be improved, has only rarely been studied. For both the new type of stormwater ditches and the biofilters, it must be possible to simulate their performance in a model under larger design rains and a future climate, as these conditions cannot be tested in the field. The stormwater model StormTac Web has been used on case studies. The results showed that the new smart design of ditches and biofilters had a flow detention capacity similar to that of dry and wet ponds for the same facility area.
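As a minimal illustration of the dimensioning calculation discussed above, the classic Rational method computes the design flow from the runoff coefficient, rain intensity, and catchment area, with the climate factor often applied as a multiplier. The function below is a generic sketch only; the paper's proposed equations for runoff coefficients as a function of return time and rain intensity are not reproduced here, and the parameter values in the example are illustrative placeholders.

```python
def rational_method_flow(c_runoff, intensity_l_s_ha, area_ha, climate_factor=1.0):
    """Design flow (l/s) by the Rational method: Q = kf * C * i * A,
    where C is the runoff coefficient (0..1), i the design rain intensity
    (l/s/ha), A the catchment area (ha), and kf a dimensionless climate factor."""
    return climate_factor * c_runoff * intensity_l_s_ha * area_ha


# Example: C = 0.8, i = 100 l/s/ha, A = 2 ha, climate factor 1.25
design_flow = rational_method_flow(0.8, 100.0, 2.0, climate_factor=1.25)
```

Note how the climate factor scales the design flow linearly, which is why the choice of runoff coefficient, the most uncertain parameter according to the study, propagates directly into the required detention volume.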

Keywords: runoff coefficients, flow detention, smart design, biofilter, ditch

Procedia PDF Downloads 88
711 Stochastic Modelling for Mixed Mode Fatigue Delamination Growth of Wind Turbine Composite Blades

Authors: Chi Zhang, Hua-Peng Chen

Abstract:

With increasingly strained resources in the world, renewable and clean energy has been considered as an alternative to traditional sources. One practical example of harnessing wind energy is the wind turbine, which has gained more attention in recent research. Like most offshore structures, the blades, which are the most critical components of the wind turbine, will be subjected to millions of loading cycles during their service life. To operate safely in marine environments, the blades are typically made from fibre-reinforced composite materials to resist fatigue delamination and the harsh environment. The fatigue crack development of blades is uncertain because of the indeterminate mechanical properties of composites and uncertainties in the offshore environment such as wave loads, wind loads, and humidity. There are three main delamination failure modes for composite blades, and the most common failure type in practice involves mixed mode loading, typically a combination of opening (mode I) and shear (mode II). However, the fatigue crack development for mixed mode cannot be predicted as deterministic values because of various uncertainties in realistic practical situations. Therefore, selecting an effective stochastic model to evaluate the mixed mode behaviour of wind turbine blades is a critical issue. In previous studies, the gamma process has been considered an appropriate stochastic approach, as it simulates a deterioration process that proceeds in one direction, matching the realistic situation of fatigue damage in wind turbine blades. On the basis of existing studies, various Paris law equations are discussed to simulate the propagation of the fatigue crack growth. This paper develops a Paris model with stochastic deterioration modelling according to the gamma process for predicting fatigue crack performance over the design service life.
A numerical example of wind turbine composite materials is investigated to predict the mixed mode crack depth by Paris law and the probability of fatigue failure by the gamma process. Probability-of-failure curves under different situations are obtained from the stochastic deterioration model for comparison. Compared with experimental results, the gamma process can take uncertain values into consideration for mixed mode crack propagation, and the stochastic deterioration process agrees well with the realistic crack process for composite blades. Finally, according to the results predicted by the gamma stochastic model, assessment strategies for composite blades are developed to reduce total lifecycle costs and increase resistance to fatigue crack growth.
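The two ingredients of the approach described above can be sketched in a few lines: a deterministic integration of Paris' law for the crack depth, and a Monte Carlo simulation of a gamma process for the probability that the crack exceeds a critical depth. All parameter values below are illustrative placeholders, not the paper's calibrated values.

```python
import math
import random


def paris_crack_growth(a0, delta_sigma, C, m, cycles, Y=1.0, step=1000):
    """Crack depth by integrating Paris' law da/dN = C * (dK)^m in blocks of
    `step` cycles, with stress intensity range dK = Y * delta_sigma * sqrt(pi*a)."""
    a = a0
    for _ in range(0, cycles, step):
        dK = Y * delta_sigma * math.sqrt(math.pi * a)
        a += C * dK ** m * step
    return a


def gamma_process_failure_prob(a0, a_crit, shape_per_step, scale, n_steps,
                               n_sim=2000, seed=1):
    """Monte Carlo estimate of P(crack depth >= a_crit) when crack growth
    increments follow a stationary gamma process (non-negative increments,
    so deterioration is monotonic, as in fatigue damage)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_sim):
        a = a0
        for _ in range(n_steps):
            a += rng.gammavariate(shape_per_step, scale)
            if a >= a_crit:
                failures += 1
                break
    return failures / n_sim
```

Because gamma increments are non-negative, the simulated deterioration can only grow, which is exactly the property that makes the gamma process attractive for modelling fatigue damage.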

Keywords: reinforced fibre composite, wind turbine blades, fatigue delamination, mixed failure mode, stochastic process

Procedia PDF Downloads 413
710 A Study of Kinematical Parameters in Instep Kicking in Soccer

Authors: Abdolrasoul Daneshjoo

Abstract:

Introduction: Soccer is a game which draws much attention in many countries, especially in Brazil. Among the different skills in soccer, kicking plays an essential role in the success of a team. Points are gained in this game by passing the ball over the goal line, achieved by shooting during attacks or during penalty kicks. Accordingly, identifying the effective factors in instep kicking from different distances, whether shooting with maximum force and high accuracy, passing, or taking a penalty kick, may assist coaches and players in raising the quality of skill execution. Purpose: The aim of the present study was to examine several kinematical parameters in instep kicking from distances of 3 and 5 meters among male and female elite soccer players. Methods: 24 subjects with a dominant right lower limb (12 males and 12 females), drawn from Tehran elite soccer players, participated in this study, with age (22.5 ± 1.5) and (22.08 ± 1.31) years, height (179.5 ± 5.81) and (164.3 ± 4.09) cm, weight (69.66 ± 4.09) and (53.16 ± 3.51) kg, BMI (21.06 ± 0.731) and (19.67 ± 0.709), and playing history (4 ± 0.73) and (3.08 ± 0.66) years, respectively. They had at least two years of continuous playing experience in the Tehran soccer league. To record the players' kicks, a Kinemetrix motion analysis system with three cameras operating at 500 Hz was used. Five reflective markers were placed laterally on the kicking leg over anatomical landmarks (the iliac crest, greater trochanter, lateral epicondyle of the femur, lateral malleolus, and lateral aspect of the distal head of the fifth metatarsus). The instep kick was filmed from a stationary ball, with a one-step approach at an angle of 30 to 45 degrees. Three kicks were filmed, and one kick was selected for further analysis. Using Kinemetrix 3D motion analysis software, the positions of the markers were analyzed.
Descriptive statistics were used to report means and standard deviations, while analysis of variance and independent t-tests (p < 0.05) were used to compare the kinematic parameters between the two genders. Results and Discussion: Among the evaluated parameters, the knee acceleration, the thigh angular velocity, and the knee angle showed a significant relationship with the outcome of the kick. When comparing performance over 5 m between the two genders, significant differences were observed in the internal-external displacement of the toe, ankle, and hip, the velocity of the toe and ankle, the acceleration of the toe, and the angular velocity of the pelvis and thigh before the time of contact. Significant differences were also found in the internal-external displacement of the toe, ankle, knee, hip, and iliac crest, the velocity of the toe and ankle, the acceleration of the ankle, and the angular velocity of the pelvis and knee.

Keywords: biomechanics, kinematics, soccer, instep kick, male, female

Procedia PDF Downloads 415
709 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale

Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal

Abstract:

Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and given the current state of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as an evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline, and Bend Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and 2015, with 1,835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin, and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The range of EUR from each basin was loaded into the Palisade risk software, and a lognormal distribution, typical of Barnett shale wells, was fitted to the dataset. Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50, and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50, and P90.
The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (at a 10%/year discount rate) and to determine which scenarios satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million, and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008-2015. The major findings of this study were that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett shale wells were not economic at any of the finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic over different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
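The probabilistic workflow described above can be sketched as follows: draw EUR values from a fitted lognormal distribution, read off percentiles from the sorted draws, and discount monthly cash flows at 10% per year; the 60-month payback hurdle is a simple cumulative-sum test. The distribution parameters and cash flows below are illustrative placeholders, not the Barnett dataset.

```python
import random


def eur_percentiles(mu, sigma, n=1000, seed=42):
    """Draw n EUR values from a lognormal fit and return the (P90, P50, P10)
    triple. In oil and gas convention P90 is the low case (90% probability of
    exceedance) and P10 the high case, so P90 < P50 < P10."""
    rng = random.Random(seed)
    draws = sorted(rng.lognormvariate(mu, sigma) for _ in range(n))
    return draws[int(0.10 * n)], draws[int(0.50 * n)], draws[int(0.90 * n)]


def npv(monthly_cashflows, annual_rate=0.10):
    """Net present value of monthly cash flows at an annual discount rate."""
    r = (1 + annual_rate) ** (1 / 12) - 1  # equivalent monthly rate
    return sum(cf / (1 + r) ** t for t, cf in enumerate(monthly_cashflows))


def payback_month(monthly_cashflows):
    """First month at which cumulative (undiscounted) cash flow turns
    non-negative, or None if it never does; compared against the 60-month hurdle."""
    total = 0.0
    for t, cf in enumerate(monthly_cashflows):
        total += cf
        if total >= 0:
            return t
    return None
```

A scenario passes the paper's investment hurdle if its rate of return exceeds 20% and `payback_month` returns 60 or less.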

Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery

Procedia PDF Downloads 302
708 Teacher Professional Development in Saudi Arabia through the Implementation of Universal Design for Learning

Authors: Majed A. Alsalem

Abstract:

Universal Design for Learning (UDL) is a common theme in education across the US and an influential model and framework that enables students in general, and particularly students who are deaf and hard of hearing (DHH), to access the general education curriculum. UDL helps teachers determine how information will be presented to students and how to keep students engaged. Moreover, UDL helps students to express their understanding and knowledge to others. UDL relies on technology to promote students' interaction with content and their communication of knowledge. This study included 120 DHH students who received daily instruction based on UDL principles. This study presents the results of the study and discusses its implications for the integration of UDL in day-to-day practice as well as in the country's education policy. UDL is a Western concept that began and grew in the US, and it has only recently begun to transfer to other countries such as Saudi Arabia. It will be very important for researchers, practitioners, and educators to see how UDL is being implemented in a new place with a different culture. UDL is a framework built to provide multiple means of engagement, representation, and action and expression that should be part of curricula and lessons for all students. The purpose of this study is to investigate the variables associated with the implementation of UDL in Saudi Arabian schools and to identify the barriers that could prevent its implementation. Therefore, this study used a mixed methods design employing both quantitative and qualitative methods. More insights are gained by including both quantitative and qualitative methods than by using a single method; by combining methods with different concepts and approaches, the database is enriched. This study collects data in two stages in order to ensure that the data come from multiple sources, mitigating validity threats and establishing trustworthiness in the findings.
The rationale and significance of this study is that it is the first known research targeting UDL in Saudi Arabia. Furthermore, it deals with UDL in depth to set the path for further studies in the Middle East. In terms of content, this study considers teachers' knowledge, skills, and concerns regarding implementation. It deals with effective instructional designs that have not been presented in any conferences, workshops, or teacher preparation and professional development programs in Saudi Arabia. Specifically, Saudi Arabian schools are challenged to design inclusive schools and practices as well as to support all students' academic skills development. The total number of participants in stage one was 336 teachers of DHH students. The results of the intervention indicated significant differences in teachers' understanding and level of concern before and after taking the training sessions. Teachers indicated interest in knowing more about UDL and adopting it into their practices; they reported that UDL has benefits that will enhance their performance in supporting student learning.

Keywords: deaf and hard of hearing, professional development, Saudi Arabia, universal design for learning

Procedia PDF Downloads 432
707 Effect of Plant Growth Regulators on in vitro Biosynthesis of Antioxidative Compounds in Callus Culture and Regenerated Plantlets Derived from Taraxacum officinale

Authors: Neha Sahu, Awantika Singh, Brijesh Kumar, K. R. Arya

Abstract:

Taraxacum officinale Weber, or dandelion (Asteraceae), is an important Indian traditional herb used for liver detoxification and to treat digestive problems and spleen, hepatic, and kidney disorders. The plant is well known to possess important phenolics and flavonoids and to serve as a potential source of antioxidative and chemoprotective agents. Biosynthesis of bioactive compounds through in vitro cultures is a requisite for natural resource conservation and provides an alternative source for pharmaceutical applications. Thus, an efficient and reproducible protocol was developed for in vitro biosynthesis of bioactive antioxidative compounds from leaf-derived callus and in vitro regenerated cultures of Taraxacum officinale using MS media fortified with various combinations of auxins and cytokinins. MS media containing 0.25 mg/l 2,4-D (2,4-dichlorophenoxyacetic acid) with 0.05 mg/l 2-iP [N6-(2-isopentenyl)adenine] was found to be an effective combination for the establishment of callus, with a 92% callus induction frequency. Moreover, 2.5 mg/l NAA (α-naphthalene acetic acid) with 0.5 mg/l BAP (6-benzylaminopurine), and 1.5 mg/l NAA alone, showed the optimal responses for in vitro plant regeneration (80% regeneration frequency) and rooting, respectively. In vitro regenerated plantlets were further transferred to soil and acclimatized. The quantitative variability of accumulated bioactive compounds in the cultures (in vitro callus, plantlets, and acclimatized plants) was determined through UPLC-MS/MS (ultra-performance liquid chromatography-triple quadrupole-linear ion trap mass spectrometry) and compared with wild plants. The phytochemical determination of in vitro and wild-grown samples showed the accumulation of six compounds. In in vitro callus cultures and regenerated plantlets, two major antioxidative compounds, i.e., chlorogenic acid (14950.0 µg/g and 4086.67 µg/g) and umbelliferone (10400.00 µg/g and 2541.67 µg/g), were found, respectively.
Scopoletin was found to be highest in in vitro regenerated plants (83.11 µg/g) as compared to wild plants (52.75 µg/g). Notably, scopoletin was not detected in callus or acclimatized plants, but quinic acid (6433.33 µg/g) and protocatechuic acid (92.33 µg/g) accumulated at the highest levels in acclimatized plants as compared to the other samples. Wild-grown plants contained the highest content (948.33 µg/g) of the flavonoid glycoside luteolin-7-O-glucoside. Our data suggest that in vitro callus and regenerated plants biosynthesized higher contents of antioxidative compounds under controlled conditions than wild-grown plants. These standardized culture conditions may be explored as a sustainable source of plant material for enhanced production and an adequate supply of antioxidative polyphenols.

Keywords: anti-oxidative compounds, in vitro cultures, Taraxacum officinale, UPLC-MS/MS

Procedia PDF Downloads 203
706 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. This paper discusses a comprehensive development framework, comprehending the SSD end to end from design to assembly, in-line inspection, and in-line testing, that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability margin investigation focusing on assembly process attributes, process equipment control, and in-process metrology, while also comprehending the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up a reliability prediction model. Next, for the design validation process, the reliability prediction, specifically a solder joint simulation, is established. The SSDs are stratified into non-operating and operating tests focusing on solder joint reliability and connectivity/component latent failures, with prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis.
The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, it is subjected to the monitor phase, whereby the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development enables on-time sample qualification delivery to the customer, optimizes product development validation and the effective use of development resources, and avoids forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows focus on increasing the product margin, which increases customer confidence in product reliability.

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 174
705 Investigation of Mass Transfer for RPB Distillation at High Pressure

Authors: Amiza Surmi, Azmi Shariff, Sow Mun Serene Lock

Abstract:

In recent decades, there has been significant emphasis on the pivotal role of Rotating Packed Beds (RPBs) in absorption processes, encompassing the removal of Volatile Organic Compounds (VOCs) from groundwater, deaeration, CO2 absorption, desulfurization, and similar critical applications. The primary focus is elevating mass transfer rates, enhancing separation efficiency, curbing power consumption, and mitigating pressure drops. Additionally, substantial efforts have been invested in exploring the adaptation of RPB technology for offshore deployment. This comprehensive study delves into the intricacies of nitrogen removal under low-temperature and high-pressure conditions, employing the high-gravity principle via an innovative RPB distillation concept, with a specific emphasis on optimizing mass transfer. To the authors' knowledge, no cryogenic experimental testing to remove nitrogen via RPB has previously been conducted. The research identifies pivotal process control factors through meticulous experimental testing, with pressure, reflux ratio, and reboil ratio emerging as critical determinants in achieving the desired separation performance. The results are remarkable, with nitrogen reduced to less than one mole% in the Liquefied Natural Gas (LNG) product and less than three mole% methane in the nitrogen-rich gas stream. The study further unveils the mass transfer coefficient, revealing a noteworthy trend of decreasing Number of Transfer Units (NTU) and Area of Transfer Units (ATU) as the rotational speed escalates. Notably, the condenser and reboiler impose varying demands based on the operating pressure, with operation at the lower pressure of 12 bar requiring a more substantial duty than operation of the RPB at 15 bar. In pursuit of optimal energy efficiency, a meticulous sensitivity analysis is conducted, pinpointing the ideal combination of pressure and rotating speed that minimizes overall energy consumption.
These findings underscore the efficiency of the RPB distillation approach in effecting efficient separation, even when operating under the challenging conditions of low temperature and high pressure. This achievement is attributed to a rigorous process control framework that diligently manages the operational pressure and temperature profile of the RPB. Nonetheless, the study's conclusions point towards the need for further research to address potential scaling challenges and associated risks, paving the way for the industrial implementation of this transformative technology.
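For context, the number of transfer units quoted above is conventionally defined as an integral of the reciprocal driving force over the composition change. The sketch below evaluates that integral numerically under the simplifying assumption that the equilibrium composition can be written as a function of the bulk composition; a real binary N2/CH4 column would use the full operating and equilibrium lines.

```python
def ntu(y_in, y_out, y_eq, n=1000):
    """Number of transfer units: NTU = integral from y_out to y_in of
    dy / (y - y*(y)), evaluated with the trapezoid rule. `y_eq` maps a
    bulk vapour composition y to the equilibrium composition y* (an
    assumed simplification for illustration)."""
    ys = [y_out + (y_in - y_out) * i / n for i in range(n + 1)]
    f = [1.0 / (y - y_eq(y)) for y in ys]
    h = (y_in - y_out) / n
    return h * (f[0] / 2 + sum(f[1:-1]) + f[-1] / 2)
```

With a constant driving force of 0.1 mole fraction, e.g. `ntu(0.5, 0.2, lambda y: y - 0.1)`, the integral reduces to (0.5 - 0.2) / 0.1 = 3 transfer units; the ATU reported above then follows by dividing the available transfer area by the NTU, which is why both fall as rotation intensifies mass transfer.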

Keywords: mass transfer coefficient, nitrogen removal, liquefaction, rotating packed bed

Procedia PDF Downloads 54
704 Neuroprotection against N-Methyl-D-Aspartate-Induced Optic Nerve and Retinal Degeneration Changes by Philanthotoxin-343 to Alleviate Visual Impairments Involve Reduced Nitrosative Stress

Authors: Izuddin Fahmy Abu, Mohamad Haiqal Nizar Mohamad, Muhammad Fattah Fazel, Renu Agarwal, Igor Iezhitsa, Nor Salmah Bakar, Henrik Franzyk, Ian Mellor

Abstract:

Glaucoma is the leading global cause of irreversible blindness. Currently, the available treatment strategy only involves lowering intraocular pressure (IOP); however, the condition often progresses despite lowered or normal IOP in some patients. N-methyl-D-aspartate receptor (NMDAR) excitotoxicity often occurs in neurodegeneration-related glaucoma; thus, it is a relevant target for developing a therapy based on a neuroprotection approach. This study investigated the effects of Philanthotoxin-343 (PhTX-343), an NMDAR antagonist, on neuroprotection in NMDA-induced glaucoma to alleviate visual impairments. Male Sprague-Dawley rats were divided equally: groups 1 (control) and 2 (glaucoma) were intravitreally injected with phosphate-buffered saline (PBS) and NMDA (160 nM), respectively, while group 3 was pre-treated with PhTX-343 (160 nM) 24 hours prior to NMDA injection. Seven days post-treatment, rats were subjected to visual behaviour assessments and subsequently euthanized to harvest their retina and optic nerve tissues for histological analysis and determination of nitrosative stress levels using a 3-nitrotyrosine ELISA. Visual behaviour assessments via open field, object, and colour recognition tests demonstrated poor visual performance in glaucoma rats, indicated by high exploratory behaviour. PhTX-343 pre-treatment appeared to preserve visual abilities, as all test results were significantly improved (p < 0.05). H&E staining of the retina showed a marked reduction of ganglion cell layer thickness in the glaucoma group; in contrast, PhTX-343 significantly increased it 1.28-fold (p < 0.05). PhTX-343 also increased the number of cell nuclei/100μm2 within the inner retina 1.82-fold compared to the glaucoma group (p < 0.05). Toluidine blue staining of optic nerve tissues showed that PhTX-343 reduced the degenerative changes compared to the glaucoma group, which exhibited vacuolation over all sections.
PhTX-343 also decreased the retinal 3-nitrotyrosine concentration 1.74-fold compared to the glaucoma group (p < 0.05). All results in the PhTX-343 group were comparable to control (p > 0.05). We conclude that PhTX-343 protects against NMDA-induced changes and visual impairments in the rat model by reducing nitrosative stress levels.

Keywords: excitotoxicity, glaucoma, nitrosative stress, NMDA receptor, N-methyl-D-aspartate, philanthotoxin, visual behaviour

Procedia PDF Downloads 137
703 Electric Vehicle Fleet Operators in the Energy Market: Feasibility and Effects on the Electricity Grid

Authors: Benjamin Blat Belmonte, Stephan Rinderknecht

Abstract:

The transition to electric vehicles (EVs) stands at the forefront of innovative strategies designed to address environmental concerns and reduce fossil fuel dependency. As the number of EVs on the roads increases, so too does the potential for their integration into energy markets. This research dives deep into the transformative possibilities of using electric vehicle fleets, specifically electric bus fleets, not just as consumers but as active participants in the energy market. This paper investigates the feasibility and grid effects of electric vehicle fleet operators in the energy market. Our objective centers around a comprehensive exploration of the sector coupling domain, with an emphasis on the economic potential in both electricity and balancing markets. Methodologically, our approach combines data mining techniques with thorough pre-processing, pulling from a rich repository of electricity and balancing market data. Our findings are grounded in the actual operational realities of the bus fleet operator in Darmstadt, Germany. We employ a Mixed Integer Linear Programming (MILP) approach, with the bulk of the computations being processed on the High-Performance Computing (HPC) platform ‘Lichtenbergcluster’. Our findings underscore the compelling economic potential of EV fleets in the energy market. With electric buses becoming more prevalent, the considerable size of these fleets, paired with their substantial battery capacity, opens up new horizons for energy market participation. Notably, our research reveals that economic viability is not the sole advantage. Participating actively in the energy market also translates into pronounced positive effects on grid stabilization. Essentially, EV fleet operators can serve a dual purpose: facilitating transport while simultaneously playing an instrumental role in enhancing grid reliability and resilience. 
This research highlights the symbiotic relationship between the growth of EV fleets and the stabilization of the energy grid. Such systems could lead to both commercial and ecological advantages, reinforcing the value of electric bus fleets in the broader landscape of sustainable energy solutions. In conclusion, the electrification of transport offers more than just a means to reduce local greenhouse gas emissions. By positioning electric vehicle fleet operators as active participants in the energy market, there lies a powerful opportunity to drive forward the energy transition. This study serves as a testament to the synergistic potential of EV fleets in bolstering both economic viability and grid stabilization, signaling a promising trajectory for future sector coupling endeavors.
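A toy illustration of the market-participation idea discussed above: with known hourly prices, an idle bus battery can buy energy in the cheapest hours and sell it back in the most expensive ones. This greedy sketch deliberately ignores timetables, battery degradation, and market rules, all of which the paper's MILP formulation would handle; every number in it is an illustrative placeholder.

```python
def arbitrage_profit(prices_eur_mwh, capacity_kwh, power_kw, efficiency=0.9):
    """Greedy one-cycle price arbitrage for an idle vehicle battery: charge in
    the cheapest hours, discharge in the dearest ones, moving at most one full
    battery of energy. Returns (profit in EUR, charge hours, discharge hours)."""
    n_hours = max(1, round(capacity_kwh / power_kw))  # hours per full (dis)charge
    ranked = sorted(range(len(prices_eur_mwh)), key=lambda h: prices_eur_mwh[h])
    charge_hours, discharge_hours = ranked[:n_hours], ranked[-n_hours:]
    energy_mwh = power_kw / 1000.0  # energy traded per hour, in MWh
    cost = sum(prices_eur_mwh[h] for h in charge_hours) * energy_mwh
    revenue = efficiency * sum(prices_eur_mwh[h] for h in discharge_hours) * energy_mwh
    return revenue - cost, charge_hours, discharge_hours
```

Even this crude rule turns a profit whenever the price spread exceeds the round-trip efficiency loss, which hints at why a properly constrained optimization over a whole fleet, as the study performs, can be economically attractive while also shifting load toward low-price (typically low-demand) hours.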

Keywords: electric vehicle fleet, sector coupling, optimization, electricity market, balancing market

Procedia PDF Downloads 74
702 Complementing Assessment Processes with Standardized Tests: A Work in Progress

Authors: Amparo Camacho

Abstract:

ABET-accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, usually designed “in house.” This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education, implemented throughout the six engineering programs, to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement actions of improvement based on the results of this assessment. The model is called the “Assessment Process Model,” and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: “hard skills” and “professional skills” (soft skills). The first includes abilities such as applying knowledge of mathematics, science, and engineering, and designing and conducting experiments, as well as analyzing and interpreting data. The second category, professional skills, includes communicating effectively and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs related to both hard and soft skills. The assessment tools designed included rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the engineering college decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam based on the curriculum of each program was designed and implemented. Even though this exam has been administered during various academic periods, it is not currently considered standardized.
In 2017, the engineering college added three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both the hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to three sample groups of students from the six different engineering programs. Students in the sample groups were from the first, fifth, or tenth semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with statisticians on a more in-depth and detailed analysis of the sample groups' achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify more precisely the least developed SOs in order to define and implement the educational strategies necessary for students to achieve them in each engineering program.

Keywords: assessment, hard skills, soft skills, standardized tests

Procedia PDF Downloads 285
701 Use of Cassava Waste and Its Energy Potential

Authors: I. Inuaeyen, L. Phil, O. Eni

Abstract:

Fossil fuels have been the main source of global energy for many decades, accounting for about 80% of global energy needs. This is beginning to change, however, with increasing concern about greenhouse gas emissions, which come mostly from fossil fuel combustion. Greenhouse gases such as carbon dioxide are responsible for stimulating climate change. As a result, there has been a shift towards cleaner and renewable sources of energy as a strategy for stemming greenhouse gas emissions into the atmosphere. The production of bio-products such as bio-fuel, bio-electricity, bio-chemicals, and bio-heat using biomass materials in accordance with the bio-refinery concept holds great potential for reducing the high dependence on fossil fuels. The bio-refinery concept promotes efficient utilisation of biomass material for the simultaneous production of a variety of products in order to minimize or eliminate waste materials. This will ultimately reduce greenhouse gas emissions into the environment. In Nigeria, cassava solid waste from cassava processing facilities has been identified as a vital feedstock for the bio-refinery process. Cassava is a staple food in Nigeria and one of the foodstuffs most widely cultivated by farmers across Nigeria. As a result, there is an abundant supply of cassava waste in Nigeria. The aim of this study is to explore opportunities for converting cassava waste to a range of bio-products such as butanol, ethanol, electricity, heat, methanol, and furfural using a combination of biochemical, thermochemical, and chemical conversion routes. The best process scenario will be identified through economic analysis, energy efficiency, life cycle analysis, and social impact. The study will be carried out by developing a model representing different process options for cassava waste conversion to useful products. The model will be developed using the Aspen Plus process simulation software.
Process economic analysis will be done using the Aspen Icarus software. So far, a comprehensive survey of the literature has been conducted. This includes studies on the conversion of cassava solid waste to a variety of bio-products using different conversion techniques, cassava waste production in Nigeria, and the modelling and simulation of waste conversion to useful products, among others. Also, the statistical distribution of cassava solid waste production in Nigeria has been established, and key publications with useful parameters for developing the different cassava waste conversion processes have been identified. In future work, detailed modelling of the different process scenarios will be carried out, and the models will be validated using data from the literature and demonstration plants. A techno-economic comparison of the various process scenarios will then identify the best scenario, using process economics, life cycle analysis, energy efficiency and social impact as the performance indexes.
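The planned techno-economic comparison weighs process economics, energy efficiency, life cycle analysis and social impact against each other. As a purely illustrative sketch of such a multi-criteria ranking (all scenario names, criterion values and weights below are invented placeholders, not results from the study), each criterion can be min-max normalized and combined with weights:

```python
# Illustrative multi-criteria ranking of hypothetical cassava-waste
# process scenarios. All figures are placeholders; the study's actual
# values will come from the Aspen Plus / Aspen Icarus models.

CRITERIA = {                         # weight, and whether higher is better
    "npv_musd":        (0.4, True),  # process economics (net present value)
    "energy_eff_pct":  (0.3, True),  # energy efficiency
    "gwp_kgco2_per_t": (0.2, False), # life cycle analysis (lower is better)
    "jobs_created":    (0.1, True),  # social impact proxy
}

scenarios = {
    "biochemical (butanol)":     {"npv_musd": 12.0, "energy_eff_pct": 38, "gwp_kgco2_per_t": 210, "jobs_created": 90},
    "thermochemical (methanol)": {"npv_musd": 15.5, "energy_eff_pct": 45, "gwp_kgco2_per_t": 260, "jobs_created": 60},
    "combined heat and power":   {"npv_musd":  8.0, "energy_eff_pct": 52, "gwp_kgco2_per_t": 180, "jobs_created": 40},
}

def score(vals):
    """Weighted sum of min-max normalized criteria across all scenarios."""
    total = 0.0
    for crit, (w, higher_better) in CRITERIA.items():
        lo = min(s[crit] for s in scenarios.values())
        hi = max(s[crit] for s in scenarios.values())
        norm = (vals[crit] - lo) / (hi - lo) if hi > lo else 0.0
        total += w * (norm if higher_better else 1.0 - norm)
    return total

best = max(scenarios, key=lambda name: score(scenarios[name]))
print(best)
```

With these invented numbers the weighted score favours the thermochemical route; with the study's real model outputs the ranking could of course differ.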

Keywords: bio-refinery, cassava waste, energy, process modelling

Procedia PDF Downloads 376
700 Analyzing the Use of Augmented and Virtual Reality to Teach Social Skills to Students with Autism

Authors: Maggie Mosher, Adam Carreon, Sean Smith

Abstract:

A systematic literature review was conducted to explore the evidence base on the use of augmented reality (AR), virtual reality (VR), mixed reality (MR), and extended reality (XR) to present social skill instruction to school-age students with autism spectrum disorder (ASD). Specifically, the systematic review focused on (a) the participants and intervention agents using AR, VR, MR, and XR for social skill acquisition, (b) the social skills taught through these mediums, and (c) the social validity measures (i.e., goals, procedures, and outcomes) reported in these studies. Forty-one articles met the inclusion criteria. Researchers in six studies taught social skills to students through AR, in 27 studies through non-immersive VR, and in 10 studies through immersive VR. No studies used MR or XR. The primary targeted social skills were relationship skills, emotion recognition, social awareness, cooperation, and executive functioning. An intervention to improve many social skills was implemented by 73% of researchers, 17% taught a single skill, and 10% did not clearly state the targeted skill. The intervention was considered effective in 26 of the 41 studies (63%), not effective in four studies (10%), and 11 studies (27%) reported mixed results. No researchers reported information for all 17 social validity indicators; the number of indicators reported ranged from two to 14. Social validity measures on the feelings toward and use of the technology were provided in 22 studies (54%). Findings indicated both AR and VR are promising platforms for providing social skill instruction to students with ASD. Studies utilizing this technology report a number of social validity indicators. However, the limited information provided on the various interventions, participant characteristics, and validity measures offers insufficient evidence of the impact of these technologies in teaching social skills to students with ASD. 
Future research should develop a protocol for training treatment agents and assess the role of different variables (i.e., whether agents are customizing content, monitoring student learning, or using intervention-specific vocabulary in their day-to-day instruction). Sustainability may be increased by providing training in the technology to both treatment agents and participants. Providing scripts of the instruction occurring within the intervention would supply the information needed to determine the primary method of teaching within the intervention. These variables play a role in the maintenance and generalization of the social skills. Understanding the type of feedback provided would help researchers determine whether students were able to feel rewarded for progressing through the scenarios or whether students require rewarding aspects within the intervention (i.e., badges, trophies). AR has the potential to generalize instruction, and VR has the potential to provide a practice environment for performance deficits. Combining these two technologies into a mixed reality intervention may provide a more cohesive and effective intervention.

Keywords: autism, augmented reality, social and emotional learning, social skills, virtual reality

Procedia PDF Downloads 110
699 Different Response of Pure Arctic Char Salvelinus alpinus and Hybrid (Salvelinus alpinus × Salvelinus fontinalis Mitchill) to Various Hyperoxic Regimes

Authors: V. Stejskal, K. Lundova, R. Sebesta, T. Vanina, S. Roje

Abstract:

The pure strain of Arctic char (AC) Salvelinus alpinus and the hybrid (HB) Salvelinus alpinus × Salvelinus fontinalis Mitchill are fish with great potential for culture in recirculating aquaculture systems (RAS). Aquaculture of these fish currently relies on flow-through systems (FTS), especially in Nordic countries such as Iceland (the biggest producer), Norway, Sweden, and Canada. Four different water saturation regimes, namely normoxia (NOR), permanent hyperoxia (HYP), intermittent hyperoxia (HYP±), and a regime in which one day of normoxia was followed by one day of hyperoxia (HYP1/1), were tested on both species in two parallel 63-day experiments. Fish were reared in two identical RAS, each consisting of 24 round plastic tanks (300 L each), a drum filter, a biological filter with moving beads, and a submerged biofilter. The temperature was maintained at 13.6 ± 0.8 °C using a flow-through cooler. The different water saturation regimes were achieved by mixing pure oxygen (O₂) with water in three mixing towers (one for each hyperoxic regime), each equipped with a flowmeter to regulate gas inflow. The water in the HYP, HYP1/1 and HYP± groups was enriched with oxygen up to a saturation of 120-130%. In the HYP group, this level was kept throughout the whole day; in the HYP± group, hyperoxia was applied during the daylight phase (08:00-20:00) only, with normoxia at night. The oxygen saturation of 80-90% in the NOR group was created using intensive aeration in the header tank. The fish were fed a commercial feed to slight excess at 2 h intervals within the light phase of the day. Water quality parameters such as pH, temperature and oxygen level were monitored three times per day (7 am, 10 am and 6 pm) using a handheld multimeter. Ammonium, nitrite and nitrate were measured at two-day intervals using spectrophotometry. Initial body weight (BW) was 40.9 ± 8.7 g and 70.6 ± 14.8 g in the AC and HB groups, respectively. 
Final survival of AC ranged from 96.3 ± 4.6% (HYP) to 100 ± 0.0% in all other groups, without significant differences among groups. Similarly high survival was reached in the trial with HB, with levels from 99.2 ± 1.3% (HYP, HYP1/1 and NOR) to 100 ± 0.0% (HYP±). HB fish showed the best growth performance in the NOR group, reaching a final body weight (BW) of 180.4 ± 2.3 g. Fish growth under the different hyperoxic regimes was significantly reduced, with final BW of 164.4 ± 7.6, 162.1 ± 12.2 and 151.7 ± 6.8 g in the HYP1/1, HYP± and HYP groups, respectively. AC showed a different preference for hyperoxic regimes, as there were no significant differences in BW among the NOR, HYP1/1 and HYP± groups, with final values of 72.3 ± 11.3, 68.3 ± 8.4 and 77.1 ± 6.1 g. Significantly reduced growth (BW 61.8 ± 6.8 g) was observed in the HYP group. It is evident from the present study that there are differences between the pure-bred Arctic char and the hybrid in their responses to hyperoxic regimes. The study was supported by projects 'CENAKVA' (No. CZ.1.05/2.1.00/01.0024), 'CENAKVA II' (No. LO1205 under the NPU I program), NAZV (QJ1510077) and GAJU (No. 060/2016/Z).
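Growth results like these are often summarized as the specific growth rate (SGR), which the abstract does not report; as an illustrative back-of-the-envelope calculation from the quoted mean weights and the 63-day trial length:

```python
import math

def sgr(initial_g, final_g, days):
    """Specific growth rate in % body weight per day."""
    return 100.0 * (math.log(final_g) - math.log(initial_g)) / days

# Mean weights taken from the abstract; per-group SGRs are derived here
# for illustration only, not reported by the authors.
sgr_hb_nor = sgr(70.6, 180.4, 63)   # hybrid, normoxia
sgr_hb_hyp = sgr(70.6, 151.7, 63)   # hybrid, permanent hyperoxia
print(round(sgr_hb_nor, 2), round(sgr_hb_hyp, 2))   # → 1.49 1.21
```

The roughly 0.3 percentage-point gap per day is one compact way to express the growth penalty of permanent hyperoxia in the hybrid.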

Keywords: recirculating aquaculture systems, Salmonidae, hyperoxia, abiotic factors

Procedia PDF Downloads 182
698 A Systematic Review on the Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners

Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas

Abstract:

Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention involving brief exposures to extremely cold air in order to induce therapeutic effects. It is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more, less, or equally as effective as other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilizing the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilized were PubMed, CINAHL, EBSCO Host, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilized both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if subjects did not include runners, if the interventions included partial-body cryotherapy (PBC) instead of WBC, or if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals. 
One of the four articles revealed that WBC had a harmful effect compared to cold-water immersion (CWI) and passive recovery on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus on WBC's effectiveness at treating exercise-induced muscle damage following running compared to other interventions, WBC may at least have a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment.

Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy

Procedia PDF Downloads 241
697 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithm-based methods, cost-based methods, and classifier ensembles. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class is absolutely rare, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class and within-class imbalance simultaneously for the binary classification problem. Removing both imbalances simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. 
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid, increasing the accuracy of the classifier. In this study, a neural network is used as the classifier, since it minimizes the total error, and removing the between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of class. The proposed method is validated on nine publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for handling various problem domains, such as credit scoring, customer churn prediction, and financial distress, that typically involve imbalanced data sets.
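The core allocation idea, giving more synthetic examples to the smaller sub-clusters found by clustering, can be sketched in a few lines. This is an illustrative simplification of the paper's approach: it uses sub-cluster size deficits in place of the authors' complexity measure, and plain SMOTE-style interpolation in place of the full ellipsoid-aware procedure:

```python
import random

def allocate_synthetic(cluster_sizes, n_new):
    """Distribute n_new synthetic examples across sub-clusters,
    giving more to smaller sub-clusters so that within-class
    imbalance shrinks along with between-class imbalance."""
    deficits = [max(cluster_sizes) - s for s in cluster_sizes]
    total = sum(deficits)
    if total == 0:                        # already balanced: split evenly
        return [n_new // len(cluster_sizes)] * len(cluster_sizes)
    return [round(n_new * d / total) for d in deficits]

def synthesize(points, k):
    """SMOTE-style interpolation between random pairs inside one
    sub-cluster: each synthetic point lies on a segment between
    two real minority examples."""
    out = []
    for _ in range(k):
        a, b = random.sample(points, 2)
        t = random.random()
        out.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return out

# Three minority sub-clusters of sizes 50, 20, 10; add 60 synthetic points.
print(allocate_synthetic([50, 20, 10], 60))   # → [0, 26, 34]
```

Interpolating only within a sub-cluster avoids generating synthetic points in the empty space between sub-concepts, which is one of the failure modes of naive oversampling that the paper's clustering step addresses.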

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 418
696 Bituminous Geomembranes: Sustainable Products for Road Construction and Maintenance

Authors: Ines Antunes, Andrea Massari, Concetta Bartucca

Abstract:

The role of greenhouse gases (GHG) in the atmosphere has been well known since the 19th century; however, researchers began to relate them to climate change only in the second half of the following century. From that moment, scientists started to correlate the presence of GHG such as CO₂ with global warming phenomena. This has raised the awareness not only of experts in the field but also of public opinion, which is becoming more and more sensitive to environmental pollution and sustainability issues. Nowadays, the reduction of GHG emissions is one of the principal objectives of EU nations. The target is an 80% reduction in emissions by 2050, to reach the important goal of carbon neutrality. The road sector is responsible for an important share of those emissions (about 20%). The largest part is due to traffic, but a considerable contribution also comes, directly or indirectly, from road construction and maintenance. Raw material choice and the reuse of post-consumer plastic, as well as cleverer road design, can contribute substantially to reducing the carbon footprint. Bituminous membranes can be successfully used as reinforcement systems in asphalt layers to improve road pavement performance against cracking. Composite materials coupling membranes with grids and/or fabrics are able to combine the improved tensile properties of the reinforcement with the stress-absorbing and waterproofing effects of membranes. Polyglass, with its brand dedicated to road construction and maintenance called Polystrada, has done more than this. The company's target was not only to focus sustainability on the final application but also to implement a greener mentality from the cradle to the grave. Starting from production, Polyglass has made important improvements aimed at increasing efficiency and minimizing waste. 
The installation of a trigeneration plant, the use of selected production scraps inside the products, and the reduction of emissions into the environment are among the company's main efforts to reduce the impact of final product build-up. Moreover, installing Polystrada products brings a significant improvement in road lifetime. This has an impact not only on the number of maintenance or renewal operations that need to be done (build less) but also on the traffic density due to works and road deviations during operations. At the end of a road's life, Polystrada products can be 100% recycled and milled with the classical systems used, without changing the normal maintenance procedures. In this work, all these contributions were quantified in terms of CO₂ emissions thanks to an LCA analysis. The data obtained were compared with a classical system and with the standard production of a membrane. The results show that the use of Polyglass products for road maintenance and construction gives a significant reduction of emissions when the membrane is installed under the road wearing course.

Keywords: CO₂ emission, LCA, maintenance, sustainability

Procedia PDF Downloads 67
695 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates the direct oxidation of ammonical nitrogen under anaerobic conditions with nitrite as the electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an anammox hybrid reactor (AHR) combining the dual advantages of suspended and attached growth media for the biodegradation of ammonical nitrogen in wastewater. The experimental unit consisted of four 5 L AHRs inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH₄-N and NO₂-N in the ratio 1:1 at an HRT (hydraulic retention time) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/L. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with NLR increasing from 0.4 to 4.8 kg N/m³·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. Filter media in the AHR contributed an additional 27.2% ammonium removal, in addition to a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for the successful operation of high-rate bioreactors. The effluent nitrate concentration, which is one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d⁻¹. 
Model validation revealed that the Grau second-order model was more precise and predicted the effluent nitrogen concentration with the least error (1.84 ± 10%). A new mathematical model based on mass balance was developed to predict N₂ gas in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R² = 0.986) and predicted N₂ gas with the least error of precision (0.12 ± 8.49%). An SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with an average diameter varying from 1.2 to 1.5 µm. Owing to the enhanced NRE, coupled with the meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be the most competitive reactor configuration for dealing with nitrogen-laden wastewater.
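For reference, the two kinetic models in their commonly used forms can be applied directly to predict effluent nitrogen. Only the first-order rate constant k₁ = 13.0 d⁻¹ and the 1200 mg/L influent come from the abstract; the model forms are the standard textbook ones and the Grau coefficients a and b below are illustrative placeholders, not fitted values from the study:

```python
def first_order_effluent(s0, k1, hrt):
    # Completely mixed reactor balance: (S0 - Se)/HRT = k1 * Se
    return s0 / (1.0 + k1 * hrt)

def grau_second_order_effluent(s0, a, b, hrt):
    # Grau second-order form: (S0 - Se)/S0 = HRT / (a + b * HRT)
    return s0 * (1.0 - hrt / (a + b * hrt))

s0 = 1200.0   # mg N/L, the pseudo-steady-state concentration in the abstract
k1 = 13.0     # d^-1, the first-order constant reported in the abstract

# First-order prediction at the optimal 1-day HRT:
print(round(first_order_effluent(s0, k1, 1.0), 1))   # → 85.7 mg N/L
```

At HRT = 1 d the first-order form predicts about 93% removal, in the same range as the 95.1% observed, which is consistent with the abstract's conclusion that the Grau model fits somewhat more precisely.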

Keywords: anammox, filter media, kinetics, nitrogen removal

Procedia PDF Downloads 382
694 Intelligent Campus Monitoring: YOLOv8-Based High-Accuracy Activity Recognition

Authors: A. Degale Desta, Tamirat Kebamo

Abstract:

Background: Recent advances in computer vision and pattern recognition have significantly improved activity recognition through video analysis, particularly with the application of deep convolutional neural networks (CNNs). One-stage detectors now enable efficient video-based recognition by simultaneously predicting object categories and locations. Such advancements are highly relevant in educational settings, where CCTV surveillance could automatically monitor academic activities, enhancing security and classroom management. However, current datasets and recognition systems lack the specific focus on campus environments necessary for practical application in these settings. Objective: This study aims to address this gap by developing a dataset and testing an automated activity recognition system specifically tailored for educational campuses. The EthioCAD dataset was created to capture various classroom activities and teacher-student interactions, facilitating reliable recognition of academic activities using deep learning models. Method: EthioCAD, a novel video-based dataset, was created following a design science research approach to encompass teacher-student interactions across three domains and 18 distinct classroom activities. Using the Roboflow AI framework, the data were processed, with 4.224 KB of frames and 33.485 MB of images managed for frame extraction, labeling, and organization. The Ultralytics YOLOv8 model was then implemented within Google Colab to evaluate the dataset's effectiveness, achieving high mean Average Precision (mAP) scores. Results: The YOLOv8 model demonstrated robust activity recognition within campus-like settings, achieving an mAP50 of 90.2% and an mAP50-95 of 78.6%. These results highlight the potential of EthioCAD, combined with YOLOv8, to provide reliable detection and classification of classroom activities, supporting automated surveillance needs on educational campuses. 
Discussion: The high performance of YOLOv8 on the EthioCAD dataset suggests that automated activity recognition for surveillance is feasible within educational environments. This system addresses current limitations in campus-specific data and tools, offering a tailored solution for academic monitoring that could enhance the effectiveness of CCTV systems in these settings. Conclusion: The EthioCAD dataset, alongside the YOLOv8 model, provides a promising framework for automated campus activity recognition. This approach lays the groundwork for future advancements in CCTV-based educational surveillance systems, enabling more refined and reliable monitoring of classroom activities.
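The mAP50 and mAP50-95 figures quoted above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal, framework-free sketch of that core computation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# mAP50 counts a detection as a true positive when IoU >= 0.5;
# mAP50-95 averages the AP over thresholds 0.5, 0.55, ..., 0.95,
# which is why it is the stricter of the two numbers reported.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

Two half-overlapping squares score IoU = 1/3, so they would count as a match at the 0.5 threshold only with greater overlap, illustrating why mAP50-95 (78.6%) sits below mAP50 (90.2%).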

Keywords: deep CNN, EthioCAD, deep learning, YOLOv8, activity recognition

Procedia PDF Downloads 16
693 Properties of the CsPbBr₃ Quantum Dots Treated by O₃ Plasma for Integration in the Perovskite Solar Cell

Authors: Sh. Sousani, Z. Shadrokh, M. Hofbauerová, J. Kollár, M. Jergel, P. Nádaždy, M. Omastová, E. Majková

Abstract:

Perovskite quantum dots (PQDs) have the potential to increase the performance of perovskite solar cells (PSCs). The integration of PQDs into PSCs can extend the absorption range and enhance photon harvesting and device efficiency. In addition, PQDs can stabilize the device structure by passivating surface defects and traps in the perovskite layer, enhancing its stability. The integration of PQDs into PSCs is strongly affected by the type of ligands on the surface of the PQDs. The ligands affect the charge transport properties of the PQDs, as well as the formation of well-defined interfaces and the stability of the PSCs. In this work, CsPbBr₃ QDs were synthesized by the conventional hot-injection method using cesium oleate, PbBr₂ and two different ligand systems, namely oleic acid (OA) with oleylamine (OAm), and didodecyldimethylammonium bromide (DDAB). STEM confirmed the regular shape and relatively monodisperse cubic structure of the prepared CsPbBr₃ QDs, with an average size of about 10-14 nm. Further, the photoluminescence (PL) properties of the PQDs/perovskite bilayer with the OA/OAm and DDAB ligands were studied. For this purpose, ITO/PQDs as well as ITO/PQDs/MAPI perovskite structures were prepared by spin coating, and the effects of the ligand and of the oxygen plasma treatment were analyzed. Plasma treatment of the PQDs layer could be beneficial for the deposition of the MAPI perovskite layer and the formation of a well-defined PQDs/MAPI interface. The absorption edge in the UV-Vis absorption spectra of the OA/OAm CsPbBr₃ QDs lies around 513 nm (band gap 2.38 eV); for the DDAB CsPbBr₃ QDs, it is located at 490 nm (band gap 2.33 eV). The PL spectra of the CsPbBr₃ QDs show two peaks, located around 514 nm (503 nm) and 718 nm (708 nm) for OA/OAm (DDAB). The peak around 500 nm corresponds to the PL of the PQDs, and the peak close to 710 nm belongs to the surface states of the PQDs for both types of ligands. These surface states are strongly affected by the O₃ plasma treatment. 
For PQDs with the DDAB ligand, O₃ exposure (5, 10, 15 s) results in a blue shift of the PQDs peak and a non-monotonous change in the amplitude of the surface-state peak. For the OA/OAm ligand, O₃ exposure did not cause any shift of the PQDs peak, and the intensity of the PL peak related to the surface states is lower by one order of magnitude in comparison with DDAB, while still being affected by the O₃ plasma treatment. The PL results indicate the possibility of tuning the position of the PL maximum through the choice of PQD ligand. Similar behavior of the PQDs layer was observed for the ITO/PQDs/MAPI samples, where an additional strong PL peak at 770 nm, coming from the perovskite layer, was observed; for the sample with the DDAB ligand, a small blue shift of the perovskite PL maximum was observed, independent of the plasma treatment. These results suggest the possibility of tuning the PL maximum position and the surface states of the PQDs by combining a suitable ligand with O₃ plasma treatment.
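The quoted wavelengths can be cross-checked against photon energies with the standard conversion E(eV) ≈ 1239.84 / λ(nm); note that band gaps extracted from an absorption edge (e.g., via Tauc analysis) can differ somewhat from this direct conversion, so the values below are illustrative rather than a restatement of the authors' figures:

```python
def nm_to_ev(wavelength_nm):
    """Photon energy from wavelength: E (eV) = hc / lambda ≈ 1239.84 / lambda(nm)."""
    return 1239.84 / wavelength_nm

# Absorption edges and PL peaks mentioned in the abstract:
for nm in (513, 490, 718, 770):
    print(nm, "nm ->", round(nm_to_ev(nm), 2), "eV")
```

The ~500 nm features sit near 2.4-2.5 eV (band-edge emission of CsPbBr₃), while the ~710-770 nm features correspond to sub-band-gap energies of 1.6-1.7 eV, consistent with their attribution to surface states and to the MAPI perovskite layer.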

Keywords: perovskite quantum dots, photoluminescence, O₃ plasma, perovskite solar cells

Procedia PDF Downloads 64