Search results for: cost observation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7719

519 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected across the entire operating region of the engine and from the predictive combustion model, which is developed in Gamma Technologies (GT)-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperatures in both the burned and unburned zones are considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases is tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time and the lower number of data points required for calibration, establish a platform on which the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
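
The ensemble step described above can be illustrated with a minimal sketch: a generic boosted-tree regressor trained on in-cylinder combustion parameters to predict engine-out NOx. This is not the authors' actual implementation; the feature names and data file are hypothetical placeholders for quantities exported from the calibrated combustion model.

```python
# Minimal sketch of a semi-empirical NOx model: an ensemble regressor driven by
# in-cylinder combustion parameters (e.g., exported from a GT-Power DI-Pulse run).
# Column names and the CSV file are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

data = pd.read_csv("steady_state_points.csv")   # hypothetical steady-state dataset
features = ["T_burned_zone", "O2_burned_zone", "trapped_fuel_mass",
            "engine_speed", "egr_rate", "injection_timing"]
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["nox_ppm"], test_size=0.2, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)
print("R^2 on held-out points:", r2_score(y_test, model.predict(X_test)))
```

Because the physical inputs come from the combustion model, the statistical layer only has to capture the residual empirical correlation, which is what keeps such a model predictive outside the calibrated region.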

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 114
518 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Technological advances confront companies with the challenge of smart value chains aimed at improving productivity, handling increasing time and cost pressure, and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inherent in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists; this link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail. The second step of ‘analysis of the value chain’ verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry show that the digital transformation of current value chains based on lean production raises their inherent performance limits.

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 313
517 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine

Authors: Sahar Heidary

Abstract:

Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy in healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography plays a central role in detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. Radiotherapy, in turn, a branch of medical science that utilizes high-energy radiation, has become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, leads to cancer eradication. Additionally, radiotherapy is often used in combination with surgery and chemotherapy, providing a multifaceted approach to combat cancer comprehensively. The continuous advancement of radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, has further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy. Radiotherapy, for its part, finds applications in non-cancerous conditions like benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide.

Keywords: radiology, radiotherapy, medical imaging, cancer treatment

Procedia PDF Downloads 69
516 Assessment of Serum Osteopontin, Osteoprotegerin and Bone-Specific ALP as Markers of Bone Turnover in Patients with Disorders of Thyroid Function in Nigeria, Sub-Saharan Africa

Authors: Oluwabori Emmanuel Olukoyejo, Ogra Victor Ogra, Bosede Amodu, Tewogbade Adeoye Adedeji

Abstract:

Background: Disorders of thyroid function are the second most common endocrine disorders worldwide, with a direct relationship with metabolic bone diseases. These metabolic bone complications are often subtle but manifest as bone pains and an increased risk of fractures. The gold standard for diagnosis, Dual Energy X-ray Absorptiometry (DEXA), is limited in this environment due to unavailability, cumbersomeness and cost. However, bone biomarkers have shown promise in assessing alterations in bone remodeling, which has not been studied in this environment. Aim: This study evaluates serum levels of bone-specific alkaline phosphatase (bone-specific ALP), osteopontin and osteoprotegerin as biomarkers of bone turnover in patients with disorders of thyroid function. Methods: This is a cross-sectional study carried out over a period of one and a half years. Forty patients with thyroid dysfunction, aged 20 to 50 years, and thirty-eight age- and sex-matched healthy euthyroid controls were included in this study. Patients were further stratified into hyperthyroid and hypothyroid groups. Bone-specific ALP, osteopontin and osteoprotegerin, alongside serum total calcium, ionized calcium and inorganic phosphate, were assayed for all patients and controls. A self-administered questionnaire was used to obtain data on sociodemographic and medical history. Then, 5 ml of blood was collected in a plain bottle and serum was harvested following clotting and centrifugation. Serum samples were assayed for bone-specific ALP, osteopontin and osteoprotegerin using the ELISA technique. Total calcium and ionized calcium were assayed using an ion-selective electrode, while inorganic phosphate was assayed with automated photometry. Results: The hyperthyroid and hypothyroid patient groups had significantly higher median serum bone-specific ALP (30.40 and 26.50 ng/ml) and significantly lower median OPG (0.80 and 0.80 ng/ml) than the controls (10.81 and 1.30 ng/ml, respectively), p < 0.05. Serum osteopontin was significantly higher in the hyperthyroid group and significantly lower in the hypothyroid group compared with the controls (11.00 and 2.10 vs 3.70 ng/ml, respectively), p < 0.05. Both the hyperthyroid and hypothyroid groups had significantly higher mean serum total calcium, ionized calcium and inorganic phosphate than the controls (2.49 ± 0.28, 1.27 ± 0.14 and 1.33 ± 0.33 mmol/l and 2.41 ± 0.04, 1.20 ± 0.04 and 1.15 ± 0.16 mmol/l vs 2.27 ± 0.11, 1.17 ± 0.06 and 1.08 ± 0.16 mmol/l, respectively), p < 0.05. Conclusion: Patients with disorders of thyroid function have metabolic imbalances of all the studied bone markers, suggesting higher bone turnover. The routine bone markers will be an invaluable tool for monitoring bone health in patients with thyroid dysfunction, while the less readily available markers can be introduced as supplementary tools. Moreover, bone-specific ALP, osteopontin and osteoprotegerin were found to be the strongest independent predictors of metabolic bone marker derangements in patients with thyroid dysfunction.

Keywords: metabolic bone diseases, biomarker, bone turnover, hyperthyroid, hypothyroid, euthyroid

Procedia PDF Downloads 36
515 The Positive Impact of Wheelchair Service Provision on the Health and Overall Satisfaction of Wheelchair Users with the Devices

Authors: Archil Undilashvili, Ketevan Stvilia, Dustin Gilbreath, Giorgi Dzneladze, Gordon Charchward

Abstract:

Introduction: In recent years, diverse types of wheelchairs, both locally produced and imported, have been made available on the Georgian market for wheelchair users. Some types of wheelchairs are sold together with a service package, while others, including the locally produced ones supported by the State Program, do not provide adjustment and maintenance service packages to users. Within the USAID Physical Rehabilitation Project in Georgia, a study was conducted to assess the impact of wheelchair service provision in line with the WHO guidelines on the health and overall satisfaction of wheelchair users in Georgia. Methodology: A cross-sectional survey was conducted in May 2021. A structured questionnaire was used for telephone interviews that, along with socio-demographic characteristics, included questions for the assessment of accessibility, availability, timeliness, cost and quality of wheelchair services received. Out of 1060 individuals listed in the census of wheelchair users, 752 were available for interview, with an actual response rate of 73.4%. In total, 552 wheelchair users (31%) or their caregivers (69%) agreed to participate in the survey. In addition to descriptive statistics, the study used multivariate matching of wheelchair users who received wheelchair services against those who did not (control group). To evaluate satisfaction with service provision, respondents were also asked to assess the services. Findings: The majority (67%) of wheelchair users included in the survey were male. The average age of participants was 43. The three most frequently named reasons for using a wheelchair were cerebral palsy (29%), followed by stroke (18%) and amputation (12%). Users had had their current chair for four years on average. Overall, 60% of respondents reported that they were assessed before being provided with a wheelchair, but only half of them reported that their preferences and needs were considered. Only 13% of respondents had services in line with WHO guidelines, and only 22% of wheelchair users had training when they received their current chair. 16% of participants said they had follow-up services, and 41% received adjustment services after receiving the chair. A slight majority (56%) of participants were satisfied with the quality of service provision and the service provision overall. Similarly, 55% were satisfied with the accessibility of service provision. A slightly larger majority (61%) were satisfied with the timeliness of service provision. The matching analysis suggests that users who received services in line with WHO guidelines were more satisfied with their chairs (a difference of 17 points on a 0-100 scale) and were four percentage points less likely to have health problems attributed to the chair. The regression analysis provides a similar finding of a 21-point increase in satisfaction attributable to services. Conclusion: The provision of wheelchair services in line with WHO guidelines and with follow-up services is likely to have a positive impact on the daily lives of wheelchair users in Georgia. Wheelchair services should be institutionalized as a standard component of wheelchair provision in Georgia.
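
The matching step can be sketched as pairing each user who received WHO-guideline services with the most similar non-serviced user on observed covariates and comparing satisfaction across the matched pairs. This is a generic nearest-neighbour illustration, not the study's exact estimator; the file and column names are hypothetical.

```python
# Illustrative nearest-neighbour matching on numeric covariates, followed by a
# comparison of satisfaction scores between matched pairs.
# Column names are hypothetical; the study's actual matching estimator may differ.
import pandas as pd
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("wheelchair_survey.csv")               # hypothetical survey extract
covariates = ["age", "years_with_chair"]                 # hypothetical numeric covariates
treated = df[df["who_guideline_services"] == 1]
control = df[df["who_guideline_services"] == 0]

nn = NearestNeighbors(n_neighbors=1).fit(control[covariates])
_, idx = nn.kneighbors(treated[covariates])
matched_control = control.iloc[idx.ravel()]

effect = treated["satisfaction_0_100"].mean() - matched_control["satisfaction_0_100"].mean()
print(f"Matched difference in satisfaction: {effect:.1f} points (0-100 scale)")
```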

Keywords: physical rehabilitation, wheelchair users, persons with disabilities, wheelchair production

Procedia PDF Downloads 106
514 Neuropsychiatric Outcomes of Intensive Music Therapy in Stroke Rehabilitation: A Preliminary Investigation

Authors: Honey Bryant, Elvina Chu

Abstract:

Stroke is the leading cause of disability in adults in Canada and is directly related to depression, anxiety, and sleep disorders, with an estimated annual cost of $50 billion in health care. Strokes impact not only the individual but society as a whole. Current stroke rehabilitation does not include music therapy, although it has shown success in clinical research on stroke rehabilitation. This study examines the use of neurologic music therapy (NMT) in conjunction with stroke rehabilitation to improve sleep quality, reduce stress levels, and promote neurogenesis. Existing research on NMT in stroke is limited, which means any conclusive information gathered during this study will be significant. Our novel hypotheses are: a) stroke patients will become less depressed and less anxious, with improved sleep, following NMT; b) NMT will reduce stress levels and promote neurogenesis in stroke patients admitted for rehabilitation; c) beneficial effects of NMT will be sustained at least short-term following treatment. Participants were recruited from the in-patient stroke rehabilitation program at Providence Care Hospital in Kingston, Ontario, Canada. All participants maintained stroke rehabilitation treatment as normal. The study was split into two groups, the first being Passive Music Listening (PML) and the second Neurologic Music Therapy (NMT). Each group underwent 10 sessions of intensive music therapy lasting 45 minutes for 10 consecutive days, excluding weekends. Psychiatric assessments, the Epworth Sleepiness Scale (ESS), the Hospital Anxiety & Depression Scale (HADS), and the Music Engagement Questionnaire (MusEQ) were completed, followed by a general feedback interview. Physiological markers of stress were measured through blood pressure measurements and heart rate variability. Serum collections assessed neurogenesis via brain-derived neurotrophic factor (BDNF) and stress via cortisol levels. As this study is still ongoing, a formal analysis of the data has not been fully completed, although trends are following our hypotheses. A decrease in sleepiness and anxiety is seen in the first cohort of PML. Feedback interviews have indicated that most participants subjectively felt more relaxed and thought PML was useful in their recovery. If the hypotheses are supported, larger external funding will be sought, which will allow for greater investigation of the use of NMT in stroke rehabilitation. As NMT is not covered under the Ontario Health Insurance Plan (OHIP), there is limited scientific data surrounding its use as a clinical tool. This research will provide detailed findings on the treatment of neuropsychiatric aspects of stroke. Concurrently, a passive music listening study is being designed to further review the use of PML in rehabilitation as well.

Keywords: music therapy, psychotherapy, neurologic music therapy, passive music listening, neuropsychiatry, counselling, behavioural, stroke, stroke rehabilitation, rehabilitation, neuroscience

Procedia PDF Downloads 113
513 Bending the Consciousnesses: Uncovering Environmental Issues Through Circuit Bending

Authors: Enrico Dorigatti

Abstract:

The growing pile of hazardous e-waste produced, especially by developed and wealthy countries, gets relentlessly bigger. It is composed of Electric and Electronic Devices (EEDs) that are often thrown away although still well functioning, mainly due to (programmed) obsolescence. As a consequence, e-waste has taken, over the last years, the shape of a frightful, uncontrollable, and unstoppable phenomenon, mainly fuelled by market policies aiming to maximize sales, and thus profits, at any cost. Against it, governments and organizations have put some effort into developing ambitious frameworks and policies aiming to regulate, in some cases, the whole lifecycle of EEDs, from design to recycling. Incidentally, however, such regulations sometimes make the disposal of the devices economically unprofitable, which often translates into growing illegal e-waste trafficking, an activity usually undertaken by criminal organizations. It seems that nothing, at least in the near future, can stop the phenomenon of e-waste production and accumulation. But while, from a practical standpoint, a solution seems hard to find, much can be done regarding people's education, which translates into informing and promoting good practices such as reusing and repurposing. This research argues that circuit bending, an activity rooted in neo-materialist philosophy and post-digital aesthetics and based on repurposing EEDs into novel musical instruments and sound generators, could have great potential in this regard. In particular, it asserts that circuit bending could expose ecological, environmental, and social criticalities related to current market policies and the economic model, thanks not only to its practical side (e.g., sourcing and repurposing devices) but also to the artistic one (e.g., employing bent instruments for ecologically aware installations and performances). Currently, the relevant literature and debate lack interest in and information about the ecological aspects and implications of the practical and artistic sides of circuit bending. This research, therefore, although still at an early stage, aims to fill this gap by investigating, on the one side, the ecological potential of circuit bending and, on the other side, its capacity for sensitizing people, through artistic practice, to e-waste-related issues. The methodology articulates in three main steps. Firstly, field research will be undertaken to understand where and how to source, in an ecological and sustainable way, (discarded) EEDs for circuit bending. Secondly, artistic installations and performances will be organized to sensitize the audience to environmental concerns through sound art and music derived from bent instruments; data, such as audience feedback, will be collected at this stage. The last step will consist of realising workshops to spread an ecologically aware circuit bending practice. Additionally, all the data and findings collected will be made available and disseminated as resources.

Keywords: circuit bending, ecology, sound art, sustainability

Procedia PDF Downloads 171
512 Biodegradable Self-Supporting Nanofiber Membranes Prepared by Centrifugal Spinning

Authors: Milos Beran, Josef Drahorad, Ondrej Vltavsky, Martin Fronek, Jiri Sova

Abstract:

While most nanofibers are produced using electrospinning, this technique suffers from several drawbacks, such as the requirement for specialized equipment, high electrical potential, and electrically conductive targets. Consequently, recent years have seen the increasing emergence of novel strategies for generating nanofibers at larger scale and higher throughput. Centrifugal spinning is a simple, cheap and highly productive technology for nanofiber production. In principle, the drawing of a solution filament into nanofibers using centrifugal spinning is achieved through the controlled manipulation of centrifugal force, viscoelasticity, and mass transfer characteristics of the spinning solutions. Engineering efforts by researchers of the Food Research Institute Prague and the Czech Technical University in the field of centrifugal nozzleless spinning led to the introduction of a pilot plant demonstrator, NANOCENT. The main advantages of the demonstrator are lower investment cost (thanks to simpler construction compared to widely used electrospinning equipment), higher production speed, new application possibilities and easy maintenance. Centrifugal nozzleless spinning is especially suitable for producing submicron fibers from polymeric solutions in highly volatile solvents, such as chloroform, DCM, THF, or acetone. To date, submicron fibers have been prepared from PS, PUR and biodegradable polyesters, such as PHB, PLA, PCL, or PBS. The products are in the form of 3D structures or nanofiber membranes. Unique self-supporting nanofiber membranes were prepared from the biodegradable polyesters in different mixtures. The nanofiber membranes have been tested for different applications. Filtration efficiencies for water solutions and for aerosols in air were evaluated. Different active inserts were added to the solutions before the spinning process, such as inorganic nanoparticles, organic precursors of metal oxides, antimicrobial and wound-healing compounds or photocatalytic phthalocyanines. Sintering can subsequently be carried out to remove the polymeric material and convert the organic precursors to metal oxides, such as SiO2, or photocatalytic ZnO2 and TiO2, to obtain inorganic nanofibers. Electrospinning is a more suitable technology than centrifugal nozzleless spinning for producing membranes for filtration applications, because it forms more homogeneous nanofiber layers and fibers with smaller diameters. The self-supporting nanofiber membranes prepared from the biodegradable polyesters are especially suitable for medical applications, such as wound or burn healing dressings or tissue engineering scaffolds. This work was supported by research grant TH03020466 of the Technology Agency of the Czech Republic.

Keywords: polymeric nanofibers, self-supporting nanofiber membranes, biodegradable polyesters, active inserts

Procedia PDF Downloads 165
511 Effect of Chemical Fertilizer on Plant Growth-Promoting Rhizobacteria in Wheat

Authors: Tessa E. Reid, Vanessa N. Kavamura, Maider Abadie, Adriana Torres-Ballesteros, Mark Pawlett, Ian M. Clark, Jim Harris, Tim Mauchline

Abstract:

The deleterious effect of chemical fertilizer on rhizobacterial diversity has been well documented using 16S rRNA gene amplicon sequencing and predictive metagenomics. Biofertilization is a cost-effective and sustainable alternative; improving biofertilization strategies depends on isolating beneficial soil microorganisms. Although culturing is widespread in biofertilization, it is unknown whether the composition of cultured isolates closely mirrors native beneficial rhizobacterial populations. This study aimed to determine the relative abundance of culturable plant growth-promoting rhizobacteria (PGPR) isolates within total soil DNA and how potential PGPR populations respond to chemical fertilization in a commercial wheat variety. It was hypothesized that PGPR would be reduced in fertilized relative to unfertilized wheat. Triticum aestivum cv. Cadenza seeds were sown in a nutrient-depleted agricultural soil in pots treated with and without nitrogen-phosphorus-potassium (NPK) fertilizer. Rhizosphere and rhizoplane samples were collected at flowering stage (10 weeks) and analyzed by culture-independent (amplicon sequence variant (ASV) analysis of total rhizobacterial DNA) and culture-dependent (isolation using growth media) techniques. Rhizosphere- and rhizoplane-derived microbiota culture collections were tested for plant growth-promoting traits using functional bioassays. In general, fertilizer addition decreased the proportion of nutrient-solubilizing bacteria (nitrate, phosphate, potassium, iron and zinc) isolated from rhizocompartments in wheat, whereas salt-tolerant bacteria were not affected. A PGPR database was created from isolate 16S rRNA gene sequences and searched against total soil DNA, revealing that 1.52% of total community ASVs were identified as culturable PGPR isolates. Bioassays identified a higher proportion of PGPR in non-fertilized samples (rhizosphere (49%) and rhizoplane (91%)) compared to fertilized samples (rhizosphere (21%) and rhizoplane (19%)), which constituted approximately 1.95% and 1.25% of non-fertilized and fertilized total community DNA, respectively. The analyses of 16S rRNA genes and deduced functional profiles provide an in-depth understanding of the responses of bacterial communities to fertilizer; this study suggests that rhizobacteria, which potentially benefit plants by mobilizing insoluble nutrients in soil, are reduced by chemical fertilizer addition. This knowledge will benefit the development of more targeted biofertilization strategies.
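
The database-search step, in which cultured-isolate 16S rRNA sequences are matched against the community ASV table to estimate the culturable fraction, can be sketched as below. Exact sequence matching is used for simplicity; the actual study may have used a different matching criterion (e.g., a percent-identity threshold), and the file names are hypothetical.

```python
# Sketch: estimate the relative abundance of culturable PGPR isolates within the
# total-community ASV table by exact 16S sequence matching.
# File names are hypothetical placeholders.
from Bio import SeqIO
import pandas as pd

isolate_seqs = {str(rec.seq) for rec in SeqIO.parse("pgpr_isolates_16S.fasta", "fasta")}
asv_table = pd.read_csv("asv_table.csv", index_col=0)   # rows: ASV sequences, cols: samples

matched = asv_table.loc[asv_table.index.isin(isolate_seqs)]
fraction = matched.sum().sum() / asv_table.sum().sum() * 100
print(f"Culturable PGPR isolates account for {fraction:.2f}% of total community reads")
```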

Keywords: bacteria, fertilizer, microbiome, rhizoplane, rhizosphere

Procedia PDF Downloads 307
510 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology

Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal

Abstract:

Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring that microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves as it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (WaterGEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimization of these parameters to obtain the closest agreement with actual measured data in a real DWDS would result in cost reduction as well as reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of water quality parameters (i.e., temperature, pH, and initial mono-chloramine concentration) to maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were considered as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. It was found that at a pH of 7.75, a temperature of 34.16 ºC, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network would be minimized to 0.189, and the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 ºC, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology to predict mono-chloramine residual has great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
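
The response-surface step amounts to fitting a second-order model of the prediction error as a function of the three parameters and minimising it inside the design region. The sketch below is a generic Python analogue of what Design Expert does; the design-run file, column names and bounds are hypothetical placeholders, not the study's data.

```python
# Sketch of the RSM step: fit a quadratic model of RMSE(pH, T, NH2Cl_0) from a set of
# WQNM design runs, then minimise it within the experimental region.
# The CSV of design runs and the bounds are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

runs = pd.read_csv("rsm_design_runs.csv")               # columns: pH, temp_C, nh2cl0_mgL, rmse
X = runs[["pH", "temp_C", "nh2cl0_mgL"]].to_numpy()
poly = PolynomialFeatures(degree=2, include_bias=True)
surface = LinearRegression().fit(poly.fit_transform(X), runs["rmse"])

def predicted_rmse(x):
    return surface.predict(poly.transform(x.reshape(1, -1)))[0]

bounds = [(7.0, 8.5), (10.0, 40.0), (2.0, 5.0)]          # stay inside the design region
best = minimize(predicted_rmse, x0=np.array([7.7, 25.0, 4.0]), bounds=bounds)
print("Optimal (pH, T, NH2Cl_0):", best.x, "predicted RMSE:", best.fun)
```

Keeping the bounds at the experimental levels mirrors the explicit constraints used in the study to avoid extrapolation.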

Keywords: chloramine decay, modelling, response surface methodology, water quality parameters

Procedia PDF Downloads 225
509 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets

Authors: Debjit Ray

Abstract:

Horizontal gene transfer (HGT) and recombination lead to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus, defining the phylogenetic range of HGT, and finding potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires accurately sequenced, complete and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or “long read” sequencing data to accompany short-read paired-end data. To bring down the cost and time required to produce assembled genomes and annotate genome features that inform drug resistance and pathogenicity, we are analyzing the genome-assembly performance of data from the Illumina NextSeq, which has faster throughput than the Illumina HiSeq (~1-2 days versus ~1 week), and shorter reads (150 bp paired-end versus 300 bp paired-end) but higher capacity (150-400M reads per run versus ~5-15M) compared to the Illumina MiSeq. Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0 running on a standard Linux blade are capable, in a few hours, of converting mixes of reads from different library preps into high-quality assemblies with only a few gaps. Remaining breaks in scaffolds, generally due to repeats (e.g., rRNA genes), are addressed by our gap-closure software, which avoids custom PCR or targeted sequencing. Our goal is to improve the understanding of the emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1000 pathogen genomes. Machine learning algorithms will be used to digest the diverse features (changes in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether the origin of a particular isolate is likely to have been from the environment (could it have evolved from previous isolates?). This can be useful for comparing differences in virulence along or across the tree. More intriguingly, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs and of multidrug resistance evolution and pathogen emergence.
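
A minimal sketch of the assembly step is given below: SPAdes is run on a paired-end Illumina library and the remaining gaps in the scaffolds are counted. The file paths are hypothetical placeholders; the production pipeline described above additionally feeds mate-pair or long-read libraries and applies in-house gap-closure software afterwards.

```python
# Sketch: assemble a paired-end Illumina library with SPAdes and count scaffold gaps.
# File paths are hypothetical placeholders.
import subprocess
from Bio import SeqIO

subprocess.run([
    "spades.py", "--careful",
    "-1", "isolate_R1.fastq.gz",
    "-2", "isolate_R2.fastq.gz",
    "-o", "assembly_out",
], check=True)

gaps = sum(rec.seq.count("N") for rec in SeqIO.parse("assembly_out/scaffolds.fasta", "fasta"))
print("Unresolved bases (N) across scaffolds:", gaps)
```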

Keywords: genomics, pathogens, genome assembly, superbugs

Procedia PDF Downloads 197
508 The Superior Performance of Investment Bank-Affiliated Mutual Funds

Authors: Michelo Obrey

Abstract:

Traditionally, mutual funds have long been esteemed as stand-alone entities in the U.S. However, the prevalence of fund families’ affiliation with financial conglomerates is eroding this striking feature. Mutual fund families’ affiliation with financial conglomerates can potentially be an important source of superior performance, or of cost, to the affiliated mutual fund investors. On the one hand, financial conglomerate affiliation offers the mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, conflict of interest is bound to arise between the financial conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private material information advantage possessed by the investment banks, or whether it costs affiliated mutual fund shareholders due to the conflict of interest. Robust to alternative risk adjustments and cross-section regression methodologies, this paper finds that investment bank-affiliated mutual funds significantly outperform mutual funds that are not affiliated with an investment bank. Interestingly, the paper finds that the outperformance is confined to holding return, a return measure that captures investment talent uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that the investment bank-affiliated mutual funds specialize in hard-to-value stocks, which are not more likely to be held by unaffiliated funds. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds attract superior stock-picking talent. Overall, the paper’s findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, thus strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is prevalent not only in fund families but also in financial organizations, such as investment banks, that have affiliated mutual fund families. The results show evidence of the exploitation of synergies, such as private material information sharing, that benefit mutual fund investors due to affiliation with a financial conglomerate. However, this research also has a normative dimension: such incestuous behavior of insider trading and exploitation of superior information not only negatively affects unaffiliated fund investors but also leads to an unfair and unlevel playing field in the financial market.
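
The risk-adjusted performance comparison can be illustrated with a standard four-factor time-series regression of fund excess returns, with the intercept interpreted as alpha. This is a generic sketch of the risk adjustment, not the paper's exact specification; the returns file and column names are hypothetical.

```python
# Sketch: Carhart four-factor alpha for affiliated vs. unaffiliated funds.
# Generic risk-adjustment illustration; the monthly-returns file is a hypothetical placeholder.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("fund_monthly_returns.csv")            # ret, rf, mktrf, smb, hml, umd, affiliated

def annualized_alpha(group):
    y = group["ret"] - group["rf"]                       # fund excess return
    X = sm.add_constant(group[["mktrf", "smb", "hml", "umd"]])
    return sm.OLS(y, X).fit().params["const"] * 12       # annualised intercept (alpha)

print("Affiliated funds alpha:  ", annualized_alpha(df[df["affiliated"] == 1]))
print("Unaffiliated funds alpha:", annualized_alpha(df[df["affiliated"] == 0]))
```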

Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank

Procedia PDF Downloads 188
507 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

The application of Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with this are risks, such as system vulnerabilities and security threats. The probability of these risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open environment that could be unattended, in addition to the resource constraints in terms of processing, storage and power, places such networks under stringent limitations such as lifetime (i.e., period of operation) and security. The importance of WSN applications, which can be found in many military and civilian domains, has drawn the attention of many researchers to their security. To address this important issue and overcome one of the main challenges of WSNs, security solution systems have been developed by researchers. Those solutions are software-based network Intrusion Detection Systems (IDSs). However, it has been witnessed that those developed IDSs are neither secure enough nor accurate enough to detect all malicious behaviours of attacks. Thus, the problem is the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption in WSNs caused by IDSs. In other words, not all requirements are implemented and then traced; moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks in current IDSs are due to researchers and developers not following structured software development processes when developing IDSs. Consequently, this results in inadequate requirement management, process, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads in industrial applications. Therefore, this paper will study the importance of Requirement Engineering when developing IDSs. It will also study a set of existing IDSs and illustrate the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding applying requirement engineering to systems to deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy and reliability.

Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN

Procedia PDF Downloads 322
506 Effect of Multi-Walled Carbon Nanotubes on Fuel Cell Membrane Performance

Authors: Rabindranath Jana, Biswajit Maity, Keka Rana

Abstract:

The most promising clean energy source is the fuel cell, since it does not generate toxic gases or other hazardous compounds. The direct methanol fuel cell (DMFC) is particularly user-friendly, as it is easy to miniaturize and well suited as an energy source for automobiles as well as domestic applications and portable devices. And unlike the hydrogen used for some fuel cells, methanol is a liquid that is easy to store and transport in conventional tanks. The most important part of a fuel cell is its membrane. To date, the overall efficiency reported for a methanol fuel cell is about 20-25%. The lower efficiency of the cell may be due to critical factors such as slow reaction kinetics at the anode and methanol crossover. The oxidation of methanol is composed of a series of successive reactions creating formaldehyde and formic acid as intermediates, which contribute to slow reaction rates and decreased cell voltage. Currently, the investigation of new anode catalysts to improve oxidation reaction rates is an active area of research as it applies to the methanol fuel cell. Surprisingly, there are very limited reports on nanostructured membranes, which are rather simple to manufacture with different tuneable compositions and are expected to allow only proton permeation but not methanol, due to their molecular sizing effects and affinity to the membrane surface. We have developed a nanostructured fuel cell membrane from polydimethyl siloxane rubber (PDMS), ethylene methyl co-acrylate (EMA) and multi-walled carbon nanotubes (MWNTs). The effect of incorporating different proportions of f-MWNTs in the polymer membrane has been studied. The introduction of f-MWNTs into the polymer matrix modified the polymer structure, and therefore the properties of the device. The proton conductivity, measured by an AC impedance technique using an open-frame, two-electrode cell, and the methanol permeability of the membranes were found to depend on the f-MWNT loading. The proton conductivity of the membranes increases with increasing f-MWNT concentration due to the increased content of conductive material. Methanol permeabilities measured at 60 °C were also found to depend on the f-MWNT loading: the methanol permeability decreased from 1.5 x 10⁻⁶ cm²/s for the pure film to 0.8 x 10⁻⁷ cm²/s for a membrane containing 0.5 wt% f-MWNTs. This is because, with an increasing proportion of f-MWNTs, the matrix becomes more compact. DSC melting curves show that the polymer matrix with f-MWNTs is thermally stable. FT-IR studies show good interaction between EMA and f-MWNTs. XRD analysis shows good crystalline behavior of the prepared membranes. Significant cost savings can be achieved when using the blended films, which contain less expensive polymers.
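
For context, proton conductivity obtained from AC impedance is commonly computed as sigma = L / (R x A), where L is the membrane thickness, R the bulk resistance taken from the impedance spectrum and A the electrode contact area. The short sketch below only shows this arithmetic with illustrative placeholder numbers, which are not the measured values for the reported membranes.

```python
# Illustrative conversion of an AC-impedance bulk resistance into proton conductivity,
# sigma = L / (R * A). The numbers are placeholders, not the reported membrane values.
thickness_cm = 0.015        # membrane thickness L (cm)
area_cm2 = 1.0              # electrode contact area A (cm^2)
resistance_ohm = 25.0       # bulk resistance R from the impedance spectrum (ohm)

sigma = thickness_cm / (resistance_ohm * area_cm2)      # S/cm
print(f"Proton conductivity: {sigma:.4f} S/cm")
```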

Keywords: fuel cell membrane, polydimethyl siloxane rubber, carbon nanotubes, proton conductivity, methanol permeability

Procedia PDF Downloads 413
505 A Mixed-Method Study Exploring Expressive Writing as a Brief Intervention Targeting Mental Health and Wellbeing in Higher Education Students: A Focus on the Qualitative Findings

Authors: Deborah Bailey-Rodriguez, Maria Paula Valdivieso Rueda, Gemma Reynolds

Abstract:

In recent years, the mental health of Higher Education (HE) students has been a growing concern. This has been further exacerbated by the stresses associated with the Covid-19 pandemic, placing students at even greater risk of developing mental health issues. Support available to students in HE tends to follow an established and traditional route. The demand for counseling services has grown, not only with the increase in student numbers but also with the number of students seeking support for mental health issues, with 94% of HE institutions recently reporting an increase in the need for counseling services. One way of improving the well-being and mental health of HE students is through the use of brief interventions, such as expressive writing (EW). This intervention involves encouraging individuals to write continuously for at least 15-20 minutes for three to five sessions (often on consecutive days) about their deepest thoughts and feelings, to explore significant personal experiences in a meaningful way. Given the brevity, simplicity and cost-effectiveness of EW, this intervention has considerable potential for HE populations. The current study, therefore, employed a mixed-methods design to explore the effectiveness of EW in reducing anxiety, general stress, academic stress and depression in HE students while improving well-being. HE students at MDX were randomly assigned to one of three conditions: (1) the UniExp-EW group was required to write about their emotions and thoughts about any stressors they have faced that are directly relevant to their university experience; (2) the NonUniExp-EW group was required to write about their emotions and thoughts about any stressors that are NOT directly relevant to their university experience; and (3) the Control group was required to write about how they spent their weekend, with no reference to thoughts or emotions, and without thinking about university. Participants were required to carry out the EW intervention for 15 minutes per day for four consecutive days. Baseline mental health and well-being measures were taken before the intervention via a battery of standardized questionnaires. Following completion of the intervention on day four, participants were required to complete the questionnaires a second time, and again one week later. Participants were also invited to attend focus groups to discuss their experience of the intervention. This allows an in-depth investigation into students’ perceptions of EW as an effective intervention, to determine whether they would choose to use this intervention in the future. Preliminary findings will be discussed at the conference, together with the important implications of those findings. The study is important because, if EW is an effective intervention for improving mental health and well-being in HE students, its brevity and simplicity mean it can be easily implemented and made freely available to students. Improving the mental health and well-being of HE students can have knock-on implications for improving academic skills and career development.

Keywords: expressive writing, higher education, psychology in education, mixed-methods, mental health, academic stress

Procedia PDF Downloads 69
504 Political Economy and Human Rights Engaging in Conversation

Authors: Manuel Branco

Abstract:

This paper argues that mainstream economics is one of the reasons for the difficulty in fully realizing human rights, because its logic is intrinsically contradictory to human rights, most especially economic, social and cultural rights. First, its utilitarianism, in both its cardinal and ordinal understanding, contradicts human rights principles. Maximizing aggregate utility along the lines of cardinal utility is a theoretical exercise that consists in ensuring as much as possible that gains outweigh losses in society. In this process, however, an individual may be made worse off. While mainstream logic is comfortable with this, human rights logic is not. Indeed, universality is a key principle in human rights, and for this reason the maximization exercise should aim at satisfying all citizens’ requests when goods and services necessary to secure human rights are at stake. The ordinal version of utilitarianism, in turn, contradicts the human rights principle of indivisibility. Contrary to ordinal utility theory, which ranks baskets of goods, human rights do not accept ranking when these goods and services are necessary to secure human rights. Second, by relying preferentially on market logic to allocate goods and services, mainstream economics contradicts human rights because the intermediation of money prices and the purpose of profit may cause exclusion, thus compromising the principle of universality. Finally, mainstream economics sees human rights mainly as constraints on the development of its logic. According to this view, securing human rights would then be considered a cost weighing on economic efficiency and, therefore, something to be minimized. Fully realizing human rights therefore requires a different approach. This paper discusses a human rights-based political economy. This political economy, among other characteristics, should give up mainstream economics’ narrow utilitarian approach, give up its belief that market logic should guide all exchanges of goods and services between human beings, and finally give up its view of human rights as constraints on rational choice and consequently on good economic performance. Giving up the narrow utilitarian approach means, first, embracing procedural utility and human rights-aimed consequentialism. Second, a more radical break can be imagined: non-utilitarian, or even anti-utilitarian, approaches may then emerge as alternatives, although these two standpoints are not necessarily mutually exclusive. Giving up market exclusivity means embracing decommodification. More specifically, this means an approach that takes into consideration the value produced outside the market and an allocation process no longer necessarily centered on money prices. Giving up the view of human rights as constraints means, finally, considering human rights as an expression of wellbeing and a manifestation of choice. This means, in turn, an approach that uses indicators of economic performance other than growth at the macro level and profit at the micro level, because what we measure affects what we do.

Keywords: economic and social rights, political economy, economic theory, markets

Procedia PDF Downloads 152
503 Nutritional Education in Health Resort Institutions in the Face of Demographic and Epidemiological Changes in Poland

Authors: J. Woźniak-Holecka, T. Holecki, S. Jaruga

Abstract:

Spa treatment is an important area of the health care system in Poland due to the increasing needs of the population and the historical conditions of this form of therapy. It extends the range of financing possibilities for the facilities and increases the potential of spa services, which is very important in the context of demographic and epidemiological changes. The main advantages of spa treatment services include their relatively wide availability, low risk of side effects, good patient tolerance, long-lasting curative effect and relatively low cost. In addition, patients should be provided with a proper diet and enabled to participate in health education and health promotion classes aimed at health problems consistent with the treatment profile. Challenges for global health care systems include a sharp increase in spending on benefits, the dynamic development of health technologies and growing social expectations. This requires extending the competences of health resort facilities in health promotion. Within each type of health resort institution in Poland, nutritional education services are implemented, aimed at creating and consolidating proper eating habits. Choosing the right diet can speed up recovery or become one of the methods of alleviating the symptoms of chronic diseases. During spa treatment, the patient learns the principles of rational nutrition and of dietotherapy adequate to his or her diseases. The aim of the project is to assess the frequency and quality of nutritional education provided to patients in health resort facilities from a nationwide perspective. The material for the study will be data obtained from in-depth interviews conducted among Heads of Nutrition Departments of selected institutions. The use of nutritional education in health resorts may be an important goal in implementing state health policy, as a useful tool to reduce the risk of diet-related diseases. Recognizing nutritional education in health resort institutions as a type of full-value health service can provide effective system support for health policy, including for seniors, due to the demographic changes currently occurring in the Polish population. Furthermore, it is necessary to increase the interest and motivation of patients to follow the recommendations of nutritional education, because this will bring tangible benefits for the long-term effects of therapy; care should also be taken with the form and methodology of nutritional education implemented in health resort institutions. Finally, it is necessary to construct an educational offer for selected groups of patients with the highest health needs: the elderly and the disabled. In conclusion, the system of nutritional education implemented in Polish health resort institutions should be subjected to global changes and strong systemic correction.

Keywords: health care system, nutritional education, public health, spa and treatment

Procedia PDF Downloads 114
502 Domestic Trade, Misallocation and Relative Prices

Authors: Maria Amaia Iza Padilla, Ibai Ostolozaga

Abstract:

The objective of this paper is to analyze how transportation costs between regions within a country can affect not only domestic trade but also the allocation of resources in a given region, aggregate productivity, and relative domestic prices (tradable versus non-tradable). There is a vast literature that analyzes the transportation costs faced by countries when trading with the rest of the world; this paper, however, focuses on the effect of transportation costs on domestic trade. Countries differ in their domestic road infrastructure and transport quality. There is also some literature that focuses on the effect of road infrastructure on price differences between regions, but not on relative prices at the aggregate level. This work is also related to the literature on resource misallocation. Finally, the paper is related to the literature analyzing the effect of trade on the development of the manufacturing sector. Using the World Bank Enterprise Survey database, we observe cross-country differences in the proportion of firms that consider transportation an obstacle. From the International Comparison Program, we obtain a significant negative correlation between GDP per worker and relative prices (manufacturing-sector prices relative to the service sector). Furthermore, there is a significant negative correlation between a country’s transportation quality and the relative price of manufactured goods with respect to the price of services in that country. This is consistent with the empirical evidence of a positive correlation between transportation quality and GDP per worker, on the one hand, and the negative correlation between GDP per worker and domestic relative prices, on the other. It is also shown that, within a country, the share of manufacturing firms whose main market is at the local (regional) level is negatively related to the quality of the transportation infrastructure within the country. Similarly, this quality index is positively related to the share of manufacturing firms whose main market is national or international. The data also show that countries with a higher proportion of manufacturing firms operating locally have higher relative prices. With this information in hand, the paper attempts to quantify the effects on the allocation of resources between and within sectors. The higher the trade barriers caused by transportation costs, the less efficient the allocation, which causes lower aggregate productivity. Second, a two-sector model is built in which regions within a country trade with each other. With respect to the manufacturing sector, it is found that countries with less trade between their regions will be characterized by a smaller variety of goods, less productive manufacturing firms on average, and higher relative prices of manufactured goods with respect to service-sector prices. Thus, the decline in the relative price of manufactured goods in more advanced countries could also be explained by the degree of trade between regions. This trade allows for efficient intra-industry allocation: traders are more productive, and resources are allocated more efficiently.

Keywords: misallocation, relative prices, TFP, transportation cost

Procedia PDF Downloads 84
501 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology

Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao

Abstract:

With the extensive use of high specific strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibration amplitudes and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of the sensors and actuators are important research topics, since they affect the level of vibration detection and reduction and the amount of energy provided by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring the fitness function based on eigenvalues and eigenvectors obtained for numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of sensor/actuator (s/a) pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton’s principle. The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force that excites the various modes of interest, and noting the locations of the sensors giving the largest average percentage effectiveness, measured by dividing each sensor’s output voltage by the maximum sensor voltage for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. It is shown that the resulting optimal sensor locations agree well with published optimal locations, but with much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed in these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
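
A minimal sketch of the sensor-ranking step described above is given below. It is not the authors' code: the voltage matrix is a random placeholder standing in for the finite-element output obtained with sensors at all candidate locations.

```python
# Rank candidate sensor locations by average percentage effectiveness.
# `voltages` is a hypothetical (n_modes x n_locations) array of sensor output
# amplitudes obtained from an FE model excited at the modes of interest.
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_locations = 6, 100
voltages = np.abs(rng.normal(size=(n_modes, n_locations)))  # placeholder data

# per mode: each sensor's output divided by the maximum output for that mode
effectiveness = 100.0 * voltages / voltages.max(axis=1, keepdims=True)

# average percentage effectiveness over the modes to be suppressed
avg_effectiveness = effectiveness.mean(axis=0)

# choose the six best locations for collocated sensor/actuator pairs
best_locations = np.argsort(avg_effectiveness)[::-1][:6]
print(best_locations, avg_effectiveness[best_locations])
```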

Keywords: optimisation, plate, sensor effectiveness, vibration control

Procedia PDF Downloads 232
500 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients

Authors: Abhijit Trailokya

Abstract:

Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America. These guidelines have advocated progressively lower LDL-C targets and more aggressive use of statin therapy. In Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate: there is a lack of information on existing treatment patterns, the profile of the patients being treated, and the factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine the lipid control status of high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India and enrolled male and female patients with high-risk dyslipidemia, aged 18 to 65 years, who had visited their respective physician at a hospital or healthcare center for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (< 70 mg/dL) on lipid-lowering therapy, and the association of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs, were analysed. Results: 3089 patients were enrolled in the study, of which 64% were males. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which may be due to an inability to follow therapeutic plans, poor compliance, or inadequate counselling by the physician. A physician’s lack of awareness of recent treatment guidelines might also contribute to patients’ poor adherence, for example by not adequately explaining the benefits and risks of a medication or by not taking into account the patient’s lifestyle and the cost of medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A higher proportion of patients had comorbid CVD and diabetes mellitus across all dyslipidemic patients. Conclusion: As per the European Society of Cardiology guidelines, the target LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on a suboptimal dosage of statin, so more aggressive, higher-dosage statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients.
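
For illustration only, the sketch below shows how the LDL-C goal-attainment rate could be computed from survey records; the data file and field names are hypothetical, and the figures in the comments simply restate the study's reported values.

```python
# Minimal sketch (hypothetical file and field names): share of patients with
# available LDL-C data who reached the < 70 mg/dL target on lipid-lowering therapy.
import pandas as pd

patients = pd.read_csv("dyslipidemia_survey.csv")  # hypothetical data file

with_ldl = patients.dropna(subset=["ldl_c_mg_dl"])
availability = 100 * len(with_ldl) / len(patients)      # ~95.2% in the study
at_goal = 100 * (with_ldl["ldl_c_mg_dl"] < 70).mean()   # ~7.7% in the study
print(f"LDL-C available: {availability:.1f}%, at goal: {at_goal:.1f}%")
```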

Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins

Procedia PDF Downloads 201
499 Investigating Sediment-Bound Chemical Transport in an Eastern Mediterranean Perennial Stream to Identify Priority Pollution Sources on a Catchment Scale

Authors: Felicia Orah Rein Moshe

Abstract:

Soil erosion has become a priority global concern, impairing water quality and degrading ecosystem services. In Mediterranean climates, following a long dry period, the onset of rain occurs when agricultural soils are often bare and most vulnerable to erosion. Early storms transport sediments and sediment-bound pollutants into streams, along with dissolved chemicals. This results in loss of valuable topsoil, water quality degradation, and potentially expensive dredged-material disposal costs. Information on the provenance of fine sediment and priority sources of adsorbed pollutants represents a critical need for developing effective control strategies aimed at source reduction. Modifying sediment traps designed for marine systems, this study tested a cost-effective method to collect suspended sediments on a catchment scale to characterize stream water quality during first-flush storm events in a flashy Eastern Mediterranean coastal perennial stream. This study investigated the Kishon Basin, deploying sediment traps in 23 locations, including 4 in the mainstream and one downstream in each of 19 tributaries, enabling the characterization of sediment as a vehicle for transporting chemicals. Further, it enabled direct comparison of sediment-bound pollutants transported during the first-flush winter storms of 2020 from each of 19 tributaries, allowing subsequent ecotoxicity ranking. Sediment samples were successfully captured in 22 locations. Pesticides, pharmaceuticals, nutrients, and metal concentrations were quantified, identifying a total of 50 pesticides, 15 pharmaceuticals, and 22 metals, with 16 pesticides and 3 pharmaceuticals found in all 23 locations, demonstrating the importance of this transport pathway. Heavy metals were detected in only one tributary, identifying an important watershed pollution source with immediate potential influence on long-term dredging costs. Simultaneous sediment sampling at first flush storms enabled clear identification of priority tributaries and their chemical contributions, advancing a new national watershed monitoring approach, facilitating strategic plan development based on source reduction, and advancing the goal of improving the farm-stream interface, conserving soil resources, and protecting water quality.

Keywords: adsorbed pollution, dredged material, heavy metals, suspended sediment, water quality monitoring

Procedia PDF Downloads 108
498 Conditional Relation between Migration, Demographic Shift and Human Development in India

Authors: Rakesh Mishra, Rajni Singh, Mukunda Upadhyay

Abstract:

Over the last few decades, the primary focus of development in India has shifted towards the working population. There has been a paradigm shift in the development approach, with the realization that the present demographic dividend has to be harnessed for sustainable development. Rapid urbanization and improved socioeconomic conditions within the country have catalyzed various forms of migration, resulting in a massive transfer of workforce between its states. The workforce plays a crucial role in the development of both the place from which people have out-migrated and the place where they currently reside. In India, people are found to migrate from relatively less developed states to well urbanized and developed states to meet their needs. Linking migration to the HDI at the place of destination, the regression coefficient (β̂) shows a positive association between them: the higher the HDI of a place, the higher the chance of earning there and hence the greater the likelihood that migrants choose that place as a new destination, and vice versa. This pull is partly offset by the higher cost of living, which dampens in-migration to the metro cities or megacities of these states while increasing mobility towards their suburban areas, and vice versa. The main objective of the study is to examine the role of migration in the demographic dividend of the place of destination, as well as for the population at the place of usual residence, with a special focus on highly urbanized states in India. The observed patterns of Indian migration also point to some new theories in the making. On analyzing the demographic dividend of these places, we find that Uttar Pradesh provides the maximum dividend to Maharashtra, West Bengal and Delhi, and the demographic dividend contributed by migrants is quite comparable to the natives’ share of the demographic dividend in these places. On analyzing data from the National Sample Survey 64th round and the Census of India 2001, we observe that for males in rural areas the share of unemployed persons declined by 9 percentage points (from 45% before migration to 36% after migration), and for females in rural areas the decline was nearly 12 percentage points (from 79% before migration to 67% after migration). The shares of unemployed males in both rural and urban areas, which were significant before migration, were reduced after migration, while the share of unemployed females in rural as well as urban areas remained almost negligible both before and after migration. The increase in the number of employed persons after migration thus provides an indication of changes in associated cofactors, such as health and education, at the place of destination and, arithmetically, at the place from which they migrated out. This paper presents evidence on the patterns of prevailing migration dynamics and the corresponding demographic benefits in India and its states, examines trends and effects, and discusses plausible explanations.
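
As a purely illustrative sketch, not the authors' model, the positive association between destination HDI and in-migration could be estimated with a simple state-level regression; the data file and variable names below are hypothetical.

```python
# OLS of (log) in-migration on destination-state HDI; hypothetical columns.
import pandas as pd
import statsmodels.formula.api as smf

states = pd.read_csv("state_migration.csv")  # hypothetical: one row per state

model = smf.ols("log_in_migrants ~ hdi_destination", data=states).fit()
# a positive, significant coefficient would mirror the association reported above
print(model.params["hdi_destination"], model.pvalues["hdi_destination"])
```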

Keywords: migration, demographic shift, human development index, multilevel analysis

Procedia PDF Downloads 387
497 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of whether an implicit time integration scheme is used. Consequently, some form of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to a specified target field v1. The sought initial field u0 minimizes a cost functional J, which measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J with respect to u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 obtained from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
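
The AGM loop described above can be summarised in a few lines. The sketch below is schematic rather than the authors' implementation: to keep it self-contained and runnable, the forward and adjoint Navier-Stokes solves are replaced by a toy 1D periodic diffusion operator (which is self-adjoint), while the segmentation and the high-order Laplacian regularizer follow the description in the abstract.

```python
# Schematic adjoint-gradient-method (AGM) loop with a toy forward/adjoint model.
import numpy as np

n, nu = 128, 1e-2
k2 = (2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)) ** 2  # squared wavenumbers

def forward(u0, t):
    """Toy forward solve: periodic 1D diffusion, u(t) = exp(-nu k^2 t) u0."""
    return np.fft.ifft(np.fft.fft(u0) * np.exp(-nu * k2 * t)).real

def adjoint(r, t):
    """Adjoint solve (the pure-diffusion operator is self-adjoint)."""
    return forward(r, t)

def agm_segment(v1, t_seg, lam=1e-12, step=0.5, n_iter=200):
    """Recover u0 so that forward(u0, t_seg) matches the target field v1,
    minimising J = 0.5*||u1 - v1||^2 + 0.5*lam*||Laplacian(u0)||^2."""
    u0 = v1.copy()                          # initial guess: the target itself
    for _ in range(n_iter):
        residual = forward(u0, t_seg) - v1  # misfit entering J
        grad = adjoint(residual, t_seg)     # dJ/du0 via the adjoint solve
        grad += lam * np.fft.ifft(k2 ** 2 * np.fft.fft(u0)).real  # regularizer
        u0 -= step * grad                   # gradient-descent update
    return u0

# Segmentation: split the reverse integration into short segments; each
# recovered initial field becomes the target of the next (earlier) segment.
x = np.linspace(0.0, 1.0, n, endpoint=False)
v_final = np.sin(2 * np.pi * 3 * x)
u0 = v_final
for _ in range(5):
    u0 = agm_segment(u0, t_seg=0.2)
```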

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 224
496 Modulation of Receptor-Activation Due to Hydrogen Bond Formation

Authors: Sourav Ray, Christoph Stein, Marcus Weber

Abstract:

A new class of drug candidates, initially derived from mathematical modeling of ligand-receptor interactions, activates the μ-opioid receptor (MOR) preferentially at acidic extracellular pH levels, as present in injured tissues. This is of commercial interest because it may preclude the adverse effects of conventional MOR agonists like fentanyl, which include, but are not limited to, addiction, constipation, sedation, and apnea. Animal studies indicate the importance of taking the pH value of the chemical environment of the MOR into account when designing new drugs. Hydrogen bonds (HBs) play a crucial role in stabilizing protein secondary structure and molecular interactions, such as ligand-protein interactions, and these bonds may depend on the pH value of the chemical environment. For the MOR, the antagonist naloxone and the agonist [D-Ala2,N-Me-Phe4,Gly5-ol]-enkephalin (DAMGO) form HBs with the ionizable residue HIS 297 at physiological pH to modulate signaling. However, such interactions were markedly reduced at acidic pH. Although fentanyl-induced signaling is also diminished at acidic pH, HBs with the HIS 297 residue are not observed at either acidic or physiological pH for this strong agonist of the MOR. Molecular dynamics (MD) simulations can provide greater insight into the interaction between the ligand of interest and the HIS 297 residue. Amino acid protonation states are adjusted to model the difference in system acidity. Unbiased and unrestrained MD simulations were performed with the ligand in the proximity of the HIS 297 residue. Ligand-receptor complexes were embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) bilayer to mimic the membrane environment. The occurrence of HBs between the different ligands and the HIS 297 residue of the MOR at acidic and physiological pH values was tracked across the various simulation trajectories. No HB formation was observed between fentanyl and the HIS 297 residue at either acidic or physiological pH. Naloxone formed some HBs with HIS 297 at pH 5, but no such HBs were noted at pH 7. Interestingly, DAMGO displayed an opposite, yet more pronounced, HB formation trend compared to naloxone: whereas only a marginal number of HBs was observed even at pH 5, HBs with HIS 297 were more stable and widely present at pH 7. HB formation thus plays no role in the interaction of fentanyl, and only a marginal role in the interaction of naloxone, with the HIS 297 residue of the MOR; however, HBs play a significant role in the DAMGO-HIS 297 interaction. Post DAMGO administration, these HBs might be crucial for the remediation of opioid tolerance and the restoration of opioid sensitivity. Although experimental studies concur with our observations regarding the influence of HB formation on the fentanyl and DAMGO interactions with HIS 297, the same cannot be conclusively stated for naloxone. Therefore, some other supplementary interactions might be responsible for the modulation of MOR activity by naloxone binding at pH 7 but not at pH 5. Further elucidation of the mechanism of naloxone action on the MOR could assist in the formulation of cost-effective naloxone-based treatments of opioid overdose or opioid-induced side effects.
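
A minimal sketch of how HB occurrence can be tracked across trajectory frames is given below. It is not the authors' analysis script: it uses a generic geometric criterion (donor-acceptor distance and donor-H-acceptor angle), and the per-frame coordinate triples are hypothetical inputs extracted, e.g., for the ligand donor group and the HIS 297 acceptor atom.

```python
# Geometric hydrogen-bond criterion applied frame by frame to an MD trajectory.
import numpy as np

def is_hbond(donor, hydrogen, acceptor, d_cut=3.5, angle_cut=150.0):
    """H-bond test for one frame; coordinates are 3-vectors in Angstrom.
    Criterion: donor-acceptor distance < d_cut and D-H...A angle > angle_cut."""
    d_da = np.linalg.norm(acceptor - donor)
    v1 = donor - hydrogen
    v2 = acceptor - hydrogen
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return d_da < d_cut and angle > angle_cut

def hbond_occupancy(frames):
    """Fraction of frames with an H-bond; `frames` is an iterable of
    (donor, hydrogen, acceptor) coordinate triples, one per trajectory frame."""
    hits = [is_hbond(d, h, a) for d, h, a in frames]
    return np.mean(hits) if hits else 0.0
```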

Keywords: effect of system acidity, hydrogen bond formation, opioid action, receptor activation

Procedia PDF Downloads 175
495 Evaluation of Tensile Strength of Natural Fibres Reinforced Epoxy Composites Using Fly Ash as Filler Material

Authors: Balwinder Singh, Veerpaul Kaur Mann

Abstract:

A composite material is formed by the combination of two or more phases or materials. Basalt fiber, derived from natural minerals, is a kind of fiber being introduced into the polymer composite industry because of its good mechanical properties, similar to those of synthetic fibers, its low cost and its environmental friendliness. There is also a rising trend towards the use of industrial wastes as fillers in polymer composites with the aim of improving the properties of the composites. The mechanical properties of fiber-reinforced polymer composites are influenced by various factors such as fiber length, fiber weight %, filler weight % and filler size. Thus, a detailed study has been carried out on the characterization of short-chopped basalt fiber-reinforced polymer matrix composites using fly ash as filler. Taguchi’s L9 orthogonal array has been used to develop the composites, considering fiber length (6, 9 and 12 mm), fiber weight % (25, 30 and 35%) and filler weight % (0, 5 and 10%) as input parameters with their respective levels, and a thorough analysis of the mechanical characteristics (tensile strength and impact strength) has been carried out using ANOVA with the help of MINITAB 14 software; a comparable analysis is sketched after this abstract. The investigation revealed that fiber weight % is the most significant parameter affecting tensile strength, followed by fiber length and filler weight %, respectively, while the impact characterization showed that fiber length is the most significant factor, followed by fly ash weight. The introduction of fly ash proved beneficial in both characterizations, with enhanced values up to 5% fly ash weight. The present study on natural fibre reinforced epoxy composites using fly ash as filler material examines the effect of the input parameters on tensile strength in order to maximize the tensile strength of the composites. Composites were fabricated based on a Taguchi L9 orthogonal array design of experiments using three factors, fibre type, fibre weight % and fly ash %, with three levels of each factor. The optimization of the composition of natural fibre reinforced composites using ANOVA to obtain maximum tensile strength revealed that natural fibres along with fly ash can be successfully used with epoxy resin to prepare polymer matrix composites with good mechanical properties. Paddy fibre gives high elasticity to the composite due to the approximately hexagonal structure of the cellulose present in the fibre. Coir fibre gives lower tensile strength than paddy fibre, as coir fibre is brittle in nature and breaks when pulled, showing lower tensile strength. Banana fibre has the least tensile strength in comparison to paddy and coir fibre due to its lower cellulose content. Higher fibre weight leads to a reduction in tensile strength due to an increased number of air pockets. Increasing fly ash content reduces tensile strength due to poor bonding of the fly ash particles with the natural fibre; fly ash is also not as strong as the epoxy resin, which further reduces tensile strength.
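
The design-of-experiments workflow above can be illustrated with a short sketch. The analysis in the study was performed in MINITAB 14; the version below uses Python and statsmodels instead, with a standard L9 array and placeholder strength values, purely to show how factor significance would be assessed.

```python
# Taguchi L9 design for three 3-level factors with ANOVA on tensile strength.
# The nine strength values are placeholders, not the study's measurements.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# standard L9 orthogonal array: each factor level appears three times, balanced
l9 = pd.DataFrame({
    "fiber_len_mm":  [6, 6, 6, 9, 9, 9, 12, 12, 12],
    "fiber_wt_pct":  [25, 30, 35, 25, 30, 35, 25, 30, 35],
    "flyash_wt_pct": [0, 5, 10, 5, 10, 0, 10, 0, 5],
})
l9["tensile_mpa"] = [41, 44, 38, 47, 43, 40, 45, 42, 36]  # placeholder results

model = ols("tensile_mpa ~ C(fiber_len_mm) + C(fiber_wt_pct) + C(flyash_wt_pct)",
            data=l9).fit()
print(sm.stats.anova_lm(model, typ=2))  # factor contributions to the variance
```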

Keywords: basalt fiber, epoxy resin, natural fiber, polymer matrix, Taguchi, tensile strength

Procedia PDF Downloads 49
494 A Sustainable Pt/BaCe₁₋ₓ₋ᵧZrₓGdᵧO₃ Catalyst for Dry Reforming of Methane-Derived from Recycled Primary Pt

Authors: Alessio Varotto, Lorenzo Freschi, Umberto Pasqual Laverdura, Anastasia Moschovi, Davide Pumiglia, Iakovos Yakoumis, Marta Feroci, Maria Luisa Grilli

Abstract:

Dry reforming of methane (DRM) is considered one of the most valuable technologies for greenhouse gas valorization, since through this reaction it is possible to obtain syngas, a mixture of H₂ and CO in an H₂/CO ratio suitable for the Fischer-Tropsch synthesis of high value-added chemicals and fuels. Challenges of the DRM process include the costs associated with the high process temperature, the high cost of the precious metals in the catalyst, sintering of the metal particles, and carbon deposition on the catalyst surface. The aim of this study is to demonstrate the feasibility of synthesizing catalysts using a leachate solution containing Pt coming directly from the recovery of spent diesel oxidation catalysts (DOCs), without further purification. An unusual perovskite support for DRM, BaCe₁₋ₓ₋ᵧZrₓGdᵧO₃ (BCZG), has been chosen as the catalyst support because of its high thermal stability and its capability to produce oxygen vacancies, which suppress carbon deposition and enhance the catalytic activity of the catalyst. The BCZG perovskite has been synthesized by a sol-gel modified Pechini process and calcined in air at 1100 °C. BCZG supports have been impregnated with a Pt-containing leachate solution of DOC, obtained by a mild hydrometallurgical recovery process, as reported elsewhere by some of the authors of this manuscript. For comparison, a synthetic solution obtained by digesting commercial Pt-black powder in aqua regia was used for BCZG support impregnation. The nominal Pt content was 2% in both BCZG-based catalysts, formed from the real and the synthetic solution, respectively. The structure and morphology of the catalysts were characterized by X-Ray Diffraction (XRD) and Scanning Electron Microscopy (SEM). Thermogravimetric Analysis (TGA) was used to study the thermal stability of the catalyst samples, and Brunauer-Emmett-Teller (BET) analysis showed a high surface area for the catalysts. H₂-TPR (Temperature Programmed Reduction) analysis was used to study hydrogen consumption and reducibility, and it was combined with H₂-TPD characterization to study the dispersion of Pt on the surface of the support and to calculate the number of active sites of the precious metal. The dry reforming of methane (DRM) reaction, carried out in a fixed-bed reactor, showed a high conversion efficiency of CO₂ and CH₄. At 850 °C, the CO₂ and CH₄ conversions were close to 100% for the catalyst obtained with the aqua regia-based solution of commercial Pt-black, and ~70% (for CH₄) and ~80% (for CO₂) in the case of the real HCl-based leachate solution. The H₂/CO ratios were ~0.9 and ~0.70 in the former and latter cases, respectively. As far as we know, this is the first pioneering work in which a BCZG catalyst and a real Pt-containing leachate solution have been successfully employed for the DRM reaction.
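
For reference, the quantities reported above (reactant conversions and syngas ratio) follow the standard textbook definitions below; these relations are not taken from the paper itself.

```latex
% DRM reaction and standard definitions of conversion and syngas ratio
\begin{gather*}
  \mathrm{CH_4 + CO_2 \;\rightleftharpoons\; 2\,CO + 2\,H_2},
  \qquad \Delta H^{\circ}_{298} \approx +247\ \mathrm{kJ\,mol^{-1}} \\[4pt]
  X_{\mathrm{CH_4}} = \frac{\dot{n}_{\mathrm{CH_4,in}} - \dot{n}_{\mathrm{CH_4,out}}}{\dot{n}_{\mathrm{CH_4,in}}},
  \qquad
  X_{\mathrm{CO_2}} = \frac{\dot{n}_{\mathrm{CO_2,in}} - \dot{n}_{\mathrm{CO_2,out}}}{\dot{n}_{\mathrm{CO_2,in}}},
  \qquad
  \mathrm{H_2/CO} = \frac{\dot{n}_{\mathrm{H_2,out}}}{\dot{n}_{\mathrm{CO,out}}}
\end{gather*}
```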

Keywords: dry reforming of methane, perovskite, PGM, recycled Pt, syngas

Procedia PDF Downloads 37
493 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, buildings and modernization measures must be prioritized for a given planning horizon. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. In the first stage, the operation of different energy systems is calculated in simulation models in terms of the resulting final energy demands; these results then serve as input for a second-stage MILP optimization in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures thanks to the efficiency of MILP solvers, but it necessitates simplifying the operation of the building energy system. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions for building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results for the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
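
To make the MILP side of the comparison concrete, the sketch below shows a deliberately simplified, budget-constrained selection of modernization measures over a planning horizon, formulated with PuLP. It is not the authors' model: buildings, measures, costs and emission savings are invented, and it omits exactly the operational detail (system dynamics, part-load behavior of heat generators) whose simplification the paper investigates.

```python
# Toy MILP: pick modernization measures per building and year to minimise
# cumulative portfolio emissions under a yearly investment budget.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

buildings = ["B1", "B2", "B3"]
measures = {  # measure: (investment cost in EUR, yearly CO2 saving in t)
    "envelope": (60_000, 4.0),
    "heat_pump": (40_000, 6.5),
    "pv": (20_000, 2.0),
}
years = [2025, 2026, 2027]
budget_per_year = 80_000
baseline_emissions = {"B1": 12.0, "B2": 9.0, "B3": 15.0}  # t CO2 per year

prob = LpProblem("modernization_pathway", LpMinimize)
x = {(b, m, y): LpVariable(f"x_{b}_{m}_{y}", cat=LpBinary)
     for b in buildings for m in measures for y in years}

# objective: cumulative emissions over the horizon, minus savings from the
# year a measure is applied until the end of the horizon
prob += lpSum(baseline_emissions[b] * len(years) for b in buildings) \
      - lpSum(x[b, m, y] * measures[m][1] * (years[-1] - y + 1)
              for b in buildings for m in measures for y in years)

# each measure may be applied at most once per building
for b in buildings:
    for m in measures:
        prob += lpSum(x[b, m, y] for y in years) <= 1

# yearly budget constraint across the portfolio
for y in years:
    prob += lpSum(x[b, m, y] * measures[m][0]
                  for b in buildings for m in measures) <= budget_per_year

prob.solve()
chosen = [(b, m, y) for (b, m, y), var in x.items() if value(var) > 0.5]
print(chosen)  # the resulting modernization pathway for the toy portfolio
```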

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 34
492 Acrylate-Based Photopolymer Resin Combined with Acrylated Epoxidized Soybean Oil for 3D-Printing

Authors: Raphael Palucci Rosa, Giuseppe Rosace

Abstract:

Stereolithography (SLA) is one of the 3D-printing technologies that has been steadily growing in popularity for both industrial and personal applications due to its versatility, high accuracy, and low cost. Its printing process consists of using a light emitter to solidify photosensitive liquid resins layer by layer to produce solid objects. However, the majority of the resins used in SLA are derived from petroleum and characterized by toxicity, stability, and recalcitrance to degradation in natural environments. Aiming to develop an eco-friendly resin, in this work, different combinations of a standard commercial SLA resin (Peopoly UV Professional) with a vegetable-based resin were investigated. To this end, different mass concentrations (varying from 10 to 50 wt%) of acrylated epoxidized soybean oil (AESO), a vegetable resin produced from soybean oil, were mixed with a commercial acrylate-based resin. 1.0 wt% of diphenyl(2,4,6-trimethylbenzoyl)phosphine oxide (TPO) was used as photo-initiator, and the samples were printed using a Peopoly Moai 130. The machine was set to its standard configuration for printing commercial resins. After printing was finished, the excess resin was drained off, and the samples were washed in isopropanol and water to remove any unreacted resin. Finally, the samples were post-cured for 30 min in a UV chamber. FT-IR analysis was used to confirm the UV polymerization of the formulated resin with different AESO/Peopoly ratios. The signals from 1643.7 to 1616 cm⁻¹, which correspond to the C=C stretching of the AESO acrylic acid groups and the Peopoly acrylate groups, decrease significantly after the reaction. This decrease indicates the consumption of the double bonds during the radical polymerization. Furthermore, the slight shift of the C-O-C signal from 1186.1 to 1159.9 cm⁻¹ and the decrease of the signals at 809.5 and 983.1 cm⁻¹, which correspond to unsaturated double bonds, are both proofs of successful polymerization. Mechanical analyses showed a decrease of 50.44% in tensile strength when adding 10 wt% of AESO, but the value was still in the same range as other commercial resins. The elongation at break increased by 24% with 10 wt% of AESO, and swelling analysis showed that samples with a higher concentration of AESO absorbed less water than their counterparts. Furthermore, high-resolution prototypes were printed using both resins, and visual inspection did not show any significant difference between the two products. In conclusion, the AESO resin was successfully incorporated into a commercial resin without affecting its printability. The bio-based resin showed lower tensile strength than the Peopoly resin due to network loosening, but it was still in the range of other commercial resins. The hybrid resin also showed better flexibility and water resistance than the Peopoly resin without affecting its resolution. Finally, the development of new types of SLA resins is essential to provide sustainable alternatives to the commercial petroleum-based ones.

Keywords: 3D-printing, bio-based, resin, soybean, stereolithography

Procedia PDF Downloads 128
491 Mega Sporting Events and Branding: Marketing Implications for the Host Country’s Image

Authors: Scott Wysong

Abstract:

Qatar will spend billions of dollars to host the 2022 World Cup. While football fans around the globe get excited to cheer on their favorite team every four years, critics debate the merits of a country hosting such an expensive and large-scale event. That is, host countries spend billions of dollars on stadiums and infrastructure to attract these mega sporting events in the hope of commensurate returns in economic impact and job creation; yet, in many cases, the host countries are left in debt with decaying venues. There are, however, benefits beyond the economic impact of hosting mega-events. For example, citizens are often proud of their city or country hosting these famous events. Yet often overlooked in the literature is the proposition that serving as the host of a mega-event may enhance the country’s brand image, not only as a tourist destination but also for the products made in that country of origin. This research explores this phenomenon by taking an exploratory look at consumer perceptions of three host countries of mega-events in sports. In 2014, U.S., Chinese and Finnish consumer attitudes toward Brazil and its products were measured before and after the World Cup via surveys (n=89). An Analysis of Variance (ANOVA) revealed no statistically significant differences in the pre- and post-World Cup perceptions of Brazil’s brand personality or country-of-origin image. After the World Cup in 2018, qualitative interviews were held with U.S. sports fans (n=17) to further explore consumer perceptions of products made in the host country, Russia. A consistent theme of distrust and corruption associated with Russian products emerged despite the country hosting this prestigious global event. In late 2021, U.S. football (soccer) fans (n=42) and non-fans (n=37) were surveyed about the upcoming 2022 World Cup. A regression analysis revealed that the degree to which an individual identified as a soccer fan did not significantly influence their desire to visit Qatar or to try products from Qatar in the future, even though the country was hosting the World Cup. In the end, hosting a mega-event as grand as the World Cup showcases the country to the world, but it seems to have little impact on consumer perceptions of the country as a whole or of its brands. That is, the World Cup appeared to reinforce already existing stereotypes about Brazil (e.g., beaches, partying and fun, yet with crime and poverty), Russia (e.g., cold weather, vodka and business corruption) and Qatar (desert and oil). Moreover, across all three countries, respondents could rarely name a brand from the host country. Because mega-events cost a great deal of time and money, countries need to do more to market themselves and their brands when hosting. In addition, these countries would be wise to measure the impact of the event from different perspectives. Hence, we put forth a comprehensive future research agenda to further the understanding of how countries, and their brands, can benefit from hosting a mega sporting event.

Keywords: branding, country-of-origin effects, mega sporting events, return on investment

Procedia PDF Downloads 281
490 The Affordances and Challenges of Online Learning and Teaching for Secondary School Students

Authors: Hahido Samaras

Abstract:

In many cases, especially with the pandemic playing a major role in fast-tracking the growth of the digital industry, online learning has become a necessity or even a standard educational model, reliably overcoming barriers such as location, time and cost, and frequently being combined with a face-to-face format (e.g., in blended learning). This being the case, students in many parts of the world, as well as their parents, will increasingly need to become aware of the pros and cons of online versus traditional courses. This fast-growing mode of learning, accelerated during the years of the pandemic, presents an abundance of options especially suited to secondary school students in remote places of the world, where access to stimulating educational settings and to a variety of learning alternatives is scarce, and it adds advantages such as flexibility, affordability, engagement, flow and personalization of the learning experience. However, online learning also presents several challenges, such as a lack of student motivation and of social interaction in natural settings, digital literacy requirements, and technical issues, to name a few. Educational researchers will therefore need to conduct further studies focusing on the benefits and weaknesses of online versus traditional learning, while instructional designers propose ways of enhancing student motivation and engagement in virtual environments. Similarly, teachers will be required to become increasingly technology-capable, while also developing their knowledge of their students’ particular characteristics and needs so as to match them with the affordances the technology offers. And, of course, schools, education programs, and policymakers will have to invest in powerful tools and advanced courses for online instruction. By developing digital courses that incorporate intentional opportunities for community-building and interaction in the learning environment, and by taking care to include built-in design principles and strategies that align learning outcomes with learning assignments, activities, and assessment practices, rewarding academic experiences can result for all students. This paper raises various issues regarding the effectiveness of online learning for students by reviewing a large number of research studies on the usefulness and impact of online learning following the COVID-19-induced shift to digital education. It also discusses what students, teachers, decision-makers, and parents have reported about this mode of learning to date. Best practices are proposed for parties involved in the development of online learning materials, particularly for secondary school students, as there is a need for educators and developers to be increasingly concerned about the impact of virtual learning environments on student learning and wellbeing.

Keywords: blended learning, online learning, secondary schools, virtual environments

Procedia PDF Downloads 100