Search results for: Stuart Alexander Cook
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 396

156 Content Analysis of ‘Junk Food’ Content in Children’s TV Programmes: A Comparison of UK Broadcast TV and Video-On-Demand Services

Authors: Shreesh Sinha, Alexander B. Barker, Megan Parkin, Emma Wilson, Rachael L. Murray

Abstract:

Background and Objectives: Exposure to imagery of foods high in fat, sugar, or salt (HFSS) is associated with HFSS consumption, and subsequently obesity, among young people. We report and compare the results of two content analyses, one of two popular terrestrial children's television channels in the UK and the other of a selection of children's programmes available on video-on-demand (VOD) streaming sites. Methods: Content analysis of three days' worth of programmes (including advertisements) on two popular children's television channels broadcast on UK television (CBeebies and Milkshake), as well as a sample of the 40 highest-rated children's programmes available on the VOD platforms Netflix and Amazon Prime, using 1-minute interval coding. Results: HFSS content was seen in 181 broadcasts (36%) and in 417 intervals (13%) on terrestrial television; 'Milkshake' had a significantly higher proportion of programmes/adverts containing HFSS content than 'CBeebies'. On VOD platforms, HFSS content was seen in 82 episodes (72% of the total number of episodes), across 459 intervals (19% of the total number of intervals), with no significant difference in the proportion of programmes containing HFSS content between Netflix and Amazon Prime. Conclusions: This study demonstrates that HFSS content is common in both popular UK children's television channels and children's programmes on VOD services. Since previous research has shown that HFSS content in the media affects HFSS consumption, children's television programmes broadcast either on TV or on VOD services are likely to affect HFSS consumption in children, and legislative opportunities to prevent this exposure are being missed.

Keywords: public health, junk food, children's TV, HFSS

Procedia PDF Downloads 66
155 Compliance of Systematic Reviews in Plastic Surgery with the PRISMA Statement: A Systematic Review

Authors: Seon-Young Lee, Harkiran Sagoo, Katherine Whitehurst, Georgina Wellstead, Alexander Fowler, Riaz Agha, Dennis Orgill

Abstract:

Introduction: Systematic reviews attempt to answer research questions by synthesising the data within primary papers. They are an increasingly important tool within evidence-based medicine, guiding clinical practice, future research and healthcare policy. We sought to determine the reporting quality of recent systematic reviews in plastic surgery. Methods: This systematic review was conducted in line with the Cochrane handbook, reported in line with the PRISMA statement and registered at the Research Registry (UIN: reviewregistry18). The MEDLINE and EMBASE databases were searched for systematic reviews published in 2013 and 2014 in five major plastic surgery journals. Screening, identification and data extraction were performed independently by two teams. Results: From an initial set of 163 articles, 79 met the inclusion criteria. The median PRISMA score was 16 out of 27 items (59.3%; range 6-26, 95% CI 14-17). Compliance between individual PRISMA items showed high variability. It was poorest for items related to the use of a review protocol (item 5; 5%) and the presentation of data on the risk of bias of each study (item 19; 18%), while being highest for the description of rationale (item 3; 99%), sources of funding and other support (item 27; 95%), and the structured summary in the abstract (item 2; 95%). Conclusion: The reporting quality of systematic reviews in plastic surgery requires improvement. 'Hard-wiring' of compliance through journal submission systems, as well as improved education, awareness and a cohesive strategy among all stakeholders, is called for.

Keywords: PRISMA, reporting quality, plastic surgery, systematic review, meta-analysis

Procedia PDF Downloads 268
154 The Use of Prestige Language in Tennessee Williams’s "A Streetcar Named Desire"

Authors: Stuart Noel

Abstract:

In A Streetcar Named Desire, Tennessee Williams presents Blanche DuBois, a most complex and intriguing character who often uses prestige language to project the image of an upper-class speaker and to disguise her darker and complicated self. She embodies various fascinating and contrasting characteristics. Like New Orleans (the locale of the play), Blanche represents two opposing images. One image projects that of genteel, Southern charm and beauty, speaking formally and using prestige language and what some linguists refer to as "hypercorrection," and the other image reveals that of a soiled, deteriorating façade, full of decadence and illusion. Williams said on more than one occasion that Blanche's use of such language was a direct reflection of her personality and character (as a high school English teacher). Prestige language is an exaggeratedly elevated, pretentious, and oftentimes melodramatic form of one's language incorporating superstandard or more standard speech than usual in order to project a highly authoritative individual identity. Speech styles carry personal identification meaning not only because they are closely associated with certain social classes but because they tend to be associated with certain conversational contexts. Features which may be considered to be "elaborated" in form (for example, full forms vs. contractions) tend to cluster together in speech registers/styles which are typically considered to be more formal and/or of higher social prestige, such as academic lectures and news broadcasts. Members of higher social classes have access to the elaborated registers which characterize formal writings and pre-planned speech events, such as lectures, while members of lower classes are relegated to using the more economical registers associated with casual, face-to-face conversational interaction, since they do not participate in as many planned speech events as upper-class speakers. 
Tennessee Williams’s work is characteristically concerned with the conflict between the illusions of an individual and the reality of his/her situation equated with a conflict between truth and beauty. An examination of Blanche DuBois reveals a recurring theme of art and decay and the use of prestige language to reveal artistry in language and to hide a deteriorating self. His graceful and poetic writing personifies her downfall and deterioration. Her loneliness and disappointment are the things so often strongly feared by the sensitive artists and heroes in the world. Hers is also a special and delicate human spirit that is often misunderstood and repressed by society. Blanche is afflicted with a psychic illness growing out of her inability to face the harshness of human existence. She is a sensitive, artistic, and beauty-haunted creature who is avoiding her own humanity while hiding behind her use of prestige language. And she embodies a partial projection of Williams himself.

Keywords: American drama, prestige language, Southern American literature, Tennessee Williams

Procedia PDF Downloads 349
153 Estimation of Endogenous Brain Noise from the Brain Response to Flickering Visual Stimulation: Magnetoencephalography and Visual Perception Speed

Authors: Alexander N. Pisarchik, Parth Chholak

Abstract:

Intrinsic brain noise was estimated via magnetoencephalograms (MEG) recorded during the perception of flickering visual stimuli with frequencies of 6.67 and 8.57 Hz. First, we measured the mean phase difference between the flicker signal and the steady-state event-related field (SSERF) in the occipital area, where the brain response at the flicker frequencies and their harmonics appeared in the power spectrum. Then, we calculated the probability distribution of the phase fluctuations in the regions of frequency locking and computed its kurtosis. Since kurtosis is a measure of the distribution's sharpness, we suppose that inverse kurtosis is related to intrinsic brain noise. In our experiments, the kurtosis value varied among subjects from K = 3 to K = 5 for 6.67 Hz and from 2.6 to 4 for 8.57 Hz. The majority of subjects demonstrated leptokurtic distributions (K > 3), i.e., the distribution tails approached zero more slowly than a Gaussian. In addition, we found a strong correlation between kurtosis and brain complexity measured as the correlation dimension, such that the MEGs of subjects with higher kurtosis exhibited lower complexity. The obtained results are discussed in the framework of nonlinear dynamics and complex network theories. Specifically, in a network of coupled oscillators, phase synchronization is mainly determined by two antagonistic factors: noise and the coupling strength. While noise worsens phase synchronization, coupling improves it. If we assume that each neuron and each synapse contribute to brain noise, a larger neuronal network should have stronger noise, and therefore phase synchronization should be worse, resulting in smaller kurtosis. The described method for brain noise estimation can be useful for the diagnostics of brain pathologies associated with abnormal brain noise.
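For illustration, the kurtosis computation at the heart of this method can be sketched as follows (Pearson kurtosis, for which K = 3 for a Gaussian; the data here are synthetic stand-ins, not MEG phase recordings):

```python
import numpy as np

def pearson_kurtosis(samples):
    """Pearson kurtosis: fourth central moment over squared variance.
    K = 3 for a Gaussian; K > 3 (leptokurtic) means heavier tails,
    which this method associates with lower intrinsic noise."""
    x = np.asarray(samples, dtype=float)
    m = x.mean()
    return ((x - m) ** 4).mean() / x.var() ** 2

rng = np.random.default_rng(0)
gaussian_phase = rng.normal(0.0, 0.1, 200_000)       # Gaussian: K = 3
laplace_phase = rng.laplace(0.0, 0.1, 200_000)       # Laplace: K = 6

print(pearson_kurtosis(gaussian_phase))   # close to 3
print(pearson_kurtosis(laplace_phase))    # well above 3 (leptokurtic)
```

Under this sketch, the inverse of the returned value would serve as the noise estimate compared across subjects.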

Keywords: brain, flickering, magnetoencephalography, MEG, visual perception, perception time

Procedia PDF Downloads 111
152 Sports Business Services Model: A Research Model Study in Regional Sport Authority of Thailand

Authors: Siriraks Khawchaimaha, Sangwian Boonto

Abstract:

The Sport Authority of Thailand (SAT) is a state enterprise that promotes and supports all kinds of sports, both professional and amateur, for competition, and is administered under government policy by government officers. Consequently, all financial flows, both cash inflows and cash outflows, are strictly committed to the government budget and limited to projects planned at least 12 to 16 months in advance, resulting in ineffectiveness in sports events, administration, and competitions. To remain competitive in sports worldwide, SAT needs its own sports business services model for each stadium, region, and set of athlete competencies. Based on the HMK model of Khawchaimaha, S. (2007), this research study examines each of the 10 regional stadiums in detail, covering the root characteristics of fans, athletes, coaches, equipment and facilities, and stadiums. The research design begins with an evaluation of external factors: the hardware, namely competition and practice stadiums, playgrounds, facilities, and equipment. The second step is to understand the software: the organizational structure, staff and management, administrative model, and rules and practices, in addition to budget allocation and budget administration with operating and expenditure plans. The third step identifies issues and limitations that require an action plan for further development and support, or a decision to discontinue unskilled sports. In the final step, the HMK model and the business model canvas of Alexander O. and Yves P. (2010) are used as templates to generate a Sports Business Services Model for each of the 10 SAT regional stadiums.

Keywords: HMK model, not for profit organization, sport business model, sport services model

Procedia PDF Downloads 281
151 Trading off Accuracy for Speed in PowerDrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration of log data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimizing performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, and so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of using sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings down the 95th latency percentile from 30 to 4 seconds.
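The sampling heuristic can be illustrated with a toy estimator (our own hedged sketch, not PowerDrill's implementation): a filtered count is estimated from a uniform sample and annotated as accurate only when enough sampled rows matched, mimicking the per-value accuracy annotation described above.

```python
import math
import random

def approx_count(records, predicate, rate=0.01, min_hits=100, seed=0):
    """Estimate a filtered count from a uniform sample.

    Returns (estimate, is_accurate, relative_error_hint). The result is
    flagged accurate only when enough sampled rows matched -- a simple
    stand-in for annotating individual result values as accurate or not.
    """
    rng = random.Random(seed)
    # Keep each record with probability `rate`, then count matches.
    hits = sum(1 for r in records if rng.random() < rate and predicate(r))
    estimate = hits / rate
    rel_err = 1.0 / math.sqrt(hits) if hits else float("inf")
    return estimate, hits >= min_hits, rel_err

# Hypothetical workload: count even values among a million records.
est, accurate, err = approx_count(range(1_000_000), lambda r: r % 2 == 0)
print(accurate)  # True: plenty of sampled matches, so the flag is set
```

In a real system the threshold would be tuned against the acceptable relative error rather than fixed at 100 hits.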

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 229
150 A Content Analysis of ‘Junk Food’ Content in Children’s TV Programs: A Comparison of UK Broadcast TV and Video-On-Demand Services

Authors: Alexander B. Barker, Megan Parkin, Shreesh Sinha, Emma Wilson, Rachael L. Murray

Abstract:

Objectives: Exposure to imagery of foods high in fat, sugar, or salt (HFSS) is associated with HFSS consumption, and subsequently obesity, among young people. We report and compare the results of two content analyses, one of two popular terrestrial children's television channels in the UK and the other of a selection of children's programs available on video-on-demand (VOD) streaming sites. Design: Content analysis of three days' worth of programs (including advertisements) on two popular children's television channels broadcast on UK television (CBeebies and Milkshake), as well as a sample of the 40 highest-rated children's programs available on the VOD platforms Netflix and Amazon Prime, using 1-minute interval coding. Setting: United Kingdom. Participants: None. Results: HFSS content was seen in 181 broadcasts (36%) and in 417 intervals (13%) on terrestrial television; 'Milkshake' had a significantly higher proportion of programs/adverts containing HFSS content than 'CBeebies'. On VOD platforms, HFSS content was seen in 82 episodes (72% of the total number of episodes), across 459 intervals (19% of the total number of intervals), with no significant difference in the proportion of programs containing HFSS content between Netflix and Amazon Prime. Conclusions: This study demonstrates that HFSS content is common in both popular UK children's television channels and children's programs on VOD services. Since previous research has shown that HFSS content in the media affects HFSS consumption, children's television programs broadcast either on TV or on VOD services are likely to affect HFSS consumption in children, and legislative opportunities to prevent this exposure are being missed.

Keywords: public health, epidemiology, obesity, content analysis

Procedia PDF Downloads 149
149 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting

Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero

Abstract:

In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in the processes of the artistic foundry with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost-wax technique of Ceramic Shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. The shell is heated to melt out the PLA, leaving an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce the cost and time compared with hand-modeling in wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, the sculptures created with this technique have a size limit. The problem is that when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP printer allows obtaining more detailed and smaller pieces than the FDM printer. Such small models are quite difficult and complex to melt using the lost-wax technique of Ceramic Shell casting. As alternatives, there are microcasting and electroforming, which are specialized in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of placing the model in a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt out the model and cure the mold. Finally, the metal is poured into the still-hot cylinders, which rotate at high speed in a machine to distribute the metal properly. Because microcasting requires expensive material and machinery to cast a metal piece, electroforming is an alternative to this process. Electroforming can use models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for 3D printing, both with PLA and with resin, and first tests are being conducted to validate the electroforming process for micro-sculptures printed in resin using a DLP printer.

Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling

Procedia PDF Downloads 107
148 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria

Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov

Abstract:

This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain.
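As a minimal sketch of the KNN step described above (with invented feature names and toy data, not the actual Benpoly admission records or the study's full pipeline):

```python
import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    """Minimal k-nearest-neighbours classifier: Euclidean distance,
    majority vote among the k closest training rows."""
    preds = []
    for x in np.atleast_2d(X_new):
        dists = np.linalg.norm(X_train - x, axis=1)      # distance to each row
        nearest = y_train[np.argsort(dists)[:k]]          # labels of k closest
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])           # majority vote
    return np.array(preds)

# Hypothetical features: [entrance-exam score, O-level credits]
X = np.array([[70, 6], [82, 7], [65, 5], [40, 2], [35, 3], [50, 1]], float)
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = admitted, 0 = not admitted
print(knn_predict(X, y, [[75, 6], [38, 2]]))  # → [1 0]
```

In the study itself the features would first be selected and validated through the SCM ontology before being fed to the classifier.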

Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model

Procedia PDF Downloads 21
147 Modified Clusterwise Regression for Pavement Management

Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella

Abstract:

Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. The prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters using both characteristics and a performance measure. This grouping is not always possible due to limited information. It is impractical to include all the potentially significant factors because some of them are unobserved or difficult to measure. The historical performance of pavement segments can be used as a proxy to incorporate the effect of the missing significant factors in the clustering process. The current state of the art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously. CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. The International Roughness Index (IRI) was used as the pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. Results illustrate the advantage of using CLR. Previous studies have used CLR with experimental data; this study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions.
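The alternating scheme behind CLR can be sketched as follows; this is a simplified illustration on synthetic two-regime data, not the study's mathematical program or the Nevada dataset:

```python
import numpy as np

def clusterwise_regression(X, y, k=2, iters=25, seed=0):
    """Alternate between (i) refitting one least-squares model per
    cluster and (ii) reassigning each segment to the model that fits
    it best. The total squared residual never increases."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(y))     # random initial clusters
    betas = [np.zeros(X1.shape[1]) for _ in range(k)]
    for _ in range(iters):
        for c in range(k):
            idx = labels == c
            if idx.sum() >= X1.shape[1]:         # skip degenerate clusters
                betas[c], *_ = np.linalg.lstsq(X1[idx], y[idx], rcond=None)
        resid = np.stack([(y - X1 @ b) ** 2 for b in betas])
        labels = resid.argmin(axis=0)            # reassign each segment
    return labels, betas, resid.min(axis=0).sum()

# Two hidden regimes: y = 2x and y = -2x + 3
x = np.linspace(0.0, 1.0, 100)
X = np.concatenate([x, x]).reshape(-1, 1)
y = np.concatenate([2 * x, -2 * x + 3])
labels, betas, sse = clusterwise_regression(X, y)

# A single global fit can never beat the clusterwise objective.
X1 = np.column_stack([np.ones(len(y)), X])
beta_g, *_ = np.linalg.lstsq(X1, y, rcond=None)
sse_global = ((y - X1 @ beta_g) ** 2).sum()
print(sse <= sse_global)  # True
```

The actual study solves this jointly as a mathematical program rather than by heuristic alternation, but the objective being minimized is the same.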

Keywords: clusterwise regression, pavement management system, performance model, optimization

Procedia PDF Downloads 226
146 More Precise: Patient-Reported Outcomes after Stroke

Authors: Amber Elyse Corrigan, Alexander Smith, Anna Pennington, Ben Carter, Jonathan Hewitt

Abstract:

Background and Purpose: Morbidity secondary to stroke is highly heterogeneous, but it is important to both patients and clinicians in post-stroke management and adjustment to life after stroke. Post-stroke morbidity has been poorly measured, both clinically and from the patient perspective. Patient-reported outcome measures (PROs) in morbidity assessment help address this knowledge gap. The primary aim of this study was to consider the association between PRO outcomes and stroke predictors. Methods: A multicenter prospective cohort study assessed 549 stroke patients at 19 hospital sites across England and Wales during 2019. Following a stroke event, demographic, clinical, and PRO measures were collected. The prevalence of morbidity within PRO measures was calculated with associated 95% confidence intervals. Predictors of domain outcome were calculated using a multilevel generalized linear model. Associated P-values and 95% confidence intervals are reported. Results: Data were collected from 549 participants, 317 men (57.7%) and 232 women (42.3%), with ages ranging from 25 to 97 (mean 72.7). PRO morbidity was high post-stroke; 93.2% of the cohort reported post-stroke PRO morbidity. Previous stroke, diabetes, and gender were associated with worse patient-reported outcomes across both the physical and cognitive domains. Conclusions: This large-scale multicenter cohort study illustrates the high proportion of morbidity captured in PRO measures. Further, we demonstrate that key predictors of adverse outcomes (diabetes, previous stroke, and gender) are congruent with clinical predictors. PROs have been demonstrated to be informative and useful after stroke, with wider implications for the consideration of PROs in clinical management. Future longitudinal follow-up with PROs is needed to consider associations with long-term morbidity.

Keywords: morbidity, patient-reported outcome, PRO, stroke

Procedia PDF Downloads 103
145 Intelligent Fault Diagnosis for the Connection Elements of Modular Offshore Platforms

Authors: Jixiang Lei, Alexander Fuchs, Franz Pernkopf, Katrin Ellermann

Abstract:

Within the Space@Sea project, funded by the Horizon 2020 program, an island consisting of multiple platforms was designed. The platforms are connected by ropes and fenders. The connection is critical with respect to the safety of the whole system. Therefore, fault detection systems are investigated that could detect early warning signs of a possible failure in the connection elements. Previously, a model-based method using an Extended Kalman Filter was developed to detect the reduction of rope stiffness. This method detected several types of faults reliably, but some types of faults were much more difficult to detect. Furthermore, the model-based method is sensitive to environmental noise. When the wave height is low, a long time is needed to detect a fault, and the accuracy is not always satisfactory. In this sense, it is necessary to develop a more accurate and robust technique that can detect all rope faults under a wide range of operational conditions. Building on this work within the Space@Sea design, we introduce a fault diagnosis method based on deep neural networks. Our method can not only detect rope degradation by using the acceleration data from each platform but also estimate the contributions of the specific acceleration sensors using methods from explainable AI. In order to adapt to different operational conditions, the domain adaptation technique DANN (domain-adversarial neural network) is applied. The proposed model can accurately estimate rope degradation under a wide range of environmental conditions and help users understand the relationship between the output and the contributions of each acceleration sensor.

Keywords: fault diagnosis, deep learning, domain adaptation, explainable AI

Procedia PDF Downloads 151
144 Social Media Idea Ontology: A Concept for Semantic Search of Product Ideas in Customer Knowledge through User-Centered Metrics and Natural Language Processing

Authors: Martin Häusl, Maximilian Auch, Johannes Forster, Peter Mandl, Alexander Schill

Abstract:

In order to survive on the market, companies must constantly develop improved and new products. These products are designed to serve the needs of their customers in the best possible way. The creation of new products is also called innovation and is primarily driven by a company's internal research and development department. However, a new approach has been emerging for some years now, involving external knowledge in the innovation process. This approach is called open innovation and identifies customer knowledge as the most important source in the innovation process. This paper presents a concept for using social media posts as an external source to support the open innovation approach in its initial phase, the ideation phase. For this purpose, the social media posts are semantically structured with the help of an ontology, and the authors are evaluated using graph-theoretical metrics such as density. For the structuring and evaluation of relevant social media posts, we also use findings from Natural Language Processing, e.g., Named Entity Recognition, specific dictionaries, triple taggers and part-of-speech taggers. The selection and evaluation of the tools used are discussed in this paper. Using our ontology and metrics to structure social media posts enables users to semantically search these posts for new product ideas and thus gain improved insight into external sources such as customer needs.
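The density metric used to evaluate authors can be computed directly; a minimal helper for a simple interaction graph is sketched below (the example author graph is invented for illustration):

```python
def graph_density(n_nodes, n_edges, directed=False):
    """Density of a simple graph: actual edges over possible edges.
    Undirected: 2E / (V(V-1)); directed: E / (V(V-1))."""
    if n_nodes < 2:
        return 0.0
    possible = n_nodes * (n_nodes - 1)
    if not directed:
        possible //= 2
    return n_edges / possible

# Hypothetical author graph: 4 authors, 3 reply/mention edges
print(graph_density(4, 3))  # 3 / 6 = 0.5
```

A denser neighbourhood around an author would then count toward that author's weight when ranking extracted product ideas.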

Keywords: idea ontology, innovation management, semantic search, open information extraction

Procedia PDF Downloads 164
143 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they degrade over time. As a result, the big time-series data collected by the sensors will contain uncertainties and will sometimes be conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. the Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors needed for combination, so that computational efficiency can be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
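The detection step can be sketched as Page's cumulative sum (CUSUM) test on the Gaussian log-likelihood ratio; this is a simplified single-sensor stand-in for the pipeline above, with the mass-function fusion step omitted. Note that the expected per-sample drift after the change equals the KL divergence between the post- and pre-change models.

```python
import numpy as np

def cusum_alarm(x, mu0, mu1, sigma, h):
    """Page's CUSUM: accumulate the log-likelihood ratio of the
    post-change model (mean mu1) against the pre-change model (mean
    mu0), clipped at zero; alarm at the first index exceeding h."""
    llr = (np.asarray(x, float) - (mu0 + mu1) / 2) * (mu1 - mu0) / sigma**2
    s = 0.0
    for t, l in enumerate(llr):
        s = max(0.0, s + l)   # reset to zero whenever evidence is negative
        if s > h:
            return t          # change declared at sample t
    return None               # no change declared

# Mean steps from 0 to 1 at t = 50 (noise-free for a deterministic demo)
x = np.concatenate([np.zeros(50), np.ones(50)])
print(cusum_alarm(x, mu0=0.0, mu1=1.0, sigma=1.0, h=8.0))  # 66
print(cusum_alarm(np.zeros(100), 0.0, 1.0, 1.0, 8.0))      # None
```

In the framework above, the statistic would instead be driven by the ratio of pignistic probabilities after combining mass functions across sensors.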

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time-series data

Procedia PDF Downloads 303
142 Effect of Pipes and Ventilation Pipes Above on the Integrity of the DN300 Pipe and Valves in the Cooling Water System in an Earthquake Situation

Authors: Liang Zhang, Gang Xu, Yue Wang, Chen Li, Shao Chong Zhou

Abstract:

Presently, more and more nuclear power plants are facing the issue of life extension. When a nuclear power plant applies for a life extension, its condition must meet current design standards, which is not the case for all old reactors, particularly with respect to seismic design. Seismic-grade equipment in nuclear power plants is now generally installed separately from non-seismic-grade equipment, but this was not strictly required in the past. It is therefore very important to study whether non-seismic-grade equipment can damage seismic-grade equipment when it drops during an earthquake, a question directly related to the safety of nuclear power plants and to future life extension applications. This research used a cooling water system in which seismic-grade and non-seismic-grade equipment are installed together as an example to study whether non-seismic-grade equipment, such as DN50 fire pipes and ventilation ducts arranged above, will damage the DN300 pipes and valves arranged below during an earthquake. The simulation was carried out in ANSYS/LS-DYNA, with the Johnson-Cook model used as both the material model and the failure model. For the experiments, the relative positions of the objects in the room were reproduced at 1:1 scale, and the pipes and valves were filled with water at a pressure of 0.785 MPa. The pressure-holding performance of the pipe was used as the damage criterion; for the valves, the opening torque was considered in addition to the pressure-holding performance. The results show that when a 10-meter-long DN50 pipe was dropped from a height of 8 meters and an 8-meter-long air duct from a height of 3.6 meters, the integrity of the DN300 pipe below was not affected, and no failure occurred in the simulation either. After the experiment, the pressure drop in the pipe over two hours was less than 0.1%. The main body of the valve did not fail either. The change in opening torque after the experiment was less than 0.5%, but the handwheel of the valve may break, which affects the opening action. In summary, the impact of dropped upper pipes and ventilation ducts on the integrity of the DN300 pipes and valves below in the cooling water system of a typical second-generation nuclear power plant during an earthquake was studied. The functionality of the DN300 pipeline and the valves themselves was not significantly affected, but the valve handwheel or similar parts can break and require attention.
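The Johnson-Cook model named above combines strain hardening, strain-rate hardening, and thermal softening in one flow-stress expression. A minimal sketch of the constitutive relation follows, with illustrative parameter values for a generic steel, not the material constants actually used in the study:

```python
import math

def johnson_cook_stress(eps_p, eps_dot, T,
                        A=350e6, B=275e6, n=0.36, C=0.022, m=1.0,
                        eps_dot_ref=1.0, T_ref=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress in Pa (parameter values are illustrative only).

    sigma = (A + B*eps_p^n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T*^m)
    where T* is the homologous temperature.
    """
    T_star = (T - T_ref) / (T_melt - T_ref)
    strain_term = A + B * eps_p ** n                       # strain hardening
    rate_term = 1.0 + C * math.log(max(eps_dot / eps_dot_ref, 1e-12))
    thermal_term = 1.0 - max(T_star, 0.0) ** m             # thermal softening
    return strain_term * rate_term * thermal_term
```

At zero plastic strain, the reference strain rate, and the reference temperature, the expression reduces to the yield constant A, which is a convenient sanity check when setting up material cards.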

Keywords: cooling water system, earthquake, integrity, pipe and valve

Procedia PDF Downloads 86
141 Photocaged Carbohydrates: Versatile Tools for Biotechnological Applications

Authors: Claus Bier, Dennis Binder, Alexander Gruenberger, Dagmar Drobietz, Dietrich Kohlheyer, Anita Loeschcke, Karl Erich Jaeger, Thomas Drepper, Joerg Pietruszka

Abstract:

Light-absorbing chromophoric systems are important optogenetic tools for biotechnological and biophysical investigations. Processes such as fluorescence or photolysis, which play a central role in the life sciences, can be triggered by the light absorption of chromophores. Photocaged compounds belong to such chromophoric systems: their photo-labile protecting groups enable them to release biologically active substances with high temporal and spatial resolution. The properties of photocaged compounds are determined by the characteristics of the caging group as well as those of the linked effector molecule. In our research, we work with different types of photo-labile protecting groups and various effector molecules, giving us access to a large library of caged compounds. Depending on the caged effector molecule, a nearly limitless number of biological systems can be addressed. Our main interest focuses on photocaging carbohydrates (e.g. arabinose) and their derivatives as effector molecules. Based on the resulting photocaged compounds, precisely controlled photoinduced gene expression will give us access to studies of numerous biotechnological and synthetic-biological applications. It could be shown that the regulation of gene expression via light is possible with photocaged carbohydrates, achieving higher-order control over these processes. With the one-step-cleavable photocaged carbohydrate, homogeneous expression was achieved in comparison to free carbohydrates.

Keywords: bacterial gene expression, biotechnology, caged compounds, carbohydrates, optogenetics, photo-removable protecting group

Procedia PDF Downloads 193
140 Biological Studies of N-O Donor 4-Acypyrazolone Heterocycle and Its Pd/Pt Complexes of Therapeutic Importance

Authors: Omoruyi Gold Idemudia, Alexander P. Sadimenko

Abstract:

The synthesis of N-heterocycles with novel properties and broad-spectrum biological activities, which may become alternative medicinal drugs, has been attracting considerable research attention due to the limitations of existing drugs, such as disease resistance and toxicity effects. Acylpyrazolones have been employed as pharmaceuticals as well as analytical reagents, and their application as coordination complexes with transition metal ions is well established. Through condensation reactions with amines, acylpyrazolone ketones form a more strongly chelating and superior group of compounds known as azomethines. 4-Propyl-3-methyl-1-phenyl-2-pyrazolin-5-one was reacted with phenylhydrazine to obtain a new phenylhydrazone, which was further reacted with aqueous solutions of palladium and platinum salts in an effort towards the discovery of transition-metal-based synthetic drugs. The compounds were characterized by analytical and spectroscopic methods, thermogravimetric analysis (TGA), and X-ray crystallography. 4-Propyl-3-methyl-1-phenyl-2-pyrazolin-5-one phenylhydrazone crystallizes in a triclinic crystal system with the space group P-1 (No. 2) based on X-ray crystallography. Based on FTIR, electronic, and NMR spectra as well as magnetic moments, the bidentate ON ligand formed a square planar geometry on coordinating with the metal ions. The reported compounds showed antibacterial activity against the selected bacterial isolates using the disc diffusion technique at 20 mg/ml in triplicate. The metal complexes exhibited better antibacterial activity, with the platinum complex having an MIC value of 0.63 mg/ml. Similarly, the ligand and complexes also showed antioxidant scavenging properties against the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical at 0.5 mg/ml relative to ascorbic acid (standard drug).

Keywords: acylpyrazolone, antibacterial studies, metal complexes, phenylhydrazone, spectroscopy

Procedia PDF Downloads 225
139 A Review on Benzo(a)pyrene Emission Factors from Biomass Combustion

Authors: Franziska Klauser, Manuel Schwabl, Alexander Weissinger, Christoph Schmidl, Walter Haslinger, Anne Kasper-Giebl

Abstract:

Benzo(a)pyrene (BaP) is the most widely investigated representative of the polycyclic aromatic hydrocarbons (PAH) as well as one of the most toxic compounds in this group. Since 2013, a limit value for the BaP concentration in ambient air has applied in the European Union, set to a yearly average of 1 ng m-3. Several reports show that this threshold is regularly exceeded even in some regions where industry and traffic have only minor impact, which is taken as evidence that biomass combustion for heating purposes contributes significantly to BaP pollution. Several investigations of the BaP emission behavior of biomass combustion furnaces have already been carried out, mostly focusing on a certain aspect such as the influence of wood type, operation mode, or technology type. However, a comprehensive view of BaP emission patterns from biomass combustion, aggregating the values determined in recent studies, has not yet been presented. Combining the determined values allows a better understanding of the BaP emission behavior of biomass combustion. In this work, review conclusions are drawn from the combined outcomes of different publications. Two examples show that technical progress leads to 10- to 100-fold lower BaP emissions from modern furnaces compared with old technologies of equivalent type. It was also indicated that operation with pellets or wood chips exhibits clearly lower BaP emission factors than operation with log wood, although the BaP emission level of automatic furnaces is strongly affected by the mode of operation. This work delivers an overview of BaP emission factors from different biomass combustion appliances, different operation modes, and the combustion of different fuel and wood types. The main impact factors are identified, suggestions for low-BaP biomass combustion are derived, and the fields of investigation concerning BaP emissions from biomass combustion that most urgently need clarification are suggested.

Keywords: benzo(a)pyrene, biomass, combustion, emission, pollution

Procedia PDF Downloads 334
138 The Politics of Identity: A Longitudinal Study of the Sociopolitical Development of Education Leaders

Authors: Shelley Zion

Abstract:

This study examines the longitudinal impact (10 years) of a course for education leaders designed to encourage the development of critical consciousness surrounding issues of equity, oppression, power, and privilege. The ability to resist and challenge oppression across social and cultural contexts can be acquired through transformative pedagogies that create spaces in which exploration is used to make connections between pervasive structural and institutional practices and race and ethnicity. This study seeks to extend this understanding by exploring the longitudinal influence of participating in a course that uses transformative pedagogies, course materials, exercises, and activities to encourage students to explore their experiences with racial and ethnic discrimination, with the end goal of providing them with the knowledge and skills that foster their ability to resist and challenge oppression and discrimination (critical action) in their lives. To this end, we use the explanatory power of theories of critical consciousness development, sociopolitical development, and social identity construction that view exploration as a crucial practice in understanding the role ethnic and racial differences play in creating opportunities or barriers in the lives of individuals. When educators use transformative pedagogies, they create a space where students collectively explore their experiences with racial and ethnic discrimination through course readings, in-class activities, and discussions. The end goal of this exploration is twofold: first, to encourage students' ability to understand how differences are identified, given meaning, and used to position them in specific places and spaces in their world; second, to scaffold students' ability to make connections between their individual and collective differences and the particular institutional and structural practices that create opportunities or barriers in their lives. Studies have found that the formal exploration of students' individual and collective differences in relation to their experiences with racial and ethnic discrimination results in an understanding of the roles race and ethnicity play in their lives. To trace the role played by exploration in identity construction, we use an integrative approach informed by multiple theoretical frameworks grounded in cultural studies, social psychology, and sociology that understand social, cultural, racial, and ethnic identities as dynamic and ever-changing in context-specific environments. Stuart Hall refers to this practice as taking "symbolic detours through the past" while reflecting on the different ways individuals have been positioned based on their roots (group membership) and also how they, in turn, choose to position themselves through collective sense-making of the various meanings their differences carried along the routes they have taken. The practice of exploration in the construction of ethnic-racial identities has been found to be beneficial to sociopolitical development.

Keywords: political polarization, civic participation, democracy, education

Procedia PDF Downloads 24
137 Association of Vascular Endothelial Growth Factor Gene +405 C>G and -460 T>C Polymorphism with Type 2 Diabetic Foot Ulcer Patient in Cipto Mangunkusumo National Hospital Jakarta

Authors: Dedy Pratama, Akhmadu Muradi, Hilman Ibrahim, Raden Suhartono, Alexander Jayadi Utama, Patrianef Darwis, S. Dwi Anita, Luluk Yunaini, Kemas Dahlan

Abstract:

Introduction: The vascular endothelial growth factor (VEGF) gene shows associations with various angiogenesis-related conditions, including diabetic foot ulcer (DFU). We performed this study to examine VEGF gene polymorphisms associated with DFU. Methods: Case-control study of the VEGF gene polymorphisms +405 C>G and -460 T>C in type 2 diabetes mellitus (DM) patients with and without DFU at Cipto Mangunkusumo National Hospital (RSCM), Jakarta, from June to December 2016. Results: There were 203 patients, 102 with DFU and 101 without DFU; 49.8% of the sample was male and 50.2% female, with a mean age of 56.06 years. For VEGF +405 C>G, the wild-type genotype CC was found in 6.9% of respondents, the mutant heterozygote CG in 69.5%, and the mutant homozygote GG in 19.7%. Cumulatively, 6.9% were wild type and 85.2% mutant, while 3.9% of the blood samples could not be genotyped by PCR-RFLP. The allele distribution for VEGF +405 C>G was 43% C and 57% G. For VEGF -460 T>C, the genotype distribution was 42.9% wild-type TT, 37.9% mutant heterozygote TC, and 13.3% mutant homozygote CC; cumulatively, 42.9% were wild type and 51% mutant. The allele distribution for VEGF -460 T>C was 62% T and 38% C. Conclusion: In this study, the allele distribution was 43% C and 57% G for VEGF +405 C>G, and 62% T and 38% C for VEGF -460 T>C. We propose that the G allele in VEGF +405 C>G may act as a protective allele, while the T allele in VEGF -460 T>C may act as a risk factor for DFU in diabetic patients.
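The reported allele frequencies follow directly from the genotype counts: each patient carries two alleles, so the reference-allele frequency is (2 × wild-type + heterozygote) over twice the genotyped sample. A minimal sketch, using counts reconstructed from the reported percentages of the 203 patients (hypothetical rounding, for illustration only):

```python
def allele_frequencies(n_wild, n_het, n_hom):
    """Allele frequencies for a biallelic variant from genotype counts.

    n_wild, n_het, n_hom: counts of wild-type, heterozygous,
    and homozygous-mutant individuals.
    """
    total_alleles = 2 * (n_wild + n_het + n_hom)  # two alleles per person
    ref = (2 * n_wild + n_het) / total_alleles    # reference (wild-type) allele
    return ref, 1.0 - ref

# VEGF +405 C>G: roughly 6.9% CC, 69.5% CG, 19.7% GG of 203 patients
# (counts 14 / 141 / 40 are an illustrative reconstruction)
c_freq, g_freq = allele_frequencies(14, 141, 40)
```

With these counts the computation reproduces the reported distribution of approximately 43% C and 57% G.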

Keywords: diabetic foot ulcer, diabetes mellitus, polymorphism, VEGF

Procedia PDF Downloads 264
136 Virtue, Truth, Freedom, And The History Of Philosophy

Authors: Ashley DelCorno

Abstract:

G. E. M. Anscombe's 1958 essay "Modern Moral Philosophy" and the tradition of virtue ethics that followed have given rise to the restoration (or, more plainly, the resurrection) of Aristotle as something of an authority figure. Alasdair MacIntyre and Martha Nussbaum, for example, are proponents not just of Aristotle's relevancy but also of his apparent implicit authority. That said, it is not clear that the schema imagined by virtue ethicists accurately describes moral life, or that it does not inadvertently work to impoverish genuine decision-making. If the label 'virtue' is categorically denied to some groups (while arbitrarily afforded to others), it can only turn on itself, thus rendering its own premise ridiculous. Likewise, as an inescapable feature of virtue ethics, Aristotelian binaries like 'virtue/vice' and 'voluntary/involuntary' offer up false dichotomies that may seriously compromise an agent's ability to conceptualize choices that are truly free and rooted in meaningful criteria. Here, this topic is analyzed through a feminist lens predicated on the known paradoxes of patriarchy. The work of feminist theorists Jacqui Alexander, Katharine Angel, Simone de Beauvoir, bell hooks, Audre Lorde, Imani Perry, and Amia Srinivasan serves as important guideposts, and the argument here is built from a key tenet of Black feminist thought regarding scarcity and possibility. Above all, it is clear that though the philosophical tradition of virtue ethics presents itself as recovering the place of agency in ethics, its premises possess crippling limitations with respect to this goal. These include, most notably, virtue ethics' binding analysis of history, as well as its axiomatic attachment to obligatory clauses, its problematic reading-in of Aristotle, and its arbitrary commitment to predetermined and competitively patriarchal ideas of what counts as a virtue.

Keywords: feminist history, the limits of utopic imagination, curatorial creation, truth, virtue, freedom

Procedia PDF Downloads 52
135 Selfie: Redefining Culture of Narcissism

Authors: Junali Deka

Abstract:

"Pictures speak more than a thousand words." It is the power of the image that can carry multiple meanings depending on how it is read by viewers. This research article is the outcome of an extensive study of the phenomenon of 'selfie culture' and the dire need for a self-constructed virtual identity among youths. In recent times, there has been a revolutionary change in the concept of photography in terms of both techniques and applications. The popularity of 'self-portraits' depends mainly on the temporal space created on social networking sites like Facebook and Instagram. With reference to Stuart Hall's encoding/decoding model, the article studies the behavior of users who post photographs online. The photographic message (Roland Barthes) is interpreted differently by different viewers. The notions of 'self' and 'self-love', the practice of looking (Marita Sturken), and ways of seeing (John Berger) have together acquired new definitions and dimensions. After Oscars night, show host Ellen DeGeneres's selfie created the greatest buzz and hype in social media. The term 'selfie' was judged the word of 2013 and has earned its place in the dictionary: in November 2013, "selfie" was announced as the "word of the year" by the Oxford English Dictionary. By the end of 2012, Time magazine had already considered 'selfie' one of the "top 10 buzzwords" of that year; although selfies had existed long before, it was in 2012 that the term, which is of Australian origin, really hit the big time. The present study was carried out to understand the 'selfie bug' and the phenomenon it has created among youth (especially students) in developing a pseudo-image of themselves. The topic is relevant and provides a platform to discuss the cultural, psychological, and sociological implications of the selfie in the age of digital technology. At the first level, a content analysis of primary and secondary sources, including newspaper articles and online resources, was carried out, followed by a small online survey conducted with the help of a questionnaire to find out students' views on the selfie and its social and psychological effects. The newspaper reports and online resources confirmed that the selfie is a new trend in digital media that has redefined notions of beauty and self-love. Facebook and Instagram are the major platforms used for self-expression and the creation of virtual identity. The findings clearly reflected the more active participation of female students in comparison to male students. The study of the photographs of a few selected respondents revealed differences in attitude and image-building between male and female users. The study raises basic questions about the desire for identity reconstruction among the young generation: are they becoming culturally narcissistic; which factors are responsible for the cultural, social, and moral changes in society; what psychological and technological effects are caused by the smartphone; culminating in the larger question of whether the selfie is a social signifier of identity construction.

Keywords: culture, narcissism, photographs, selfie

Procedia PDF Downloads 377
134 Digital Environment as a Factor of the City's Competitiveness in Attracting Tourists: The Case of Yekaterinburg

Authors: Alexander S. Burnasov, Anatoly V. Stepanov, Maria Y. Ilyushkina

Abstract:

In the transition to the digital economy, the digital environment of a city becomes one of the key factors in its tourism attractiveness. A modern digital environment makes travelling more accessible and improves the quality of travel services and the attractiveness of many tourist destinations. The digitalization of the industry makes it possible to use resources more efficiently, simplify business processes, minimize risks, and improve travel safety. In the digital environment, the promotion of the city as a tourist destination in the foreign market becomes decisive. Information technologies are extremely important for the functioning not only of any tourist enterprise but also of the city as a whole. In addition to solving traditional problems, it is also possible to implement innovations from the tourism industry, such as the availability of city services in international ticket and hotel booking systems, early booking of theater and museum tickets, non-cash payment by cards of international payment systems, and Internet access in the urban environment for travelers. The availability of a city's digital services makes it possible to reduce ordering costs, contributes to the optimal selection of tourist products that meet the tourist's requirements, and provides increased transparency of transactions. Users can compare prices, features, services, and reviews of travel services, and the ability to share impressions with friends thousands of miles away directly affects the image of the city. The image of the city can be promoted in the digital environment not only through world-scale events (such as the 2018 FIFA World Cup, international summits, etc.) but also through the creation and management of digital services aimed at supporting tourism, which will help to improve the positioning of the city in the global tourism market.

Keywords: competitiveness, digital environment, travelling, Yekaterinburg

Procedia PDF Downloads 110
133 Electric Field-Induced Deformation of Particle-Laden Drops and Structuring of Surface Particles

Authors: Alexander Mikkelsen, Khobaib Khobaib, Zbigniew Rozynek

Abstract:

Drops covered by particles have found important uses in various fields, ranging from the stabilization of emulsions to the production of new advanced materials. Particles at drop interfaces can be interlocked to form solid capsules with properties tailored for a myriad of applications. Despite the huge potential of particle-laden drops and capsules, knowledge of their deformation and stability is limited. Here, we contribute experimental studies on the deformation and manipulation of silicone oil drops covered with micrometer-sized particles subjected to electric fields. A mixture of silicone oil and particles was dispensed into castor oil with a mechanical pipette, forming millimeter-sized drops. The particles moved to and adsorbed at the drop interfaces by sedimentation and were structured at the interface by electric-field-induced electrohydrodynamic flows. When a direct-current electric field was applied, free charges accumulated at the drop interfaces, yielding electric stresses that deformed the drops. In our experiments, we investigated how particle properties affected drop deformation, break-up, and particle structuring. We found that by increasing the size of weakly conductive clay particles, the drop shape can change from compressed to stretched out in the direction of the electric field. Increasing the particle size and electrical conductivity was also found to weaken the electrohydrodynamic flows, induce break-up of drops at weaker electric field strengths, and structure the particles into chains. These particle parameters determine the dipolar force between the interfacial particles, which can yield particle chaining. We conclude that the balance between particle chaining and electrohydrodynamic flows governs the observed drop mechanics.

Keywords: drop deformation, electric field induced stress, electrohydrodynamic flows, particle structuring at drop interfaces

Procedia PDF Downloads 175
132 Assessment of the Electrical, Mechanical, and Thermal Nociceptive Thresholds for Stimulation and Pain Measurements at the Bovine Hind Limb

Authors: Samaneh Yavari, Christiane Pferrer, Elisabeth Engelke, Alexander Starke, Juergen Rehage

Abstract:

Background: Three nociceptive thresholds, thermal, electrical, and mechanical, are commonly used to evaluate local anesthesia in many species, for instance cows, horses, cats, dogs, and rabbits. Given the lack of investigations evaluating and validating these nociceptive thresholds, our aim was to compare two foot local anesthesia methods: intravenous regional anesthesia (IVRA) and our modified four-point nerve block anesthesia (NBA). Materials and Methods: Eight healthy, non-pregnant, non-dairy Holstein Friesian cows were selected for this cross-over study. The cows were divided into two groups to receive the two local anesthesia techniques, IVRA and our modified four-point NBA. Thermal and electrical stimuli, mechanical force, and pinpricks were applied to evaluate the quality of the local anesthesia methods before and after their application. Results: The statistical evaluation demonstrated that our four-point NBA qualifies as a standard foot local anesthesia. However, the recorded results revealed no significant difference between the two local anesthesia techniques, IVRA and modified four-point NBA, with respect to the quality and duration of anesthesia as assessed by electrical, mechanical, and thermal nociceptive stimuli. Conclusion and discussion: All three nociceptive threshold stimuli, electrical, mechanical, and heat, can be applied to measure and evaluate the efficacy of foot local anesthesia in dairy cows. However, our study revealed no superiority of any of the three nociceptive methods for evaluating the duration and quality of bovine foot local anesthesia. Veterinarians can use any of the heat, mechanical, and electrical methods to investigate the duration and quality of their selected anesthesia method.

Keywords: mechanical, thermal, electrical threshold, IVRA, NBA, hind limb, dairy cow

Procedia PDF Downloads 219
131 Liquid-Liquid Plug Flow Characteristics in Microchannel with T-Junction

Authors: Anna Yagodnitsyna, Alexander Kovalev, Artur Bilsky

Abstract:

The efficiency of certain technological processes in two-phase microfluidics, such as emulsion production, nanomaterial synthesis, nitration, and extraction, depends on the two-phase flow regimes in the microchannels. For practical applications in chemistry and biochemistry, it is very important to predict the expected flow pattern for a large variety of fluids and channel geometries. In the case of immiscible liquids, plug flow is a typical and optimal regime for chemical reactions and needs to be predicted by empirical data or correlations. In this work, the flow patterns of immiscible liquid-liquid flow in a rectangular microchannel with a T-junction are investigated. Three liquid-liquid systems are considered, viz. kerosene-water, paraffin oil-water, and castor oil-paraffin oil. Different flow patterns, such as parallel flow, slug flow, plug flow, dispersed (droplet) flow, and rivulet flow, are observed at different velocity ratios. A new flow pattern, parallel flow with a steady wavy interface (serpentine flow), has been found. It is shown that flow pattern maps based on Weber numbers for the different liquid-liquid systems do not match well. The Weber number multiplied by the Ohnesorge number is proposed as a parameter to generalize the flow maps. Flow maps based on this parameter superpose well for all the liquid-liquid systems of this work and of other experiments. Plug length and velocity are measured for the plug flow regime. When the dispersed liquid wets the channel walls, the plug length cannot be predicted by known empirical correlations. Instantaneous velocity fields in the plug flow regime were measured by means of the particle tracking velocimetry technique. The flow circulation inside the plug was calculated from the velocity data, which can be useful for mass flux prediction in chemical reactions.
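The proposed generalizing parameter is the product of the Weber number (inertia vs. interfacial tension) and the Ohnesorge number (viscosity vs. inertia and interfacial tension). A minimal sketch of both dimensionless groups, with illustrative property values for a water-like phase in a 200 µm channel rather than the study's measured data:

```python
import math

def weber(rho, u, d, sigma):
    """Weber number We = rho * u^2 * d / sigma."""
    return rho * u ** 2 * d / sigma

def ohnesorge(mu, rho, sigma, d):
    """Ohnesorge number Oh = mu / sqrt(rho * sigma * d)."""
    return mu / math.sqrt(rho * sigma * d)

# Illustrative values: water (rho = 1000 kg/m^3, mu = 1e-3 Pa.s) at 0.1 m/s,
# interfacial tension 0.03 N/m, channel length scale 200 um
We = weber(1000.0, 0.1, 200e-6, 0.03)
Oh = ohnesorge(1e-3, 1000.0, 0.03, 200e-6)
map_parameter = We * Oh  # coordinate proposed for the generalized flow map
```

Because Oh folds viscosity into the map coordinate, two liquid pairs with very different viscosities can land on comparable positions, which is the motivation for using We*Oh instead of We alone.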

Keywords: flow patterns, hydrodynamics, liquid-liquid flow, microchannel

Procedia PDF Downloads 361
130 Nanowire Sensor Based on Novel Impedance Spectroscopy Approach

Authors: Valeriy M. Kondratev, Ekaterina A. Vyacheslavova, Talgat Shugabaev, Alexander S. Gudovskikh, Alexey D. Bolshakov

Abstract:

Modern sensor technology imposes strict requirements on biosensor characteristics, especially technological feasibility and selectivity. There is growing interest in the analysis of biological markers of human health, which indirectly indicate pathological processes in the body. Such markers include the acids and alkalis produced by the human body, in particular ammonia and hydrochloric acid, which are found in human sweat, blood, and urine, as well as in gastric juice. Biosensors based on modern nanomaterials, especially low-dimensional ones, can be used for the detection of these markers. Most classical adsorption sensors based on metal and silicon oxides are considered non-selective because they change their electrical resistance (or impedance) identically under the adsorption of different target analytes. This work demonstrates a feasible frequency-resistive method of analyzing electrical impedance spectroscopy data. The approach makes it possible to obtain selectivity in adsorption sensors of the resistive type. The potential of the method is demonstrated by analyzing the impedance spectra of silicon nanowires in the presence of NH3 and HCl vapors with concentrations of about 125 mmol/L (2 ppm) and of water vapor. We demonstrate the possibility of unambiguously distinguishing the sensory signals from NH3 and HCl adsorption. Moreover, the method is found to be applicable to the analysis of the composition of a mixture of ammonia and hydrochloric acid vapors without cross-sensitivity to water. The presented silicon sensor can be used to detect diseases of the gastrointestinal tract through the qualitative and quantitative detection of ammonia and hydrochloric acid in biological samples. The data analysis method can be directly transferred to other nanomaterials to assess their applicability in the field of biosensing.
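The frequency-resistive idea can be illustrated with the simplest equivalent circuit, a resistor in parallel with a capacitor: the resistive contribution dominates at low frequency and the spectrum rolls off above a characteristic frequency, so different analytes can shift different parts of the spectrum. A minimal sketch with hypothetical element values (the actual nanowire equivalent circuit is more complex):

```python
import math

def parallel_rc_impedance(R, C, f):
    """Complex impedance of a parallel RC element at frequency f (Hz):
    Z = R / (1 + j*omega*R*C)."""
    omega = 2.0 * math.pi * f
    return R / (1.0 + 1j * omega * R * C)

R, C = 1e6, 1e-9                       # hypothetical 1 MOhm, 1 nF
f_c = 1.0 / (2.0 * math.pi * R * C)    # characteristic frequency, ~159 Hz
# At f << f_c the magnitude approaches R (the resistive plateau);
# at f = f_c it has dropped to R / sqrt(2).
```

Fitting measured spectra to such elements separates a purely resistive response from a capacitive one, which is what allows two adsorbates that shift the DC resistance identically to be told apart by their frequency signatures.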

Keywords: electrical impedance spectroscopy, spectroscopy data analysis, selective adsorption sensor, nanotechnology

Procedia PDF Downloads 79
129 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology

Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal

Abstract:

Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring that microbiologically safe water is supplied at the customer's tap. In order to simulate how chloramine behaves as it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations in a DWDS in Australia, using the Bentley commercial hydraulic package WaterGEMS. The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters so that predictions match measured data in a real DWDS as closely as possible would reduce costs as well as the consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (temperature, pH, and initial mono-chloramine concentration) that maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios across an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to optimize the three independent water quality parameters. High and low levels of the water quality parameters were imposed as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature, and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS. It was found that at a pH of 7.75, a temperature of 34.16 ºC, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network was minimized to 0.189, while the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 ºC, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology for predicting the mono-chloramine residual has great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology to other water samples.
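Under the hood, RSM fits a second-order polynomial to the response (here, the model RMSE) over the factor space and then locates its optimum. A minimal sketch of fitting the full quadratic model for two factors by ordinary least squares on synthetic data; the Design Expert package used in the study automates this along with design construction and diagnostics:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of a full second-order (RSM) model in two factors.

    Returns coefficients [b0, b1, b2, b11, b22, b12] for
    y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2.
    """
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Synthetic response over a 5x5 grid of coded factor levels (-2 .. 2)
g = np.linspace(-2, 2, 5)
X = np.array([(a, b) for a in g for b in g])
true = np.array([1.0, 2.0, -3.0, 0.5, 1.0, 0.1])
y = (true[0] + true[1] * X[:, 0] + true[2] * X[:, 1]
     + true[3] * X[:, 0]**2 + true[4] * X[:, 1]**2 + true[5] * X[:, 0] * X[:, 1])
coef = fit_quadratic_surface(X, y)  # recovers `true` up to numerical error
```

Once the surface is fitted, the stationary point of the quadratic gives the candidate optimum, which is then checked against the high/low factor constraints exactly as the study does to avoid extrapolation.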

Keywords: chloramine decay, modelling, response surface methodology, water quality parameters

Procedia PDF Downloads 189
128 Probable Future Weapon to Turn down Malnutrition in Women and Adolescent Girls

Authors: Manali Chakraborty

Abstract:

In the developing countries the most prevalent pathological state is malnutrition and under nutrition due to deficiency of essential nutrients. This condition is more seen between the woman population, especially in the adolescent girls. It is causing childhood deaths along with others cognitive, degenerative diseases. Born of low weight babies and stillbirth are also major problems associated with the malnutrition. Along with the increased level of population, people not only should concern about their quantity of food but also for the quality of the food to be healthy. Lethargy, depression caused due to iron deficiency often quoted as normal or unimportant issues. Children of malnourished women are more likely associated with cognitive impairment, immune dysfunction leading to a higher risk of being attacked by diseases. Malnourishment also affects the productivity of women. Low social status, lack of proper nutritional education is an important cause of this nutrient deficiency among women. Iron deficiency and anemia are mostly famous nutritional deficiencies among women worldwide. Mostly women from below poverty lined area are anemic due to less consumption of iron-rich foods or having foods that might inhibit the iron absorption. Growing females like adolescents or lactating females need more iron supply. Less supplement causes iron deficiency. Though malaria also might cause anemia, it is more likely endemic one in some specific areas. Folate deficiency in females also may cause neurological defects in the infants. Other Vitamin B, A deficiencies along with low iodine level is also noted in malnourished women. According to tradition, still in some areas in developing countries females have their food at the end. According to some survey and collected data, these females are often into the risk zone of being malnourished. Regularly they have the very lesser amount, or food sometimes may start to lose its nutrients. 
Women typically carry the responsibility for cooking food at home. A lack of proper nutritional education and poor food-preparation practices not only cause foods to lose nutrients, impairing the nutrient intake of family members, but also impair women's own nutritional status. Formulating and developing food products from iron-rich or other nutrient-rich sources, such as traditional herbs, could help prevent malnutrition in females. Keeping costs low and basing product development on natural sources such as pulses, cereals, and other vegetation would also help sustain socio-economic conditions, and consuming such foods is healthier than relying on continuous supplementation with capsules. Applying proper scientific and cost-effective techniques to develop these food products, and distributing them among rural women, could be an enormously valuable initiative.

Keywords: anemia, food supplements, malnutrition, rural places, women population

Procedia PDF Downloads 95
127 Vibro-Acoustic Modulation for Crack Detection in Windmill Blades

Authors: Abdullah Alnutayfat, Alexander Sutin

Abstract:

Wind energy, produced by wind turbines, is one of the most important renewable energy resources. Turbine blades are exposed to harsh environmental loads, and blade failure and the associated maintenance costs are a significant issue for the wind power industry. One reliable approach to blade inspection is vibroacoustic structural health monitoring (SHM), which examines information obtained from the structural vibrations of the blade. However, conventional vibroacoustic SHM techniques are based on comparing the structural vibrations of intact and damaged structures, which places a practical limit on their use. Nonlinear vibroacoustic SHM methods are more sensitive to damage and cracking and do not require comparison with data from the intact structure. This paper presents the Vibro-Acoustic Modulation (VAM) method, based on the modulation of a high-frequency probe wave by the low-frequency load (pump wave) produced by blade rotation. Rotation produces alternating bending stress due to gravity, leading to variations in crack size and therefore in the blade resonance frequency. The method can be combined with the classical SHM vibration approach, in which the blade is excited by piezoceramic actuator patches bonded to the blade and the vibration response is received by another piezoceramic sensor. The VAM modification analyzes the spectrum of the detected signal and its sideband components. We model the VAM process as a simple mechanical oscillator whose parameters (resonance frequency and damping) vary with the low-frequency blade rotation. The model uses blade vibration parameters and the influence of cracks on blade resonance properties from previous research to predict the modulation index (MI).
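The oscillator picture described in the abstract can be sketched numerically. In the quasi-static sketch below, a crack slowly modulates the oscillator's resonance frequency at the pump (rotation) rate, which amplitude-modulates the steady-state response to the probe tone and produces sidebands at the probe frequency plus or minus the pump frequency; the modulation index is the sideband-to-carrier amplitude ratio. All parameter values (probe, pump, and resonance frequencies, modulation depth `delta`, damping ratio `zeta`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def vam_modulation_index(f_probe=500.0, f_pump=1.0, f_res=520.0,
                         delta=0.02, zeta=0.01, fs=8192.0, T=8.0):
    """Quasi-static VAM sketch: the resonance frequency varies slowly at
    the pump rate, amplitude-modulating the response to the probe tone."""
    t = np.arange(0.0, T, 1.0 / fs)
    w_p = 2 * np.pi * f_probe
    # crack opening/closing over the rotation cycle shifts the resonance
    w0 = 2 * np.pi * f_res * (1.0 + delta * np.sin(2 * np.pi * f_pump * t))
    # steady-state amplitude of a damped oscillator driven at w_p
    amp = 1.0 / np.sqrt((w0**2 - w_p**2)**2 + (2 * zeta * w0 * w_p)**2)
    x = amp * np.cos(w_p * t)
    # spectrum: carrier at f_probe, sidebands at f_probe +/- f_pump
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    peak = lambda f: spec[np.argmin(np.abs(freqs - f))]
    carrier = peak(f_probe)
    # MI = mean sideband amplitude relative to the carrier
    return (peak(f_probe - f_pump) + peak(f_probe + f_pump)) / (2 * carrier)

mi_cracked = vam_modulation_index(delta=0.02)  # crack modulates resonance
mi_intact = vam_modulation_index(delta=0.0)    # no modulation, no sidebands
```

A cracked blade (nonzero `delta`) yields a clearly nonzero MI, while the intact case leaves only negligible spectral leakage at the sideband frequencies, which is what makes the sideband ratio usable as a damage indicator without a baseline measurement.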

Keywords: wind turbine blades, damage detection, vibro-acoustic structural health monitoring, vibro-acoustic modulation

Procedia PDF Downloads 55