Search results for: long-term financial performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15151

211 Nature as a Human Health Asset: An Extensive Review

Authors: C. Sancho Salvatierra, J. M. Martinez Nieto, R. García Gonzalez-Gordon, M. I. Martinez Bellido

Abstract:

Introduction: Nature can act as an asset for human health, protecting against disease and promoting both physical and mental health. Goals: This paper aims to determine which natural elements show evidence of a positive influence on human health, on which particular aspects, and how. It also aims to determine the best biomarkers to measure such influence. Method: A systematic literature review was carried out. First, a general free-text search was performed in databases such as Scopus, PubMed and PsycINFO. Secondly, a specific search was performed combining keywords in order of increasing complexity. The snowballing technique was also used, and the databases of the CSIC (the Spanish National Research Council) were consulted. Of the 130 articles obtained and reviewed, 80 referred to natural elements that influenced health. These 80 articles were classified and tabulated according to the nature elements found, the health aspects studied, the health measurement parameters used and the measurement techniques used. In this classification, the results of the studies were coded according to whether they were positive, negative or neutral, both for the elements of nature and for the aspects of health studied. Finally, the results of the 80 selected studies were summarized and categorized according to the elements of nature that showed the greatest positive influence on health and the biomarkers that had shown greater reliability in measuring said influence. Results: Of the 80 articles studied, 24 (30.0%) were reviews and 56 (70.0%) were original research articles. Among the 24 reviews, 18 (75%) found positive effects of natural elements on health, and 6 (25%) found both positive and negative effects. Of the 56 original articles, 47 (83.9%) showed positive results, 3 (5.4%) both positive and negative, 4 (7.1%) negative effects, and 2 (3.6%) found no effects. The results reflect positive effects of different elements of nature on the following pathologies: diabetes, high blood pressure, stress, attention deficit hyperactivity disorder, and psychotic, anxiety and affective disorders. They also show positive effects on the following areas: immune system, social interaction, recovery after illness, mood, decreased aggressiveness, concentrated attention, cognitive performance, restful sleep, vitality and sense of well-being. Among the elements of nature studied, those that show the greatest positive influence on health are forest immersion, natural views, daylight, outdoor physical activity, active transport, vegetation biodiversity, natural sounds and green residential areas. The biomarkers that showed the greatest reliability for measuring the effects of natural elements are cortisol levels (both in blood and saliva), vitamin D levels, serotonin and melatonin, blood pressure, heart rate, muscle tension and skin conductance. Conclusions: Nature is an asset for health, well-being and quality of life. Awareness programs, education and health promotion are needed based on the elements that nature brings us, which in turn generate proactive attitudes in the population towards the protection and conservation of nature. Studies on this subject in Spain are very scarce. Acknowledgements: This study was promoted and partially financed by the Environmental Foundation Jaime González-Gordon.
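
To make the coding and tabulation step above concrete, the following is a minimal sketch of how such a classification could be tabulated; the records and column names are hypothetical illustrations, not the review's actual dataset.

```python
import pandas as pd

# Hypothetical coded records: one row per reviewed article
records = pd.DataFrame({
    "nature_element": ["forest immersion", "daylight", "natural views", "forest immersion"],
    "health_aspect": ["stress", "mood", "recovery after illness", "restful sleep"],
    "result": ["positive", "positive", "positive", "negative"],
})

# Cross-tabulate elements of nature against coded outcomes,
# mirroring the classification step described in the review
print(pd.crosstab(records["nature_element"], records["result"]))

# Share of articles coded as positive
print(f"Positive results: {(records['result'] == 'positive').mean():.1%}")
```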

Keywords: health, green areas, nature, well-being

Procedia PDF Downloads 277
210 Membrane Technologies for Obtaining Bioactive Fractions from Blood Main Protein: An Exploratory Study for Industrial Application

Authors: Fatima Arrutia, Francisco Amador Riera

Abstract:

The meat industry generates large volumes of blood as a result of meat processing. Several industrial procedures have been implemented to treat this by-product, but they focus on the production of low-value products, and in many cases blood is simply discarded as waste. Besides economic interests, there is an environmental concern due to bloodborne pathogens and other chemical contaminants found in blood. Consequently, there is a dire need to find extensive uses for blood that are both applicable at industrial scale and able to yield high value-added products. Blood has been recognized as an important source of protein. The main blood serum protein in mammals is serum albumin. One of the top trends in the food market is functional foods. Among them, bioactive peptides can be obtained from protein sources by microbiological fermentation or by enzymatic and chemical hydrolysis. Bioactive peptides are short amino acid sequences that can have a positive impact on health when administered. The main drawback for bioactive peptide production is the high cost of the isolation, purification and characterization techniques (such as chromatography and mass spectrometry), which makes scale-up unaffordable. On the other hand, membrane technologies are well suited to industry because they scale up easily and are low-cost compared with other traditional separation methods. In this work, the possibility of obtaining bioactive peptide fractions from serum albumin by means of a simple two-step procedure (hydrolysis and membrane filtration) was evaluated, as an exploratory study for possible industrial application. First, serum albumin was hydrolysed with trypsin in order to release the peptides from the protein; the protein was previously subjected to a thermal treatment to enhance enzyme cleavage and thus the peptide yield. The resulting hydrolysate was then filtered through a nanofiltration/ultrafiltration flat rig at three different pH values with two different membrane materials, so as to compare membrane performance. The corresponding permeates were analyzed by liquid chromatography-tandem mass spectrometry in order to identify the peptide sequences present in each permeate. Finally, different concentrations of every permeate were evaluated for their in vitro antihypertensive and antioxidant activities through ACE-inhibition and DPPH radical scavenging tests. The hydrolysis process with the prior thermal treatment achieved a degree of hydrolysis of 49.66% of the maximum possible. It was found that peptides were best transmitted to the permeate stream at pH values that corresponded to their isoelectric points. The best selectivity between peptide groups was achieved at basic pH values. Differences in peptide content were found between membranes and also between pH values for the same membrane. The antioxidant activity of all permeates was high compared with the control only at the highest dose. However, antihypertensive activity was best at intermediate concentrations, rather than at higher or lower doses. Therefore, despite differences between them, all permeates were promising regarding antihypertensive and antioxidant properties.
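
The isoelectric-point reasoning above can be illustrated with a short sketch using Biopython; the tryptic peptide sequences below are illustrative examples, not the sequences identified in the study.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Illustrative tryptic peptides (one-letter amino acid codes)
peptides = ["LVNELTEFAK", "HLVDEPQNLIK", "YLYEIAR"]

for seq in peptides:
    pI = ProteinAnalysis(seq).isoelectric_point()
    # A peptide is expected to permeate best when the filtration pH is
    # near its isoelectric point, where its net charge (and hence its
    # electrostatic repulsion from the membrane) is minimal
    print(f"{seq}: pI = {pI:.2f}")
```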

Keywords: bioactive peptides, bovine serum albumin, hydrolysis, membrane filtration

Procedia PDF Downloads 200
209 Achieving Flow at Work: An Experience Sampling Study to Comprehend How Cognitive Task Characteristics and Work Environments Predict Flow Experiences

Authors: Jonas De Kerf, Rein De Cooman, Sara De Gieter

Abstract:

For many decades, scholars have aimed to understand how work can become more meaningful by maximizing potential and enhancing feelings of satisfaction. One of the largest contributions towards such positive psychology was made with the introduction of the concept of ‘flow,’ which refers to a condition in which people feel intense engagement and effortless action. Since then, valuable research on work-related flow has indicated that this state of mind is related to positive outcomes for both organizations (e.g., social, supportive climates) and workers (e.g., job satisfaction). Yet, scholars still do not fully comprehend how such deep involvement at work is obtained, given that flow is considered a short-term, complex, and dynamic experience. Most research neglects that people who experience flow ought to be optimally challenged so that intense concentration is required. Because attention is at the core of this enjoyable state of mind, this study aims to comprehend how elements that affect workers’ cognitive functioning impact flow at work. Research on cognitive performance suggests that working on mentally demanding tasks (e.g., information-processing tasks) requires workers to concentrate deeply, in turn leading to flow experiences. Based on social facilitation theory, working on such tasks in an isolated environment eases concentration. Prior research has indicated that working at home (instead of at the office) or in a closed office (rather than in an open-plan office) affects employees’ overall functioning in terms of concentration and productivity. Consequently, we advance such knowledge and propose an interaction, combining cognitive task characteristics and work environments among part-time teleworkers. Hence, we not only aim to shed light on the relation between cognitive tasks and flow but also to provide empirical evidence that workers performing such tasks achieve the highest states of flow while working either at home or in closed offices. In July 2022, an experience-sampling study will be conducted using a semi-random signal schedule to understand how task and environment predictors together impact part-time teleworkers’ flow. More precisely, about 150 knowledge workers will fill in multiple surveys a day for two consecutive workweeks to report their flow experiences, cognitive tasks, and work environments. Preliminary results from a pilot study indicate that, at the between-person level, tasks high in information processing go along with high self-reported fluent productivity (i.e., making progress). As expected, evidence was found for higher fluency in productivity for workers performing information-processing tasks both at home and in a closed office, compared to those performing the same tasks at the office or in open-plan offices. This study expands the current knowledge on work-related flow by looking at task and environmental predictors that enable workers to obtain such a peak state. While doing so, our findings suggest that practitioners should strive for ideal alignments between tasks and work locations to work with both deep involvement and gratification.
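
Experience-sampling data of this kind (repeated signals nested within workers) are typically analysed with multilevel models. A minimal sketch with a random intercept per worker follows; the synthetic data and column names are assumptions for illustration, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the experience-sampling data
rng = np.random.default_rng(0)
n_workers, n_signals = 150, 20
df = pd.DataFrame({
    "worker_id": np.repeat(np.arange(n_workers), n_signals),
    "info_processing": rng.integers(0, 2, n_workers * n_signals),
    "location": rng.choice(["home", "closed_office", "open_office"],
                           n_workers * n_signals),
})
df["flow"] = (0.4 * df["info_processing"]
              + 0.3 * ((df["location"] != "open_office")
                       & (df["info_processing"] == 1))
              + rng.normal(0, 1, len(df)))

# Random-intercept model testing the task x environment interaction
model = smf.mixedlm("flow ~ info_processing * C(location)",
                    data=df, groups=df["worker_id"])
print(model.fit().summary())
```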

Keywords: cognitive work, office lay-out, work location, work-related flow

Procedia PDF Downloads 101
208 Buoyant Gas Dispersion in a Small Fuel Cell Enclosure: A Comparison Study Using Plain and Pressed Louvre Vent Passive Ventilation Schemes

Authors: T. Ghatauray, J. Ingram, P. Holborn

Abstract:

The transition from a ‘carbon-rich’, fossil-fuel-dependent society to a ‘sustainable’ and ‘renewable’ hydrogen-based one will see the deployment of hydrogen fuel cells (HFC) in transport applications and in the generation of heat and power for buildings, as part of a decentralised power network. Many deployments will be low-power HFCs for domestic combined heat and power (CHP) and commercial ‘transportable’ HFCs for environmental situations, such as lighting and telephone towers. For broad commercialisation of small fuel cells to be achieved, there needs to be significant confidence in their safety in both domestic and environmental applications. Low-power HFCs are housed in protective steel enclosures. Standard enclosures have plain rectangular ventilation openings intended for the thermal management of electronics, not the dispersion of a buoyant gas. Degradation of the HFC or supply pipework in use could lead to a low-level leak and a build-up of hydrogen gas in the enclosure. Hydrogen’s wide flammable range (4-75%) is a significant safety concern, with ineffective enclosure ventilation having the potential to allow flammable mixtures to develop, with the risk of explosion. Mechanical ventilation is effective at managing enclosure hydrogen concentrations, but it drains HFC power and is vulnerable to failure. This is undesirable in low-power and remote installations, and reliable passive ventilation systems are preferred. Passive ventilation depends upon buoyancy-driven flow, with the size, shape and position of ventilation openings critical for producing predictable flows and maintaining low buoyant gas concentrations. With environmentally sited enclosures, ventilation openings with pressed horizontal and angled louvres are preferred, to protect the HFC and electronics inside. There is an economic cost to adding louvres, but also a safety concern: a question arises over whether the use of pressed louvre vents impairs enclosure passive ventilation performance when compared to plain vents of the same opening area. Comparative tests of same-opening-area pressed louvre and plain vents were undertaken in a small enclosure (0.144 m³). A displacement ventilation arrangement was incorporated into the enclosure, with opposing upper and lower ventilation openings, and a range of vent areas was tested. Helium (used as a safe analogue for hydrogen) was released from a 4 mm nozzle at the base of the enclosure to simulate a hydrogen leak, at leak rates from 1 to 10 lpm. Helium sensors were used to record concentrations at eight heights in the enclosure. The enclosure was otherwise empty. These tests determined that the use of pressed and angled louvre ventilation openings on the enclosure impaired the passive ventilation flow and increased helium concentrations in the enclosure. High-level stratified buoyant gas layers were also found to be deeper than with plain vent openings and were within the flammable range. The presence of gas within the flammable range is of concern, particularly as the addition of the fuel cell and electronics in the enclosure would further reduce the available volume and increase concentrations. The opening area of louvre vents would need to be greater than that of equivalent plain vents to achieve comparable ventilation flows, or alternative schemes would need to be considered.
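
For context, the buoyancy-driven flow on which such passive schemes rely is commonly estimated with the classical displacement-ventilation orifice relation; this equation is a textbook illustration, not one quoted in the abstract:

```latex
% Idealised buoyancy-driven volumetric flow through low and high openings
% of effective area A separated by a height H:
Q = C_d \, A \sqrt{\, 2 g H \, \frac{\rho_a - \rho_g}{\rho_a} \,}
% Q: volumetric flow rate; C_d: discharge coefficient (typically ~0.6);
% rho_a, rho_g: densities of the ambient air and of the air/gas mixture.
% Louvres effectively reduce C_d (and hence Q) for a given opening area.
```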

Keywords: enclosure, fuel cell, helium, hydrogen safety, louvre vent, passive ventilation

Procedia PDF Downloads 274
207 Luminescent Properties of Plastic Scintillator with Large Area Photonic Crystal Prepared by a Combination of Nanoimprint Lithography and Atomic Layer Deposition

Authors: Jinlu Ruan, Liang Chen, Bo Liu, Xiaoping Ouyang, Zhichao Zhu, Zhongbing Zhang, Shiyi He, Mengxuan Xu

Abstract:

Plastic scintillators play an important role in the measurement of mixed neutron/gamma pulsed radiation, in neutron radiography and in pulse-shape discrimination technology. In these applications it is desirable that the photons produced by interactions between the scintillator and radiation be detected as fully as possible by the photoelectric detectors, and that more photons be emitted from the scintillator along the specific direction where the detectors are located. Unfortunately, a majority of the photons produced are trapped in the plastic scintillator due to total internal reflection (TIR): there is a significant light-trapping effect whenever the incident angle of internal scintillation light is larger than the critical angle. Some of the photons trapped in the scintillator may be absorbed by the scintillator itself, and the others are emitted from its edges. This makes the light extraction efficiency of plastic scintillators very low. Moreover, only a small portion of the photons emitted from the scintillator can easily be detected by detectors, because the distribution of their emission directions exhibits an approximately Lambertian angular profile following a cosine emission law. Therefore, enhancing the light extraction efficiency and adjusting the emission angular profile are the keys to improving the number of photons detected. In recent years, photonic crystal structures have successfully been applied to inorganic scintillators to enhance the light extraction efficiency and adjust the angular profile of scintillation light. However, because the methods used to prepare photonic crystals can degrade the performance of plastic scintillators or even destroy them, investigations into preparation methods of photonic crystals for plastic scintillators, and into the luminescent properties of plastic scintillators with photonic crystal structures, remain inadequate. Although we have successfully fabricated photonic crystal structures on the surface of plastic scintillators by a modified self-assembly technique and achieved a large enhancement of light extraction efficiency without evident angular dependence of the scintillation light profile, the preparation of photonic crystals with large area (diameter larger than 6 cm) and perfect periodic structure is still difficult. In this paper, large-area photonic crystals were first prepared on the surface of scintillators by nanoimprint lithography, and a conformal layer of high-refractive-index material was then deposited on the photonic crystal by atomic layer deposition, in order to enhance the stability of the photonic crystal structures and increase the number of leaky modes, thereby improving the light extraction efficiency. The luminescent properties of the plastic scintillator with photonic crystals prepared by this method are compared with those of a plastic scintillator without photonic crystals. The results indicate that the number of photons detected is increased by the enhanced light extraction efficiency, and that the angular profile of scintillation light exhibits evident angular dependence for the scintillator with photonic crystals. The described preparation of photonic crystals is beneficial to scintillation detection applications and lays an important technical foundation for plastic scintillators to meet special requirements under different application backgrounds.
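
The scale of the trapping problem follows directly from Snell's law. Taking a typical plastic scintillator refractive index of about 1.58 (an assumed value, not quoted in the abstract):

```latex
\theta_c = \arcsin\frac{n_{\mathrm{air}}}{n_s}
         \approx \arcsin\frac{1}{1.58} \approx 39^{\circ},
\qquad
f = \frac{1 - \cos\theta_c}{2} \approx 0.11
% f is the fraction of isotropically emitted photons that falls inside the
% escape cone of a single face; the remainder undergoes TIR, which is the
% loss that the photonic crystal's leaky modes are intended to recover.
```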

Keywords: angular profile, atomic layer deposition, light extraction efficiency, plastic scintillator, photonic crystal

Procedia PDF Downloads 200
206 Tele-Rehabilitation for Multiple Sclerosis: A Case Study

Authors: Sharon Harel, Rachel Kizony, Yoram Feldman, Gabi Zeilig, Mordechai Shani

Abstract:

Multiple sclerosis (MS) is a neurological disease that may restrict young adults’ participation in daily activities. Main symptoms include fatigue, weakness and cognitive decline; the appearance of symptoms, their severity and the rate of deterioration vary between patients. The challenge for health services is to provide long-term rehabilitation services to people with MS. The objective of this presentation is to describe the course of a tele-rehabilitation service for a woman with MS. Methods: R. is a 48-year-old woman, diagnosed with MS when she was 22. She began to suffer from weakness of her non-dominant left upper extremity about ten years after the diagnosis. She was referred to the tele-rehabilitation service by her rehabilitation team 16 years after diagnosis. Her goals were to improve the ability to use her affected upper extremity in daily activities. On admission, her score on the Mini-Mental State Exam was 30/30. Her Fugl-Meyer Assessment (FMA) score for the left upper extremity was 48/60, indicating mild weakness, and her shoulder abduction was limited to 90 degrees. In addition, she reported little use of her arm in daily activities, as shown in her responses to the Motor Activity Log (MAL), which were 1.25/5 for amount of use and 1.37 for quality of use. R. received two 30-minute online sessions per week in the tele-rehabilitation service with the CogniMotion system, complemented by self-practice with the system. The CogniMotion system provides a hybrid (synchronous-asynchronous), home-based tele-rehabilitation program to improve the motor, cognitive and functional status of people with neurological deficits. The system consists of a computer, a large monitor, and Microsoft’s Kinect 3D sensor. This equipment is located in the client’s home and connected via WiFi to a clinician’s computer setup in a remote clinic. The client sits in front of the monitor and uses body movements to interact with games and tasks presented on the monitor. The system provides feedback in the form of ‘knowledge of results’ (e.g., success in a game) and ‘knowledge of performance’ (e.g., alerts for compensatory movements) to enhance motor learning. The games and tasks were adapted to R.’s motor abilities, and the level of difficulty was gradually increased in line with those abilities. The results of her second assessment (after 35 online sessions) showed improvement in her FMA score to 52 and her shoulder abduction to 140 degrees. Moreover, her responses to the MAL indicated an increased amount (2.4) and quality (2.2) of use of her left upper extremity in daily activities. She reported a high level of enjoyment of the treatments (5/5), specifically the combination of cognitive challenges while moving her body. In addition, she found the system easy to use, as reflected by her responses to the System Usability Scale (85/100). To date, R. continues to receive treatments in the tele-rehabilitation service. To conclude, this case report shows the potential of tele-rehabilitation for people with MS to provide strategies that enhance the use of the upper extremity in daily activities as well as to maintain motor function.

Keywords: motor function, multiple-sclerosis, tele-rehabilitation, daily activities

Procedia PDF Downloads 180
205 Media, Myth and Hero: Sacred Political Narrative in Semiotic and Anthropological Analysis

Authors: Guilherme Oliveira

Abstract:

The assimilation of images and their potential symbolism into lived experience is inherent. It is through this exercise of recognition via imagistic records that questions arise about the origins of a constant narrative stimulated by the media. The construction of the "Man" archetype, and the reflections of active masculine imagery in the 21st century when conveyed through media channels, could potentially have detrimental effects. Addressing this systematic behavioural chronology of virile cisgender masculinity, disseminated imagistically through these channels, involves exploring potential resolutions. An investigation is therefore undertaken into the representation of the 'hero' in this media emulation, through idols contextualized in the political sphere, with the purpose of elucidating how narratives based on mythical, historical, and sacred accounts are simulated and emulated. In this process of sharing, the narratives contained in the imagistic structures offered by information-dissemination channels seek validation through public acceptance. To achieve this consensus, a visual repertoire adorned with mythological and sacred symbolisms adapted to the intended environment is promoted, thus exploiting sociocultural characteristics in favour of political marketing. Visual recognition thereby becomes a direct reflection of a cultural heritage acquired through lived human experience, stimulated by continuous representations throughout history. Echoes of imagery and narratives undergo a constant process of resignification of their concepts, sharpened by their premises and adapted to the environment in which they seek to establish themselves. The political figures analysed in this article take possession of symbolisms, mythological stories, and heroisms, and adapt their visual construction through a continuous praxis of emulation. They thus utilize iconic mythological narratives to gain credibility through belief; in this process, the idol becomes the very act of releasing trauma, offering believers liberation from preconceived concepts and allowing new meanings to be attributed. To unpack this issue and highlight the subjectivities within the intention of the image, a linguistic, semiotic, and anthropological methodology is developed. The linguistic strand uses expressions such as 'Blaming the Image' to create a mechanism of expressive action, questioning why a visual construction or composition is blamed and thus seeking answers in the first act. The semiotic and anthropological strands develop an imagistic atlas of graphic analysis, making connections, comparisons, and relations between modern and sacred/mystical narratives and emphasizing the different subjective layers of embedded symbolism. This constitutes a performative act of disarming the image: it creates a disenchantment of the superficial gaze under the constant reproduction of visual content stimulated by virtual networks, enabling a discussion about the acceptance of caricatures built on past fables.

Keywords: image, heroic narrative, media heroism, virile politics, political, myth, sacred performance, visual mythmaking, characterization dynamics

Procedia PDF Downloads 50
204 Kinetic Rate Comparison of Methane Catalytic Combustion of Palladium Catalysts Impregnated onto ɤ-Alumina and Bio-Char

Authors: Noor S. Nasri, Eric C. A. Tatt, Usman D. Hamza, Jibril Mohammed, Husna M. Zain

Abstract:

Climate change has become a global environmental issue that may trigger irreversible changes in the environment, with catastrophic consequences for humans, animals and plants on our planet. Methane, carbon dioxide and nitrous oxide are greenhouse gases (GHGs) and major contributors to global warming. Carbon dioxide is mainly produced and released to the atmosphere by the thermal industrial and power generation sectors. Methane, the dominant component of natural gas, releases significant thermal heat, along with gaseous pollutants, when homogeneous thermal combustion takes place at high temperature. The principle of heterogeneous catalytic combustion (HCC) is a promising technology for environmentally friendly energy production: it should be developed to ensure higher yields with lower gaseous pollutant emissions and to perform complete combustion oxidation at moderate temperatures, compared with homogeneous high-temperature combustion. Hence, the principle has become a very interesting alternative for the total oxidation treatment of gaseous pollutant emissions, especially NOx formation. Noble metals are dispersed on a porous HCC support such as γ-Al2O3, TiO2 or ThO2 to increase the thermal stability of the catalyst and the effectiveness of catalytic combustion. The porous support material is selected on the basis of surface area, porosity, thermal stability, thermal conductivity, reactivity with reactants or products, chemical stability, catalytic activity, and catalyst life. γ-Al2O3, which offers high catalytic activity and a long catalyst life, is commonly used as the support for Pd catalysts at low temperatures. A sustainable and renewable support material, biomass char, was derived from agro-industrial waste and compared with the conventional porous support. The abundant biomass waste generated by the palm oil industry is one potential source for converting waste into a sustainable replacement support material for catalysts. The objective of this study was to compare the kinetic rates of methane combustion on palladium (Pd) catalysts with an Al2O3 support and with a bio-char (Bc) support derived from kernel shell. The 2 wt% Pd catalysts were prepared using the incipient wetness impregnation method, and the HCC performance tests were carried out in a tubular quartz reactor with a gas mixture of 3% methane and 97% air. Material characterization was performed using TGA, SEM, and BET surface area analysis. Methane conversion was monitored by an online gas analyzer connected to the reactor. The BET surface area of the prepared 2 wt% Pd/Bc is smaller than that of the prepared 2 wt% Pd/Al2O3 due to its low porosity between particles. The order of catalyst activity, based on the kinetic rate of reaction at low temperature, is: prepared 2 wt% Pd/Bc > calcined 2 wt% Pd/Al2O3 > prepared 2 wt% Pd/Al2O3 > calcined 2 wt% Pd/Bc. Hence, the use of agro-industrial biomass waste material can advance the sustainability principle.
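
As a sketch of how such an activity ordering can be derived, apparent rate constants can be computed from measured conversions, assuming first-order kinetics in methane (an assumption made here for illustration; the paper's kinetic treatment and the conversion values below are not given in the abstract):

```python
import math

tau = 0.5  # s, assumed residence time in the tubular quartz reactor
conversions = {  # hypothetical methane conversions X at one low temperature
    "prepared 2wt% Pd/Bc": 0.42,
    "calcined 2wt% Pd/Al2O3": 0.35,
    "prepared 2wt% Pd/Al2O3": 0.28,
    "calcined 2wt% Pd/Bc": 0.20,
}

# For first-order kinetics in a plug-flow reactor: k = -ln(1 - X) / tau
for catalyst, X in conversions.items():
    k = -math.log(1.0 - X) / tau
    print(f"{catalyst}: k = {k:.3f} 1/s")
```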

Keywords: catalytic-combustion, environmental, support-bio-char material, sustainable and renewable material

Procedia PDF Downloads 389
203 Increased Stability of Rubber-Modified Asphalt Mixtures to Swelling, Expansion and Rebound Effect during Post-Compaction

Authors: Fernando Martinez Soto, Gaetano Di Mino

Abstract:

The application of rubber in bituminous mixtures requires attention and care during mixing and compaction. Rubber modifies the properties of the mixture because it reacts within the internal structure of the bitumen at high temperatures, changing the performance of the blend (the interaction process of solvents with the binder-rubber aggregate). The main change is an increase in the viscosity and elasticity of the binder due to the larger rubber particle sizes used in the dry process; but this positive effect is counteracted by short mixing times (compared with the wet technology) and by the transport processes, curing time and post-compaction of the mixtures. As a result, negative effects such as swelling of the rubber particles, rebound of the specimens and thermal changes caused by differential expansion of the structure inside the mixtures can alter the mechanical properties of the rubberized blends. Based on the dry technology, different asphalt-rubber binders using devulcanized or natural rubber (truck and bus tread rubber) have served to demonstrate these effects, and how to mitigate them, in two dense gap-graded rubber-modified asphalt concrete mixes (RUMAC), so as to enhance the stability, workability and durability of samples compacted by the Superpave gyratory compactor method. This paper describes the procedures developed in the Department of Civil Engineering of the University of Palermo from September 2016 to March 2017 for characterizing the post-compaction and mix stability of one conventional mixture (hot-mix asphalt without rubber) and two gap-graded rubberized asphalt mixes, graded for rail sub-ballast layers with a nominal aggregate size of Ø22.4 mm according to the European standard. The main purpose of this laboratory research is the application of ambient ground rubber from scrap tires, processed at conventional temperature (20ºC), inside hot bituminous mixtures (160-220ºC) as a substitute for 1.5%, 2% and 3% by weight of the total aggregates (3.2%, 4.2% and 6.2%, respectively, by volume of the limestone aggregates, whose bulk density equals 2.81 g/cm³), considered as aggregate rather than as part of the asphalt binder. The reference bituminous mixture was designed with 4% binder and ±3% air voids, manufactured with a conventional B50/70 bitumen at mixing-compaction temperatures of 160ºC-145ºC to guarantee the workability of the mixes. The rubber proportions proposed are 60-40% for the mixtures with 1.5% and 2% of rubber, and 20-80% for the mixture with 3% of rubber (for example, 60% of Ø0.4-2 mm and 40% of Ø2-4 mm). The temperature of the asphalt cement is between 160-180ºC for mixing and 145-160ºC for compaction, according to the optimal viscosity values obtained from Brookfield viscometer and 'ring and ball' penetration tests. The crumb rubber particles act as a rubber-aggregate in the mixture, with sizes between 0.4 mm and 2 mm in the first fraction and 2-4 mm in the second. Ambient ground rubber with a specific gravity of 1.154 g/cm³ is used; the rubber is free of loose fabric, wire, and other contaminants. Optimal results, with a reduced swelling effect, were found in real beams and cylindrical specimens for each HMA mixture. The different factors affecting the interaction process, such as temperature, rubber particle size, and the number of cycles and pressures of compaction, are explained.
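
The weight-to-volume substitution quoted above can be roughly checked from the two densities given in the abstract; this sketch approximates the published figures, whose exact values depend on the basis the authors used:

```python
rho_rubber = 1.154    # g/cm3, ambient ground rubber (specific gravity)
rho_aggregate = 2.81  # g/cm3, limestone aggregate bulk density

# Replace w_pct grams of every 100 g of total aggregate with rubber
for w_pct in (1.5, 2.0, 3.0):
    v_rubber = w_pct / rho_rubber
    v_aggregate = (100.0 - w_pct) / rho_aggregate
    v_pct = 100.0 * v_rubber / (v_rubber + v_aggregate)
    print(f"{w_pct}% by weight ~ {v_pct:.1f}% by volume")
```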

Keywords: crumb-rubber, gyratory compactor, rebounding effect, superpave mix-design, swelling, sub-ballast railway

Procedia PDF Downloads 243
202 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry

Authors: Dhanuj M. Gandikota

Abstract:

Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist among current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its usage is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of LIDAR active remote sensing, ALS is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of vegetation 3D point clouds: it provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS have technical limitations in capturing valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called deep learning (DL) that show promise in recent research on 3D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing, through learned patterns of tree species’ fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point cloud of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3D tree point clouds (ground truth being the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error from the original TLS point clouds and through the error in extraction of key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain of using minimal locally sourced bio-inventory metric information as an input to ML systems, in order to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3D point clouds, as well as the supported potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
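
One plausible way to "deconstruct" a TLS cloud into an overhead-sensor analogue is to keep only the returns near the locally highest surface in each horizontal cell; the abstract does not detail the authors' actual procedure, so the sketch below is an assumption for illustration.

```python
import numpy as np

def simulate_overhead_capture(tls_points, cell=0.25, depth=0.5):
    """Emulate ALS/satellite sensing from a TLS cloud by keeping, in each
    horizontal grid cell, only points within `depth` metres of the locally
    highest return (the canopy surface visible from above).
    tls_points: (N, 3) array of x, y, z coordinates in metres."""
    ij = np.floor(tls_points[:, :2] / cell).astype(int)
    keys = ij[:, 0] * 100000 + ij[:, 1]  # flattened 2-D cell index
    keep = np.zeros(len(tls_points), dtype=bool)
    for key in np.unique(keys):
        mask = keys == key
        z_top = tls_points[mask, 2].max()
        keep[mask] = tls_points[mask, 2] >= z_top - depth
    return tls_points[keep]

# Sparse "overhead" cloud as DL input; full TLS cloud as the training target
tls = np.random.rand(10000, 3) * [10.0, 10.0, 20.0]  # stand-in for a scan
als_like = simulate_overhead_capture(tls)
print(len(tls), "->", len(als_like), "points")
```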

Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry

Procedia PDF Downloads 102
201 Bacteriophages for Sustainable Wastewater Treatment: Application in Black Water Decontamination with an Emphasis to DRDO Biotoilet

Authors: Sonika Sharma, Mohan G. Vairale, Sibnarayan Datta, Soumya Chatterjee, Dharmendra Dubey, Rajesh Prasad, Raghvendra Budhauliya, Bidisha Das, Vijay Veer

Abstract:

Bacteriophages are viruses that parasitize specific bacteria and multiply in metabolising host bacteria. Bacteriophages hunt for a single bacterial species or a subset of species, making them potential antibacterial agents. Utilizing the ability of phages to control bacterial populations has several applications, from medicine to agriculture, aquaculture and the food industry. However, harnessing phage-based techniques in wastewater treatment, to improve the quality of effluent and sludge released into the environment, is a promising area for R&D. The phage-mediated bactericidal effect in any wastewater treatment process is governed by many factors that determine treatment performance. In laboratory conditions, the titer of bacteriophages (coliphages) isolated from the effluent water of a specially designed anaerobic digester of human night soil (DRDO Biotoilet) was successfully increased with a modified protocol of the classical double layer agar technique. Enrichment of the phage was carried out, and the efficacy of the phage-enriched medium was evaluated under different conditions (specific media, temperature, storage conditions). A growth optimization study was carried out on different media, including soybean casein digest medium (tryptone soya medium), Luria-Bertani medium, phage deca broth medium and MNA (modified nutrient) medium. Further, the temperature-phage yield relationship was examined at three temperatures, 27˚C, 37˚C and 44˚C, under laboratory conditions; results showed higher coliphage activity at 27˚C and 37˚C. The addition of divalent ions (10 mM MgCl2, 5 mM CaCl2) and 5% glycerol resulted in a significant increase in phage titer. Besides this, the effect of adding antibiotics such as ampicillin and kanamycin at different concentrations on plaque formation was analysed; ampicillin at a concentration of 1 mg/ml was found to stimulate phage infection and result in a greater number of plaques. Experiments to test phage viability showed that phages can remain active for 6 months at 4˚C in fresh tryptone soya broth supplemented with a fresh culture of coliforms (early log phase). The application of bacteriophages (especially coliphages) for the treatment of effluent water contaminated with human faecal matter is unique. This environment-friendly treatment system not only reduces the pathogenic coliforms but also decreases the competition between nuisance bacteria and functionally important microbial populations. Therefore, a phage-based cocktail to treat faecal pathogenic bacteria present in black water has many implications for wastewater treatment processes, including the ‘DRDO Biotoilet’, an eco-friendly, appropriate and affordable human faecal matter treatment technology for different climates and situations.
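
For reference, the titer obtained from the double layer agar technique follows the standard plaque-assay calculation; the plate counts below are illustrative numbers, not the study's results.

```python
def pfu_per_ml(plaques, dilution, volume_plated_ml):
    """Plaque-forming units per ml: plaques / (dilution x volume plated)."""
    return plaques / (dilution * volume_plated_ml)

# e.g. 85 plaques on a plate inoculated with 0.1 ml of a 10^-6 dilution
print(f"{pfu_per_ml(85, 1e-6, 0.1):.2e} PFU/ml")  # 8.50e+08 PFU/ml
```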

Keywords: wastewater, microbes, virus, biotoilet, phage viability

Procedia PDF Downloads 436
200 Motherhood Factors Influencing the Business Growth of Women-Owned Sewing Businesses in Lagos, Nigeria: A Mixed Method Study

Authors: Oyedele Ogundana, Amon Simba, Kostas Galanakis, Lynn Oxborrow

Abstract:

The debate about factors influencing the business growth of women-owned businesses has been a topical issue in business management. Scholars have identified access to money, markets, and management as the key factors influencing the business growth of women-owned businesses. However, the influence of motherhood (the household/family context) on business growth remains inconclusive in the literature, despite women being more family-oriented than their male counterparts. Therefore, this research study considers the influence of the motherhood factor (household/family context) on the business growth of women-owned sewing businesses (WOSBs) in Lagos, Nigeria. The sewing business sector is chosen because the fashion industry (which includes sewing businesses) currently accounts for the second-largest number of jobs in Sub-Saharan Africa, after agriculture. Thus, sewing businesses provide rich ground for contributing to existing scholarly work. Research questions: (1) In what way does the motherhood factor influence the business growth of WOSBs in Lagos? (2) To what extent does the motherhood factor influence the business growth of WOSBs in Lagos? For the method design, a pragmatic approach, a mixed-methods technique and an abductive form of reasoning are adopted. This design is chosen because it fits the research questions posed in this study better than other research perspectives; for instance, a positivist approach would not sufficiently answer research question 1, nor would an interpretive approach sufficiently answer research question 2. The research design is therefore divided into two phases, with the results from one phase used to inform the development of the next (only phase 1 has been completed at the time of writing). The first phase uses qualitative data and analytical methods to answer research question 1, while the second phase uses quantitative data and analytical methods to answer research question 2. For the qualitative phase, 5 WOSBs were purposefully selected and interviewed. This sampling technique was selected because the intention at this phase was not to make statistical inferences; rather, the purpose was exploratory. The 5 sampled women comprised 2 unmarried women, 1 married woman with no child, and 2 married women with children. A 40-60 minute interview was conducted with each participant. The interviews were audio-recorded and transcribed, and the data were then analysed using thematic analysis in order to unearth patterns and relationships. Findings from the first phase reveal that motherhood (the household/family context) directly influences (positively or negatively) the performance of WOSBs in Lagos. Apart from this direct influence, motherhood also moderates (positively or negatively) the other factors influencing WOSBs in Lagos, such as access to money, management/human resources and markets/opportunities. Strengthening this conclusion, a word frequency query shows that ‘family,’ ‘husband’ and ‘children’ are among the 10 most frequently used words across all the interview transcripts. This first phase contributes to existing studies by showing the various ways in which motherhood influences WOSBs. The second phase (for which data are yet to be collected) will reveal the extent to which motherhood influences the business growth of WOSBs in Lagos.
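
A word frequency query of the kind reported above can be reproduced with a few lines of code; the transcript snippets and stop-word list here are placeholders, not the study's data.

```python
import re
from collections import Counter

# Placeholder transcript excerpts, one string per participant
transcripts = [
    "my husband supports the business but the children come first",
    "family duties decide when I open the shop",
]
stopwords = {"the", "a", "and", "to", "of", "in", "i", "my", "but", "when"}

words = [w for text in transcripts
         for w in re.findall(r"[a-z']+", text.lower())
         if w not in stopwords]

# Ten most frequent words across all transcripts
print(Counter(words).most_common(10))
```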

Keywords: women-owned sewing businesses, business growth, motherhood, Lagos

Procedia PDF Downloads 163
199 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated; however, 70-90% of the traffic signals across the USA are not synchronized. The reason is insufficient resources to create and implement timing plans. In this work, we discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and to collect accurate traffic data 24/7/365 using a vehicle detection system. We discuss recent advances in AI technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and the best workflow for doing so. This paper also showcases how AI makes signal timing affordable. We introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain and consists of millions of densely connected processing nodes. This is a form of machine learning in which the neural net learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, but they often lack the studies or data to support their decisions; currently, one-third of transportation agencies do not collect pedestrian and bike data. We discuss how the use of AI for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in this research include a camera-based identification method built on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods that require data collection by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores themes such as how technologies powered by AI can benefit your community, and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that AI brings to traffic signal control and data collection are unsurpassed.
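
To make the CNN-based detection idea concrete, the following is a minimal classifier sketch; it is an illustrative toy, not the commercial system described, and real deployments use far deeper detection networks.

```python
import torch
import torch.nn as nn

class RoadUserCNN(nn.Module):
    """Tiny CNN for classifying camera crops into road-user categories
    (e.g. vehicle / pedestrian / bike / background)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, 3, H, W) image tensor
        return self.classifier(self.features(x).flatten(1))

logits = RoadUserCNN()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 4])
```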

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 169
198 Flexible Ethylene-Propylene Copolymer Nanofibers Decorated with Ag Nanoparticles as Effective 3D Surface-Enhanced Raman Scattering Substrates

Authors: Yi Li, Rui Lu, Lianjun Wang

Abstract:

With the rapid development of the chemical industry, the consumption of volatile organic compounds (VOCs) has increased extensively. In the course of VOC production and application, large quantities have been released into the environment, leading to pollution problems not only in soil and groundwater but also harm to human beings. Thus, it is important to develop a sensitive and cost-effective analytical method for trace VOC detection in the environment. Surface-enhanced Raman spectroscopy (SERS), one of the most sensitive optical analytical techniques, offering rapid response, pinpoint accuracy and noninvasive detection, has been widely used for ultratrace analysis. Based on plasmon resonance at the nanoscale metallic surface, SERS can even detect single molecules, owing to the abundant nanogaps (i.e., ‘hot spots’) on the nanosubstrate. In this work, self-supported flexible silver nitrate (AgNO3)/ethylene-propylene copolymer (EPM) hybrid nanofibers were fabricated by electrospinning. After in-situ chemical reduction using ice-cold sodium borohydride as the reducing agent, numerous silver nanoparticles formed on the nanofiber surface. By adjusting the reduction time and AgNO3 content, the morphology and dimensions of the silver nanoparticles could be controlled. Following the principles of solid-phase extraction, hydrophobic substances are more likely to partition into the hydrophobic EPM membrane in an aqueous environment, while water and other polar components are excluded from the analytes. Through this enrichment by the EPM fibers, the number of hydrophobic molecules located at the ‘hot spots’ generated by the criss-crossed nanofibers is greatly increased, which further enhances the SERS signal intensity. The as-prepared Ag/EPM hybrid nanofibers were first employed to detect a common SERS probe molecule (p-aminothiophenol), with a detection limit down to 10⁻¹² M, demonstrating excellent SERS performance. To further study the application of the fabricated substrate for monitoring hydrophobic substances in water, several typical VOCs, such as benzene, toluene and p-xylene, were selected as model compounds. The results showed that the characteristic peaks of these target analytes in mixed aqueous solution could be distinguished even at a concentration of 10⁻⁶ M after a multi-peak Gaussian fitting process, including the C-H bending (850 cm⁻¹) and C-C ring stretching (1581 cm⁻¹, 1600 cm⁻¹) of benzene; the C-H bending (844 cm⁻¹, 1151 cm⁻¹), C-C ring stretching (1001 cm⁻¹) and CH3 bending vibration (1377 cm⁻¹) of toluene; and the C-H bending (829 cm⁻¹) and C-C stretching (1614 cm⁻¹) of p-xylene. The SERS substrate has the remarkable advantage of combining the enrichment capacity of EPM with the Raman enhancement of the Ag nanoparticles. Meanwhile, the huge specific surface area resulting from electrospinning is beneficial for increasing the number of adsorption sites and promotes ‘hot spot’ formation. In summary, this work shows powerful potential for the rapid, on-site and accurate detection of trace VOCs using a portable Raman spectrometer.
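
The multi-peak Gaussian fitting step mentioned above can be sketched with SciPy; the synthetic spectrum below stands in for a real background-subtracted SERS trace.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(x, *p):
    """Sum of Gaussians; p holds (amplitude, centre, width) triplets."""
    y = np.zeros_like(x, dtype=float)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-((x - c) ** 2) / (2 * w ** 2))
    return y

# Synthetic toluene-like bands near 1001 and 1377 cm-1 plus noise
shifts = np.linspace(800, 1700, 901)
intensity = (multi_gauss(shifts, 1.0, 1001, 6, 0.6, 1377, 8)
             + np.random.normal(0, 0.02, shifts.size))

p0 = [1.0, 1001, 5, 0.5, 1377, 5]  # initial guesses near expected bands
popt, _ = curve_fit(multi_gauss, shifts, intensity, p0=p0)
print(popt.reshape(-1, 3))  # fitted (amplitude, centre, width) per band
```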

Keywords: electrospinning, ethylene-propylene copolymer, silver nanoparticles, SERS, VOCs

Procedia PDF Downloads 160
197 Urban Slum Communities Engage in the Fight Against TB in Karnataka, South India

Authors: N. Rambabu, H. Gururaj, Reynold Washington, Oommen George

Abstract:

Motivation: Under the USAID Strengthening Health Outcomes through Private Sector (SHOPS-TB) initiative, Karnataka Health Promotion Trust (KHPT), with technical support from Abt Associates, is implementing a TB prevention and care model in Karnataka State, South India. KHPT is the interface agency between the public and private sectors, providers and the target community, facilitating early TB case detection and enhancing treatment compliance through private health care provider (pHCP) engagement in the RNTCP. The project covers 0.84 million urban poor in 663 slums across 12 districts of Karnataka. Problem statement: India, with the highest burden of global TB (26%) and two million cases annually, accounts for approximately one-fifth of the global incidence. WHO estimates that 300,000 people die from TB annually in India. India expanded the coverage of Directly Observed Treatment, Short-course chemotherapy (DOTS) to the entire country as early as 2006. However, the performance of the RNTCP has not been uniform across states: while the national annual new smear-positive (NSP) case notification rate is 53 per 100,000 population, it is much lower, at 47, in Karnataka. A third of TB patients in India reside in urban slums. Approach: Under SHOPS, KHPT actively engages with communities through key opinion leaders and community structures. Interpersonal communication by outreach workers (ORWs), through house-to-house visits and at aggregation points, is the primary method used to communicate about TB and its management and to increase demand for sputum examination and DOTS. pHCPs are mapped, trained and mentored by KHPT. ORWs also provide patient and family counseling on TB treatment, side effects and adherence; screen close contacts of index patients, especially children under 6 years of age; and screen for co-morbidities, including HIV, diabetes and malnutrition, and risk factors, including alcoholism, tobacco use and occupational hazards, making appropriate accompanied or documented referrals. A treatment ‘buddy’ system involving patients’ close friends or family members, ICT-based support, DOTS Prerana (inspiration) groups of TB patients, family members and the community, and DOTS Mitra (friend) helpline services are also used for care and support. Results: Within three months of continuous community engagement, the intervention educated 39,988 slum dwellers, referred 1,731 chest symptomatics, tested 1,061 patients and initiated 248 patients on anti-TB treatment. Conclusions: The intervention’s potential to increase access to preferred health care providers, reduce patient and health-system delays in diagnosis and treatment initiation, improve health-seeking behaviour and enhance pHCP compliance with standard treatment protocols is being monitored. Initial results are promising.
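
The reported figures form a care cascade whose step-to-step conversion rates can be computed directly:

```python
# Care-cascade conversions from the figures reported above
cascade = [("educated", 39988), ("referred", 1731),
           ("tested", 1061), ("initiated on treatment", 248)]

for (prev_step, prev_n), (step, n) in zip(cascade, cascade[1:]):
    print(f"{prev_step} -> {step}: {100 * n / prev_n:.1f}%")
```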

Keywords: DOTS, KHPT, health outcomes, public and private sector

Procedia PDF Downloads 316
196 University Climate and Psychological Adjustment: African American Women’s Experiences at Predominantly White Institutions in the United States

Authors: Faheemah N. Mustafaa, Tamarie Macon, Tabbye Chavous

Abstract:

A major concern of university leaders worldwide is how to create environments where students from diverse racial/ethnic, national, and cultural backgrounds can thrive. Over the past decade or so in the United States, African American women have done exceedingly well in terms of college enrollment, academic performance, and completion. However, the relative academic successes of African American women in higher education have in some ways overshadowed the social challenges many Black women continue to encounter on college campuses in the United States. Within predominantly White institutions (PWIs) in particular, there is consistent evidence that many Black students experience racially hostile climates. However, research on racial climates within PWIs has mostly focused on cross-sectional comparisons of minority and majority group experiences, and few studies have examined campus racial climate in relation to short- and longer-term well-being. One longitudinal study reported that African American women’s psychological well-being was positively related to their comfort in cross-racial interactions (a concept closely related to campus climate). Thus, our primary research question was: Do African American women’s perceptions of campus climate (tension and positive association) during their freshman year predict their reports of psychological distress and well-being (self-acceptance) during their sophomore year? Participants were part of a longitudinal survey examining African American college students’ academic identity development, particularly in Science, Technology, Engineering, and Mathematics (STEM) fields. The final subsample included 134 self-identified African American/Black women enrolled in PWIs. Accounting for background characteristics (mother’s education, family income, interracial contact, and prior levels of the outcomes), we employed hierarchical regression to examine relationships between campus racial climate during the freshman year and psychological adjustment one year later. Both regression models significantly predicted African American women’s psychological outcomes (for distress, F(7,91) = 4.34, p < .001; for self-acceptance, F(7,90) = 4.92, p < .001). Although none of the controls were significant predictors, perceptions of racial tension on campus were associated with both distress and self-acceptance. Greater perceived tension was related to African American women’s greater psychological distress the following year (B = 0.22, p = .01). Additionally, racial tension predicted later self-acceptance in the expected direction: higher first-year reports of racial tension were related to less positive attitudes toward the self during the sophomore year (B = -0.16, p = .04). However, perceptions that it was normative for Black and White students to socialize on campus (positive association scores) were unrelated to psychological distress or self-acceptance. The findings highlight the relevance of examining multiple facets of campus racial climate in relation to psychological adjustment, with particular emphasis on the import of racial tension for African American women’s adjustment. The results suggest that negative dimensions of campus racial climate may have lingering effects on psychological well-being, over and above more positive aspects of climate. Thus, programs targeted toward improving student relations on campus should consider addressing cross-racial tensions.
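
A hierarchical (stepwise) regression of the kind described can be sketched as two nested OLS models compared with an F-test; the synthetic data and variable names are stand-ins, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 134  # matching the subsample size reported above
df = pd.DataFrame({
    "mother_educ": rng.integers(1, 6, n),
    "income": rng.normal(0, 1, n),
    "interracial_contact": rng.normal(0, 1, n),
    "distress_t1": rng.normal(0, 1, n),
    "racial_tension": rng.normal(0, 1, n),
    "positive_association": rng.normal(0, 1, n),
})
df["distress_t2"] = (0.3 * df["distress_t1"] + 0.22 * df["racial_tension"]
                     + rng.normal(0, 1, n))

# Step 1: background controls only; Step 2: add the climate predictors
m1 = smf.ols("distress_t2 ~ mother_educ + income + interracial_contact"
             " + distress_t1", data=df).fit()
m2 = smf.ols("distress_t2 ~ mother_educ + income + interracial_contact"
             " + distress_t1 + racial_tension + positive_association",
             data=df).fit()

print(anova_lm(m1, m2))  # F-test for the increment of the climate block
print(m2.params[["racial_tension", "positive_association"]])
```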

Keywords: higher education, psychological adjustment, university climate, university students

Procedia PDF Downloads 385
195 Design of Evaluation for Ehealth Intervention: A Participatory Study in Italy, Israel, Spain and Sweden

Authors: Monika Jurkeviciute, Amia Enam, Johanna Torres Bonilla, Henrik Eriksson

Abstract:

Introduction: Many evaluations of eHealth interventions conclude that the evidence for improved clinical outcomes is limited, especially when the intervention is short, such as one year. Often, the evaluation design does not address the feasibility of achieving clinical outcomes. Evaluations are designed to reflect the clinical goals of the intervention without utilizing the opportunity to illuminate effects on organizations and cost. A comprehensive design of evaluation can better support decision-making regarding the effectiveness and potential transferability of eHealth. Hence, the purpose of this paper is to present a feasible and comprehensive design of evaluation for eHealth intervention, including the design process in different contexts. Methodology: The situation of limited feasibility of clinical outcomes was foreseen in the European Union-funded project “DECI” (“Digital Environment for Cognitive Inclusion”), run under the “Horizon 2020” program with the aim of defining and testing a digital environment platform within corresponding care models that help elderly people live independently. A complex intervention of eHealth implementation into elaborate care models in four different countries was planned for one year. To design the evaluation, a participative approach was undertaken using Pettigrew’s lens of change and transformations, including context, process, and content. Through a series of workshops, observations, interviews, and document analysis, as well as a review of the scientific literature, a comprehensive design of evaluation was created. Findings: The findings indicate that in order to obtain evidence on clinical outcomes, eHealth interventions should last longer than one year. The content of the comprehensive evaluation design includes a collection of qualitative and quantitative methods for data gathering that illuminate non-medical aspects. Furthermore, it contains communication arrangements to discuss the results and continuously improve the evaluation design, as well as procedures for monitoring and improving data collection during the intervention. The process of the comprehensive evaluation design consists of four stages: (1) analysis of the current state in different contexts, including measurement systems, expectations and profiles of stakeholders, organizational ambitions to change due to eHealth integration, and the organizational capacity to collect data for evaluation; (2) a workshop with project partners to discuss the as-is situation in relation to the project goals; (3) development of general and customized sets of relevant performance measures, questionnaires and interview questions; (4) setting up procedures and monitoring systems for the interventions. Lastly, strategies are presented on how challenges can be handled during the evaluation design process in four different countries. The evaluation design needs to consider contextual factors such as project limitations and differences between pilot sites in terms of eHealth solutions, patient groups, care models, and national and organizational cultures and settings. This implies a need for a flexible approach to evaluation design to enable judgment of the effectiveness and the potential for adoption and transferability of eHealth. In summary, this paper provides learning opportunities for future evaluation designs of eHealth interventions in different national and organizational settings.

Keywords: ehealth, elderly, evaluation, intervention, multi-cultural

Procedia PDF Downloads 324
194 Songwriting in the Postdigital Age: Using TikTok and Instagram as Online Informal Learning Technologies

Authors: Matthias Haenisch, Marc Godau, Julia Barreiro, Dominik Maxelon

Abstract:

In times of ubiquitous digitalization and the increasing entanglement of humans and technologies in musical practices in the 21st century, the question arises how popular musicians learn in the (post)digital age. Against the backdrop of the increasing interest in transferring informal learning practices into formal settings of music education, the interdisciplinary research association »MusCoDA – Musical Communities in the (Post)Digital Age« (University of Erfurt/University of Applied Sciences Clara Hoffbauer Potsdam, funded by the German Ministry of Education and Research) pursues the goal of deriving an empirical model of collective songwriting practices from the study of the informal learning of songwriters and bands that can be translated into pedagogical concepts for music education in schools. Drawing on concepts from Community of Musical Practice and Actor-Network Theory, learning is considered not only as social practice and as participation in online and offline communities, but also as an effect of heterogeneous networks composed of human and non-human actors. Learning is not seen as an individual, cognitive process, but as the formation and transformation of actor networks, i.e., as a practice of assembling and mediating humans and technologies. Based on video-stimulated recall interviews and videography of online and offline activities, songwriting practices are followed from the initial idea to different forms of performance and distribution. The data evaluation combines coding and mapping methods of Grounded Theory Methodology and Situational Analysis. This results in network maps in which both the temporality of creative practices and the material and spatial relations of human and technological actors are reconstructed. In addition, positional analyses document the power relations between the participants that structure the learning process of the field. In the area of online informal learning, initial key research findings reveal a transformation of the learning subject through the specific technological affordances of TikTok and Instagram and the accompanying changes in the learning practices of the corresponding online communities. Learning is explicitly shaped by the material agency of online tools and features and by the social practices entangled with these technologies. Thus, any human online community member can be invited to directly intervene in creative decisions that contribute to the further compositional and structural development of songs. At the same time, participants can provide each other with intimate insights into songwriting processes in progress and have the opportunity to perform together with strangers and idols. Online learning is characterized by an increase in social proximity, distribution of creative agency and informational exchange between participants. While it seems obvious that traditional notions not only of learning but also of the learning subject cannot be maintained, the question arises how exactly the observed informal learning practices, and the subject that emerges from the use of social media as online learning technologies, can be transferred into contexts of formal learning.

Keywords: informal learning, postdigitality, songwriting, actor-network theory, community of musical practice, social media, TikTok, Instagram, apps

Procedia PDF Downloads 126
193 Application of Self-Efficacy Theory in Counseling Deaf and Hard of Hearing Students

Authors: Nancy A. Delich, Stephen D. Roberts

Abstract:

This case study explores the use of self-efficacy theory in counseling deaf and hard of hearing students in one California school district. Self-efficacy is described as the confidence a student has in performing the set of skills required to succeed at a specific task. When students need to learn a skill, self-efficacy can be a major factor in influencing behavioral change. Self-efficacy is domain specific, meaning that students can have high confidence in their abilities to accomplish a task in one domain while at the same time having low confidence in their abilities to accomplish another task in a different domain. The communication isolation experienced by deaf and hard of hearing children and adolescents can negatively impact their beliefs about their ability to navigate life challenges. There is a need to address issues that impact deaf and hard of hearing students’ social-emotional development. Failure to address these needs may result in depression, suicidal ideation, and anxiety, among other mental health concerns. Self-efficacy training can be used to address these social-emotional developmental issues with this population. Four sources of experience are applied during an intervention: (a) enactive mastery experience, (b) vicarious experience, (c) verbal persuasion, and (d) physiological and affective states. This case study describes the use of self-efficacy training with a coed group of 12 deaf and hard of hearing high school students who had experienced bullying at school. Beginning with enactive mastery experience, the counselor introduced the topic of bullying to the group. The counselor educated the students about the different types of bullying while teaching them the terminology, signs and their meanings. The most effective way to increase self-efficacy is through extensive practice. To better understand these concepts, the students practiced through role-playing with the goal of developing self-advocacy skills. Vicarious experience is the perception that students have about their capabilities. Viewing other students advocating for themselves, cognitively rehearsing what actions they will and will not take, and teaching each other how to stand up against bullying can strengthen their belief in successfully overcoming bullying. The third source of self-efficacy beliefs is verbal persuasion. It occurs when others express belief in the capabilities of the student. Didactic training and pedagogic materials on bullying were employed as part of the group counseling sessions. The fourth source of self-efficacy appraisal is physiological and affective states. Students expect positive emotions to be associated with successful skilled performance. When students practice new skills, the counselor can apply several strategies to enhance self-efficacy while reducing and controlling emotional and physical states. The intervention plan incorporated all four sources of self-efficacy training during several interactive group sessions on bullying. There was an increased understanding of the issues around bullying, resulting in the students’ belief in their ability to perform protective behaviors and deter future occurrences. The outcome of the intervention plan was a reduction in reported bullying incidents. In conclusion, self-efficacy training can be an effective counseling and teaching strategy for addressing and enhancing the social-emotional functioning of deaf and hard of hearing adolescents.

Keywords: counseling, self-efficacy, bullying, social-emotional development, mental health, deaf and hard of hearing students

Procedia PDF Downloads 352
192 Combustion Variability and Uniqueness in Cylinders of a Radial Aircraft Piston Engine

Authors: Michal Geca, Grzegorz Baranski, Ksenia Siadkowska

Abstract:

The work is part of a project which aims at developing innovative power and control systems for the high-power aircraft piston engine ASz62IR. The developed electronically controlled ignition system will reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and engine capability of efficient combustion of ecological fuels. The tested unit is an air-cooled four-stroke gasoline engine of 9 cylinders in a radial setup, mechanically charged by a radial compressor powered by the engine crankshaft. The total engine cubic capacity is 29.87 dm3, and the compression ratio is 6.4:1. The maximum take-off power is 1000 HP at 2200 rpm. The maximum fuel consumption is 280 kg/h. The engine powers the following aircraft: An-2, M-18 „Dromader”, DHC-3 „OTTER”, DC-3 „Dakota”, GAF-125 „HAWK” and Y5. The main problems of the engine include the imbalanced work of the cylinders. The non-uniformity in each cylinder results in non-uniformity of their work. In a radial engine, the cylinder arrangement causes the mixture movement to take place either in accordance with (lower cylinders) or opposite to (upper cylinders) the direction of gravity. Preliminary tests confirmed the presence of an uneven workflow of individual cylinders. The phenomenon is most intense at low speed. The non-uniformity is visible on the waveform of cylinder pressure. Therefore, two studies were conducted to determine the impact of this phenomenon on engine performance: simulation and real tests. A simplified simulation was conducted on an element of the intake system coated with fuel film. The study shows that there is an effect of gravity on the movement of the fuel film inside the radial engine intake channels. In both the lower and the upper inlet channels, the film flows downwards. It follows that gravity assists the movement of the film in the lower cylinder channels and opposes it in the upper cylinder channels. Real tests on the ASz62IR aircraft engine were conducted under transient conditions (rapid changes of the excess air in each cylinder were performed). Calculations were conducted for the mass of fuel reaching the cylinders theoretically and actually, and on this basis the factors of fuel evaporation “x” were determined. A simplified model of the fuel supply to the cylinder was therefore adopted. The model includes the time constant of the fuel film τ, the number of engine transport cycles of non-evaporated fuel along the intake pipe γ, and the time between successive cycles Δt. The calculation results of the identification of the model parameters are presented in the form of radar graphs. The figures show the average declines and increases of the injection time and the average values for both types of stroke. These studies showed that a change in the position of the cylinder causes changes in the formation of the fuel-air mixture and thus changes in the combustion process. Based on the results of the simulations and experiments, it was possible to develop individual ignition control algorithms. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
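For illustration, the sketch below implements the kind of simplified wall-film fuel dynamics the abstract describes, in the spirit of the classic Aquino x-τ formulation; the parameter names mirror the abstract's notation (x, τ, Δt), but the numeric values are assumed for demonstration, not measurements from the ASz62IR tests.

```python
import math

def simulate_fuel_film(injected, x=0.3, tau=0.05, dt=0.01):
    """Per-cycle fuel mass actually reaching the cylinder.

    injected -- fuel mass injected each cycle [kg]
    x        -- fraction of injected fuel deposited in the wall film
    tau      -- fuel film evaporation time constant [s]
    dt       -- time between successive cycles [s]
    """
    film = 0.0                      # fuel currently stored in the wall film
    delivered = []
    for m_inj in injected:
        evaporated = film * (1.0 - math.exp(-dt / tau))
        film += x * m_inj - evaporated
        # Cylinder receives the pass-through fraction plus evaporated film.
        delivered.append((1.0 - x) * m_inj + evaporated)
    return delivered

# A step change in injected fuel shows the transient lean-out that the
# engine's transient tests probe: delivered fuel lags the injected step.
pulses = [1.0e-4] * 10 + [2.0e-4] * 10
print(simulate_fuel_film(pulses))
```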

Keywords: radial engine, ignition system, non-uniformity, combustion process

Procedia PDF Downloads 366
191 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals

Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen

Abstract:

Background: Computer Science is a field that transcends other disciplines of knowledge because it allows all kinds of physical and mental tasks to be supported. Health centres have a greater number and complexity of technological devices, and the population consumes and demands services derived from technology. Also, nursing education plans have included competencies related to new technologies, and courses about them are even offered to health professionals. However, nurses still limit their performance to the use and evaluation of previously built products. Objective: To develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended teaching and learning with a group of graduate nurses through official training within a Master's Degree. The study sample was selected by intentional sampling without exclusion criteria. The study covers the period from 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. The assessment was carried out by a written test consisting of the preparation of an IEEE 830 Standard Specification document, where the subject chosen by the student had to be a problem in the area of care. Results: The sample comprised 30 students: 10 men and 20 women. Nine students had a degree in nursing, 20 a diploma in nursing, and one a degree in Computer Engineering. Two students had obtained a nursing specialty through residency and two through equivalent recognition by an exceptional route. Except for the engineer, no subject had previously received training in this regard. All enrolled students received the classroom teaching session, had access to the teaching material through a virtual area and attended at least one tutorial. The maximum number of tutorials was three, totalling one hour. Among the material available for consultation was an example of a document drawn up based on the IEEE Standard on an issue not related to care. The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the correctness of the characteristics of the document and the degree of comprehension in the elaboration of the problem and solution; the nurses assessed the relevance of the chosen problem statement, the foundation, originality and correctness of the proposed solution, and the validity of the application for clinical practice in care. The average grade was 8.1 out of 10, with a range from 6 to 10. The selected topics rarely coincided among the students. Examples of the care areas selected are care plans, family and community health, delivery care, administration and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies demonstrates success in the training of nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence and the duty to participate in the process of constructing technological tools that are going to impact the care of people, families and the community.

Keywords: care, learning, nursing, technology

Procedia PDF Downloads 136
190 Online Faculty Professional Development: An Approach to the Design Process

Authors: Marie Bountrogianni, Leonora Zefi, Krystle Phirangee, Naza Djafarova

Abstract:

Faculty development is critical for any institution as it impacts students’ learning experiences and faculty performance with regard to course delivery. With that in mind, The Chang School at Ryerson University embarked on an initiative to develop a comprehensive, relevant faculty development program for online faculty and instructors. Teaching Adult Learners Online (TALO) is a professional development program designed to build capacity among online teaching faculty to enhance communication/facilitation skills for online instruction and to establish a Community of Practice that allows online faculty to network and exchange ideas and experiences. TALO comprises four online modules, each providing three hours of learning materials. The topics focus on the online teaching and learning experience, principles and practices, opportunities and challenges in online assessment, as well as course design and development. TALO offers a unique experience for online instructors, who are placed in the role of both student and instructor through interactive activities involving discussions, hands-on assignments, and peer mentoring while experimenting with the technological tools available for their online teaching. Through exchanges and informal peer mentoring, a small interdisciplinary community of practice has started to take shape. Successful participants have to meet four requirements for completion: i) participate actively in online discussions and activities, ii) develop a communication plan for the course they are teaching, iii) design one learning activity or media component, and iv) design one online learning module. This study adopted a mixed-methods exploratory sequential design. For the qualitative phase of this study, a thorough literature review was conducted on what constitutes effective faculty development programs. Based on that review, the design team identified desired competencies for online teaching/facilitation and course design. Once the competencies were identified, a focus group interview with The Chang School teaching community was conducted as a needs assessment and to validate the competencies. In the quantitative phase, questionnaires were distributed to instructors and faculty after the program was launched to continue ongoing evaluation and revisions, in hopes of further improving the program to meet the teaching community’s needs. Four faculty members participated in a one-hour focus group interview. Major findings from the focus group interview revealed that, from the training program, faculty wanted i) to better engage students online, ii) to enhance their online teaching with specific strategies, and iii) to explore different ways to assess students online. Ninety-one faculty members completed the questionnaire, and the findings indicated that: i) the majority of faculty stated that they gained the necessary skills to demonstrate instructor presence through communication and use of the technological tools provided, ii) faculty confidence with course management strategies increased, and iii) learning from peers is most effective: the Community of Practice is strengthened and valued even more as program alumni become facilitators. Although this professional development program is not mandatory for online instructors, since its launch in Fall 2014, over 152 online instructors have successfully completed the program. A Community of Practice emerged as a result of the program, and participants continue to exchange thoughts and ideas about online teaching and learning.

Keywords: community of practice, customized, faculty development, inclusive design

Procedia PDF Downloads 175
189 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors. These sensors are optical scanners equipped with linear arrays of CCDs and have been deployed on most Earth observation satellites (EOSs). In addition, the Lunar Reconnaissance Orbiter Camera (LROC) is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired by linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy systems with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within a linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t). The parameter (t) is the time at a certain epoch from a certain orbit position. Depending on the type of observations, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, and the model can be used in different environments with various sensors, although the implementation process is relatively cost- and effort-intensive.
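As a rough illustration of the ideas in this abstract, the sketch below projects a ground point into a push-broom image via the collinearity condition, with the exposure station and attitude of each scan line interpolated polynomially in time; the rotation convention, function names and the focal length are simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rot(omega, phi, kappa):
    """Rotation matrix built from omega-phi-kappa angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def exterior_orientation(t, coeffs):
    """Polynomial-in-time exterior orientation of the line exposed at epoch t.
    Rows of `coeffs` hold polynomial coefficients (lowest order first)
    for X, Y, Z, omega, phi, kappa respectively."""
    powers = np.array([t ** i for i in range(coeffs.shape[1])])
    X, Y, Z, omega, phi, kappa = coeffs @ powers
    return np.array([X, Y, Z]), rot(omega, phi, kappa)

def collinearity(ground_pt, t, coeffs, f=0.7):
    """Image coordinates of `ground_pt` for the line exposed at epoch t.
    In a bundle adjustment, these equations, linearized around approximate
    coefficients, form the observation equations for each tie point."""
    C, R = exterior_orientation(t, coeffs)
    u = R.T @ (ground_pt - C)    # ground vector rotated into the image frame
    return np.array([-f * u[0] / u[2], -f * u[1] / u[2]])
```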

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 63
188 Planning Fore Stress II: Study on Resiliency of New Architectural Patterns in Urban Scale

Authors: Amir Shouri, Fereshteh Tabe

Abstract:

Master planning and thoughtful, sequential design strategies for urban infrastructure will play a major role in reducing the damage that natural disasters, war, and social or population-related conflicts inflict on cities. Defensive strategies have been revised throughout the history of mankind after suffering damage from natural disasters, wars and terrorist attacks on cities. Lessons have been learnt from earthquakes, from the casualties of the two world wars of the 20th century, and from terrorist activities of all times. Particularly after Hurricane Sandy in New York in 2012 and the September 11th attack on New York’s World Trade Center (WTC) in the 21st century, there have been a series of serious collaborations between law-making authorities, urban planners and architects, and defence-related organizations to, firstly, prepare for and/or prevent such events and, secondly, reduce human loss and economic damage to a minimum. This study will work on developing a planning model for New York City in which citizens suffer minimal impact during times of threat and the city sustains minimal economic damage after the stress has passed. The main discussion in this proposal focuses on pre-hazard, hazard-time and post-hazard transformative policies and strategies that will reduce “Life casualties” and ease “Economic Recovery” in post-hazard conditions. This proposal scrutinizes the idea that one of the key solutions in this path might be focusing on the overlaying possibilities of the architectural platforms of three fundamental infrastructures (transportation, power-related sources and defensive capabilities) within a dynamic-transformative framework that provides maximum safety, a high level of flexibility and the fastest action-reaction opportunities in stressful periods of time. “Planning Fore Stress” will be conducted in an analytical, qualitative and quantitative framework, studying cases from all over the world. Technology, organic design, materiality, urban forms, city politics and sustainability will be discussed through different cases at an international scale: from the modern strategies of Copenhagen for living in harmony with nature to the traditional approaches of old Indonesian urban planning patterns, from the “Iron Dome” of Israel to the “Tunnels” in Gaza, from the “Ultra-high-performance quartz-infused concrete” of Iran to the peaceful and nature-friendly strategies of Switzerland, and from “Urban Geopolitics” in cities, war and terrorism to the “Design of Sustainable Cities” around the world. All will be studied with references and a detailed analysis of each case in order to propose the most resourceful, practical and realistic solutions to questions on “New City Divisions”, “New City Planning and social activities” and “New Strategic Architecture for Safe Cities”. This study is a developed version of a proposal that was announced as a winner at MoMA in 2013 in a call for ideas for Rockaway after Hurricane Sandy.

Keywords: urban scale, city safety, natural disaster, war and terrorism, city divisions, architecture for safe cities

Procedia PDF Downloads 484
187 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for detecting facial expressions and emotions by extracting features automatically. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on extracting different types of coarse features with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We develop this work further by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax tends to reach the gold labels too soon, which drives the model toward over-fitting, because it cannot determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer and specifying a desired soft margin. The margin acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors, i.e., assigning more weight to classes that lie close to one another (namely, “hard labels to learn”). By doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on addressing the weak convergence of the Adam optimizer for non-convex problems. Our optimizer works by an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight decay method to drastically reduce the learning rate near optima in order to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013 (a 16% improvement over the first rank after 10 years), 90.73% on RAF-DB, and 100% k-fold average accuracy on the CK+ dataset, providing top performance compared to that of other networks, which require much larger training datasets.
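To make the margin idea concrete, here is a minimal NumPy sketch of a margin-penalized softmax cross-entropy in the spirit of what the abstract describes; the margin schedule, function names and values are illustrative assumptions, not the authors' exact Dynamic Soft-Margin SoftMax.

```python
import numpy as np

def soft_margin_softmax_loss(logits, labels, margin=0.35):
    """Cross-entropy where the true-class logit is reduced by a margin,
    forcing larger inter-class separation before the loss saturates.

    logits -- (batch, classes) raw scores
    labels -- (batch,) integer class indices
    margin -- how hard the model must push dissimilar embeddings apart
    """
    z = logits.copy().astype(float)
    z[np.arange(len(labels)), labels] -= margin   # penalize the target logit
    z -= z.max(axis=1, keepdims=True)             # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# A "dynamic" variant could grow the margin as training progresses, e.g.:
def dynamic_margin(epoch, base=0.1, step=0.05, cap=0.5):
    return min(base + step * epoch, cap)
```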

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 74
186 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method

Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner

Abstract:

In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), copying marine mammals’ method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than in surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS performed at maximum effort. Critical parameters to maximize UUS speed are frequently discussed; however, this does not apply to most races. In only 3 of the 16 individual competitive swimming events are athletes likely to attempt to perform UUS at the greatest speed, without considering the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximize speed whilst considering energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force generation mechanisms (e.g., force distribution along the body and force magnitude). The developed methodology therefore needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds; (ii) investigate the key performance parameters when UUS is not performed solely for maximizing speed; (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis. For the kinematic acquisition, the positions of several joints along the body and their sequencing were obtained either by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100 m pace, 200 m pace and 400 m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm adopted was specifically developed to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculation of the resistive forces (total and applied to each segment) and the power input of the modeled swimmer. Validation of the methodology is achieved by comparing the data obtained from the computations with the original data (e.g., sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers’ natural responses to pacing instructions. The results show how kinematics affect force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.
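For orientation, a minimal sketch of how propulsive efficiency might be computed from the simulation outputs the abstract mentions (thrust force, swimming speed, and power input time series); this is a standard Froude-style efficiency estimate assumed here for illustration, with synthetic values, not the authors' algorithm or data.

```python
import numpy as np

def propulsive_efficiency(thrust, speed, power_input, dt):
    """Cycle-averaged propulsive efficiency: useful towing power
    divided by the total mechanical power delivered to the fluid.

    thrust      -- net streamwise force time series [N]
    speed       -- instantaneous swimming speed time series [m/s]
    power_input -- total power input of the modeled swimmer [W]
    dt          -- time step between samples [s]
    """
    useful = np.trapz(thrust * speed, dx=dt)   # integral of F * U dt
    total = np.trapz(power_input, dx=dt)
    return useful / total

# Example with synthetic kick-cycle data (illustrative values only):
t = np.linspace(0.0, 0.5, 500)                  # one ~0.5 s kick cycle
thrust = 20.0 * np.maximum(np.sin(4 * np.pi * t), 0.0)
speed = 1.8 + 0.1 * np.sin(4 * np.pi * t)
power = 80.0 + 40.0 * np.abs(np.sin(4 * np.pi * t))
print(f"eta = {propulsive_efficiency(thrust, speed, power, t[1] - t[0]):.2f}")
```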

Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming

Procedia PDF Downloads 219
185 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays

Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal

Abstract:

Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have become widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design’s circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on the electronics used in space are much greater than on Earth. Thus, developing fault-tolerance techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault-tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly required for compromising between the overhead introduced by fault-tolerance techniques and system robustness. We study applications in which the exact final output value is not necessarily always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, in our proposed method a threshold margin is considered, depending on the use-case application. Given the proposed threshold margin in our model, a failure occurs only when the difference between the erroneous output value and the expected output value is larger than this margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating the ACMVF is implemented on the Zynq-7000 FPGA platform. This system makes use of the Single Event Mitigation (SEM) IP core to inject SEUs into configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is considered acceptable, the number of counted failures is reduced by 41% to 59% compared with the number counted by conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% in the case that 10% deviation is acceptable in the output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
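The thresholded failure counting at the heart of the ACMVF can be sketched as follows; the function names are hypothetical stand-ins (the actual campaign runs in hardware on the Zynq-7000 with the SEM IP core), and the example numbers are illustrative, not the paper's measurements.

```python
def acmvf(outputs_with_seu, expected, threshold=0.10):
    """Approximate-based Configuration Memory Vulnerability Factor:
    fraction of SEU injections whose output deviates from the expected
    value by more than the user-defined relative threshold margin.

    outputs_with_seu -- one observed output value per injected SEU
    expected         -- golden output of the fault-free circuit
    threshold        -- tolerated relative deviation (0.10 = 10%)
    """
    failures = sum(
        1 for y in outputs_with_seu
        if abs(y - expected) > threshold * abs(expected)
    )
    return failures / len(outputs_with_seu)

# The conventional vulnerability factor is the zero-tolerance special case:
def conventional_vf(outputs_with_seu, expected):
    return acmvf(outputs_with_seu, expected, threshold=0.0)

# Example (made-up numbers): an adder whose golden sum is 1000.
observed = [1000, 1003, 1500, 998, 40, 1000, 1099]
print(acmvf(observed, 1000, threshold=0.10))   # counts only 1500 and 40
print(conventional_vf(observed, 1000))         # counts every deviation
```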

Keywords: fault tolerance, FPGA, single event upset, approximate computing

Procedia PDF Downloads 198
184 Examining the Influence of Ultrasonic Power and Frequency on Microbubble Dynamics Using Real-Time Visualization with Synchrotron X-Ray Imaging: Application to Membrane Fouling Control

Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul

Abstract:

Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium by either oscillating or discharging energy into the system through microbubble explosion. The turbulent flow regime and shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line Phase Contrast Imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS biomedical imaging and therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast of the gas/liquid interface for accurate qualitative and quantitative analysis of bubble cavitation within the system. With the high photon flux and the high-speed camera, a high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for the detailed quantitative analyses of microbubbles. The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W, and at US frequency levels of 20 kHz, 28 kHz, and 40 kHz. Over an imaging duration of 2 seconds, the effects of US power and frequency on the average number and size of bubbles and the fraction of the area they occupied were analyzed. Microbubble dynamics, in terms of bubble velocity in water, were also investigated. As the US power increased from 50 W to 100 W, the average bubble number increased from 746 to 880 and the average bubble diameter from 36.7 µm to 48.4 µm. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles, rather than 808 bubbles at 40 kHz), while the average bubble size was significantly larger (almost seven times) than at 40 kHz. The majority of bubbles were captured close to the membrane surface in the filtration unit. According to these observations, membrane cleaning efficiency is expected to improve at higher US power and lower US frequency, due to the higher energy released into the system by increasing the number of bubbles or growing their size during oscillation (the optimum condition is expected to be 20 kHz and 100 W).
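The kind of per-projection post-processing described here (counting and sizing bubbles) can be sketched with off-the-shelf image analysis; the study uses ImageJ, so the scipy-based pipeline below is an illustrative stand-in, with an assumed pixel size and a crude global threshold rather than the authors' actual settings.

```python
import numpy as np
from scipy import ndimage

def analyze_projection(frame, pixel_um=5.0, threshold=None):
    """Count bubbles and estimate their sizes in one X-ray projection.

    frame    -- 2D grayscale array; bubbles assumed darker than water
    pixel_um -- assumed pixel size in micrometres (illustrative value)
    """
    if threshold is None:
        threshold = frame.mean() - 2 * frame.std()  # crude global threshold
    mask = frame < threshold                        # candidate bubble pixels
    labels, n_bubbles = ndimage.label(mask)         # connected components
    areas_px = ndimage.sum(mask, labels, index=range(1, n_bubbles + 1))
    # Equivalent-circle diameter per bubble, in micrometres.
    diam_um = 2.0 * np.sqrt(np.asarray(areas_px) / np.pi) * pixel_um
    area_fraction = mask.mean()                     # fraction occupied by bubbles
    return n_bubbles, diam_um, area_fraction
```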

Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning

Procedia PDF Downloads 149
183 The Association between Gene Polymorphisms of GPX, SEPP1, and SEP15, Plasma Selenium Levels, Urinary Total Arsenic Concentrations, and Prostate Cancer

Authors: Yu-Mei Hsueh, Wei-Jen Chen, Yung-Kai Huang, Cheng-Shiuan Tsai, Kuo-Cheng Yeh

Abstract:

Prostate cancer occurs in men over the age of 50 and ranks sixth among the top ten cancers in Taiwan, where its incidence has increased gradually over the past decade. Arsenic is confirmed as a carcinogen by the International Agency for Research on Cancer (IARC). Arsenic-induced oxidative stress may be a risk factor for prostate cancer, but the mechanism is not clear. Selenium is an important antioxidant element. Whether the association between plasma selenium levels and the risk of prostate cancer is modified by different selenoprotein genotypes is still unknown. Glutathione peroxidase, selenoprotein P (SEPP1) and the 15 kDa selenoprotein (SEP15) are selenoproteins that regulate selenium transport and oxidation-reduction reactions. However, the association between selenoprotein gene polymorphisms and prostate cancer is not yet clear. The aim of this study is to determine the relationship between plasma selenium, selenoprotein polymorphisms, urinary total arsenic concentration and prostate cancer. This study is a hospital-based case-control study. Three hundred twenty-two prostate cancer cases and 322 age-matched (±5 years, 1:1) controls were recruited from National Taiwan University Hospital, Taipei Medical University Hospital, and Wan Fang Hospital. Well-trained personnel carried out standardized personal interviews based on a structured questionnaire. The information collected included demographic and socioeconomic characteristics, lifestyle and disease history. Blood and urine samples were also collected at the same time. The Research Ethics Committee of National Taiwan University Hospital, Taipei, Taiwan, approved the study. All patients provided informed consent forms before sample and data collection. Buffy coat was used to extract DNA, and polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) was used to determine the genotypes of SEPP1 rs3797310, SEP15 rs5859, GPX1 rs1050450, GPX2 rs4902346, GPX3 rs4958872, and GPX4 rs2075710. Plasma selenium concentrations were determined by inductively coupled plasma mass spectrometry (ICP-MS). Urinary arsenic species concentrations were measured by high-performance liquid chromatography linked to a hydride generator and atomic absorption spectrometer (HPLC-HG-AAS). Subjects with a high educational level had a lower prostate cancer odds ratio (OR) than those with a low educational level. Mainland Chinese and aboriginal people had a lower OR of prostate cancer compared to Fukien Taiwanese. After adjustment for age and educational level, subjects with the GPX1 rs1050450 CT or TT genotype had a lower OR of prostate cancer compared to the CC genotype; the OR and 95% confidence interval (CI) was 0.53 (0.31-0.90). Subjects with the SEPP1 rs3797310 CT+TT genotype had a marginally significantly lower OR of prostate cancer compared to those with the CC genotype. Low plasma selenium levels combined with high urinary total arsenic concentrations were associated with a high OR of prostate cancer in a significant dose-response manner, and the SEPP1 rs3797310 genotype modified this joint association.
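For readers outside epidemiology, odds ratios of the kind reported here come from standard case-control arithmetic; below is a minimal sketch of an OR and its 95% CI from a 2x2 genotype table using the Woolf (log) method, with made-up counts that are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 case-control table.

    a -- exposed cases        b -- unexposed cases
    c -- exposed controls     d -- unexposed controls
    Woolf method: SE of ln(OR) = sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, (lower, upper)

# Illustrative counts only (not from this study): variant-genotype
# carriers among cases/controls vs reference-genotype carriers.
print(odds_ratio_ci(90, 232, 150, 172))
```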

Keywords: prostate cancer, plasma selenium concentration, urinary total arsenic concentrations, glutathione peroxidase, selenoprotein P, selenoprotein 15, gene polymorphism

Procedia PDF Downloads 268
182 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling. However, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models. Our model uses queuing theory parameters to relate the transitions between models. It associates MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service’s ability to handle more requests. This model is later used as a solution for horizontally auto-scaling time-sensitive services composed of microservices, reevaluating the model’s parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time. Business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot be finished in time, to prevent resource saturation. When the load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state, but if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenario for reactive systems; the following scenarios test response times, resource consumption and business costs. The first scenario is a burst-load scenario. All methodologies will discard requests if the rapidness of the burst is high enough. This scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load at a lower business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
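As a minimal sketch of the reactive decision step such a MAPE-K loop might run at each sampling instant: the parameters follow the abstract's description (per-instance capacity, cooldown period, sampling of the incoming rate), but the scaling rule itself and all names and values are illustrative assumptions, not the paper's methodology.

```python
import math
import time
from dataclasses import dataclass

@dataclass
class AutoscalerState:
    instances: int = 1
    last_scale_ts: float = 0.0

def plan_step(state, incoming_rate, capacity_per_instance,
              cooldown_s=60.0, headroom=0.8, now=None):
    """One reactive 'plan' step of a MAPE-K loop, run once per sample.

    incoming_rate         -- requests/s observed in the last sample
    capacity_per_instance -- requests/s a single instance can serve
    cooldown_s            -- minimum time between scaling actions
    headroom              -- target utilization bound; keeping saturation
                             low constrains the response time
    """
    now = time.time() if now is None else now
    if now - state.last_scale_ts < cooldown_s:
        return state.instances                    # still cooling down
    needed = max(1, math.ceil(incoming_rate / (capacity_per_instance * headroom)))
    if needed != state.instances:
        state.instances = needed
        state.last_scale_ts = now
    return state.instances

# Example: a burst from 40 to 400 req/s with 50 req/s instance capacity.
s = AutoscalerState()
print(plan_step(s, 40, 50, now=0.0))     # -> 1 instance
print(plan_step(s, 400, 50, now=100.0))  # -> 10 instances
```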

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 93