Search results for: adjustment orders
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 830

50 Adaptability in Older People: A Mixed Methods Approach

Authors: V. Moser-Siegmeth, M. C. Gambal, M. Jelovcak, B. Prytek, I. Swietalsky, D. Würzl, C. Fida, V. Mühlegger

Abstract:

Adaptability is the capacity to adjust without great difficulty to changing circumstances. Within our project, we aimed to detect whether older people living in a long-term care hospital lose the ability to adapt. Theoretical concepts are contradictory in their statements, and there is also a lack of evidence in the literature on how the adaptability of older people changes over time. The following research questions were generated: Are older residents of a long-term care facility able to adapt to changes within their daily routine? How long does it take for older people to adapt? The study was designed as a convergent parallel mixed methods intervention study, carried out over a four-month period within seven wards of a long-term care hospital. As a planned intervention, a change of meal times was established. The residents were surveyed with qualitative interviews and quantitative questionnaires and diaries before, during and after the intervention. In addition, a survey of the nursing staff was carried out in order to detect changes in the people they care for and how long it took them to adapt. Quantitative data were analysed with SPSS, qualitative data with a summarizing content analysis. The average age of the involved residents was 82 years, the average length of stay 45 months. Adaptation to new situations did not cause problems for older residents: 47% of the residents stated that their everyday life had not changed as a result of the new meal times, 24% indicated ‘neither nor’, and only 18% responded that their daily life had changed considerably due to the changeover. The diaries of the residents, which were kept over the entire period of investigation, showed no changes with regard to increased or reduced activity. With regard to sleep quality, assessed with the Pittsburgh Sleep Quality Index, the cross-tabulation showed little change in sleep behaviour between the two survey periods (pre-phase to follow-up phase). 
The subjective sleep quality of the residents was not affected. The nursing staff point out that, with good information in advance, changes are not a problem. The ability to adapt to changes does not deteriorate with age or on moving into a long-term care facility; it takes only a few days to get used to new situations, which was confirmed by the nursing staff. However, there are determinants, such as health status, that might make adjustment to new situations more difficult. Among the limitations, the small sample size of the quantitative data collection must be emphasized. Furthermore, it is unclear to what extent the quantitative and qualitative samples represent the total population, since only residents without cognitive impairments from selected units participated, while the majority of the residents have cognitive impairments. It is also important to discuss whether and how well the diary method is suitable for examining the daily structure of older people.

Keywords: adaptability, intervention study, mixed methods, nursing home residents

Procedia PDF Downloads 124
49 Lessons Learnt from Tutors’ Perspectives on Online Tutorial’s Policies in Open and Distance Education Institution

Authors: Durri Andriani, Irsan Tahar, Lilian Sarah Hiariey

Abstract:

Every institution has to develop, implement, and control its policies to ensure its effectiveness. In doing so, all related stakeholders have to be involved to maximize the benefit of the policies and minimize the potential constraints and resistance. Open and distance education (ODE) institutions are no different. As education institutions, they have to focus their attention on fulfilling the academic needs of their students through open and distance measures, one of which is a quality learning support system. Significant stakeholders in the learning support system are tutors, since they are the ones who communicate directly with students. Tutors are commonly seen as objects whose main responsibility is limited to implementing policies decided by management in ODE institutions. Nonetheless, tutors’ perceptions of tutorials are believed to influence their performance in facilitating learning support. It is therefore important to analyze tutors’ perceptions of various aspects of learning support. This paper presents an analysis of tutors’ perceptions of tutorial policies in an ODE institution using the Policy Analysis Framework (PAF) modified by King, Nugent, Russell, and Lacy. The focus of this paper is on on-line tutors, those who provide tutorials via the Internet. On-line tutors were chosen to stress the increasingly important use of the Internet in ODE systems. The research was conducted at Universitas Terbuka (UT), Indonesia. UT was purposely selected because of the large number of courses it offers (1,234) and its large coverage area (6,000 inhabited islands). This puts UT in a unique position, where the learning support system has, to some extent, to be standardized while at the same time catering to the needs of different courses in different places for students with different backgrounds. All 598 listed on-line tutors were sent the research questionnaires; around 20% of the email addresses could not be reached. 
Tutors were asked to fill out open-ended questionnaires on their perceptions of the definition of the on-line tutorial, the roles of tutors and students in on-line tutorials, requirements for on-line tutors, learning materials, and student evaluation in the on-line tutorial. The data analyzed were gathered from the 40 on-line tutors who sent back filled-out questionnaires and were analyzed qualitatively using content analysis. The results showed that using PAF as an entry point, with learning support services as the policy area and delivery of learning materials as the issue, was able to provide new insights into the aspects that need to be considered in formulating policies on on-line tutorials and learning support services at UT. Involving tutors as a source of information proved to be productive. In general, tutors had a clear understanding of the definition of the on-line tutorial, the roles of tutors and students, and the requirements for tutors. Tutors just need to be more involved in policy formulation, since they can provide data on students and the problems faced in on-line tutorials. However, tutors want an adjustment in student evaluation, which, according to them, focuses too much on administrative aspects and is subjective.

Keywords: distance education, on-line tutorial, tutorial policy, tutors’ perspectives

Procedia PDF Downloads 229
48 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-Stokes Raman Spectroscopy

Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai

Abstract:

Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimize enzymes and exploit their full potential under circumstances of insufficient structure-function knowledge. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedures such as transfection are needed. Its open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, current screening methods have limited capabilities. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the label’s binding affinity and photostability. To acquire the natural activity of an enzyme, another method is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turn-around time of HPLC is 30 minutes per sample, which limits the enzyme improvement acquirable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is highly demanded. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) was proposed. CARS is a form of coherent Raman spectroscopy that can identify label-free chemical components specifically from their inherent molecular vibrations. These characteristic vibrational signals are generated from the different vibrational modes of chemical bonds. 
With broadband CARS, the chemicals in a sample can be identified from their signals in one broadband CARS spectrum. Moreover, it can magnify signal levels to several orders of magnitude greater than spontaneous Raman systems, and therefore has the potential to evaluate a chemical's concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and the oxidized form of nicotinamide adenine dinucleotide (NAD+) to acetaldehyde and the reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, which is generated from the nicotinamide in NADH, was utilized to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH. This CARS measurement result was consistent with that of a conventional method, UV-Vis spectroscopy. CARS is expected to find application in high-throughput enzyme screening and to realize more reliable enzyme improvement within a reasonable time.
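To put the two turn-around times above in perspective, a back-of-the-envelope throughput comparison can be sketched (a minimal illustration, assuming both instruments run continuously, which is an idealisation; the variable names are ours, not the authors'):

```python
# Idealised daily throughput from the per-sample times quoted in the abstract.
SECONDS_PER_DAY = 24 * 3600

hplc_s_per_sample = 30 * 60   # 30 minutes per HPLC run
cars_s_per_sample = 0.33      # CARS evaluation time for the NADH signal

hplc_per_day = SECONDS_PER_DAY / hplc_s_per_sample   # 48 samples/day
cars_per_day = SECONDS_PER_DAY / cars_s_per_sample   # roughly 260,000 samples/day

print(f"speed-up: {hplc_s_per_sample / cars_s_per_sample:.0f}x")  # prints "speed-up: 5455x"
```

Even allowing generous overhead for droplet handling, the three-orders-of-magnitude gap explains why HPLC pre-screening, not library size, bounds the acquirable improvement.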

Keywords: coherent anti-Stokes Raman spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy

Procedia PDF Downloads 114
47 Nursery Treatments May Improve Restoration Outcomes by Reducing Seedling Transplant Shock

Authors: Douglas E. Mainhart, Alejandro Fierro-Cabo, Bradley Christoffersen, Charlotte Reemts

Abstract:

Semi-arid ecosystems across the globe have faced land conversion for agriculture and resource extraction, posing a threat to the important ecosystem services they provide. Revegetation-centered restoration efforts in these regions face low success rates because limited soil water availability and high temperatures lead to elevated seedling mortality after planting. Typical methods to alleviate these stresses require costly post-planting interventions aimed at improving soil moisture status. We set out to evaluate the efficacy of in-nursery treatments to address transplant shock. Four native Tamaulipan thornscrub species were compared. Three treatments were applied: elevated CO2 and drought hardening (four-week exposure each), and an antitranspirant foliar spray (applied the day prior to planting). Our goal was to answer two primary questions: (1) Do treatments improve survival and growth of seedlings in the early period post-planting? (2) If so, what underlying physiological changes are associated with this improved performance? To this end, we measured leaf gas exchange (stomatal conductance, light-saturated photosynthetic rate, water use efficiency), leaf morphology (specific leaf area), and osmolality before and upon the conclusion of treatments. A subset of seedlings from all treatments was planted and will be monitored in the coming months for in-field survival and growth. First-month field survival was high (>85%) in all treatment groups due to ample rainfall following planting. Growth data were unreliable due to high herbivory (68% of all sampled plants). While elevated CO2 had infrequent or no detectable influence on leaf gas exchange, drought hardening reduced stomatal conductance in three of the four species measured without negatively impacting photosynthesis. Both CO2 and drought hardening elevated leaf osmolality in two species. 
Antitranspirant application significantly reduced conductance in all species for up to four days and reduced photosynthesis in two species. Antitranspirants also increased the variability of water use efficiency compared to controls. Collectively, these results suggest that antitranspirants and drought hardening are viable treatments for reducing short-term water loss during the transplant shock period. Elevated CO2, while not effective at reducing water loss, may be useful for promoting more favorable water status via osmotic adjustment. These practices could improve restoration outcomes in Tamaulipan thornscrub and other semi-arid systems. Further research should focus on evaluating combinations of these treatments and their species-specific viability.

Keywords: conservation, drought conditioning, semi-arid restoration, plant physiology

Procedia PDF Downloads 62
46 The Political Economy of Media Privatisation in Egypt: State Mechanisms and Continued Control

Authors: Mohamed Elmeshad

Abstract:

During the mid-1990s, Egypt became obliged to implement the Economic Reform and Structural Adjustment Program, which included broad economic liberalization, expansion of the private sector and a contraction of government spending. This coincided with attempts to appear more democratic and open to liberalizing public space and discourse. At the same time, economic pressures and the proliferation of social media access and activism led to increased pressure to open up the mediascape and remove it from the clutches of the government, which had monopolized print and broadcast mass media for over four decades by that point. However, the mechanisms that governed the privatization of mass media allowed for sustained government control, even through the prism of ostensibly privately owned newspapers and television stations. These mechanisms involve barriers to entry from a financial and security perspective, as well as operational capacities of distribution and access to the means of production. The power dynamics between mass media establishments and the state were moulded during this period in a novel way, as were the power dynamics within media establishments. The changes in the country's political economy itself somewhat mirrored these developments. This paper examines these dynamics and sheds light on the political economy of Egypt's newly privatized mass media, especially in the early 2000s. Methodology: This study relies on semi-structured interviews with individuals involved in these changes from the perspective of the media organizations. It also maps out the process of media privatization by looking at the administrative, operative and legislative institutions and contexts, in order to draw conclusions on the methods of control and the role of the state during the process of privatization. 
Finally, a brief discourse analysis is necessary in order to aptly convey how these factors ultimately reflected on media output. Findings and conclusion: The development of Egyptian private, “independent” media mirrored the trajectory of transitions in the country’s political economy. Liberalization of the economy meant that a growing class of business owners would explore the opportunities such new markets offered. However, the regime’s attempts to control access to certain forms of capital, especially in sectors such as the media, affected the structure of print and broadcast media, as well as the institutions that would govern them. Like the process of liberalisation, much of the regime’s manoeuvring with regard to the privatization of media was haphazardly used to indirectly expand the regime’s and its ruling party’s ability to retain influence, while creating a believable façade of openness. In this paper, we attempt to uncover these mechanisms and analyse our findings in ways that explain how the manifestations prevalent in the context of a privatizing media space in a transitional Egypt provide evidence of both the intentions of this transition and the ways in which it was being held back.

Keywords: business, mass media, political economy, power, privatisation

Procedia PDF Downloads 209
45 Real and Symbolic in Poetics of Multiplied Screens and Images

Authors: Kristina Horvat Blazinovic

Abstract:

In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist through various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space, or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition, i.e., a "hidden", "latent" term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated; something else is repeated through them as well, even if, in some cases, it is the very idea of repetition that is repeated. This paper examines serial images and single-channel or multi-channel artworks in the field of video/film art and video installations that in some way imply the concept of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic partly refer to the Lacanian registers, i.e., the Imaginary - Symbolic - Real trinity, which represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette and Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication-repetition, not in the sense of "copying" or "repetition" of reality or the original, but of repeated repetitions of the simulacrum. The referenced works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience); repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images either produces a new traumatic effect or cancels it. 
Reflections on repetition as a temporal and spatial phenomenon are in line with the chapters that link philosophical considerations of space and time, and of experienced temporality, with their manifestation in works of art. The observations about time and the relation of perception and memory follow Henri Bergson and his conception of duration (durée) as "quality of quantity." Video works intended to be displayed as loops express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present - to fixate in time. Wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations of the occurrence and effects of time and space intervals as places and moments "between" - points of connection and separation, of continuity and stopping - with reference to the "interval theory" of the Soviet filmmaker Dziga Vertov. The range of opportunities that can be explored in the interval mode is wide. Intervals represent the perception of time and space in the form of pauses, interruptions and breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states.

Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time

Procedia PDF Downloads 144
44 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface

Authors: Renata Gerhardt, Detlev Belder

Abstract:

Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis, as it is non-destructive and molecules can be identified via the fingerprint region of their spectra. In this work, possibilities for integrating Raman spectroscopy as a detection method for chip-based chromatography are investigated, making use of a droplet interface. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of low-concentration analytes in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules and provides no structural information. Another often-applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio; additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as an end-of-the-line detection because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, the Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization and detection, on a single device. 
We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics which operates with segmented rather than continuous flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and an oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs for coupling chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phases are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of the separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, thus exploiting the surface enhancement effect and improving the sensitivity of the detection. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface-enhanced Raman spectroscopy as the means of detection.

Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS

Procedia PDF Downloads 222
43 Gold-Mediated Modification of Apoferritin Surface with Targeting Antibodies

Authors: Simona Dostalova, Pavel Kopel, Marketa Vaculovicova, Vojtech Adam, Rene Kizek

Abstract:

The protein apoferritin seems to be a very promising structure for use as a nanocarrier. It is prepared from the intracellular protein ferritin, naturally found in most organisms, whose role is to store and transport ferrous ions. Apoferritin is a hollow protein cage without ferrous ions that can be prepared from ferritin by reduction with thioglycolic acid or dithionite. The structure of apoferritin is composed of 24 protein subunits, creating a sphere 12 nm in diameter; the inner cavity has a diameter of 8 nm. The drug encapsulation process is based on the response of the apoferritin structure to pH changes in the surrounding solution. At low pH, apoferritin is disassembled into individual subunits and its structure is “opened”. It can then be mixed with any desired cytotoxic drug, and after adjustment of the pH back to neutral, the subunits reconnect and the drug is encapsulated within the apoferritin particles. Excess drug molecules can be removed by dialysis. The receptors for apoferritin, SCARA5 and TfR1, can be found in the membrane of both healthy and cancer cells. To enhance the specific targeting of the apoferritin nanocarrier, it is possible to modify its surface with targeting moieties, such as antibodies. To ensure a sterically correct complex, we used a peptide linker based on protein G, whose N-terminus has affinity towards the Fc region of antibodies. To connect the peptide to the surface of apoferritin, the C-terminus of the peptide was made of cysteine, which has affinity to gold. The surface of apoferritin with encapsulated doxorubicin (ApoDox) was coated either with gold nanoparticles (ApoDox-Nano) or with gold (III) chloride hydrate reduced with sodium borohydride (ApoDox-HAu). The applied amount of gold in the form of gold (III) chloride hydrate was 10 times higher than in the case of gold nanoparticles. 
However, after removal of the excess unbound ions by electrophoretic separation, the concentration of gold on the surface of apoferritin was only 6 times higher for ApoDox-HAu than for ApoDox-Nano. Moreover, the reduction with sodium borohydride caused a loss of doxorubicin's fluorescent properties (excitation maximum at 480 nm with emission maximum at 600 nm) and thus of its biological activity. The fluorescent properties of ApoDox-Nano were similar to those of unmodified ApoDox; it was therefore better suited for the intended use. To evaluate the specificity of apoferritin modified with antibodies, we used an ELISA-like method with the surface of microtitration plate wells coated with the antigen (goat anti-human IgG antibodies). To these wells, we applied ApoDox without targeting antibodies and ApoDox-Nano modified with targeting antibodies (human IgG antibodies). The amount of unmodified ApoDox remaining on the antigen after incubation and subsequent rinsing with water was 5 times lower than in the case of ApoDox-Nano modified with targeting antibodies. Modifying ApoDox without the gold layer with antibodies caused no change in its targeting properties. It can therefore be concluded that the demonstrated procedure allows us to create a nanocarrier with enhanced targeting properties, suitable for nanomedicine.

Keywords: apoferritin, doxorubicin, nanocarrier, targeting antibodies

Procedia PDF Downloads 364
42 A Reusable Foundation Solution for Onshore Windmills

Authors: Wael Mohamed, Per-Erik Austrell, Ola Dahlblom

Abstract:

Wind farm repowering is a significant topic nowadays. Repowering means the complete dismantling of the existing turbine, tower and foundation at an existing site and replacing these units with taller and larger ones. Modern wind turbines are designed for a service life of approximately 20 to 25 years, whereas a very long design life of 100 years or more can be expected for high-quality concrete foundations. There are therefore significant economic and environmental benefits in replacing an out-of-date wind turbine with a new turbine of better power generation capacity while reusing the foundation. The big difference in lifetime shows the potential for a new foundation solution that allows wind farms to be updated with taller and larger units in order to increase energy production. This also means a significant change in the design loads on the foundations, so the new foundation solution should be able to handle the additional overturning loads. A raft surrounded by an active stabilisation system is proposed in this study. The active stabilisation system is a novel concept that uses a movable load to stabilise against the overturning moment. It consists of a water tank divided into eight compartments, and it uses the water as a movable load by pumping it into two compartments to counter the overturning moment. The position of the water depends on the wind direction, and a water movement system based on electric motors and pipes with electric valves is used. One advantage of this active foundation solution is that a cost-efficient adjustment can make the foundation able to support larger and taller units: after the end of the first turbine's lifetime, an option presented here is to reuse the foundation by using extra water volume to fill four compartments instead of two. 
This extra water volume increases the stabilising moment by 41% compared to using water in two compartments. The geotechnical performance of the new foundation solution is investigated using two existing weak soil profiles, in Egypt and Sweden. A comparative study of the new solution and a piled raft with long friction piles is performed using finite element simulations. The results show that a raft surrounded by an active stabilisation system reduces tilting compared to a piled raft with friction piles, and that it also reduces foundation costs. In terms of environmental impact, the new foundation has a beneficial effect on CO2 emissions: it saves roughly 296.1 to 518.21 tonnes of CO2 from the manufacture of concrete if the foundation is reused for another turbine lifetime.
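The 41% figure is consistent with simple lever-arm geometry; a minimal sketch, assuming eight equal compartments evenly spaced at a common radius with equal water mass per compartment (a hypothetical layout for illustration, not the paper's actual dimensions):

```python
import math

def stabilising_moment(filled_angles_deg, mass_kg=1.0, radius_m=1.0, g=9.81):
    """Restoring moment about the tower base: each filled compartment
    contributes its water weight times its horizontal lever arm,
    radius * cos(angle from the wind-direction axis)."""
    return sum(mass_kg * g * radius_m * math.cos(math.radians(a))
               for a in filled_angles_deg)

# Eight compartments centred at +/-22.5, +/-67.5, ... degrees from the wind axis.
two = stabilising_moment([22.5, -22.5])                # two windward compartments
four = stabilising_moment([22.5, -22.5, 67.5, -67.5])  # extra water fills two more

print(f"increase: {(four / two - 1) * 100:.0f}%")  # prints "increase: 41%"
```

Under this symmetric layout the ratio is exactly sqrt(2), i.e. a 41.4% gain, matching the reported figure.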

Keywords: active stabilisation system, CO2 emissions, FE analysis, reusable, weak soils

Procedia PDF Downloads 195
41 Eco-Politics of Infrastructure Development in and Around Protected Areas in Kenya: The Case of Nairobi National Park

Authors: Teresa Wanjiru Mbatia

Abstract:

On 7th June 2011, the Kenyan government Minister of Roads announced the proposed construction of a major highway, known as the southern bypass, to run along the northern border of Nairobi National Park. The following day, 8th June 2011, the chairperson of the Friends of Nairobi National Park (FoNNaP) posted a protest statement on their website with the heading ‘Nairobi Park is Not a Cake’, alerting its members and conservation groups with the aim of gaining support for the campaign against the government’s intention to hive off a section of the park for road construction. This was the first statement in a series of events that culminated in a campaign by conservationists and some other members of the public against the government’s plan to hive off sections of the park to build road and railway infrastructure in or around it. Together with other non-state actors, mostly non-governmental organisations in conservation/environment and tourism businesses, FoNNaP issued a series of further statements on social, print and electronic media to battle against road and railway construction. This paper examined the strategies, outcomes and interests of the actors involved in opposing or proposing the development of transport infrastructure in and around Nairobi National Park. Specifically, the objectives were to analyse: (1) the arguments put forward by the eco-warriors to protest infrastructure development; (2) the background and interests of the eco-warriors; (3) the needs, interests and opinions of ordinary citizens on transport infrastructure development, particularly in and around the urban nature reserve; and (4) the final outcomes of the eco-politics surrounding infrastructure development in and around Nairobi National Park. The methodological approach used was environmental history and the social construction of nature. 
The study collected combined qualitative data using four main approaches, the grounded theory approach, narratives, case studies and a phenomenological approach. The information collected was analysed using critical discourse analysis. The major findings of the study were that under the guise of “public participation,” influential non-state actors have the capacity to perpetuate social-spatial inequalities in the form of curtailing the majority from accessing common public goods. A case in point in this study is how the efforts of powerful conservationists, environmentalists, and tourism businesspersons managed to stall the construction of much-needed road and railway infrastructure severally through litigations in lengthy environmental court processes involving injunctions and stop orders to the government bodies in charge. Moreover, powerful non-state actors were found to have formed informal and sometimes formal coalitions with politicians with selfish interests, which serves to deepen the exclusionary practices and the common good. The study concludes that mostly composed of certain types of elites (NGOs, business communities, politicians and privileged social-cultural groups), non-state actors have used participatory policies to advance their own interests at the expense of the majority whom they claim to represent. These practices are traced to the historically unjust social, political, and economic forces involved in the production of space in Nairobi.

Keywords: eco-politics, exclusion, infrastructure, Nairobi national park, non-state actors, protests

Procedia PDF Downloads 154
40 Modification of a Commercial Ultrafiltration Membrane by Electrospray Deposition for Performance Adjustment

Authors: Elizaveta Korzhova, Sebastien Deon, Patrick Fievet, Dmitry Lopatin, Oleg Baranov

Abstract:

Filtration with nanoporous ultrafiltration membranes is an attractive option for removing ionic pollutants from contaminated effluents. Unfortunately, commercial membranes are not necessarily suitable for specific applications, and their modification by polymer deposition is a fruitful way to adapt their performance accordingly. Many methods are commonly used for surface modification, but a novel technique based on electrospray is proposed here. Various quantities of polymer were deposited on a commercial membrane, and the impact of the deposit on filtration performance is investigated and discussed in terms of charge and hydrophobicity. Electrospray deposition is a technique that has not previously been used for membrane modification. It consists of spraying small drops of polymer solution under a high voltage applied between the needle containing the solution and the metallic support on which the membrane is stuck. The advantage of this process lies in the small quantities of polymer that can be coated on the membrane surface compared with the immersion technique. In this study, various quantities (from 2 to 40 μL/cm²) of solutions containing two charged polymers (13 mmol/L of monomer unit), namely polyethyleneimine (PEI) and polystyrene sulfonate (PSS), were sprayed on a negatively charged polyethersulfone membrane (PLEIADE, Orelis Environment). The efficacy of the deposition was then assessed by estimating ion rejection, permeation flux, zeta-potential and contact angle before and after deposition. Firstly, contact angle (θ) measurements show that surface hydrophilicity is notably improved by coating with either PEI or PSS. Moreover, the contact angle decreases monotonically with the amount of sprayed solution. Additionally, the hydrophilicity enhancement proved to be greater with PSS (from 62° to 35°) than with PEI (from 62° to 53°).
Values of the zeta-potential (ζ) were estimated by measuring the streaming current generated by a pressure difference applied across a channel made by clamping two membranes. The ζ-values demonstrate that deposits of PSS (negative at pH 5.5) increase the negative membrane charge, whereas deposits of PEI (positive) lead to a positive surface charge. The zeta-potential measurements also emphasize that the sprayed quantity has little impact on the membrane charge, except at very low quantities (2 μL/cm²). Cross-flow filtration of salt solutions containing mono- and divalent ions demonstrates that polymer deposition strongly enhances ion rejection. For instance, rejection of a salt containing a divalent cation can be increased from 1 to 20% and even 35% by depositing 2 and 4 μL/cm² of PEI solution, respectively. This observation is consistent with the reversal of the membrane charge induced by PEI deposition. Similarly, the increase in negative charge induced by PSS deposition raises NaCl rejection from 5 to 45% due to electrostatic repulsion of the Cl⁻ ion by the negative surface charge. Finally, a notable fall in permeation flux due to the coated polymer layer was observed, and the optimal polymer concentration in the sprayed solution remains to be determined to optimize performance.
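As a rough illustration of the streaming-current step described above, the Helmholtz-Smoluchowski relation can convert the measured slope of streaming current versus applied pressure into a zeta potential. All numerical values here (channel geometry, slope) are illustrative assumptions, not data from the study.

```python
# Hypothetical sketch: zeta potential from streaming-current data via the
# Helmholtz-Smoluchowski relation. All numbers are illustrative.

EPS0 = 8.854e-12      # vacuum permittivity, F/m
EPS_R = 78.4          # relative permittivity of water near 25 C
ETA = 8.9e-4          # dynamic viscosity of water, Pa.s

def zeta_from_streaming_current(dI_dP, length, cross_section):
    """Zeta potential (V) from the slope dI_s/dP (A/Pa) of streaming
    current vs. pressure for a slit channel of given length L (m) and
    cross-section A (m^2).

    Helmholtz-Smoluchowski: I_s = (eps0*eps_r*zeta*A / (eta*L)) * dP
    => zeta = (dI_s/dP) * eta * L / (eps0 * eps_r * A)
    """
    return dI_dP * ETA * length / (EPS0 * EPS_R * cross_section)

# Illustrative slit channel: 20 mm long, 10 mm wide, 100 um gap.
zeta = zeta_from_streaming_current(dI_dP=-2.0e-12,            # A/Pa
                                   length=0.02,               # m
                                   cross_section=1e-4 * 0.01) # m^2 (gap*width)
print(f"zeta = {zeta * 1000:.1f} mV")  # negative for a negative surface
```

A negative slope yields a negative zeta potential, consistent with the negatively charged polyethersulfone membrane before PEI deposition.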

Keywords: ultrafiltration, electrospray deposition, ion rejection, permeation flux, zeta-potential, hydrophobicity

Procedia PDF Downloads 167
39 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships

Authors: Vijaya Dixit, Aasheesh Dixit

Abstract:

The shipbuilding industry operates in an Engineer-Procure-Construct (EPC) context. The product mix of a shipyard comprises various types of ships, such as bulk carriers, tankers, barges, coast guard vessels and submarines. Each order is unique, based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, contributing to the knowledge base of the shipyard. Simultaneously, in the short term, during execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture both kinds of learning and incorporate the learning curve effect into project scheduling and materials procurement to improve project performance. Extant literature supports the existence of such learning in organizations. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity, but also a decrease in the uncertainty of activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships. On the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed.
Thus, the material requirement schedule of every next ship differs from that of its predecessor. As more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work integrates materials management with the project scheduling of long-duration projects for the manufacture of multiple sister ships. It incorporates the learning curve effect on progressively compressed material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs, while satisfying the budget constraints of the various stages of the project. The activity durations and lead times of items are not crisp and are available in the form of probabilistic distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm. Its output provides ordering dates and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the learning curve effect in materials management decisions. This analysis will help materials managers gain insight into when, and to what degree, it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when to practice distinct procurement for individual ships.
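The schedule compression described above can be illustrated with the standard Wright learning-curve model. The 90% learning rate and the 100-day baseline activity duration are illustrative assumptions, not values from the study.

```python
# Sketch of the Wright learning-curve effect on activity durations for
# successive sister ships. Rate and baseline duration are illustrative.
import math

def unit_duration(t_first, n, learning_rate=0.90):
    """Duration of the n-th repetition under Wright's model:
    t_n = t_1 * n^b, with b = log2(learning_rate) < 0."""
    b = math.log(learning_rate, 2)
    return t_first * n ** b

# Durations (days) of the same activity on sister ships 1..5:
durations = [unit_duration(100.0, n) for n in range(1, 6)]
print([round(d, 1) for d in durations])
```

Under a 90% rate, the fifth repetition takes roughly 78% of the first ship's time; it is this monotonic compression that shifts the material-requirement dates of later sister ships earlier and creates the batching opportunity.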

Keywords: learning curve, materials management, shipbuilding, sister ships

Procedia PDF Downloads 476
38 Insights on the Halal Status of Antineoplastic and Immunomodulating Agents and Nutritional and Dietary Supplements in Malaysia

Authors: Suraiya Abdul Rahman, Perasna M. Varma, Amrahi Buang, Zhari Ismail, Wan Rosalina W. Rosli, Ahmad Rashidi M. Tahir

Abstract:

Background: Muslims have an obligation to ensure that everything they consume, including medicines, is halal. With the growing demand for halal medicines, in October 2012 Malaysia launched the world's first halal pharmaceutical standard, Malaysian Standard MS 2424:2012 Halal Pharmaceuticals - General Guidelines, to serve as the basic requirement for halal pharmaceuticals in Malaysia. However, the biggest challenge pharmaceutical companies face in complying is finding the origin or source of the ingredients and determining their halal status. Aim: This study aims to determine the halal status of antineoplastic and immunomodulating agents and of nutritional and dietary supplements by analysing the origin of their active pharmaceutical ingredients (APIs) and excipients, providing insight into the common sources and halal status of pharmaceutical ingredients and an indication of the adjustments required for halal compliance. Method: The ingredients of each product available in a government hospital in central Malaysia, and their sources, were determined from the product package leaflets, information obtained from manufacturers, reliable websites and standard pharmaceutical references. The ingredients were categorised as halal, mushbooh or haram based on the definitions set in MS2424. Results: There were 162 medications included in the study, of which 123 (76%) were antineoplastic and immunomodulating agents and 39 (24%) were nutritional and dietary supplements. In terms of medication halal status, the proportions of halal, mushbooh and haram were 40.1% (n=65), 58.6% (n=95) and 1.2% (n=2), respectively. With regard to the APIs, there were 89 (52%) different active ingredients identified for antineoplastic and immunomodulating agents, of which 89.9% (n=80) were halal and 10.1% (n=9) were mushbooh.
There were 83 (48%) active ingredients from the nutritional and dietary supplements group, of which 89.2% (n=74) were halal and 10.8% (n=9) mushbooh. No haram APIs were identified in any therapeutic class. A total of 176 excipients were identified from the product ranges. The majority of excipients were halal, with proportions of halal, mushbooh and haram of 82.4% (n=145), 17% (n=30) and 0.6% (n=1), respectively. With regard to the sources of the excipients, most mushbooh excipients (76.7%, n=23) were classified as such because they have multiple possible origins, spanning animal, plant or other sources. The remaining 13.3% and 10% were classified as mushbooh due to their ethanol and land-animal origins, respectively. The one haram excipient was gelatine of bovine-porcine origin. Mushbooh ingredients found in this research were glycerol, tallow, lactose, polysorbate, dibasic sodium phosphate, stearic acid and magnesium stearate. Ethanol, gelatine, glycerol and magnesium stearate were the most common ingredients classified as mushbooh. Conclusion: This study shows that most APIs and excipients are halal. However, the majority of the medicines in these product categories are mushbooh due to certain excipients only, which could be replaced with halal alternatives. This insight should encourage pharmaceutical manufacturers to seek halal certification to meet the increasing demand for halal-certified medications for the benefit of mankind.

Keywords: antineoplastic and immunomodulation agents, halal pharmaceutical, MS2424, nutritional and dietary supplements

Procedia PDF Downloads 277
37 Rapid, Direct, Real-Time Method for Bacteria Detection on Surfaces

Authors: Evgenia Iakovleva, Juha Koivisto, Pasi Karppinen, J. Inkinen, Mikko Alava

Abstract:

Preventing the spread of infectious diseases worldwide is one of the most important tasks of modern health care. Infectious diseases not only account for one fifth of deaths in the world, but also cause many pathological complications for human health. Touch surfaces are an important vector for the spread of infections by various microorganisms, including antimicrobial-resistant organisms. Antimicrobial resistance, in turn, is the response of bacteria to the widespread overuse or inappropriate use of antibiotics. The biggest challenges in bacterial detection by existing methods are indirect determination, long analysis times, sample preparation, the use of chemicals and expensive equipment, and the need for qualified specialists. Therefore, high-performance, rapid, real-time detection is needed for practical bacterial detection and to control epidemiological hazards. Among the known methods for determining bacteria on surfaces, hyperspectral methods can serve as direct and rapid means of microorganism detection on different kinds of surfaces, based on fluorescence, without sampling, sample preparation or chemicals. The aim of this study was to assess the relevance of such systems to the remote sensing of surfaces for microorganism detection, to prevent the global spread of infectious diseases. Bacillus subtilis and Escherichia coli at different concentrations (from 0 to 10⁸ cells/100 µL) were detected with a hyperspectral camera using different filters, giving a visible visualization of bacteria and background spots on a steel plate. A method of internal standards was applied to monitor the correctness of the analysis results. The distances from the sample to the hyperspectral camera and to the light source are 25 cm and 40 cm, respectively. Each sample is optically imaged from the surface by the hyperspectral imaging system, utilizing a JAI CM-140GE-UV camera. The light source is a BeamZ FLATPAR DMX tri-light with 3 W tri-colour LEDs (red, blue and green).
Light colors are changed through a DMX USB Pro interface. The developed system was calibrated following a standard procedure of setting the exposure, and focused for light at λ = 525 nm. The filter is a Thorlabs Kurios™ hyperspectral filter controller with wavelengths from 420 to 720 nm. All data collection, pre-processing and multivariate analysis were performed using LabVIEW and Python software. The bacterial stains studied, both visible and invisible to the human eye, clustered apart from the reference steel material in clustering analyses using different light sources and filter wavelengths. The calculation of the random and systematic errors of the analysis results proved the applicability of the method in real conditions. Validation experiments were carried out with photometry and an ATP swab test. The lower detection limit of the developed method is several orders of magnitude lower than that of both validation methods. All parameters of the experiments were the same, except for the light. The hyperspectral imaging method can separate not only bacteria from surfaces, but also different types of bacteria, such as Gram-negative Escherichia coli and Gram-positive Bacillus subtilis. The developed method skips sample preparation and the use of chemicals, unlike other microbiological methods. The analysis time with the novel hyperspectral system is a few seconds, which is innovative in the field of microbiological testing.
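As a toy illustration of the clustering step described above, pixel spectra can be separated from a reference surface by distance to class centroids. The three-band intensities and class names here are illustrative stand-ins; the actual analysis uses the full hyperspectral cube and multivariate methods.

```python
# Toy sketch: assigning a pixel spectrum to the nearest class centroid,
# a stand-in for the clustering analysis described above. The band
# intensities below are invented for illustration.
def centroid(spectra):
    """Per-band mean of a list of equal-length spectra."""
    n = len(spectra)
    return [sum(band) / n for band in zip(*spectra)]

def nearest(spectrum, centroids):
    """Name of the centroid closest (squared Euclidean) to spectrum."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(centroids, key=lambda name: dist2(spectrum, centroids[name]))

# Illustrative 3-band fluorescence intensities per class:
steel = [[0.90, 0.80, 0.70], [0.85, 0.82, 0.72]]
bacteria = [[0.20, 0.60, 0.90], [0.25, 0.55, 0.95]]
cents = {"steel": centroid(steel), "bacteria": centroid(bacteria)}

print(nearest([0.22, 0.58, 0.92], cents))  # classified as bacteria
```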

Keywords: Escherichia coli, Bacillus subtilis, hyperspectral imaging, microorganisms detection

Procedia PDF Downloads 191
36 TRAC: A Software Based New Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we think it is worthwhile to develop another software-based track circuit system that would suit secondary railway lines, with a straightforward implementation and low sensitivity to rail-wheel impedance variations. We call this track circuit 'Track Railway by Automatic Circuits' (TRAC). To be implemented internationally, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and equally independent of the axle counters used in Germany ('Counting Axles', in French 'compteur d'essieux'). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train-detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal. The set of frequencies assigned to the track sections is a set of orthogonal functions in a Hilbert space. Thus the failure probability of track-section separation can be precisely calculated on the basis of the signal-to-noise ratio (SNR). The SNR is a function of the level of traction current conducted by the rails. This is why we developed a very powerful algorithm to reject noise and jamming and obtain an SNR compatible with the precision required for the track circuit and with SIL 4. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train space localization precisely defined by a calibration system. The operation bypasses the GSM-R radio system of the ERTMS system. Moreover, the track circuit is naturally protected against radio-type jammers. After the calibration operation, the track circuit is autonomous.
ii) A mathematical topology adapted to train space localization, following the train through linear time filtering of the received signal. Track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations: we achieved a precision of one meter. Sensitivity analyses of the rail-ground and rail-wheel impedances gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain these results. This track circuit already reaches Level 3 of the ERTMS system, and it will be much cheaper to implement and to operate. Traffic regulation is based on variable-length track sections: as traffic grows, the maximum speed is reduced and the track-section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
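The orthogonal-function idea above can be sketched numerically: sinusoids completing distinct integer numbers of cycles over an analysis window are orthogonal, so correlating the received signal against each section's reference tone isolates that section. The frequencies and window length here are illustrative, not the system's actual parameters.

```python
# Sketch: track-section tones as an orthogonal set. Correlation with the
# matching tone yields its energy; correlation between distinct tones
# vanishes. Parameters are illustrative.
import math

N = 1000  # samples per analysis window

def tone(freq_cycles):
    """One window of a sinusoid with an integer number of cycles."""
    return [math.sin(2 * math.pi * freq_cycles * n / N) for n in range(N)]

def inner(u, v):
    """Normalized inner product over the window."""
    return sum(a * b for a, b in zip(u, v)) / N

s7, s11 = tone(7), tone(11)
print(round(inner(s7, s7), 3))   # ~0.5: energy of the matching tone
print(round(inner(s7, s11), 6))  # ~0.0: distinct sections do not mix
```

In practice the decision is made against noise, so the separation failure probability follows from the SNR of these correlation outputs, as the abstract states.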

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 309
35 Creative Resolutions to Intercultural Conflicts: The Joint Effects of International Experience and Cultural Intelligence

Authors: Thomas Rockstuhl, Soon Ang, Kok Yee Ng, Linn Van Dyne

Abstract:

Intercultural interactions are often challenging and fraught with conflict. To shed light on how to interact effectively across cultures, academics and practitioners alike have advanced a plethora of intercultural competence models. However, the majority of this work has emphasized distal outcomes, such as job performance and cultural adjustment, rather than proximal outcomes, such as how individuals resolve inevitable intercultural conflicts. As a consequence, the processes by which individuals negotiate challenging intercultural conflicts are not well understood. The current study advances theorizing on intercultural conflict resolution by exploring antecedents of how people resolve intercultural conflicts. To this end, we examine creativity – the generation of novel and useful ideas – in the context of resolving cultural conflicts in intercultural interactions. Based on the dual-identity theory of creativity, we propose that individuals with greater international experience will display greater creativity, and that this relationship is accentuated by the individual's cultural intelligence. Two studies test these hypotheses. The first study comprises 84 senior university students drawn from an international organizational behavior course. The second study replicates the findings of the first in a sample of 89 executives from eleven countries. Participants in both studies provided protocols of their strategies for resolving two intercultural conflicts, as depicted in two multimedia vignettes of challenging intercultural work-related interactions. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded all strategies for their novelty and usefulness, following scoring procedures for creativity tasks. Participants also completed online surveys of demographic background information, including their international experience and cultural intelligence.
Hierarchical linear modeling showed that, surprisingly, while international experience is positively associated with usefulness, it is unrelated to novelty. Further, a person's cultural intelligence strengthens the positive effect of international experience on usefulness and mitigates its effect on novelty. Theoretically, our findings offer an important extension to the dual-identity theory of creativity by identifying cultural intelligence as an individual-difference moderator that qualifies the relationship between international experience and creative conflict resolution. In terms of novelty, individuals higher in cultural intelligence seem less susceptible to the rigidity effects of international experience; perhaps they are more capable of assessing which aspects of culture are relevant and of applying relevant experiences when they brainstorm novel ideas. In terms of usefulness, individuals high in cultural intelligence are better able to leverage their international experience to assess the viability of their ideas, because their richer and better-organized cultural knowledge structures allow them to assess possible options more efficiently and accurately. In sum, our findings suggest that cultural intelligence is an important and promising intercultural competence that fosters creative resolutions to intercultural conflicts. We hope that our findings stimulate future research on creativity and conflict resolution in intercultural contexts.

Keywords: cultural Intelligence, intercultural conflict, intercultural creativity, international experience

Procedia PDF Downloads 132
34 Official Game Account Analysis: Factors Influence Users' Judgments in Limited-Word Posts

Authors: Shanhua Hu

Abstract:

Social media, as a critical form of publicity for films, video games, and digital products, has received substantial research attention, but several critical barriers remain: (1) few studies explore the internal and external connections of a product as part of the multimodal context that gives rise to readability and commercial return; (2) there is a lack of multimodal analysis of game publishers' official product accounts and of their impact on user behaviors, including purchase intention, social media engagement, and playing time; and (3) no standardized, ecologically valid data varying by game type are available to study the complexity of an official account's postings within a time period. The proposed research helps to tackle these limitations in order to develop a model of readability study that is more ecologically valid, robust, and thorough. To accomplish this objective, this paper provides a more diverse dataset comprising different visual elements and messages collected from the official Twitter accounts of the top 20 best-selling games of 2021. Video game companies target potential users through social media; a popular approach is to set up an official account to maintain exposure. Typically, major game publishers create an official account on Twitter months before the game's release date to give updates on the game's development, announce collaborations, and reveal spoilers. Analysis of tweets from these official Twitter accounts would assist publishers and marketers in identifying how to deploy advertising efficiently and precisely to increase game sales. The purpose of this research is to determine how official game accounts use Twitter to attract new customers, specifically which types of messages are most effective at increasing sales.
The dataset includes the number of days until the actual release date for each Twitter post, the readability of the post (Flesch Reading Ease Score, FRES), the number of emojis used, the number of hashtags, the number of followers of the mentioned users, the category of the post (i.e., spoilers, collaborations, promotions), and the number of video views. The timeline of Twitter postings from official accounts will be compared to the history of pre-orders and sales figures to determine the potential impact of social media posts. This study aims to determine how the above-mentioned characteristics of official accounts' Twitter postings influence the sales of the game, and to examine the possible causes of this influence. The outcome will provide researchers with a list of potential aspects that could influence people's judgments in limited-word posts. With increased average online time, users adapt more quickly than before in online information exchange and reading, in aspects such as word choice, sentence length, and the use of emojis or hashtags. The study of promotion by official game accounts will not only enable publishers to create more effective promotion techniques in the future, but also provide ideas for future research on the influence of social media posts with a limited number of words on consumers' purchasing decisions. Future research can focus on more specific linguistic aspects, such as precise word choice in advertising.
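The FRES metric mentioned above has a standard closed form: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A minimal sketch follows; the vowel-group syllable counter is a rough heuristic (production tools use dictionaries or better rules), and the sample tweet is invented.

```python
# Naive sketch of the Flesch Reading Ease Score (FRES) used to rate
# tweet readability. Syllable counting is a rough heuristic.
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (incl. y), minimum 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fres(text):
    """FRES = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

score = fres("Pre-order now. The new trailer drops tomorrow.")
print(round(score, 1))  # higher scores mean easier reading
```

Short, punchy promotional tweets like the example score high (easy to read), which is precisely the kind of variation the dataset captures per post.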

Keywords: engagement, official account, promotion, twitter, video game

Procedia PDF Downloads 53
33 The Development of the Psychosomatic Nursing Model from an Evidence-Based Action Research on Proactive Mental Health Care for Medical Inpatients

Authors: Chia-Yi Wu, Jung-Chen Chang, Wen-Yu Hu, Ming-Been Lee

Abstract:

In nearly all physical health conditions, suicide risk is increased compared to healthy people, even after adjustment for age, gender, mental health, and substance use diagnoses. To highlight the importance of suicide risk assessment for inpatients and the early identification of, and engagement with, inpatients' mental health problems, a study was designed to develop a comprehensive psychosomatic nursing engagement (PSNE) model with standard operating procedures informing how nurses communicate with, assess, and engage with inpatients with emotional distress. The purpose of the study was to promote the gatekeeping role of clinical nurses in performing brief assessments and interventions to detect depression and anxiety symptoms among inpatients, particularly in non-psychiatric wards. The study will be carried out in a 2000-bed university hospital in northern Taiwan in 2019. We will select a ward for the trial and develop feasible procedures and an in-service training course for the nurses to offer mental health care, which will also be validated through a professional consensus meeting. The significance of the study includes the following three points: (1) The study targets an important but under-researched area for the PSNE model in the cultural background of Taiwan, where hospital services are highly accessible but mental health and suicide risk assessment are rarely provided by non-psychiatric healthcare personnel. (2) PSNE could be an efficient and cost-effective means of identifying suicide risk at an early stage, preventing inpatient suicide or reducing future suicide risk through early treatment of mental illness among the high-risk group of hospitalized patients, whose suicide risk is more than three times that of the general population.
(3) Utilizing a brief tool and its established app (the Five-item Brief Symptom Rating Scale, BSRS-5), we will develop the standardized PSNE procedure and referral steps in collaboration with the medical teams across the study hospital. New technological tools nested within nursing assessment and intervention will concurrently be developed to facilitate better care quality. The major outcome measurements will include tools for the early identification of common mental distress and suicide risk, i.e., the BSRS-5, the revised BSRS-5, and the 9-item Concise Mental Health Checklist (CMHC-9). The main purpose of using the CMHC-9 in clinical suicide risk assessment is to provide care and build up a therapeutic relationship with the client, so it will also be used in nursing training highlighting the skills of supportive care. Through early identification of inpatients' depressive symptoms or other mental health care needs such as insomnia, anxiety, or suicide risk, the majority of nursing clinicians would be able to engage in critical interventions that alleviate inpatients' suffering from mental health problems, given feasible nursing input.
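For orientation, BSRS-5 scoring is simple enough to sketch: five items are each rated 0-4 and summed. The item labels and the referral cutoff of 6 used below are the commonly cited ones from the BSRS-5 literature, stated here as assumptions rather than this study's protocol.

```python
# Minimal sketch of scoring the five-item Brief Symptom Rating Scale
# (BSRS-5). Item names and the cutoff of 6 are assumptions drawn from
# common usage, not the study's own procedure.

ITEMS = ("insomnia", "anxiety", "hostility", "depression", "inferiority")

def bsrs5_screen(ratings):
    """ratings: dict mapping each item name to a 0-4 severity rating.
    Returns (total, flagged), where flagged means 'refer for mental
    health engagement'."""
    assert set(ratings) == set(ITEMS)
    assert all(0 <= v <= 4 for v in ratings.values())
    total = sum(ratings.values())
    return total, total >= 6

total, flagged = bsrs5_screen(
    {"insomnia": 3, "anxiety": 2, "hostility": 0,
     "depression": 2, "inferiority": 1})
print(total, flagged)  # 8 True -> this inpatient would be engaged
```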

Keywords: mental health care, clinical outcome improvement, clinical nurses, suicide prevention, psychosomatic nursing

Procedia PDF Downloads 89
32 Enhancing Engineering Students Educational Experience: Studying Hydrostatic Pumps Association System in Fluid Mechanics Laboratories

Authors: Alexandre Daliberto Frugoli, Pedro Jose Gabriel Ferreira, Pedro Americo Frugoli, Lucio Leonardo, Thais Cavalheri Santos

Abstract:

Laboratory classes in Engineering courses are essential for students to be able to integrate theory with practical reality, by handling equipment and observing experiments. In the researches of physical phenomena, students can learn about the complexities of science. Over the past years, universities in developing countries have been reducing the course load of engineering courses, in accordance with cutting cost agendas. Quality education is the object of study for researchers and requires educators and educational administrators able to demonstrate that the institutions are able to provide great learning opportunities at reasonable costs. Didactic test benches are indispensable equipment in educational activities related to turbo hydraulic pumps and pumping facilities study, which have a high cost and require long class time due to measurements and equipment adjustment time. In order to overcome the aforementioned obstacles, aligned with the professional objectives of an engineer, GruPEFE - UNIP (Research Group in Physics Education for Engineering - Universidade Paulista) has developed a multi-purpose stand for the discipline of fluid mechanics which allows the study of velocity and flow meters, loads losses and pump association. In this work, results obtained by the association in series and in parallel of hydraulic pumps will be presented and discussed, mainly analyzing the repeatability of experimental procedures and their agreement with the theory. For the association in series two identical pumps were used, consisting of the connection of the discharge of a pump to the suction of the next one, allowing the fluid to receive the power of all machines in the association. The characteristic curve of the set is obtained from the curves of each of the pumps, by adding the heads corresponding to the same flow rates. The same pumps were associated in parallel. In this association, the discharge piping is common to the two machines together. 
The characteristic curve of the set was obtained by adding, for each value of H (head), the flow rates of each pump. For the tests, the input and output pressures of each pump were measured. For each association, three sets of measurements were taken, varying the flow rate in the range from 6.0 to 8.5 m³/h. For both associations, the results showed excellent repeatability, with variations of less than 10% between sets of measurements, and good agreement with theory. This variation agrees with the instrumental uncertainty. Thus, the results validate the use of the fluids bench designed for didactic purposes. As future work, a digital acquisition system is being developed, using differential sensors for extremely low pressures (approximately 2 to 2000 Pa) and an Arduino microcontroller.
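The series and parallel combination rules described above can be sketched numerically. The quadratic single-pump curve and its coefficients below are invented placeholders, not the bench's measured characteristic:

```python
# Hypothetical single-pump characteristic: H(Q) = H0 - k*Q^2,
# with H in metres and Q in m^3/h (coefficients are invented).
H0, K = 30.0, 0.25

def head(q):
    """Head delivered by one pump at flow rate q."""
    return H0 - K * q * q

def series_head(q, n=2):
    """Series association of n identical pumps: at the same flow rate, heads add."""
    return n * head(q)

def parallel_flow(h, n=2):
    """Parallel association of n identical pumps: at the same head, flow rates add."""
    q_single = ((H0 - h) / K) ** 0.5
    return n * q_single
```

For two identical pumps, the series set doubles the head at any given flow rate, while the parallel set doubles the flow rate at any given head; this is exactly how the combined characteristic curves are constructed from the single-pump curve.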

Keywords: engineering education, fluid mechanics, hydraulic pumps association, multi-purpose stand

Procedia PDF Downloads 202
31 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin. Thus, humans are exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimates differ by one order of magnitude for the contribution induced by certain solar particle events. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on the Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was of the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both the Earth and Mars, the latter using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) describing extensive air shower characteristics and allowing assessment of the ambient dose equivalent. In this approach, the GCR contribution is based on the force-field approximation model. The physical description of the Solar Cosmic Ray (SCR) contribution considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rates of secondary particles induced by extensive showers, considering altitudes from ground level to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient-dose-equivalent conversion coefficients.
The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a statistically large number of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans, with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous work. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
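The final dose-assessment step described above, folding secondary-particle fluence rates through fluence-to-ambient-dose-equivalent conversion coefficients, can be sketched as follows. The energy-binned spectra and coefficient values are invented placeholders, not ATMORAD outputs:

```python
def ambient_dose_equivalent(fluence_rates, conv_coeffs, flight_time_h):
    """Ambient dose equivalent (in microsieverts) accumulated over a flight.

    fluence_rates: secondary-particle fluence rate per energy bin (cm^-2 h^-1)
    conv_coeffs:   fluence-to-dose conversion coefficient per bin (pSv cm^2)
    flight_time_h: exposure time in hours
    """
    # Dose rate is the fluence spectrum weighted by the conversion coefficients.
    rate_psv_per_h = sum(f * h for f, h in zip(fluence_rates, conv_coeffs))
    return rate_psv_per_h * flight_time_h * 1e-6  # pSv -> uSv
```

With two hypothetical energy bins at rates of 1000 and 500 particles/cm²/h and coefficients of 3.0 and 10.0 pSv·cm², an 8-hour flight accumulates 0.064 μSv; real flights integrate over many bins, particle species, and a route-dependent, time-varying spectrum.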

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 189
30 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example, in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not yet used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer the potential to extend the tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database suitable for developing machine learning models is created. The objective is to find an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts that behave more sensitively than the part as a whole, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
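As a toy illustration of deriving a local tolerance range from historical measurement data, one could take a ±3σ band of the recorded deviations at a measurement point. This is a simple statistical stand-in, not one of the machine learning models evaluated in the paper, and the deviation values are invented:

```python
import statistics

def tolerance_range(deviations, k=3.0):
    """Local tolerance band (lower, upper) in mm derived from the
    historical deviations recorded at one measurement point."""
    mu = statistics.mean(deviations)
    sigma = statistics.stdev(deviations)  # sample standard deviation
    return (mu - k * sigma, mu + k * sigma)
```

A learned model would instead predict such bands (and the placement of the measurement points themselves) from part geometry and process features, tightening tolerances only where the historical data show sensitive behaviour.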

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 91
29 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user, to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in the hope of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of the adopted prediction models (logistic regression, support vector machines, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by training on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research groups at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, applying game theory and machine learning algorithms to develop more efficient ways of reducing poaching.
This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from Ugandan rainforest park rangers. Second, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching, both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules by entire seasons.
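As a minimal illustration of the imputation problem the paper addresses, the sketch below fills missing values with the column mean. The paper's predictive mean matching and random-forest multiple imputation are considerably more sophisticated, and the data here are invented:

```python
def mean_impute(column):
    """Replace missing entries (None) with the mean of the observed values.

    This is the simplest imputation baseline; model-based imputers
    (predictive mean matching, random forests) instead predict each
    missing value from the other features.
    """
    observed = [x for x in column if x is not None]
    mu = sum(observed) / len(observed)
    return [mu if x is None else x for x in column]
```

The experimental design in the abstract, training on imputed years and testing on a fully observed year, is precisely what lets one measure how much a given imputer distorts downstream model accuracy.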

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 265
28 Utilization of Informatics to Transform Clinical Data into a Simplified Reporting System to Examine the Analgesic Prescribing Practices of a Single Urban Hospital’s Emergency Department

Authors: Rubaiat S. Ahmed, Jemer Garrido, Sergey M. Motov

Abstract:

Clinical informatics (CI) enables the transformation of data into a systematic organization that improves the quality of care and the generation of positive health outcomes. Innovative technology through informatics that compiles accurate data on analgesic utilization in the emergency department (ED) can enhance pain management in this important clinical setting. We aim to establish a simplified reporting system through CI to examine and assess analgesic prescribing practices in the ED through executing a U.S. federal grant project on opioid reduction initiatives. Queried data points of interest from a level-one trauma ED's electronic medical records were used to create data sets and develop informational/visual reporting dashboards (on Microsoft Excel and Google Sheets) concerning analgesic usage across several pre-defined parameters and performance metrics using CI. The data were then qualitatively analyzed by departmental clinicians and leadership to evaluate ED analgesic prescribing trends. During a 12-month reporting period (Dec. 1, 2020 - Nov. 30, 2021) for the ongoing project, about 41% of all ED patient visits (N = 91,747) were for pain conditions, of which 81.6% received analgesics in the ED and at discharge (D/C). Of those treated with analgesics, 24.3% received opioids, compared to 75.7% receiving opioid alternatives in the ED and at D/C, including non-pharmacological modalities. Demographics showed that among patients receiving analgesics, 56.7% were aged between 18-64, 51.8% were male, 51.7% were white, and 66.2% had government-funded health insurance. Ninety-one percent of all opioids were prescribed in the ED, with intravenous (IV) morphine, IV fentanyl, and morphine sulfate immediate release (MSIR) tablets accounting for 88.0% of ED-dispensed opioids. Of the 9.3% of all opioids prescribed at D/C, MSIR was dispensed 72.1% of the time; hydrocodone, oxycodone, and tramadol each accounted for only 10-15%, and hydromorphone for 0%.
Of the opioid alternatives, non-steroidal anti-inflammatory drugs were utilized 60.3% of the time, local anesthetics and ultrasound-guided nerve blocks 23.5%, and acetaminophen 7.9%, as the primary non-opioid drug categories prescribed by ED providers. Non-pharmacological analgesia included virtual reality and other modalities. An average of 18.5 ED opioid orders and 1.9 opioid D/C prescriptions per 102.4 daily ED patient visits was observed for the period. Compared to other specialties within our institution, ED providers give 2.0% of opioid D/C prescriptions, versus the national average of 4.8%. Opioid alternatives accounted for 69.7% and 30.3% of usage, versus 90.7% and 9.3% for opioids, in the ED and at D/C, respectively. There is a pressing need for concise, relevant, and reliable clinical data on analgesic utilization for ED providers and leadership to evaluate prescribing practices and make data-driven decisions. Basic computer software can be used to create effective visual reporting dashboards with indicators that convey relevant and timely information in an easy-to-digest manner. We accurately examined our ED's analgesic prescribing practices using CI through dashboard reporting. Such reporting tools can quickly identify key performance indicators and prioritize data to enhance pain management and promote safe prescribing practices in the emergency setting.
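A dashboard indicator of the kind described above is just an aggregation over queried visit records. The sketch below computes one such metric, the share of analgesic-treated visits that received an opioid; the record structure and field names are hypothetical, not the ED's actual EMR schema:

```python
def opioid_share(visits):
    """Percent of analgesic-treated visits in which an opioid was given.

    visits: list of dicts with boolean "analgesic" and "opioid" flags
    (hypothetical field names standing in for queried EMR columns).
    """
    treated = [v for v in visits if v["analgesic"]]
    opioid = [v for v in treated if v["opioid"]]
    return 100.0 * len(opioid) / len(treated)
```

In practice such aggregates would be computed per month and per drug category, then fed to the spreadsheet dashboards for trend review by clinicians and leadership.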

Keywords: clinical informatics, dashboards, emergency department, health informatics, healthcare informatics, medical informatics, opioids, pain management, technology

Procedia PDF Downloads 121
27 Geotechnical Challenges for the Use of Sand-sludge Mixtures in Covers for the Rehabilitation of Acid-Generating Mine Sites

Authors: Mamert Mbonimpa, Ousseynou Kanteye, Élysée Tshibangu Ngabu, Rachid Amrou, Abdelkabir Maqsoud, Tikou Belem

Abstract:

The management of mine wastes (waste rocks and tailings) containing sulphide minerals such as pyrite and pyrrhotite represents the main environmental challenge for the mining industry. Indeed, acid mine drainage (AMD) can be generated when these wastes are exposed to water and air. AMD is characterized by low pH and high concentrations of heavy metals, which are toxic to plants, animals, and humans. It affects the quality of the ecosystem through water and soil pollution. Different techniques involving soil materials can be used to control AMD generation, including impermeable covers (compacted clays) and oxygen barriers. The latter group includes covers with capillary barrier effects (CCBE), multilayered covers that include a moisture retention layer playing the role of an oxygen barrier. Once AMD is produced at a mine site, it must be treated so that the final effluent complies with regulations and can be discharged into the environment. Active neutralization with lime is one of the treatment methods used. This treatment produces sludge that is usually stored in sedimentation ponds. Other sludge management alternatives have been examined in recent years, including sludge co-disposal with tailings or waste rocks, disposal in underground mine excavations, and storage in technical landfill sites. Considering the ability of AMD neutralization sludge to maintain an alkaline to neutral pH for decades or even centuries, due to the excess alkalinity induced by residual lime within the sludge, valorization of the sludge in specific applications could be an interesting management option. If done efficiently, the reuse of sludge could free up storage ponds and thus reduce the environmental impact. It should be noted that mixtures of sludge and soils could potentially constitute usable materials in CCBE for the rehabilitation of acid-generating mine sites, while sludge alone is not suitable for this purpose.
The high water content of the sludge (up to 300%), even after sedimentation, can, however, constitute a geotechnical challenge. Adding lime to the mixtures can reduce the water content and improve the geotechnical properties. The objective of this paper is to investigate the impact of the sludge content (30, 40, and 50%) in sand-sludge mixtures (SSM) on their hydrogeotechnical properties (compaction, shrinkage behaviour, saturated hydraulic conductivity, and water retention curve). The impact of lime addition (dosages from 2% to 6%) on the moisture content, dry density after compaction, and saturated hydraulic conductivity of SSM was also investigated. Results showed that adding sludge to sand significantly improves the saturated hydraulic conductivity and water retention capacity, but shrinkage increases with sludge content. The dry density after compaction of lime-treated SSM increases with the lime dosage but remains lower than the optimal dry density of the untreated mixtures. The saturated hydraulic conductivity of lime-treated SSM after 24 hours of curing decreases by 3 orders of magnitude. Considering the hydrogeotechnical properties obtained with these mixtures, it would be possible to design a CCBE whose moisture retention layer is made of SSM. Physical laboratory models confirmed the performance of such a CCBE.

Keywords: mine waste, AMD neutralization sludge, sand-sludge mixture, hydrogeotechnical properties, mine site reclamation, CCBE

Procedia PDF Downloads 21
26 Computer Based Identification of Possible Molecular Targets for Induction of Drug Resistance Reversion in Multidrug Resistant Mycobacterium Tuberculosis

Authors: Oleg Reva, Ilya Korotetskiy, Marina Lankina, Murat Kulmanov, Aleksandr Ilin

Abstract:

Molecular docking approaches are widely used for the design of new antibiotics and the modeling of antibacterial activities of numerous ligands which bind specifically to the active centers of indispensable enzymes and/or key signaling proteins of pathogens. Widespread drug resistance among pathogenic microorganisms calls for the development of new antibiotics specifically targeting important metabolic and information pathways. A generally recognized problem is that almost all molecular targets have already been identified, and it is getting more and more difficult to design innovative antibacterial compounds to combat drug resistance. A promising way to overcome the drug resistance problem is the induction of drug resistance reversion by supplementary medicines to improve the efficacy of conventional antibiotics. In contrast to well-established computer-based drug design, the modeling of drug resistance reversion is still in its infancy. In this work, we propose an approach to the identification of compensatory genetic variants reducing the fitness cost associated with the acquisition of drug resistance by pathogenic bacteria. The approach is based on an analysis of the population genetics of Mycobacterium tuberculosis and on the results of experimental modeling of drug resistance reversion induced by a new anti-tuberculosis drug, FS-1. The latter is an iodine-containing nanomolecular complex that passed clinical trials and was approved as a new medicine against MDR-TB in Kazakhstan. Isolates of M. tuberculosis obtained at different stages of the clinical trials, and also from laboratory animals infected with an MDR-TB strain, were characterized in terms of antibiotic resistance, and their genomes were sequenced by the paired-end Illumina HiSeq 2000 technology.
A steady increase in sensitivity to conventional anti-tuberculosis antibiotics in series of isolates treated with FS-1 was registered, despite the fact that the canonical drug resistance mutations identified in the genomes of these isolates remained intact. It was hypothesized that the drug resistance phenotype in M. tuberculosis requires an adjustment of the activities of many genes to compensate for the fitness cost of the drug resistance mutations. FS-1 caused an aggravation of the fitness cost and the removal of drug-resistant variants of M. tuberculosis from the population. This process caused a significant increase in the genetic heterogeneity of the Mtb population that was not observed in the positive and negative controls (infected laboratory animals left untreated or treated solely with the antibiotics). A large-scale search for linkage disequilibrium associations between the drug resistance mutations and genetic variants in other genomic loci allowed the identification of target proteins which could be influenced by supplementary drugs to increase the fitness cost of the drug resistance and deprive the drug-resistant bacterial variants of their competitiveness in the population. The approach will be used to improve the efficacy of FS-1 and also for the computer-based design of new drugs to combat drug-resistant infections.
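The linkage disequilibrium search mentioned above boils down to measuring non-random co-occurrence of variant pairs across sequenced genomes. A minimal sketch of the classical pairwise LD coefficient, with invented presence/absence vectors rather than real M. tuberculosis genotypes, is:

```python
def linkage_disequilibrium(a, b):
    """Pairwise LD coefficient D = p_AB - p_A * p_B for two binary variants.

    a, b: presence (1) / absence (0) of each variant across the same
    ordered set of genomes. D > 0 means the variants co-occur more often
    than expected under independence.
    """
    n = len(a)
    p_a = sum(a) / n
    p_b = sum(b) / n
    p_ab = sum(1 for x, y in zip(a, b) if x and y) / n
    return p_ab - p_a * p_b
```

Scanning D (or a normalized form such as D' or r²) between each canonical resistance mutation and every other polymorphic locus is one way to flag candidate compensatory variants for follow-up.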

Keywords: complete genome sequencing, computational modeling, drug resistance reversion, Mycobacterium tuberculosis

Procedia PDF Downloads 241
25 The Underground Ecosystem of Credit Card Frauds

Authors: Abhinav Singh

Abstract:

Point-of-sale (POS) malware has been stealing the limelight this year. It has been the elemental factor in some of the biggest breaches uncovered in the past couple of years, including:
• Target: the retail giant reported close to 40 million credit card records stolen
• Home Depot: the home products retailer reported a breach of close to 50 million credit records
• Kmart: the US retailer recently announced a breach of 800 thousand credit card details
In 2014 alone, there were reports of over 15 major breaches of payment systems around the globe. Memory-scraping malware infecting point-of-sale devices has been the lethal weapon used in these attacks. This malware is capable of reading payment information from the payment device's memory before it is encrypted, and then sends the stolen details to its parent server. It can record all the critical payment information, such as the card number, security number, and owner, all of which is delivered in raw format. This talk will cover what happens after these details have been sent to the malware authors. The entire ecosystem of credit card fraud can be broadly classified into three steps:
• Purchase of raw details and dumps
• Converting them to plastic cash/cards
• Shop! Shop! Shop!
The focus of this talk will be on the above points and how they form an organized network of cyber-crime. The first step involves the buying and selling of the stolen details. The key points to emphasize are:
• How this raw information is sold in the underground market
• The buyer and seller anatomy
• Building your shopping cart and preferences
• The importance of reputation and vouches
• Customer support and replace/refund policies
These are some of the key points that will be discussed. But the story doesn't end here. At this point the buyer only has the raw card information. How will this raw information be converted to plastic cash?
Here comes the second part of this underground economy, wherein these raw details are converted into actual cards. There are well-organized services running underground that can help in converting these details into plastic cards; we will discuss this technique in detail. Finally, the last step involves shopping with the stolen cards. The cards generated with the stolen details can easily be used to swipe-and-pay for goods at different retail shops. Usually these purchases are of expensive items that have good resale value. Apart from using the cards at stores, there are underground services that let you deliver online orders to their dummy addresses; once the package is received, it is forwarded to the original buyer. These services charge based on the value of the item being delivered. The overall underground ecosystem of credit card fraud works in a bulletproof way, involving people working in close groups and making heavy profits. This is a brief summary of what I plan to present at the talk. I have done extensive research and have collected a good deal of material to present as samples, including:
• A list of underground forums
• Credit card dumps
• IRC chats among these groups
• Personal chats with big card sellers
• An inside view of these forum owners
The talk will conclude by throwing light on how these breaches are tracked during investigation: how credit card breaches are tracked down, and what steps financial institutions can take to build an incident response around them.

Keywords: POS malware, credit card frauds, enterprise security, underground ecosystem

Procedia PDF Downloads 411
24 Assessment of Very Low Birth Weight Neonatal Tracking and a High-Risk Approach to Minimize Neonatal Mortality in Bihar, India

Authors: Aritra Das, Tanmay Mahapatra, Prabir Maharana, Sridhar Srikantiah

Abstract:

In the absence of adequate, well-equipped neonatal-care facilities serving rural Bihar, India, the practice of essential home-based newborn care remains critically important for the reduction of neonatal and infant mortality, especially among pre-term and small-for-gestational-age (low-birth-weight) newborns. To improve child health parameters in Bihar, a 'Very-Low-Birth-Weight (vLBW) Tracking' intervention has been conducted by CARE India since 2015, targeting public-facility-delivered newborns weighing ≤2000 g at birth, to improve their identification and the provision of immediate post-natal care. To assess the effectiveness of the intervention, 200 public health facilities were randomly selected from all functional public-sector delivery points in Bihar, and various outcomes were tracked among the neonates born there. Thus far, one pre-intervention (Feb-Apr 2015-born neonates) and three post-intervention (Sep-Oct 2015, Sep-Oct 2016 and Sep-Oct 2017-born children) follow-up studies have been conducted. In each round, interviews were conducted with the mothers/caregivers of successfully tracked children to understand outcomes, service coverage, and care-seeking during the neonatal period. Data from 171 matched facilities common across all rounds were analyzed using SAS 9.4. Identification of neonates with birth weight ≤2000 g improved from 2% at baseline to 3.3%-4% during post-intervention. All indicators pertaining to post-natal home visits by frontline workers (FLWs) improved. Significant improvements between the baseline and post-intervention rounds were also noted regarding mothers being informed about a 'weak' child, both at the facility (R1 = 25% to R4 = 50%) and at home by an FLW (R1 = 19% to R4 = 30%). The practice of 'Kangaroo Mother Care (KMC)', an important component of essential newborn care, showed significant improvement in the post-intervention period compared to baseline, both in the facility (R1 = 15% to R4 = 31%) and at home (R1 = 10% to R4 = 29%).
Detection and birth-weight recording of extremely low-birth-weight newborns (<1500 g) showed an increasing trend. Moreover, there was a downward trend in mortality across rounds in each birth-weight stratum (<1500 g, 1500-1799 g, and ≥1800 g). After adjustment for the differential distribution of birth weights, mortality was found to decline significantly from R1 (22.11%) to R4 (11.87%). A significantly declining trend was also observed for both early and late neonatal mortality and morbidities. Multiple regression analysis identified birth during the immediate post-intervention phase as well as during the maintenance phase, birth weight >1500 g, children of low-parity mothers, receiving a visit from an FLW in the first week, and/or receiving advice on extra care from an FLW as predictors of survival during the neonatal period among vLBW newborns. vLBW tracking was found to be a successful and sustainable intervention and has already been handed over to the Government.
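The birth-weight adjustment described above can be illustrated with direct standardization: each round's stratum-specific mortality rates are re-weighted by a common reference birth-weight distribution, so rounds with different case mixes become comparable. The rates and weights below are invented, not the study's figures:

```python
def standardized_rate(stratum_rates, reference_weights):
    """Directly standardized mortality (%) for one round.

    stratum_rates:     mortality (%) in each birth-weight stratum
    reference_weights: reference birth-weight distribution (sums to 1)
    """
    return sum(r * w for r, w in zip(stratum_rates, reference_weights))
```

Applying the same reference weights to every round removes the effect of, say, one round happening to detect more <1500 g newborns, which would otherwise inflate its crude mortality.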

Keywords: weak newborn tracking, very low birth weight babies, newborn care, community response

Procedia PDF Downloads 130
23 Effects of a Cluster Grouping of Gifted and Twice Exceptional Students on Academic Motivation, Socio-emotional Adjustment, and Life Satisfaction

Authors: Line Massé, Claire Baudry, Claudia Verret, Marie-France Nadeau, Anne Brault-Labbé

Abstract:

Little research has been conducted on educational services adapted for twice-exceptional students. As part of an action research project, a cluster grouping was set up in an elementary school in Quebec, bringing together gifted or twice-exceptional (2E) students (n = 11) and students not identified as gifted (n = 8) within a multilevel class (3rd and 4th years). The 2E students had either attention deficit hyperactivity disorder (n = 8, including 3 with a specific learning disability) or autism spectrum disorder (n = 2). Differentiated instruction strategies were implemented, including the possibility of progressing at one's own pace of learning, independent study or research projects, flexible accommodation, tutoring with older students, and the development of socio-emotional learning. A specialized educator also supported the teacher in the class for behavioural and socio-affective aspects. Objectives: The study aimed to assess the impacts of the grouping on all students, their academic motivation, and their socio-emotional adaptation. Method: A mixed method was used, combining a qualitative approach with a quantitative approach. Semi-directed interviews were conducted with the students (N = 18, 4 girls and 14 boys aged 8 to 9) and one of their parents (N = 18) at the end of the school year. Parents and students completed two questionnaires at the beginning and end of the school year: the Behavior Assessment System for Children-3, child or parent versions (BASC-3, Reynolds and Kamphaus, 2015), and the Academic Motivation Scale (Vallerand et al., 1993). Parents also completed the Multidimensional Student Life Satisfaction Scale (Huebner, 1994, adapted by Fenouillet et al., 2014) comprising three domains (school, friendships, and motivation). Mixed thematic analyses were carried out on the interview data using NVivo software. Related-samples Wilcoxon signed-rank tests were conducted on the questionnaire data.
Results: Different themes emerged from the students' comments, including a positive impact on school motivation or attitude toward school, improved school results, a reduction of their behavioural difficulties, and improvement of their social relations. These remarks were more frequent among 2E students. Most 2E students also noted an improvement in their academic performance. Most parents reported improvements in attitudes toward school and reductions in disruptive behaviours in the classroom. Some parents also observed changes in behaviours at home or in the socio-emotional well-being of their children, here again particularly the parents of 2E children. Analysis of the questionnaires revealed significant differences at the end of the school year, more specifically pertaining to identified extrinsic motivation, conduct problems, attention, emotional self-control, executive functioning, negative emotions, functional impairments, and satisfaction regarding friendships. These results indicate that this approach could benefit not only gifted and twice-exceptional students but also students not identified as gifted.
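The related-samples Wilcoxon test used for the pre/post questionnaire comparisons reduces to ranking the non-zero paired differences by absolute value. A minimal sketch of the test statistic, with invented scores and assuming no tied absolute differences (real implementations also assign mid-ranks to ties and compute a p-value), is:

```python
def wilcoxon_w(pre, post):
    """Related-samples Wilcoxon signed-rank statistic W = min(W+, W-).

    Zero differences are dropped. This sketch assumes the non-zero
    differences have distinct absolute values (no tie handling).
    """
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    # Rank the differences by absolute magnitude (ranks start at 1).
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    w_plus = sum(rank + 1 for rank, i in enumerate(order) if diffs[i] > 0)
    w_minus = sum(rank + 1 for rank, i in enumerate(order) if diffs[i] < 0)
    return min(w_plus, w_minus)
```

A small W relative to its null distribution indicates a systematic pre-to-post shift, which is how the significant end-of-year differences reported above would be flagged.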

Keywords: cluster grouping, elementary school, giftedness, mixed methods, twice exceptional students

Procedia PDF Downloads 52
22 Design Aspects for Developing a Microfluidics Diagnostics Device Used for Low-Cost Water Quality Monitoring

Authors: Wenyu Guo, Malachy O’Rourke, Mark Bowkett, Michael Gilchrist

Abstract:

Many devices for real-time monitoring of surface water have been developed in the past few years to provide early warning of pollution events and thereby efficiently reduce the risk of environmental damage. One of the most common detection methodologies is a colorimetric process, in which a container of fixed volume is filled with the target ions and reagents, which combine to form a colorimetric dye. The coloured product sensitively absorbs a radiation beam of a specific wavelength, and its absorbance is proportional to the concentration of the fully developed product, indicating the concentration of the target nutrients in the pre-mixed water sample. To achieve precise and rapid detection, channels with dimensions of the order of micrometers, i.e., microfluidic systems, have been developed and introduced into these diagnostic studies. Microfluidic technology greatly increases the surface-to-volume ratio and significantly reduces sample and reagent consumption. However, species transport in such miniaturized channels is limited by the low Reynolds numbers in these regimes: the flow is strongly laminar, and diffusion is the dominant mass transport process throughout the microfluidic channels. The objective of the present work has been to analyse the mixing behaviour and chemical kinetics in a stop-flow microfluidic device measuring nitrite concentrations in fresh water samples. In order to improve the temporal resolution of the nitrite microfluidic sensor, we have used computational fluid dynamics to investigate the influence that the effectiveness of mixing between the sample and reagent within a microfluidic device exerts on the time to completion of the resulting chemical reaction. This computational approach has been complemented by physical experiments.
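The low-Reynolds, diffusion-dominated regime described above can be checked with an order-of-magnitude estimate. The sketch below uses illustrative channel dimensions and flow speeds (not values from the paper) to compute the Reynolds number, the Péclet number, and the time for a small ion to diffuse across the channel:

```python
# Order-of-magnitude estimates for a water-filled microchannel.
# All parameter values are illustrative assumptions, not the device's.
rho = 1000.0   # water density, kg/m^3
mu  = 1.0e-3   # dynamic viscosity of water, Pa*s
U   = 1.0e-3   # mean flow speed, m/s (assumed)
w   = 100e-6   # channel width, m (assumed)
D   = 2.0e-9   # diffusivity of small ions in water, m^2/s

Re = rho * U * w / mu          # Reynolds number: inertia vs viscosity
Pe = U * w / D                 # Peclet number: advection vs diffusion
t_diff = (w / 2)**2 / (2 * D)  # time to diffuse across half the channel

print(f"Re     = {Re:.2f}")    # << 1: strongly laminar, no turbulent mixing
print(f"Pe     = {Pe:.0f}")    # >> 1: cross-stream mixing is diffusion-limited
print(f"t_diff = {t_diff:.2f} s")
```

With these assumed numbers Re is well below 1 while Pe is large, which is exactly why passive mixer geometries (square-wave, serpentine) are needed to shorten the effective diffusion distance.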
The kinetics of the Griess reaction, involving the conversion of sulphanilic acid to a diazonium salt by reaction with nitrite in acidic solution, were modelled using a laminar finite-rate chemical reaction model. Initially, a methodology was developed to assess the degree of mixing of the sample and reagent within the device. This enabled different designs of the mixing channel to be compared, such as straight, square-wave and serpentine geometries. Thereafter, the time to completion of the Griess reaction within a straight-mixing-channel device was modelled and the reaction time validated against experimental data. Further simulations compared the reaction time to effective mixing within the straight, square-wave and serpentine geometries. Results show that square-wave channels significantly improve the mixing effect and yield low standard deviations of the nitrite and reagent concentrations, whereas for straight-channel patterns the corresponding values are 2-3 orders of magnitude greater, and the streams are consequently less efficiently mixed. This has allowed us to design novel channel patterns for micro-mixers with more effective mixing that can be used to detect and monitor levels of nutrients, in particular nitrite, present in water samples. Future generations of water quality monitoring and diagnostic devices will readily exploit this technology.

Keywords: nitrite detection, computational fluid dynamics, chemical kinetics, mixing effect

Procedia PDF Downloads 180
21 Optimization of Territorial Spatial Functional Partitioning in Coal Resource-based Cities Based on Ecosystem Service Clusters - The Case of Gujiao City in Shanxi Province

Authors: Gu Sihao

Abstract:

The coordinated development of "ecology-production-life" in cities has received strong national attention, and the transformation and sustainable development of resource-based cities have become a prominent research topic. Coal resource-based cities form an important part of China's resource-based cities and are numerous and widely distributed. However, owing to the adjustment of the national energy structure and the gradual exhaustion of urban coal resources, the development vitality of coal resource-based cities is steadily declining. Many studies find that, because economic growth has been emphasized over ecology, the deterioration of the ecological environment in coal resource-based cities has become the main problem restricting their transformation and sustainable development. Since the 18th National Congress of the Communist Party of China (CPC), the central government has been deepening territorial spatial planning and development: on the premise of optimizing the territorial spatial development pattern, it has completed the demarcation of ecological protection red lines and carried out ecological zoning and ecosystem evaluation, which have become an important basis and scientific guarantee for ecological modernization and the construction of an ecological civilization. Understanding a region's multiple ecosystem services is a precondition for ecosystem management. Ecosystem service clusters can identify the interactions among multiple ecosystem services, and delineating regional ecological functional zones on the basis of cluster characteristics enables better management of social-ecological systems. Based on this understanding, this study optimizes the territorial spatial functional zoning of Gujiao, a coal resource-based city, in order to provide a new theoretical basis for its sustainable development.
Building on a detailed analysis of the characteristics and utilization of Gujiao's territorial space, this study uses SOFM (self-organizing feature map) neural networks to identify local ecosystem service clusters and delineates ecological functional zones according to the extent and function of each cluster, balancing and coordinating the strengths of the different ecosystem services within each partition. Relationships between the clusters and land use are then established, and the territorial spatial functions within each zone are adjusted accordingly. Next, taking the characteristics of a coal resource-based city and its territorial spatial functional zoning as driving factors of land change, a cellular automata simulation is used to model the city's future development trends under different restoration strategies. The study thereby provides theories and technical methods for the "three lines" demarcation in Gujiao's territorial spatial planning, optimizes territorial spatial functions, and puts forward targeted strategies for enhancing regional ecosystem services, offering theoretical support for improving human well-being and for the sustainable development of resource-based cities.
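A SOFM clusters spatial units by mapping their ecosystem service profiles onto a small grid of nodes, each of which then corresponds to a candidate functional zone. A minimal NumPy sketch of this idea, using synthetic data (the paper's actual indicators, training parameters, and GeoSOS-FLUS coupling are not reproduced):

```python
import numpy as np

# Synthetic data: 200 spatial units x 4 ecosystem service indicators
# (e.g., water yield, carbon storage, food production, habitat quality),
# normalized to [0, 1]. Real inputs would come from spatial analysis.
rng = np.random.default_rng(0)
X = rng.random((200, 4))

# 2x2 output grid -> four candidate functional zones.
grid = [(i, j) for i in range(2) for j in range(2)]
W = rng.random((4, 4))          # one weight vector per output node

lr, sigma = 0.5, 1.0            # learning rate and neighbourhood width
for epoch in range(50):
    for x in rng.permutation(X):
        # Best-matching unit: node whose weights are closest to x.
        bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))
        bi, bj = grid[bmu]
        for k, (i, j) in enumerate(grid):
            d2 = (i - bi) ** 2 + (j - bj) ** 2      # distance on the grid
            h = np.exp(-d2 / (2 * sigma ** 2))      # neighbourhood kernel
            W[k] += lr * h * (x - W[k])             # pull node toward x
    lr *= 0.95                                      # decay schedules
    sigma *= 0.95

# Assign each spatial unit to its best-matching node = service cluster.
dists = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1)
clusters = np.argmin(dists, axis=1)
print(np.bincount(clusters, minlength=4))           # units per cluster
```

Each resulting cluster groups spatial units with similar service bundles, which is the basis on which functional zones are delineated before land-use relationships are examined.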

Keywords: coal resource-based city, territorial spatial planning, ecosystem service cluster, GMOP model, GeoSOS-FLUS model, functional zoning optimization and upgrading

Procedia PDF Downloads 38