Search results for: parallel operating generators
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3592

472 Reliable and Error-Free Transmission through Multimode Polymer Optical Fibers in House Networks

Authors: Tariq Ahamad, Mohammed S. Al-Kahtani, Taisir Eldos

Abstract:

Optical communications technology has made enormous and steady progress for several decades, providing the key resource in our increasingly information-driven society and economy. Much of this progress has been in finding innovative ways to increase the data-carrying capacity of a single optical fiber. In this research article we explore basic issues of security and reliability for secure and reliable information transfer through the fiber infrastructure. Conspicuously, one potentially enormous source of improvement has been left untapped in these systems: fibers can easily support hundreds of spatial modes, but today’s commercial systems (single-mode or multi-mode) make no attempt to use these as parallel channels for independent signals. Bandwidth, performance, reliability, cost efficiency, resiliency, redundancy, and security are some of the demands placed on telecommunications today. Since its initial development, fiber optic technology has held the advantage over copper-based and wireless telecommunications solutions on most of these requirements. The largest obstacle preventing most businesses from implementing fiber optic systems was cost. With recent advancements in fiber optic technology and the ever-growing demand for more bandwidth, the cost of installing and maintaining fiber optic systems has dropped dramatically. With so many advantages, including cost efficiency, fiber optic systems will continue to replace copper-based communications. This will also lead to an increase in the expertise and technology that intruders need to tap into fiber optic networks. As with every technology before it, fiber optics is subject to hacking and criminal manipulation.
Research into fiber optic security vulnerabilities suggests that not everyone responsible for a network's security is aware of the different methods intruders use to hack, virtually undetected, into fiber optic cables. With millions of miles of fiber optic cables stretching across the globe and carrying information including, but certainly not limited to, government, military, and personal information such as medical records, banking information, driving records, and credit card information, being aware of fiber optic security vulnerabilities is essential and critical. Many articles and studies still suggest that tapping fiber optics is expensive, impractical, and hard to do; others argue that it is not only easily done but also inexpensive. This paper briefly discusses the history of fiber optics, explains the basics of fiber optic technologies, and then discusses the vulnerabilities in fiber optic systems and how they can be better protected. Knowing the security risks and the options available may save a company a great deal of embarrassment, time, and, most importantly, money.

Keywords: in-house networks, fiber optics, security risk, money

Procedia PDF Downloads 405
471 35 MHz Coherent Plane Wave Compounding High Frequency Ultrasound Imaging

Authors: Chih-Chung Huang, Po-Hsun Peng

Abstract:

Ultrasound transient elastography has become a valuable tool for many clinical diagnoses, such as liver diseases and breast cancer. Pathological tissue can be distinguished by elastography because its stiffness differs from that of the surrounding normal tissue. An ultrafast frame rate of ultrasound imaging is needed for the transient elastography modality. However, the elastography obtained in an ultrafast system suffers from low resolution, which affects the robustness of the transient elastography. To overcome these problems, a coherent plane wave compounding technique has been proposed for conventional ultrasound systems, whose operating frequencies are around 3-15 MHz. The purpose of this study is to develop a novel beamforming technique for high frequency ultrasound coherent plane-wave compounding imaging; the simulated results will provide the standards for hardware development. Plane-wave compounding imaging produces a series of low-resolution images by firing all elements of an array transducer in one shot at different inclination angles, receiving the echoes by conventional beamforming, and compounding them coherently. Simulations of plane-wave compounding images and focused transmit images were performed using Field II. All images were produced from point spread functions (PSFs) and cyst phantoms with a 64-element linear array working at a 35 MHz center frequency, 55% bandwidth, and a pitch of 0.05 mm. The F-number was 1.55 in all simulations. The PSFs and cyst phantoms were simulated using single-angle, 17-angle, and 43-angle plane wave transmission (adjacent plane waves separated by 0.75 degrees), as well as focused transmission. The resolution and contrast of the image improved with the number of plane-wave firing angles. The lateral resolutions of the different methods were measured by the -10 dB lateral beam width.
Comparing the plane-wave compounding image with the focused transmit image, both exhibited the same lateral resolution of 70 um when 37 angles were compounded, and the lateral resolution reached 55 um when 47 angles were compounded. All the results show the potential of using high-frequency plane-wave compound imaging for resolving the elastic properties of microstructured tissue, such as the eye, skin, and vessel walls, in the future.
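The compounding step described above amounts to a coherent (complex, phase-preserving) sum of the per-angle low-resolution images, taken before envelope detection. A minimal NumPy sketch, with array names, dimensions, and the random test data assumed purely for illustration:

```python
import numpy as np

def coherent_compound(lowres_images):
    """Coherently sum per-angle beamformed images.

    lowres_images: complex array of shape (n_angles, depth, lateral),
    one beamformed image per plane-wave transmit angle. The sum is
    taken before envelope detection, so phase is preserved (this is
    what sharpens the point spread function as angles are added).
    """
    compounded = np.sum(lowres_images, axis=0)
    envelope = np.abs(compounded)               # envelope detection
    # Log compression for B-mode display, normalized to 0 dB peak
    bmode = 20 * np.log10(envelope / envelope.max() + 1e-12)
    return bmode

# Example: 17 angles on a 512 x 64 pixel grid of synthetic complex data
rng = np.random.default_rng(0)
imgs = rng.standard_normal((17, 512, 64)) + 1j * rng.standard_normal((17, 512, 64))
out = coherent_compound(imgs)
print(out.shape)  # (512, 64)
```

In a real implementation each of the per-angle images would itself come from delay-and-sum beamforming of the received echoes, and the angle-dependent transmit delays would be compensated before the sum.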

Keywords: plane wave imaging, high frequency ultrasound, elastography, beamforming

Procedia PDF Downloads 515
470 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities

Authors: Shaurya Chauhan, Sagar Gupta

Abstract:

Prominent urbanizing centres across the globe like Delhi, Dhaka, or Manila have exhibited that development often faces a challenge in bridging the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of the ever-diversifying population. When this exclusion is intertwined with rapid urbanization and diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automated responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application with its prototypical nature and an inclusive approach with mediation between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a three-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region’s cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature.
This framework, based on a vision – feedback – execution loop, is used for hypothetical development at the five prevalent scales in design, in chronological order: master planning, urban design, architecture, tectonics, and modularity. At each of these scales, the possible approaches and avenues for open-sourcing are identified, validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and the Five-Point Mental Map by Kevin Lynch, among others, are deeply rooted in the research process. In addition to the five-part OSPDF, a two-part subsidiary process is suggested after each cycle of application, for continued appraisal and refinement of the framework and urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.

Keywords: open source, public participation, urbanization, urban development

Procedia PDF Downloads 131
469 Experimental Evaluation of Contact Interface Stiffness and Damping to Sustain Transients and Resonances

Authors: Krystof Kryniski, Asa Kassman Rudolphi, Su Zhao, Per Lindholm

Abstract:

ABB offers a range of turbochargers for diesel and gas engines from 500 kW to 80+ MW. These operate on ships, in power stations, in generator sets, in diesel locomotives, and in large off-highway vehicles. The units need to sustain harsh operating conditions and exposure to high speeds, high temperatures, and varying loads. They are expected to work at over-critical speeds, effectively damping any transients and encountered resonances. Components are often connected via friction joints, and the designs of those interfaces need to account for surface roughness, texture, pre-stress, etc. to sustain against fretting fatigue. Field experience contributed valuable input on component performance in harsh sea environments and on exposure to high temperature, speed, and load conditions. A study of the tribological interactions of oxide formations provided insight into the dynamic activities occurring between the surfaces; oxidation was recognized as the dominant wear factor. Microscopic inspections of fatigue cracks on turbines indicated insufficient damping and unrestrained structural stress leading to catastrophic failure if not prevented in time. The contact interface exhibits a strongly non-linear mechanism, and a piecewise approach was used to describe it. A set of samples representing combinations of materials, texture, surface and heat treatment were tested on a friction rig under a range of loads, frequencies, and excitation amplitudes. A numerical technique was developed to extract the friction coefficient, tangential contact stiffness, and damping. The vast amount of experimental data was processed with the multi-harmonic balance (MHB) method to categorize the components subjected to periodic excitations. At the pre-defined excitation level, both force and displacement formed semi-elliptical hysteresis curves having the same area and secant as the actual ones.
By cross-correlating the in-phase and out-of-phase terms, respectively, it was possible to separate the elastic energy from the dissipation and to derive the stiffness and damping characteristics.
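The separation described above can be sketched numerically: correlating the measured force against the in-phase and quadrature components of a harmonic displacement yields the tangential stiffness (elastic part) and the equivalent viscous damping (dissipative part, proportional to the hysteresis loop area). A sketch with synthetic signals; all parameter values are assumed for illustration:

```python
import numpy as np

# Synthetic harmonic test: x(t) = X sin(wt), f(t) = k*x + c*dx/dt
k_true, c_true = 2.0e6, 150.0      # stiffness [N/m], damping [N.s/m] (assumed)
X, w = 1e-5, 2 * np.pi * 100.0     # amplitude [m], frequency [rad/s] (assumed)
t = np.linspace(0, 0.01, 10000, endpoint=False)   # exactly one period at 100 Hz
x = X * np.sin(w * t)
f = k_true * x + c_true * w * X * np.cos(w * t)

# Fourier coefficients of the force at the excitation frequency:
# the in-phase part correlates with sin(wt), the quadrature with cos(wt)
in_phase = 2 * np.mean(f * np.sin(w * t))   # = k * X  (elastic, secant slope)
quad = 2 * np.mean(f * np.cos(w * t))       # = c * w * X  (dissipative)
k_est = in_phase / X
c_est = quad / (w * X)
print(k_est, c_est)  # recovers k_true and c_true
```

For a real friction joint the loop is not a perfect ellipse, which is why the abstract's multi-harmonic balance method fits an equivalent ellipse with the same area (dissipated energy) and secant (stiffness).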

Keywords: contact interface, fatigue, rotor-dynamics, torsional resonances

Procedia PDF Downloads 361
468 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which in most cases conflicts with the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems.
The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with respective historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
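Subset Accuracy, the evaluation criterion named above, is the strictest multi-label metric: a sample only counts as correct if its entire predicted label set matches the true set exactly. A minimal sketch (label matrices and values are illustrative, not the paper's data):

```python
import numpy as np

def subset_accuracy(y_true, y_pred):
    """Fraction of samples whose full predicted label set exactly
    matches the true label set. Under this criterion a test step
    counts as correct only if every predicted automation component
    is right -- partial matches score zero for that sample."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.all(y_true == y_pred, axis=1)))

# Toy example: 4 test steps, 3 candidate automation components
y_true = [[1, 0, 0], [0, 1, 1], [1, 1, 0], [0, 0, 1]]
y_pred = [[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
print(subset_accuracy(y_true, y_pred))  # 0.75: one step misses a component
```

This strictness explains why accuracy drops from 86% in the 1:1 (multi-class) setting to 60% in the true multi-label setting: a single wrong component in a multi-component step invalidates the whole prediction.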

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 113
467 Adaptability in Older People: A Mixed Methods Approach

Authors: V. Moser-Siegmeth, M. C. Gambal, M. Jelovcak, B. Prytek, I. Swietalsky, D. Würzl, C. Fida, V. Mühlegger

Abstract:

Adaptability is the capacity to adjust without great difficulty to changing circumstances. Within our project, we aimed to detect whether older people living in a long-term care hospital lose the ability to adapt. Theoretical concepts are contradictory in their statements, and there is also a lack of evidence in the literature on how the adaptability of older people changes over time. The following research questions were generated: Are older residents of a long-term care facility able to adapt to changes within their daily routine? How long does it take for older people to adapt? The study was designed as a convergent parallel mixed-methods intervention study, carried out over a four-month period across seven wards of a long-term care hospital. As a planned intervention, a change of meal times was established. The inhabitants were surveyed with qualitative interviews and quantitative questionnaires and diaries before, during, and after the intervention. In addition, a survey of the nursing staff was carried out in order to detect changes in the people they care for and how long it took them to adapt. Quantitative data were analysed with SPSS, qualitative data with a summarizing content analysis. The average age of the involved residents was 82 years, and the average length of stay was 45 months. Adaptation to new situations does not cause problems for older residents: 47% of the residents state that their everyday life has not changed by changing the meal times, 24% indicate ‘neither nor’, and only 18% respond that their daily life has changed considerably due to the changeover. The diaries of the residents, which were kept over the entire period of investigation, showed no changes with regard to increased or reduced activity. With regard to sleep quality, assessed with the Pittsburgh Sleep Quality Index, the cross-tabulation shows little change in sleep behaviour between the two survey periods (pre-phase to follow-up phase).
The subjective sleep quality of the residents is not affected. The nursing staff points out that, with good information in advance, changes are not a problem. The ability to adapt to changes does not deteriorate with age or by moving into a long-term care facility; it only takes a few days to get used to new situations. This is confirmed by the nursing staff, although there are determinants, such as health status, that might make adjustment to new situations more difficult. Among the limitations, the small sample size of the quantitative data collection must be emphasized, as must the question of how far the quantitative and qualitative samples represent the total population, since only residents of selected units without cognitive impairments participated; the majority of the residents have cognitive impairments. It is also important to discuss whether and how well the diary method is suitable for older people to examine their daily structure.

Keywords: adaptability, intervention study, mixed methods, nursing home residents

Procedia PDF Downloads 129
466 Regional Rates of Sand Supply to the New South Wales Coast: Southeastern Australia

Authors: Marta Ribo, Ian D. Goodwin, Thomas Mortlock, Phil O’Brien

Abstract:

Coastal behavior is best investigated using a sediment budget approach, based on the identification of sediment sources and sinks. Grain size distribution over the New South Wales (NSW) continental shelf has been widely characterized since the 1970s. Coarser sediment has generally accumulated on the outer shelf and/or nearshore zones, with the latter related to the presence of nearshore reefs and bedrock. The central part of the NSW shelf is characterized by the presence of fine sediments distributed parallel to the coastline. This study presents new grain size distribution maps along the NSW continental shelf, built using all available NSW and Commonwealth Government holdings. All available seabed bathymetric data from prior projects, single- and multibeam sonar, and aerial LiDAR surveys were integrated into a single bathymetric surface for the NSW continental shelf. Grain size information was extracted from sediment sample data collected in more than 30 studies. The information extracted from the sediment collections varied between reports; given this inconsistency of the grain size data, a common grain size classification was defined here using the phi scale. The new sediment distribution maps, together with new detailed seabed bathymetric data, enabled us to revise the delineation of sediment compartments to more accurately reflect the true nature of sediment movement on the inner shelf and nearshore. Accordingly, nine primary mega coastal compartments were delineated along the NSW coast and shelf. The sediment compartments are bounded by prominent nearshore headlands and reefs, and by major river and estuarine inlets that act as sediment sources and/or sinks. The new sediment grain size distribution was used as an input to morphological modelling to quantify the sediment transport patterns (and indicative rates of transport) used to investigate sand supply rates and processes from the lower shoreface to the NSW coast.
The rate of sand supply to the NSW coast from deep water is a major uncertainty in projecting future coastal response to sea-level rise. Offshore transport of sand is generally expected as beaches respond to rising sea levels, but an onshore supply from the lower shoreface has the potential to offset some of the impacts of sea-level rise, such as coastline recession. Sediment exchange between the lower shoreface and the sub-aerial beach has been modelled across the south, central, mid-north, and far-north coasts of NSW. Our modelling approach assumes that high-energy storm events are the primary agents of sand transport in deep water, while non-storm conditions are responsible for re-distributing sand within the beach and surf zone.
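The phi scale used above for the common grain size classification is the standard logarithmic sedimentological measure, phi = -log2(d) with d in millimetres, so coarser grains map to lower (more negative) phi values. A minimal sketch using the conventional Wentworth class boundaries:

```python
import math

def to_phi(d_mm):
    """Convert grain diameter in millimetres to the phi scale:
    phi = -log2(d). Halving the diameter adds 1 to phi, so one
    phi unit spans one Wentworth size class."""
    return -math.log2(d_mm)

# Wentworth boundaries: sand spans 2 mm down to 0.0625 mm,
# i.e. phi -1 (very coarse sand limit) to phi 4 (silt boundary)
for d in (2.0, 0.25, 0.0625):
    print(f"{d} mm -> phi {to_phi(d):.1f}")
```

Expressing all the heterogeneous legacy grain-size reports on this single scale is what makes samples from 30+ studies directly comparable.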

Keywords: New South Wales coast, off-shore transport, sand supply, sediment distribution maps

Procedia PDF Downloads 216
465 Algal/Bacterial Membrane Bioreactor for Bioremediation of Chemical Industrial Wastewater Containing 1,4 Dioxane

Authors: Ahmed Tawfik

Abstract:

Oxidation of 1,4 dioxane produces metabolite by-products, including glycolaldehyde and acids, that have genotoxic and cytotoxic impacts on microbial degradation. Incorporating algae with bacteria in the treatment system would therefore eliminate the accumulation of these metabolites, which are utilized as a carbon source for the build-up of biomass. The aim of the present study is thus to assess the potential of an algae/bacteria-based membrane bioreactor (AB-MBR) for the biodegradation of 1,4 dioxane-rich wastewater at a high imposed loading rate. Three identical reactors, i.e., AB-MBR1, AB-MBR2, and AB-MBR3, were operated in parallel at 1,4 dioxane loading rates of 641.7, 320.9, and 160.4 mg/L.d and HRTs of 6.0, 12, and 24 h, respectively. The AB-MBR1 achieved a 1,4 dioxane removal rate of 263.7 mg/L.d, where the residual value in the treated effluent amounted to 94.4±22.9 mg/L. Reducing the 1,4 dioxane loading rate (LR) to 320.9 mg/L.d in the AB-MBR2 maximized the removal rate at 265.9 mg/L.d, with a removal efficiency of 82.8±3.2%. The minimum 1,4 dioxane value of 17.3±1.8 mg/L in the treated effluent of AB-MBR3 was obtained at an HRT of 24.0 h and a loading rate of 160.4 mg/L.d. The mechanism of 1,4 dioxane degradation in the AB-MBR was a combination of volatilization (8.03±0.6%), UV oxidation (14.1±0.9%), microbial biodegradation (49.1±3.9%), and absorption/uptake and assimilation by algae (28.8±2.0%). Further, the Thioclava, Afipia, and Mycobacterium genera oxidized the dioxane and produced the enzymes required for hydrolysis and cleavage of the dioxane ring into 2-hydroxy-1,4 dioxane. Moreover, the fungi, i.e., Basidiomycota and Cryptomycota, played a major role in the degradation of 1,4 dioxane into 2-hydroxy-1,4 dioxane. Xanthobacter and Mesorhizobium were involved in the metabolism process by secreting alcohol dehydrogenase (ADH), aldehyde dehydrogenase (ALDH), and glycolate oxidase.
Bacteria and fungi produced dehydrogenase (DH) for the transformation of 2-hydroxy-1,4 dioxane into 2-hydroxy-ethoxyacetaldehyde. The latter is converted into ethylene glycol by aldehyde dehydrogenase (ALDH), and ethylene glycol is oxidized into acids by alcohol dehydrogenase (ADH). The Diatomea, Chlorophyta, and Streptophyta utilize the metabolites for biomass assimilation and produce the oxygen required for further oxidation of the dioxane and its metabolite by-products by bacteria and fungi. The major portion of the metabolites (ethylene glycol, glycolic acid, and oxalic acid) was removed by uptake and absorption by algae (43±4.3%), followed by adsorption (18.4±0.9%). The contributions of volatilization and UV oxidation to the degradation of metabolites were 8.7±0.7% and 12.3±0.8%, respectively. The genera Defluviimonas, Thioclava, Luteolibacter, and Mycobacterium grew under a high 1,4 dioxane LR of 641.7 mg/L.d. The Chlorophyta (4.1-43.6%), Streptophyta (2.5-21.7%), and Diatomea (0.8-1.4%) phyla were dominant in the degradation of 1,4 dioxane. The results of this study strongly demonstrate that the bioremediation and bioaugmentation process can safely remove 1,4 dioxane from industrial wastewater while minimizing environmental concerns and reducing economic costs.
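The three loading rates quoted above are consistent with a single influent concentration fed at the three HRTs, since the volumetric loading rate is LR = C_in × 24 / HRT (mg/L.d for C_in in mg/L and HRT in hours). A quick arithmetic check; the influent concentration is inferred from the reported 24 h figure, so treat it as an assumption rather than a value stated in the abstract:

```python
def loading_rate(c_in_mg_per_l, hrt_h):
    """Volumetric loading rate in mg/L.d for an influent
    concentration c_in (mg/L) fed at a hydraulic retention
    time hrt (hours): LR = c_in * (24 h/d) / hrt."""
    return c_in_mg_per_l * 24.0 / hrt_h

c_in = 160.4  # mg/L, inferred from the 24 h reactor's loading rate
for hrt in (6.0, 12.0, 24.0):
    print(f"HRT {hrt:>4.0f} h -> LR {loading_rate(c_in, hrt):.1f} mg/L.d")
```

The computed 641.6, 320.8, and 160.4 mg/L.d match the reported 641.7, 320.9, and 160.4 mg/L.d within rounding, confirming that halving the HRT doubles the imposed load at fixed influent strength.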

Keywords: wastewater, membrane bioreactor, bacterial community, algal community

Procedia PDF Downloads 32
464 Solar-Thermal-Electric Stirling Engine-Powered System for Residential Units

Authors: Florian Misoc, Cyril Okhio, Joshua Tolbert, Nick Carlin, Thomas Ramey

Abstract:

This project is focused on designing a Stirling engine system for a solar-thermal-electrical system that can supply electric power to a single residential unit. Since Stirling engines are heat engines that can operate from any available heat source, they are notable for their ability to generate clean and reliable energy without emissions. Given the need to find alternative energy sources, Stirling engines are making a comeback with recent technologies, which include thermal energy conservation during the heat transfer process. Recent reviews show mounting evidence and positive test results that Stirling engines are able to produce a constant energy supply ranging from 5 kW to 20 kW. Solar power is one of the many heat sources for Stirling engines, and using solar energy to operate them is an idea considered by many researchers due to the engine's ease of adaptability. In this project, the Stirling engine developed was designed and tested to operate from a biomass energy source, i.e., a wood pellet stove, during low solar radiation, with good results. An engine efficiency of 20% was estimated and 18% was measured, making it suitable and appropriate for residential applications. The reported effort was aimed at exploring the parameters necessary to design, build, and test a ‘Solar Powered Stirling Engine (SPSE)’ using water (H₂O) as the heat transfer medium and nitrogen as the working gas, which can reach or exceed an efficiency of 20%. The main objectives of this work consisted of converting a V-twin cylinder air compressor into an alpha-type Stirling engine, constructing a solar water heater by using an automotive radiator as the high-temperature reservoir for the Stirling engine, and building an array of fixed mirrors that concentrate the solar radiation on the automotive radiator/high-temperature reservoir. The low-temperature reservoir is the surrounding air at ambient temperature.
This work determined that a low-cost system is sufficiently efficient and reliable. Off-the-shelf components were used, and estimates of the final engine design's ability to meet the electricity needs of a small residence were determined.

Keywords: stirling engine, solar-thermal, power inverter, alternator

Procedia PDF Downloads 257
463 Barriers to Entry: The Pitfall of Charter School Accountability

Authors: Ian Kingsbury

Abstract:

The rapid expansion of charter schools (public schools that receive government funding but do not face the same regulations as traditional public schools) over the preceding two decades has raised concerns over the potential for graft and fraud. These concerns are largely justified: incidents of financial crime and mismanagement are not unheard of, and the charter sector has become a darling of hedge fund managers. In response, several states have strengthened their charter school regulatory regimes. Imposing regulations and attempting to increase accountability seem like sensible measures, and perhaps they are necessary. However, increased regulation may come at the cost of imposing barriers to entry. Specifically, increased regulation often requires evidence of a high likelihood of fiscal solvency, which theoretically entails access to capital in the short term and may systematically preclude Black or Hispanic applicants from opening charter schools. Moreover, increased regulation necessarily entails more red tape. The institutional wherewithal and the number of hours required to complete an application to open a charter school might favor those who have partnered with an education service provider, specifically a charter management organization (CMO) or education management organization (EMO). These potential barriers to entry pose a significant policy concern. Just as policymakers hope to increase the share of minority teachers and principals, they should sensibly care whether the individuals who open charter schools look like the students in those schools. Moreover, they might be concerned if successful applications in states with stringent regulations are overwhelmingly affiliated with education service providers. One of the original missions of charter schools was to serve as a laboratory of innovation.
Approving only applications affiliated with education service providers (in effect establishing a parallel network of schools rather than a diverse marketplace of schools) undermines that mission. Data and methods: The analysis examines more than 2,000 charter school applications from 15 states. It compares the outcomes of applications from states with a strong regulatory environment (those with high scores from NACSA, the National Association of Charter School Authorizers) to applications from states with a weak regulatory environment (those with a low NACSA score). If the hypothesis is correct, applicants not affiliated with an education service provider (ESP) are more likely to be rejected in high-regulation states than those affiliated with an ESP, and minority candidates not affiliated with an ESP are particularly likely to be rejected. Initial returns indicate that the hypothesis holds. More applications in low-NACSA-scoring Arizona come from individuals not associated with an ESP, and those individuals are as likely to be accepted as those affiliated with an ESP. On the other hand, applicants in high-NACSA-scoring Indiana and Ohio are more than 20 percentage points more likely to be accepted if they are affiliated with an ESP, and the effect is particularly pronounced for minority candidates. These findings should spur policymakers to consider the drawbacks of charter school accountability and to consider accountability regimes that do not impose barriers to entry.

Keywords: accountability, barriers to entry, charter schools, choice

Procedia PDF Downloads 135
462 Isolate-Specific Variations among Clinical Isolates of Brucella Identified by Whole-Genome Sequencing, Bioinformatics and Comparative Genomics

Authors: Abu S. Mustafa, Mohammad W. Khan, Faraz Shaheed Khan, Nazima Habibi

Abstract:

Brucellosis is a zoonotic disease of worldwide prevalence. There are at least four species and several strains of Brucella that cause human disease. Brucella genomes have very limited variation across strains, which hinders strain identification using classical molecular techniques, including PCR and 16S rDNA sequencing. The aim of this study was to perform whole genome sequencing of clinical isolates of Brucella and to carry out bioinformatics and comparative genomics analyses to determine whether genetic differences exist across isolates of a single Brucella species and strain. Draft sequence data were generated from 15 clinical isolates of Brucella melitensis (biovar 2, strain 63/9) using the MiSeq next-generation sequencing platform, and the generated reads were used for further assembly and analysis. All analyses were performed on a bioinformatics workstation (8-core i7 processor, 8 GB RAM, Bio-Linux operating system). FastQC was used to determine the quality of the reads, and low-quality reads were trimmed or eliminated using Fastx_trimmer. Assembly was done using the Velvet and ABySS software. The ordering of assembled contigs was performed with Mauve, and the online server RAST was employed to annotate the contig assemblies. Annotated genomes were compared using the Mauve and ACT tools. The QC score for the DNA sequence data generated by MiSeq was higher than 30 for 80% of the reads, with more than 100x coverage, which suggested that the data could be utilized for further analysis. However, when analyzed by FastQC, the quality of four read sets was not good enough for creating a complete genome draft, so the remaining 11 samples were used for further analysis. The comparative genome analyses showed that, despite sharing the same gene sets, single nucleotide polymorphisms and insertions/deletions existed across the different genomes, giving these bacteria a variable extent of diversity.
In conclusion, next-generation sequencing, bioinformatics, and comparative genome analysis can be utilized to find variations (point mutations, insertions, and deletions) across different genomes of Brucella within a single strain. This information could be useful in surveillance and epidemiological studies. This work was supported by Kuwait University Research Sector grants MI04/15 and SRUL02/13.
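The quality screen described above (keep only reads averaging Q30 or better before assembly) can be sketched in a few lines. This is an illustrative toy, not the study's FastQC/Fastx_trimmer workflow; the function names and the tiny in-memory FASTQ example are ours.

```python
# Sketch of the read-quality screening step: compute each read's mean
# Phred+33 quality from FASTQ text and keep reads averaging >= Q30.

def mean_phred(qual_line: str) -> float:
    """Mean Phred+33 quality of one FASTQ quality string."""
    scores = [ord(c) - 33 for c in qual_line]
    return sum(scores) / len(scores)

def filter_reads(fastq_text: str, threshold: int = 30):
    """Return (header, sequence) pairs for reads meeting the threshold."""
    lines = fastq_text.strip().splitlines()
    kept = []
    for i in range(0, len(lines), 4):           # FASTQ records are 4 lines
        header, seq, _plus, qual = lines[i:i + 4]
        if mean_phred(qual) >= threshold:
            kept.append((header, seq))
    return kept

example = (
    "@read1\nACGT\n+\nIIII\n"   # 'I' encodes Phred 40 -> kept
    "@read2\nACGT\n+\n!!!!\n"   # '!' encodes Phred 0  -> dropped
)
good = filter_reads(example)
```

In the study the analogous decision was made per sample rather than per read, which is how 4 of the 15 isolates were excluded.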

Keywords: brucella, bioinformatics, comparative genomics, whole genome sequencing

Procedia PDF Downloads 359
461 The Role of Piceatannol in Counteracting Glyceraldehyde-3-Phosphate Dehydrogenase Aggregation and Nuclear Translocation

Authors: Joanna Gerszon, Aleksandra Rodacka

Abstract:

In the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease, protein and peptide aggregation processes play a vital role, contributing to the formation of intracellular and extracellular protein deposits. One of the major components of these deposits is oxidatively modified glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Therefore, the purpose of this research was to answer the question of whether piceatannol, a stilbene derivative, counteracts and/or slows down oxidative stress-induced GAPDH aggregation. The study also aimed to determine whether this naturally occurring compound prevents unfavorable nuclear translocation of GAPDH in hippocampal cells. Isothermal titration calorimetry (ITC) analysis indicated that one molecule of GAPDH can bind up to 8 molecules of piceatannol (7.3 ± 0.9). As a consequence of piceatannol binding to the enzyme, a loss of activity was observed. In parallel with GAPDH inactivation, changes in zeta potential and a loss of free thiol groups were noted. Nevertheless, ligand-protein binding does not influence the secondary structure of GAPDH. Precise molecular docking analysis of the interactions inside the active center suggested that these effects are due to piceatannol's ability to form a covalent bond with the nucleophilic cysteine residue (Cys149) that is directly involved in the catalytic reaction. Molecular docking also showed that 11 molecules of the ligand can be bound to the dehydrogenase simultaneously. Taking the obtained data into consideration, the influence of piceatannol on the level of GAPDH aggregation induced by excessive oxidative stress was examined. The applied methods (thioflavin-T binding-dependent fluorescence as well as microscopy methods: transmission electron microscopy and Congo red staining) revealed that piceatannol significantly diminishes the level of GAPDH aggregation.
Finally, studies involving a cellular model (Western blot analyses of nuclear and cytosolic fractions and confocal microscopy) indicated that piceatannol-GAPDH binding prevents GAPDH from nuclear translocation induced by excessive oxidative stress in hippocampal cells and, in consequence, counteracts cell apoptosis. These studies demonstrate that by binding to GAPDH, piceatannol blocks the cysteine residue and counteracts its oxidative modifications, which induce oligomerization and GAPDH aggregation, and that it prevents hippocampal cells from apoptosis by retaining GAPDH in the cytoplasm. All these findings provide new insight into the role of the piceatannol-GAPDH interaction and present a potential therapeutic strategy for some neurological disorders related to GAPDH aggregation. This work was supported by the National Science Centre, Poland (grant number 2017/25/N/NZ1/02849).

Keywords: glyceraldehyde-3-phosphate dehydrogenase, neurodegenerative disease, neuroprotection, piceatannol, protein aggregation

Procedia PDF Downloads 148
460 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools

Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami

Abstract:

The integrated development of urban design and power network planning has been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security concerns. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedure involved in this practice includes passive solar gain (in building design and urban design), solar integration, location strategy, and 3D models, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector coupling strategies for solar power establishment in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess solar power and inject it into the urban network during peak periods. The simulations and analyses were performed in the EnergyPlus software. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximizing the utilization of solar PV in an urban distribution feeder.
Additionally, 3D models are made in Revit; they are a key component of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce the strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
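The store-surplus-then-inject-at-peak behavior described above can be reduced to a one-line-per-hour energy balance. The 500 kW PV and 1 kWh BES figures follow the abstract; the hourly values and dispatch logic below are illustrative assumptions, not the EnergyPlus model.

```python
# Minimal per-hour dispatch sketch for one connection point: surplus PV
# charges the battery (spilling any excess), deficits drain the battery
# before importing from the grid.

PV_KW = 500.0    # installed PV capacity per connection point (from abstract)
BES_KWH = 1.0    # battery energy storage capacity (from abstract)

def dispatch(pv_kwh: float, load_kwh: float, soc: float, cap: float = BES_KWH):
    """One-hour energy balance: returns (grid_import_kwh, new_soc)."""
    net = pv_kwh - load_kwh
    if net >= 0:                       # surplus: charge battery, spill rest
        return 0.0, min(cap, soc + net)
    discharge = min(soc, -net)         # deficit: drain battery first
    return -net - discharge, soc - discharge

# A noon surplus charges the battery; the evening peak draws it down first.
imp_noon, soc = dispatch(pv_kwh=2.0, load_kwh=1.5, soc=0.0)
imp_peak, soc = dispatch(pv_kwh=0.0, load_kwh=1.5, soc=soc)
```

Running the two example hours gives zero grid import at noon and a peak import reduced by the stored 0.5 kWh.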

Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design

Procedia PDF Downloads 60
459 Gender Differences in Morbid Obese Children: Clinical Significance of Two Diagnostic Obesity Notation Model Assessment Indices

Authors: Mustafa M. Donma, Orkide Donma, Murat Aydin, Muhammet Demirkol, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu

Abstract:

Childhood obesity is an ever-increasing global health problem, affecting both developed and developing countries. Accurate evaluation of obesity in children requires difficult and detailed investigation. In our study, obesity in children was evaluated using new body fat ratios and indices. Assessment of anthropometric measurements, as well as some ratios, is important for evaluating gender differences, particularly during the late periods of obesity. A total of 239 children participated in the study: 168 morbid obese (MO) (81 girls and 87 boys) and 71 normal weight (NW) (40 girls and 31 boys) children. Informed consent forms signed by the parents were obtained, and the Ethics Committee approved the study protocol. Mean ages ± SD for the MO group were 10.8±2.9 years in girls and 10.1±2.4 years in boys; the corresponding values for the NW group were 9.0±2.0 years in girls and 9.2±2.1 years in boys. Mean body mass index (BMI) ± SD values for the MO group were 29.1±5.4 kg/m2 in girls and 27.2±3.9 kg/m2 in boys; for the NW group, they were 15.5±1.0 kg/m2 in girls and 15.9±1.1 kg/m2 in boys. Groups were constituted based upon the age- and sex-adjusted BMI percentiles recommended by WHO: children above the 99th percentile were grouped as MO, and children between the 15th and 85th percentiles were considered NW. The anthropometric measurements were recorded and evaluated along with new ratios such as the trunk-to-appendicular fat ratio, as well as indices such as Index-I and Index-II. Body fat percent values were obtained by bioelectrical impedance analysis. Data were entered into a database for analysis using the SPSS/PASW 18 Statistics for Windows statistical software. Increased waist-to-hip circumference (C) ratios and decreased head-to-neck C, (height/2)-to-waist C, and (height/2)-to-hip C ratios were observed in parallel with the development of obesity (p≤0.001).
The reference value for the (height/2)-to-hip C ratio was found to be approximately 1.0. Index-II, based upon total body fat mass, showed much more significant differences between the groups than Index-I, which is based upon weight. There was no difference between the trunk-to-appendicular fat ratios of NW girls and NW boys (p≥0.05); however, significantly increased values were observed for MO girls in comparison with MO boys (p≤0.05). This parameter showed no difference between the NW and MO states in boys (p≥0.05), whereas a statistically significant increase was noted in MO girls compared with NW girls (p≤0.001). The trunk-to-appendicular fat ratio was the only fat-based parameter that showed a gender difference between the NW and MO groups. This study has revealed that body ratios and formulas based upon body fat tissue are more valuable parameters than those based on weight and height values for the evaluation of morbid obesity in children.
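The weight- and circumference-based measures discussed above are simple arithmetic, which a short sketch makes concrete. The input values below are illustrative, not study data; only the ~1.0 reference value for the (height/2)-to-hip ratio comes from the abstract.

```python
# Worked example of two measures used in the study: BMI (kg/m^2) and the
# (height/2)-to-hip circumference ratio, whose reference value the study
# reports as approximately 1.0.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def half_height_to_hip(height_cm: float, hip_c_cm: float) -> float:
    """(height/2)-to-hip circumference ratio (dimensionless)."""
    return (height_cm / 2.0) / hip_c_cm

# Illustrative child: 50 kg, 140 cm tall, 70 cm hip circumference.
b = bmi(50.0, 1.40)               # ~25.5 kg/m^2
r = half_height_to_hip(140.0, 70.0)   # exactly 1.0 for these values
```

For this hypothetical child the ratio sits exactly at the ~1.0 reference value, while a larger hip circumference at the same height would push it below 1.0.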

Keywords: anthropometry, childhood obesity, gender, morbid obesity

Procedia PDF Downloads 307
458 The Organization of Multi-Field Hospital’s Work Environment in the Republic of Sakha, Yakutia

Authors: Inna Vinokurova, N. Savvina

Abstract:

The goal of the research was to study the organization of a multi-field hospital's work environment in the Republic of Sakha (Yakutia): the Autonomous Public Health Care Institution of the Republic of Sakha (Yakutia), Republican Hospital No. 1 - National Center of Medicine. Results: Republican Hospital No. 1 - National Center of Medicine is a multidisciplinary, specialized hospital complex that provides specialized and high-tech medical care to children and adults in the Republic of Sakha (Yakutia) of the Russian Federation. The National Center of Medicine comprises 5 diagnostic and treatment centers (advisory and diagnostic, clinical, pediatric, perinatal, and the Republican cardiologic dispensary) with 45 clinical specialized departments with 727 beds, 5 resuscitation departments, 20 operating rooms, and an out-patient department with a capacity of 905 visits per shift. Annually, more than 20,000 patients receive treatment in the hospital, more than 70,000 patients visit out-patient sections, more than 2 million diagnostic investigations are done, more than 12,000 surgeries are performed, and more than 2,000 babies are delivered. The National Center of Medicine strongly influences such population health indicators as total mortality, birth rate, maternal, infant and perinatal mortality, and circulatory system disease incidence. The work environment of the hospital is represented by the following structural departments: pharmacy, blood transfusion department, sterilization department, laundry, dietetic department, infant-feeding centre, and material and technical supply. More than 200 employees work in this service. The main function of these services is to provide an on-time and fail-safe supply of all necessities: wear parts, medical supplies, donated blood and its components, foodstuffs, hospital linen, sterile instruments, etc.
Thus, the activity of a medical organization, including the quality of health care it delivers, depends on its work environment, which is therefore a core part of a multi-field hospital's operation.

Keywords: organization of multi-field hospitals, work environment, quality health care, pharmacy, blood transfusion department, sterilization department

Procedia PDF Downloads 227
457 Model Organic Rankine Cycle Power Plant for Waste Heat Recovery in Olkaria-I Geothermal Power Plant

Authors: Haile Araya Nigusse, Hiram M. Ndiritu, Robert Kiplimo

Abstract:

Energy consumption is an indispensable component of the continued development of the human population, and global energy demand increases with development and population growth. The increase in energy demand, the high cost of fossil fuels, and the link between energy utilization and environmental impacts have resulted in the need for a sustainable approach to the utilization of low-grade energy resources. The Organic Rankine Cycle (ORC) power plant is an advantageous technology that can be applied to the generation of power from the low-temperature brine of geothermal reservoirs. The power plant utilizes a low-boiling organic working fluid such as a refrigerant or a hydrocarbon. Research indicates that the performance of an ORC power plant is highly dependent upon factors such as proper organic working fluid selection and the types of heat exchangers (condenser and evaporator) and turbine used. Despite a high pressure drop, shell-and-tube heat exchangers perform satisfactorily in ORC power plants. This study involved the design, fabrication, and performance assessment of the components of a model Organic Rankine Cycle power plant intended to utilize low-grade geothermal brine. Two shell-and-tube heat exchangers (evaporator and condenser) and a single-stage impulse turbine were designed and fabricated, and the performance of each component was assessed. Pentane was used as the working fluid, with hot water simulating the geothermal brine. The results of the experiment indicated that increasing the mass flow rate of hot water by 0.08 kg/s raised the overall heat transfer coefficient of the evaporator by 17.33% and increased the heat transferred by 6.74%. In the condenser, increasing the cooling water flow rate from 0.15 kg/s to 0.35 kg/s raised the overall heat transfer coefficient by 1.21% and increased the heat transferred by 4.26%.
The shaft speed varied from 1585 to 4590 rpm as the inlet pressure was varied from 0.5 to 5.0 bar, and the power generated varied from 4.34 to 14.46 W. The results of the experiments indicated that the performance of each component of the model Organic Rankine Cycle power plant operating on low-temperature heat resources was satisfactory.
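The overall heat transfer coefficients quoted above enter the exchanger duty through the standard relation Q = U·A·LMTD. The sketch below illustrates that relation with assumed values for U, area, and terminal temperature differences; these numbers are not measurements from the rig.

```python
# Back-of-envelope evaporator duty: Q = U * A * LMTD, where LMTD is the
# log-mean of the hot/cold temperature differences at the two ends.
import math

def lmtd(dt_in: float, dt_out: float) -> float:
    """Log-mean temperature difference; degenerates to dt when equal."""
    if math.isclose(dt_in, dt_out):
        return dt_in
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

def duty_kw(u_kw_m2k: float, area_m2: float, dt_in: float, dt_out: float) -> float:
    """Heat exchanger duty in kW for the given U, area, and end differences."""
    return u_kw_m2k * area_m2 * lmtd(dt_in, dt_out)

# Assumed values: U = 0.5 kW/m^2K, A = 2 m^2, 40 K and 20 K end differences.
q = duty_kw(0.5, 2.0, 40.0, 20.0)
```

With these assumptions the duty comes out near 28.9 kW; the 17.33% rise in U reported above translates directly into a proportional rise in Q at fixed area and temperature profile.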

Keywords: brine, heat exchanger, ORC, turbine

Procedia PDF Downloads 630
456 Investigation of a Novel Dual Band Microstrip/Waveguide Hybrid Antenna Element

Authors: Raoudane Bouziyan, Kawser Mohammad Tawhid

Abstract:

Microstrip antennas are low in profile, light in weight, and conformable in structure, and they are now developed for many applications. The main limitation of the microstrip antenna is its narrow bandwidth. Several modern applications, such as satellite communications, remote sensing, and multi-function radar systems, would benefit from a dual-band antenna operating from a single aperture. Some applications require covering both transmitting and receiving frequency bands that are spaced well apart. Providing multiple antennas to handle multiple frequencies and polarizations becomes especially difficult if the available space is limited, as with airborne platforms and submarine periscopes. Dual-band operation can be realized from a single feed using a slot-loaded or stacked microstrip antenna, or from two separately fed antennas sharing a common aperture. The former design, when used in arrays, has certain limitations, such as a complicated beam-forming or diplexing network and difficulty in realizing good radiation patterns in both bands. The second technique provides more flexibility, since with a separate feed system the beams in each frequency band can be controlled independently. Another desirable feature of a dual-band antenna is easy adjustability of the upper and lower frequency bands. This paper presents an investigation of a new dual-band antenna that is a hybrid of microstrip and waveguide radiating elements. The low-band radiator is a Shorted Annular Ring (SAR) microstrip antenna, and the high-band radiator is an aperture antenna. The hybrid antenna is realized by forming a waveguide radiator in the shorted region of the SAR microstrip antenna. It is shown that the upper-to-lower frequency ratio can be controlled by the proper choice of dimensions and dielectric material. Operation in both linear and circular polarization is possible in either band. Moreover, both broadside and conical beams can be generated in either band from this antenna element.
The Finite Element Method-based software HFSS and the Method of Moments-based software FEKO were employed to perform parametric studies of the proposed dual-band antenna. The antenna was not tested physically; therefore, in most cases, both HFSS and FEKO were employed to corroborate the simulation results.

Keywords: FEKO, HFSS, dual band, shorted annular ring patch

Procedia PDF Downloads 386
455 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator

Authors: Jaeyoung Lee

Abstract:

Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of numerous vehicle sensors, it is difficult to consistently provide high perception performance in driving environments that vary with the time of day and the season. Image segmentation using deep learning, which has recently evolved rapidly, provides stably high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are typically optimized for GPU environments, which degrades their performance on embedded processors equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSPs), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time, so as to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA; the loss of information caused by fixing the number of channels is compensated by increasing the number of parallel branches. Second, an efficient convolution type is selected depending on the size of the activation. Since the MMA size is fixed, normal convolution may be more efficient than depthwise separable convolution, depending on the memory access overhead; thus, the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using an extended atrous spatial pyramid pooling (ASPP).
The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high-quality contexts using the restored shape without global average pooling paths, since that layer uses the MMA as a simple adder. To verify the proposed method, an experiment was conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to observe the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), the highest recognition rate among embedded networks on the Cityscapes validation set.
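The real-time claim above is a simple throughput budget: a 6.67 ms inference per frame yields roughly 150 frames per second of accelerator capacity, which covers six 20 FPS camera streams. A quick check of that arithmetic (the helper function is ours, not from the paper):

```python
# Throughput budget for one inference accelerator: how many camera
# streams can it serve at a given target frame rate?

def max_cameras(infer_ms: float, target_fps: float) -> int:
    """Number of camera streams sustainable at target_fps, ignoring
    scheduling overheads (an idealizing assumption)."""
    frames_per_sec = 1000.0 / infer_ms       # accelerator capacity, frames/s
    return int(frames_per_sec // target_fps)

cams = max_cameras(6.67, 20.0)   # -> 7, so the six-camera claim holds
```

The idealized budget even leaves one stream of headroom; real deployments would lose some of that to memory transfers and scheduling.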

Keywords: edge network, embedded network, MMA, matrix multiplication accelerator, semantic segmentation network

Procedia PDF Downloads 110
454 How Childhood Trauma Changes the Recovery Models

Authors: John Michael Weber

Abstract:

The following research spanned six months and 175 people addicted to some form of substance, from alcohol to heroin. One question was asked, and the answers were striking and consistent. The following work details this writer's answer to his own question and the 175 answers that followed. A constant pattern took shape throughout the bio-psycho-social assessments: these addicts had "first memories," the memories were vivid and took place between the ages of three and six years old, and to a person those first memories were traumatic. This writer's personal search into his childhood was not to find an excuse for the way he became, but to explain the reason for becoming an addict. To treat addiction, these memories, which have caused Post Traumatic Stress Disorder (PTSD), must be recognized as the catalyst that sparked a predisposition. Cognitive Behavioral Therapy (CBT), integrated with treatment specifically focused on PTSD, gives the addict a better chance at recovery without relapse. This paper seeks to report the findings on the first memories of the addicts assessed and to propose the best treatment plan for such an addict, considering the childhood trauma in congruence with treatment of the Substance Use Disorder (SUD). The question posed concerned what their first life memory was. It is the hope of this author that the knowledge that trauma is one of the main catalysts for addiction will allow therapists to provide better treatment and reduce relapse from abstinence from drugs and alcohol. This research led this author to believe that if treatment of childhood trauma is not a priority, the twelve steps of Alcoholics Anonymous, specifically steps 4 and 5, will not be thoroughly addressed, and the odds of relapse increase. With this knowledge, parents can be educated on childhood trauma and the effect it has on their children.
Parents could be mindful of the fact that the things they perceive as traumatic do not match what a child, in the developmental years, absorbs as traumatic. It is this author's belief that what has become the status quo in treatment facilities has not been working for a long time, and it is for that reason this author believes things need to change. Relapse has been woven into the fabric of standard operating procedure, and that, in this author's view, is not necessary. Childhood trauma is not being addressed early in recovery, and that creates an environment of inevitable relapse. This paper will explore how to break away from the status quo and rethink the current "evidence-based treatments." This ends the abstract, with hopes that an interest has been piqued to read on.

Keywords: childhood, trauma, treatment, addiction, change

Procedia PDF Downloads 62
453 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines

Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky

Abstract:

Nowadays, as a result of the constant increase in energy demand and emissions, society is working toward reducing energy losses and greenhouse gas emissions and seeking clean energy sources. Energy loss occurs at the gas pressure reduction stations at the delivery points of natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) in parallel with the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work from the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network is 9,409 km in extension, and the system has 16 national and 3 international natural gas processing plants and more than 143 delivery points to final consumers. Thus, the potential for installing PRTs in Brazil is 66 MW of power, which could generate 333 GWh/year of electricity and avoid the emission of 235,800 tons of CO2 per year. On the other hand, the economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, yielding a deterministic set of financial indicators for the project. However, in most cases, these variables cannot be predicted with sufficient accuracy, resulting in the need to consider, to a greater or lesser degree, the risk associated with the calculated financial return.
This paper presents an approach to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained, yielding a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of a potential PRT project's financial feasibility in Brazil. The methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.
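The Monte Carlo step described above can be sketched generically: sample the uncertain inputs, compute a net present value (NPV) per draw, and read risk indicators off the resulting distribution. Every parameter value and distribution below is an assumption chosen for illustration, not project data from the paper.

```python
# Illustrative Monte Carlo viability analysis: uncertain annual generation
# and energy price feed an NPV calculation; the empirical NPV distribution
# then yields risk indicators such as the probability of a loss.
import random

def npv(cashflows, rate):
    """Net present value of a cash flow series (t = 0, 1, 2, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate(n=10_000, capex=1_000_000.0, years=15, rate=0.10, seed=42):
    """Draw n scenarios of (generation, price) and return their NPVs."""
    rng = random.Random(seed)
    npvs = []
    for _ in range(n):
        energy_mwh = rng.gauss(5_000.0, 500.0)   # annual generation, MWh
        price = rng.gauss(60.0, 10.0)            # energy price, $/MWh
        annual = energy_mwh * price              # annual revenue, $
        npvs.append(npv([-capex] + [annual] * years, rate))
    return npvs

results = simulate()
p_loss = sum(v < 0 for v in results) / len(results)   # empirical P(NPV < 0)
```

Sensitivity and scenario analyses reuse the same NPV function with inputs varied one at a time or set to chosen scenario values, rather than sampled.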

Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, monte carlo methods

Procedia PDF Downloads 99
452 Design and Development of Permanent Magnet Quadrupoles for Low Energy High Intensity Proton Accelerator

Authors: Vikas Teotia, Sanjay Malhotra, Elina Mishra, Prashant Kumar, R. R. Singh, Priti Ukarde, P. P. Marathe, Y. S. Mayya

Abstract:

Bhabha Atomic Research Centre, Trombay, is developing a low energy high intensity proton accelerator (LEHIPA) as the pre-injector for a 1 GeV proton accelerator for an accelerator-driven sub-critical reactor system (ADSS). LEHIPA consists of an RFQ (Radio Frequency Quadrupole) and a DTL (Drift Tube Linac) as its major accelerating structures. The DTL is an RF resonator operating in the TM010 mode and provides a longitudinal E-field for the acceleration of charged particles. The RF design of the drift tubes of the DTL was carried out to maximize the shunt impedance; this demands that the diameter of the drift tubes (DTs) be as low as possible. The width of a DT is, however, determined by the particle β and a trade-off between the transit time factor and the effective accelerating voltage in the DT gap. The array of drift tubes inside the DTL shields the accelerated particles from the decelerating RF phase and provides transverse focusing to the charged particles, which otherwise tend to diverge due to Coulombic repulsion and the transverse E-field at the entry of the DTs. The magnetic lenses housed inside the DTs control the transverse emittance of the beam. Quadrupole magnets are preferred over solenoid magnets due to the relatively high focusing strength of the former over the latter. The small volume available inside the DTs for housing magnetic quadrupoles has motivated the use of permanent magnet quadrupoles (PMQ) rather than electromagnetic quadrupoles (EMQ). This provides another advantage, as Joule heating, which would have added thermal load in a continuous-cycle accelerator, is avoided. The beam dynamics requires the uniformity of the integral magnetic gradient to be better than ±0.5% at the nominal value of 2.05 tesla. The paper describes the magnetic design of the PMQ using Sm2Co17 rare earth permanent magnets and discusses the fabrication and qualification of five pre-series prototype permanent magnet quadrupoles and of a full-scale DT developed with embedded PMQs.
The paper discusses the magnetic pole design for optimizing the uniformity of the integral Gdl and the values of the higher-order multipoles. A novel but simple method of tuning the integral Gdl is discussed.

Keywords: DTL, focusing, PMQ, proton, rare earth magnets

Procedia PDF Downloads 456
451 Batch and Dynamic Investigations on Magnesium Separation by Ion Exchange Adsorption: Performance and Cost Evaluation

Authors: Mohamed H. Sorour, Hayam F. Shaalan, Heba A. Hani, Eman S. Sayed

Abstract:

Ion exchange adsorption has a long-standing history of success in seawater softening and selective ion removal from saline sources. Strong-, weak- and mixed-type ion exchange systems can be designed and optimized for a target separation. In this paper, different types of adsorbents comprising zeolite 13X and kaolin, as well as polyacrylate/zeolite (AZ), polyacrylate/kaolin (AK), and stand-alone polyacrylate (A) hydrogel types, were prepared via microwave (M) and ultrasonic (U) irradiation techniques. They were characterized using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM). The developed adsorbents were evaluated at the bench scale, and based on the assessment results, a composite bed was formulated for performance evaluation in pilot-scale column investigations. Owing to the hydrogel nature of the partially crosslinked polyacrylate, the developed adsorbents manifested a swelling capacity of about 50 g/g. The pilot trials were carried out using magnesium-enriched Red Sea water to simulate Red Sea desalination brine. Batch studies indicated varying uptake efficiencies, with Mg adsorption decreasing across the prepared hydrogel types in the order AU>AM>AKM>AKU>AZM>AZU, being 108, 107, 78, 69, 66 and 63 mg/g, respectively. The composite bed adsorbent tested in up-flow column studies indicated good performance for Mg uptake: for an operating cycle of 12 h, the maximum uptake during the loading cycle approached 92.5-100 mg/g, which is comparable to the performance of some commercial resins. Different regenerants were explored to maximize regeneration and minimize the quantity of regenerant required, including 15% NaCl, 0.1 M HCl, and sodium carbonate; the best results were obtained with an acidified sodium chloride solution. In conclusion, the developed cation exchange adsorbents comprising a clay or zeolite support showed adequate performance for Mg recovery in a saline environment.
A column design operated in the up-flow mode (approaching an expanded bed) is appropriate for this type of separation. Preliminary cost indicators for Mg recovery via ion exchange have been developed and analyzed.
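The mg/g uptake figures quoted above come from the standard batch mass balance q = (C0 − Ce)·V/m. The sketch below applies it with illustrative concentrations chosen so the result lands on the 108 mg/g reported for the AU hydrogel; the inputs themselves are assumptions, not the study's measured values.

```python
# Standard batch adsorption mass balance: equilibrium uptake q (mg of
# adsorbate per g of adsorbent) from initial/final solution concentrations.

def uptake_mg_per_g(c0_mg_l: float, ce_mg_l: float,
                    volume_l: float, mass_g: float) -> float:
    """q = (C0 - Ce) * V / m."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

# Illustrative batch: 100 mL of solution, 1 g of adsorbent, Mg dropping
# from 1350 to 270 mg/L over the contact time.
q = uptake_mg_per_g(c0_mg_l=1350.0, ce_mg_l=270.0, volume_l=0.1, mass_g=1.0)
```

The same balance, applied cumulatively over the 12 h loading cycle, gives the 92.5-100 mg/g column figures.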

Keywords: batch and dynamic magnesium separation, seawater, polyacrylate hydrogel, cost evaluation

Procedia PDF Downloads 123
450 Positivity Rate of Person under Surveillance among Institut Jantung Negara’s Patients with Various Vaccination Statuses in the First Quarter of 2022, Malaysia

Authors: Mohd Izzat Md. Nor, Norfazlina Jaffar, Noor Zaitulakma Md. Zain, Nur Izyanti Mohd Suppian, Subhashini Balakrishnan, Geetha Kandavello

Abstract:

During the Coronavirus (COVID-19) pandemic, Malaysia focused on building herd immunity by introducing vaccination programs into the community, and hospital Standard Operating Procedures (SOP) were developed to prevent inpatient transmission. Objective: In this study, we focus on the rate at which inpatient Persons Under Surveillance (PUS) become COVID-19 positive, compare it to the national rate, and examine the outcomes of patients who become COVID-19 positive in relation to their vaccination status. Methodology: This is a retrospective observational study carried out from 1 January until 30 March 2022 in Institut Jantung Negara (IJN). There were 5,255 patients admitted during the study period. A pre-admission Polymerase Chain Reaction (PCR) swab was done for all patients, and patients with a positive PCR on pre-admission screening were excluded. Patients who were exposed to COVID-19-positive staff or patients during hospitalization were defined as PUS and were quarantined and monitored for potential COVID-19 infection. The frequency and risk of their exposure (WHO definition) were recorded. A repeat PCR swab was done for PUS patients who showed clinical deterioration, with or without COVID symptoms, and on the last day of quarantine. The severity of COVID-19 infection was classified as category 1-5A. Each patient's vaccination status was recorded, and patients were divided into three groups: fully immunised, partially immunised, and unvaccinated. We analyzed the rate at which PUS patients became COVID-positive, their outcomes, and the correlation with vaccination status. Result: The total number of inpatient PUS exposed to patients and staff was 492; only 13 became positive, giving a positivity rate of 2.6%. Eight (62%) had multiple exposures. The majority, 8/13 (72.7%), had high-risk exposure, and the remaining 5 had medium-risk exposure. Four (30.8%) were boosted, 7 (53.8%) were fully vaccinated, and 2 (15.4%) were partially vaccinated or unvaccinated.
Eight patients (62%) were in categories 1-2, whilst the remaining five (38%) were in categories 3-5. Vaccination status did not correlate with COVID-19 category (P=0.641). One (7.7%) patient died due to COVID-19 complications and sepsis. Conclusion: Within the first quarter of 2022, our institution's positivity rate (2.6%) was significantly lower than the country's (14.4%). High-risk exposure and multiple exposures to positive COVID-19 cases increased the risk of PUS becoming COVID-19 positive, regardless of vaccination status.
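As a quick arithmetic check of the figures reported above (illustrative only; the confidence interval is our addition, not part of the study), the positivity rate and a 95% Wilson score interval can be computed as:

```python
# Verify the reported 2.6% inpatient PUS positivity rate (13 of 492) and
# attach a 95% Wilson score interval for scale. Illustrative sketch only.
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

rate = 100 * 13 / 492           # 13 positives among 492 PUS inpatients
low, high = wilson_interval(13, 492)
print(round(rate, 1))           # 2.6 (%), vs. the 14.4% national rate
print(round(100 * low, 1), round(100 * high, 1))
```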

Keywords: COVID-19, boosted, high risk, Malaysia, quarantine, vaccination status

Procedia PDF Downloads 76
449 Results of Three-Year Operation of 220kV Pilot Superconducting Fault Current Limiter in Moscow Power Grid

Authors: M. Moyzykh, I. Klichuk, L. Sabirov, D. Kolomentseva, E. Magommedov

Abstract:

Modern city electrical grids are forced to increase their density due to the growing number of customers and the requirements for reliability and resiliency. However, progress in this direction is often limited by the capabilities of existing network equipment. New energy sources or grid connections increase the level of short-circuit currents in the adjacent network, which can exceed the maximum ratings of the equipment: the breaking capacity of circuit breakers and the thermal and dynamic current withstand ratings of disconnectors, cables, and transformers. The superconducting fault current limiter (SFCL) is a modern device designed to deal with increasing fault current levels in power grids. Its key feature is near-instant (less than 2 ms) limitation of the current level due to the nature of the superconductor. In 2019, Moscow utilities installed a SuperOx SFCL in the city power grid to test the capabilities of this novel technology. It was the first SFCL in the Russian energy system and is currently the most powerful SFCL in the world. Modern SFCLs use second-generation high-temperature superconductor (2G HTS). Despite its name, HTS still requires the low temperature of liquid nitrogen for operation, so the Moscow SFCL is built with a cryogenic system that cools the superconductor. The cryogenic system consists of three cryostats (one per electrical phase), each containing a superconducting element and filled with liquid nitrogen, three cryocoolers, one water chiller, three cryopumps, and pressure builders. All these components are managed by an automatic control system. The SFCL has been operating continuously on the city grid for over three years. During that period, numerous faults occurred, including cryocooler failure, chiller failure, pump failure, and others, such as a cryogenic system power outage.
All these faults were resolved without shutting the SFCL down, thanks to the specially designed cryogenic system backups and the quick response of the grid operator and the SuperOx crew. The paper describes in detail the results of SFCL operation and cryogenic system maintenance, and the measures taken to resolve these faults and prevent similar ones in the future.
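To illustrate the limiting principle described above (a sketch under assumed numbers, not SuperOx's device parameters): once the prospective current exceeds the superconductor's critical current, the quench inserts a resistance in series and clips the fault current.

```python
# Illustrative sketch of resistive SFCL action. The quench model and all
# numbers (voltage, impedances, critical current) are assumptions for
# illustration, not parameters of the Moscow 220 kV device.
import math

def fault_current(t_ms, v_peak=180e3, z_grid=2.0, r_quench=8.0, i_crit=40e3):
    """Instantaneous fault current with an idealized resistive SFCL in series.

    Below the critical current i_crit the superconductor adds ~zero
    impedance; above it, it quenches and inserts r_quench ohms.
    """
    # prospective 50 Hz fault current with no limiting
    i_prospective = v_peak * math.sin(2 * math.pi * 50 * t_ms / 1000) / z_grid
    if abs(i_prospective) <= i_crit:
        return i_prospective            # superconducting state: no limiting
    # quenched: current set by grid impedance plus quench resistance
    sign = 1 if i_prospective > 0 else -1
    return sign * min(abs(i_prospective), v_peak / (z_grid + r_quench))

# A prospective peak of ~90 kA is clipped to ~18 kA once the SFCL quenches.
print(round(fault_current(5) / 1e3, 1))   # current near the 50 Hz peak, in kA
```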

Keywords: superconductivity, current limiter, SFCL, HTS, utilities, cryogenics

Procedia PDF Downloads 64
448 Structure Modification of Leonurine to Improve Its Potency as Aphrodisiac

Authors: Ruslin, R. E. Kartasasmita, M. S. Wibowo, S. Ibrahim

Abstract:

An aphrodisiac is a substance, found in foods or drugs, that arouses sexual instinct and increases pleasure; such substances are derived from plants, animals, and minerals. Leonurine is a compound with aphrodisiac activity that can be isolated from plants of the genus Leonurus, known to the Sundanese people as 'deundereman'; the plant's aphrodisiac use is empirical, and isolation of its active constituents confirms that it contains leonurine. Leonurine can be isolated from the plant or synthesized chemically, with syringic acid as the starting material, and both the compound and derivatives of it can be obtained or synthesized in an effort to increase its activity. This study aims to obtain leonurine derivatives with better aphrodisiac activity than the parent compound by modifying its structure at the guanidinobutyl ester group using butylamine and bromoethanol. ArgusLab version 4.0.1 was used to determine the binding energies, hydrogen bonds, and amino acids involved in the interaction of the compounds with the PDE5 receptor. The in vivo test of leonurine and its derivatives as aphrodisiacs, together with measurement of testosterone levels, used 27 male Wistar rats and 9 females of the same strain, aged about 12 weeks and weighing approximately 200 g each. The animals were divided into 9 groups according to the compound and dose given; each group was orally administered 2 ml per day for 5 days. On the sixth day, male rat sexual behavior was observed and cardiac blood was taken to measure testosterone levels using the ELISA technique. Statistical analysis was performed using ANOVA with the Least Significant Difference (LSD) test in the Statistical Product and Service Solutions (SPSS) program. The aphrodisiac efficacy of leonurine and its derivatives was examined both in silico and in vivo. In the in silico tests, the leonurine derivatives had lower binding energies than leonurine, indicating better activity; the in vivo tests in Wistar rats likewise showed that the derivatives outperformed leonurine, so the in silico study parallels the in vivo results. Modification of the structure at the guanidinobutyl ester group with butylamine and bromoethanol thus increased aphrodisiac activity relative to leonurine, and testosterone levels for the derivative compounds rose significantly, especially for compound 1-RD at doses of 100 and 150 mg/kg body weight. The results show that leonurine and its derivatives possess aphrodisiac activity and increase the amount of testosterone in the blood; the test compounds appear to act as steroid precursors, resulting in increased testosterone.

Keywords: aphrodisiac, erectile dysfunction, leonurine, 1-RD, 2-RD

Procedia PDF Downloads 263
447 Evaluation of NoSQL in the Energy Marketplace with GraphQL Optimization

Authors: Michael Howard

Abstract:

The growing popularity of electric vehicles in the United States requires an ever-expanding infrastructure of commercial DC fast charging stations. The U.S. Department of Energy estimates 33,355 publicly available DC fast charging stations as of September 2023. In 2017, 115,370 gasoline stations were operating in the United States, far more ubiquitous than DC fast chargers. Range anxiety is an important impediment to the adoption of electric vehicles and is even more relevant in underserved regions of the country. The peer-to-peer energy marketplace helps fill the demand by allowing private home and small business owners to rent out their 240 Volt, level-2 charging facilities. The existing, publicly accessible outlets are wrapped with a Cloud-connected microcontroller managing security and charging sessions. These microcontrollers act as Edge devices communicating with a Cloud message broker, while both buyer and seller users interact with the framework via a web-based user interface. The database storage used by the marketplace framework is a key component in both the cost of development and the performance that contributes to the user experience. A traditional storage solution is the SQL database. The architecture and query language have been in existence since the 1970s and are well understood and documented. The Structured Query Language supported by the query engine provides fine granularity in user query conditions. However, difficulty in scaling across multiple nodes and the cost of its server-based compute have resulted in a trend over the last 20 years towards other NoSQL, serverless approaches. In this study, we evaluate NoSQL vs. SQL solutions through a comparison of the Google Cloud Firestore and Cloud SQL MySQL offerings. The comparison pits Google's serverless, document-model, non-relational NoSQL service against its server-based, table-model, relational SQL service. The evaluation is based on query latency, flexibility/scalability, and cost criteria.
Through benchmarking and analysis of the architecture, we determine whether Firestore can support the energy marketplace storage needs and if the introduction of a GraphQL middleware layer can overcome its deficiencies.
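To make the document-model trade-off concrete, here is a toy, self-contained resolver sketch (schema, names, and data are invented for illustration): it mimics a GraphQL field that joins a seller to its charging sessions, which a relational SQL engine would express as a single JOIN while a document store needs multiple reads that the middleware must aggregate.

```python
# Toy sketch of the trade-off the study evaluates. The dicts below stand in
# for Firestore collections; the resolver emulates a GraphQL field resolver
# doing the "join" client-side. All names and data are illustrative.

sellers = {"s1": {"name": "Alice", "outlet": "240V level-2"}}
sessions = [
    {"seller_id": "s1", "kwh": 12.5},
    {"seller_id": "s1", "kwh": 7.0},
]

def resolve_seller(seller_id):
    """GraphQL-style resolver: one 'document' read plus a filtered scan,
    since a document store has no server-side join."""
    seller = dict(sellers[seller_id])              # read the seller document
    seller["sessions"] = [s for s in sessions if s["seller_id"] == seller_id]
    seller["total_kwh"] = sum(s["kwh"] for s in seller["sessions"])
    return seller

print(resolve_seller("s1")["total_kwh"])   # 19.5
```

In a relational backend the same result is one `SELECT ... JOIN ... GROUP BY`; the GraphQL middleware layer is what hides this difference from the client.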

Keywords: non-relational, relational, MySQL, mitigate, Firestore, SQL, NoSQL, serverless, database, GraphQL

Procedia PDF Downloads 35
446 Being an English Language Teaching Assistant in China: Understanding the Identity Evolution of Early-Career English Teacher in Private Tutoring Schools

Authors: Zhou Congling

Abstract:

The integration of private tutoring has emerged as an indispensable facet in the acquisition of language proficiency beyond formal educational settings. Notably, there has been a discernible surge in the demand for private English tutoring, specifically geared towards the preparation for internationally recognized gatekeeping examinations, such as IELTS, TOEFL, GMAT, and GRE. This trajectory has engendered an escalating need for English Language Teaching Assistants (ELTAs) operating within the realm of Private Tutoring Schools (PTSs). The objective of this study is to unravel the intricate process by which these ELTAs formulate their professional identities in the nascent stages of their careers as English educators, as well as to delineate their perceptions regarding their professional trajectories. The construct of language teacher identity is inherently multifaceted, shaped by an amalgamation of individual, societal, and cultural determinants, exerting a profound influence on how language educators navigate their professional responsibilities. This investigation seeks to scrutinize the experiential and influential factors that mold the identities of ELTAs in PTSs, particularly post the culmination of their language-oriented academic programs. Employing a qualitative narrative inquiry approach, this study aims to delve into the nuanced understanding of how ELTAs conceptualize their professional identities and envision their future roles. The research methodology involves purposeful sampling and the conduct of in-depth, semi-structured interviews with ten participants. Data analysis will be conducted utilizing Barkhuizen’s Short Story Analysis, a method designed to explore a three-dimensional narrative space, elucidating the intricate interplay of personal experiences and societal contexts in shaping the identities of ELTAs. 
The anticipated outcomes of this study are poised to contribute substantively to a holistic comprehension of ELTA identity formation, holding practical implications for diverse stakeholders within the private tutoring sector. This research endeavors to furnish insights into strategies for the retention of ELTAs and the enhancement of overall service quality within PTSs.

Keywords: China, English language teacher, narrative inquiry, private tutoring school, teacher identity

Procedia PDF Downloads 34
445 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT), and vice versa, is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application in which the transfer process depends on various parameters, so transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical Class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, there are instances where transfer schemes have malfunctioned due to inaccurate interpretation of key parameters and, consequently, failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, the combination of artificial neural networks and fuzzy systems (neuro-fuzzy) has not been extensively used. In this paper, we apply the neuro-fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the neuro-fuzzy system. The objective of adopting the neuro-fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems.
This research presents a combined effect of artificial neural network and fuzzy systems to accurately interpret key bus transfer parameters such as magnitude of the residual voltage, decay time, and the associated phase angle of the residual voltage in order to determine the possibility of high speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve reliability of the auxiliary power distribution system. The performance of the scheme is implemented on APR1400 nuclear power plant auxiliary system.
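The fuzzy half of such a decision (residual-voltage magnitude and phase angle mapped to a transfer algorithm) can be sketched with a toy rule base. The membership shapes, thresholds, and rules below are our own illustration, not the authors' trained neuro-fuzzy model.

```python
# Hedged sketch: a minimal fuzzy rule that picks a bus-transfer strategy
# from first-cycle residual-voltage features. All shapes and thresholds
# are invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def choose_transfer(residual_pu, phase_deg):
    """Return (scheme, degree): a fast transfer needs a high residual voltage
    and a small phase difference; otherwise fall back to residual-bus transfer."""
    high_v = tri(residual_pu, 0.7, 1.0, 1.3)     # residual voltage is 'high'
    small_phi = tri(abs(phase_deg), -1, 0, 20)   # phase difference is 'small'
    fast = min(high_v, small_phi)                # fuzzy AND (min operator)
    if fast >= 0.5:
        return ("sequential fast transfer", fast)
    return ("residual bus transfer", 1.0 - fast)

print(choose_transfer(0.95, 5))    # -> ('sequential fast transfer', 0.75)
```

In the paper's approach a back-propagation network would validate and interpret the raw voltage signals before a stage like this selects the algorithm.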

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 155
444 The Impact of Reducing Road Traffic Speed in London on Noise Levels: A Comparative Study of Field Measurement and Theoretical Calculation

Authors: Jessica Cecchinelli, Amer Ali

Abstract:

The continuing growth in road traffic and the resultant impact on pollution levels and safety, especially in urban areas, have led local and national authorities to reduce traffic speed and flow in major towns and cities. Various boroughs of London have recently reduced the in-city speed limit from 30 mph to 20 mph, mainly to calm traffic, improve safety, and reduce noise and vibration. This paper reports detailed field measurements, using a noise sensor and analyser, and the corresponding theoretical calculations and analysis of noise levels on a number of roads in the central London Borough of Camden, where the speed limit was reduced from 30 mph to 20 mph on all roads except the major routes of Transport for London (TfL). The measurements, which included the key noise levels and scales at residential streets and main roads, were conducted during normal and rush hours on weekdays and weekends. The theoretical calculations followed the UK procedure 'Calculation of Road Traffic Noise' (1988), with conversion to the European Lday, Levening, Lnight, and Lden levels, among others. The current study also includes comparable data and analysis from previously measured noise in the Borough of Camden and other boroughs of central London. Classified traffic flow and speed on the roads concerned were observed and used in the calculation part of the study. Relevant data and a description of the weather conditions are reported. The paper also reports a field survey, in the form of face-to-face interview questionnaires, carried out in parallel with the noise measurements in order to ascertain the opinions and views of local residents and workers in the reduced-speed 20 mph zones. The main findings are that the speed reduction lowered noise pollution in the studied zones and that the measured and calculated noise levels for each speed zone closely match.
The field survey further showed that local residents and workers in the 20 mph zones supported the scheme and felt it had improved the quality of life in their areas, giving a sense of calmness and safety, particularly for families with children and the elderly, and encouraging pedestrians and cyclists. The key conclusions are that lowering the speed limit in built-up areas would not only reduce the number of serious accidents but also reduce noise pollution and promote clean modes of transport, particularly walking and cycling. The details of the site observations and the corresponding calculations, together with critical comparative analysis and relevant conclusions, are reported in the full version of the paper.
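For reference, the European Lden used in the conversions above combines the day, evening, and night levels with 5 dB and 10 dB penalties. A minimal helper (our illustration, assuming the standard 12/4/8-hour periods of the EU Environmental Noise Directive, not the paper's own code) is:

```python
# Compute Lden from Lday, Levening, Lnight per the EU Environmental Noise
# Directive definition: evening gets a +5 dB penalty, night +10 dB, and the
# three periods (12 h / 4 h / 8 h) are energy-averaged over 24 hours.
import math

def l_den(l_day, l_evening, l_night):
    """Day-evening-night noise level in dB(A)."""
    total = (12 * 10 ** (l_day / 10)
             + 4 * 10 ** ((l_evening + 5) / 10)
             + 8 * 10 ** ((l_night + 10) / 10))
    return 10 * math.log10(total / 24)

# Equal 60 dB levels in all periods still yield a penalized Lden above 60 dB.
print(round(l_den(60, 60, 60), 1))   # 66.4
```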

Keywords: noise calculation, noise field measurement, road traffic noise, speed limit in London, survey of people's satisfaction

Procedia PDF Downloads 410
443 Journey to Inclusive School: Description of Crucial Sensitive Concepts in the Context of Situational Analysis

Authors: Denisa Denglerova, Radim Sip

Abstract:

Academic sources as well as international agreements and national documents define inclusion in terms of several criteria: equal opportunities, fulfilling individual needs, development of human resources, community participation. In order for these criteria to be met, the community must be cohesive. Community cohesion, which is a relatively new concept, is not determined by homogeneity, but by the acceptance of diversity among the community members and utilisation of its positive potential. This brings us to a central category of inclusion - appreciating diversity and using it to a positive effect. However, school diversity is a real phenomenon, which schools need to tackle more and more often. This is also indicated by the number of publications focused on diversity in schools. These sources present recent analyses of using identity as a tool of coping with the demands of a diversified society. The aim of this study is to identify and describe in detail the processes taking place in selected schools, which contribute to their pro-inclusive character. The research is designed around a multiple case study of three pro-inclusive schools. Paradigmatically speaking, the research is rooted in situational epistemology. This is also related to the overall framework of interpretation, for which we are going to use innovative methods of situational analysis. In terms of specific research outcomes this will manifest itself in replacing the idea of “objective theory” by the idea of “detailed cartography of a social world”. The cartographic approach directs both the logic of data collection and the choice of methods of their analysis and interpretation. The research results include detection of the following sensitive concepts: Key persons. All participants can contribute to promoting an inclusion-friendly environment; however, some do so with greater motivation than others. 
These could include school management, teachers with a strong vision of equality, or school counsellors. They have a significant effect on the transformation of the school and are themselves deeply convinced that inclusion is necessary. Accordingly, they select suitable co-workers and inspire some of the others to make changes, leading by example. Employees with strongly opposing views gradually leave the school, and new members of staff are introduced to the concept of inclusion and openness from the beginning. Manifestations of school openness in working with diversity appear on all important levels. By this we mean constructive work with diversity in the relationships between the 'traditional' school participants (directors, teachers, pupils), in school-parent relationships, and in relationships between the school and the broader community, in teaching methods, and in the way the school culture shapes the school environment. Other important detected concepts that significantly help to form a pro-inclusive environment in the school are individual and parallel classes; the freedom and responsibility of both pupils and teachers, manifested on the didactic level by tendencies towards an open curriculum; and ways of maintaining discipline in the school environment.

Keywords: inclusion, diversity, education, sensitive concept, situational analysis

Procedia PDF Downloads 176