Search results for: traffic noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2264

74 Blending Synchronous with Asynchronous Learning Tools: Students’ Experiences and Preferences for an Online Learning Environment in a Resource-Constrained Higher Education Situation in Uganda

Authors: Stephen Kyakulumbye, Vivian Kobusingye

Abstract:

Worldwide, COVID-19 had adverse effects on all sectors, with particularly debilitating effects on the education sector. After reactive lockdowns, education institutions that could continue teaching and learning had to do so at a distance, mediated by digital technological tools. In Uganda, the Ministry of Education accordingly issued emergency COVID-19 Online Distance E-learning (ODeL) guidelines. Despite such guidelines, academic institutions in Uganda and similar developing contexts with constrained academic resources were caught off-guard and ill-prepared to move from face-to-face learning to an online distance learning mode. Most academic institutions that migrated spontaneously did so with no deliberate tools, systems, strategies, or software to foster active, meaningful, and engaging learning for students. In practice, most of these institutions shifted to Zoom and WhatsApp and conducted online teaching in real time rather than blending synchronous and asynchronous tools. This paper reports students’ experiences of blending synchronous and asynchronous content-creation and learning tools within a technologically resource-constrained environment, and how they navigated this challenging Ugandan context. These conceptual, case-based findings, drawing on experience from Uganda Christian University (UCU), point to the design of learning activities with two key characteristics: the enhancement of synchronous learning technologies with asynchronous ones, which mitigates system breakdown and shifts passive learning toward active learning, and the enhancement of the types of presence (social, cognitive, and facilitatory).
The paper, both empirical and experiential in nature, uses the online experiences of third-year Bachelor of Business Administration students lectured using asynchronous text, audio, and video created with Open Broadcaster Software (OBS Studio) and compressed with HandBrake, all open-source software, to mitigate disk space and bandwidth challenges. The synchronous online engagements with students blended Zoom and BigBlueButton, ensuring that students had an alternative in case one platform failed due to excessive real-time traffic. Generally, students report that, compared to their previous face-to-face lectures, the pre-recorded lectures on YouTube gave them an opportunity to reflect on content in a self-paced manner, which later enabled them to engage actively in the live Zoom and/or BigBlueButton real-time discussions and presentations. The major recommendation is that lecturers and teachers in resource-constrained environments with limited digital resources, such as internet access and digital devices, should harness this approach to offer students self-paced access to learning content, thereby enabling active learning through reflection and higher-order thinking.

Keywords: synchronous learning, asynchronous learning, active learning, reflective learning, resource-constrained environment

Procedia PDF Downloads 139
73 Temperature-Dependent Post-Mortem Changes in Human Cardiac Troponin-T (cTnT): An Approach in Determining Postmortem Interval

Authors: Sachil Kumar, Anoop Kumar Verma, Wahid Ali, Uma Shankar Singh

Abstract:

Globally, approximately 55.3 million people die each year. In India, there were about 95 lakh (9.5 million) deaths in 2013, of which about 5.7 lakh resulted from homicides, suicides, and unintentional injuries. The ever-increasing crime rate has necessitated the development of methods for determining time since death. An erroneous time-of-death window can lead investigators down the wrong path or focus a case on an innocent suspect. In this regard, research was carried out analyzing the temperature-dependent degradation of cardiac troponin-T (cTnT) in the myocardium postmortem as a marker of time since death. Cardiac tissue samples were collected from medico-legal autopsies (n=6) in the Department of Forensic Medicine and Toxicology, King George’s Medical University, Lucknow, India, after informed consent from the relatives, and post-mortem degradation was studied by incubating the cardiac tissue at room temperature (20±2 °C), 12 °C, 25 °C, and 37 °C for different time periods (~5, 26, 50, 84, 132, 157, 180, 205, and 230 hours). The cases included were subjects of road traffic accidents (RTA) without any prior history of disease who died in the hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. The data show a distinct temporal profile corresponding to the degradation of cTnT by proteases found in cardiac muscle. The disappearance of intact cTnT and the appearance of lower-molecular-weight bands are easily observed. Western blot data clearly showed the intact protein at 42 kDa, two major fragments (27 kDa, 10 kDa), additional minor fragments (32 kDa), and the formation of low-molecular-weight fragments as time increased.
At 12 °C, the intensity of the intact cTnT band decreased more gradually than at room temperature, 25 °C, and 37 °C. Overall, both PMI and temperature had a statistically significant effect: the greatest protein breakdown was observed within the first 38 h and at the highest temperature, 37 °C. The combination of high temperature (37 °C) and long postmortem interval (105.15 hrs) had the most drastic effect on the breakdown of cTnT. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, the percent intact cTnT shows a pseudo-first-order relationship when plotted against the log of the time postmortem. These plots show a good coefficient of correlation, r = 0.95 (p = 0.003), for the regression of the human heart under different temperature conditions. The data presented demonstrate that this technique can provide an extended time range during which the postmortem interval can be more accurately estimated.
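The regression described above, percent intact cTnT falling linearly with the log of time postmortem, can be sketched with a simple least-squares fit. The data points below are purely illustrative synthetic values, not measurements from the study:

```python
import math

def fit_log_linear(times_h, pct_intact):
    """Least-squares fit of pct_intact = a + b*ln(t), plus correlation r."""
    xs = [math.log(t) for t in times_h]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(pct_intact) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, pct_intact))
    syy = sum((y - my) ** 2 for y in pct_intact)
    b = sxy / sxx                      # slope (expected negative: protein degrades)
    a = my - b * mx                    # intercept
    r = sxy / math.sqrt(sxx * syy)    # Pearson correlation coefficient
    return a, b, r

# Hypothetical percent-intact values at the incubation times used in the study
times = [5, 26, 50, 84, 132, 157, 180, 205, 230]   # hours postmortem
pct = [95, 78, 70, 62, 55, 52, 50, 48, 46]         # % intact cTnT (synthetic)
a, b, r = fit_log_linear(times, pct)
print(f"intercept={a:.1f}, slope={b:.1f}, r={r:.3f}")
```

Inverting the fitted line (t = exp((pct − a) / b)) is what would turn an observed band intensity into a postmortem-interval estimate.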

Keywords: degradation, postmortem interval, proteolysis, temperature, troponin

Procedia PDF Downloads 386
72 Loading by Number Strategy for Commercial Vehicles

Authors: Ramalan Musa Yerima

Abstract:

The paper titled “Loading by Number” explains a strategy developed recently by the Zonal Commanding Officer of the Federal Road Safety Corps of Nigeria, covering Sokoto, Kebbi, and Zamfara States of Northern Nigeria. The strategy is aimed at reducing competition, which will invariably lead to reductions in speed, dangerous driving, crash rate, injuries, property damage, and deaths from road traffic crashes (RTC). This research paper presents a study focused on enhancing the safety of commercial vehicles. The background of this study highlights the alarming statistics on commercial vehicle crashes in Nigeria, with a focus on Sokoto, Kebbi, and Zamfara States; such crashes often result in significant damage to property, loss of lives, and economic costs. The aim is to investigate and propose an effective strategy to enhance the safety of commercial vehicles. The study recognizes the pressing need for heightened safety measures in commercial transportation, as it impacts not only the well-being of drivers and passengers but also overall public safety. To achieve the objectives, an examination of accident data, including causes and contributing factors, was performed to identify critical areas for improvement. The major finding of the study is that when competition comes into play in commercial driving, it has detrimental effects on road safety and resource management. Commercial drivers are pushed to complete their routes quickly and deliver goods on time, or they push themselves to arrive quickly for more passengers and new contracts. This competitive environment, fuelled by internal and external pressures such as tight deadlines, poverty, and greed, often leads to sad endings.
The study recommends that if the loading-by-number strategy is integrated with other safety measures, such as driver training programs, regulatory enforcement, and infrastructure improvements, commercial vehicle safety can be significantly enhanced. The “Loading by Number” approach is designed to ensure that the sequence in which drivers depart from motor park ‘A’ is communicated to the officials of motor park ‘B’, who then assign returning passengers in that same sequence, regardless of which driver arrives first. In conclusion, this paper underscores the significance of improving the safety measures of commercial vehicles, as they are often larger and heavier than other vehicles on the road; whenever they are involved in accidents, the consequences can be more severe. Commercial vehicles are also frequently involved in long-haul or interstate transportation, which means they cover longer distances and spend more time on the road; this increased exposure to driving conditions raises the probability of accidents. By implementing the suggested measures, policymakers, transportation authorities, and industry stakeholders can work collectively toward a safer commercial transportation system.

Keywords: commercial, safety, strategy, transportation

Procedia PDF Downloads 62
70 Assessing Sustainability of Bike Sharing Projects Using Envision™ Rating System

Authors: Tamar Trop

Abstract:

Bike-sharing systems can be important elements of smart cities, as they have potential impact on multiple levels. These systems add a significant alternative to other modes of mass transit in cities that are continuously looking for measures to become more livable and maintain their attractiveness for citizens, businesses, and tourism. Bike-sharing began in Europe in 1965, and a viable format emerged in the mid-2000s thanks to the introduction of information technology. The rate of growth in bike-sharing schemes and fleets has been very rapid since 2008 and has probably outstripped growth in every other form of urban transport. Today, public bike-sharing systems are available on five continents in over 700 cities, operating more than 800,000 bicycles at approximately 40,000 docking stations. Since modern bike-sharing systems have become prevalent only in the last decade, the literature analyzing these systems and their sustainability is relatively new. The purpose of the presented study is to assess the sustainability of these newly emerging transportation systems, using the Envision™ rating system as a methodological framework and the Israeli ‘Tel-O-Fun’ bike-sharing project as a case study. The assessment was conducted by project team members. Envision™ is a new guidance and rating system used to assess and improve the sustainability of infrastructure projects of all types and sizes. This tool provides a holistic framework for evaluating and rating the community, environmental, and economic benefits of infrastructure projects over the course of their life cycle. The evaluation method has 60 sustainability criteria divided into five categories: quality of life, leadership, resource allocation, natural world, and climate and risk. The ‘Tel-O-Fun’ project was launched in Tel Aviv-Yafo in 2011 and today provides about 1,800 bikes for rent at 180 rental stations across the city.
The system is based on a computer terminal located at each docking station. The highest-rated sustainable features of the project include: (a) improving quality of life by offering a low-cost and efficient form of public transit, improving community mobility and access, enabling flexible travel within a multimodal transportation system, saving commuters time and money, enhancing public health, and reducing air and noise pollution; (b) improving resource allocation by offering inexpensive and flexible last-mile connectivity, reducing space, material, and energy consumption, reducing wear and tear on public roads, and maximizing the utility of existing infrastructure; and (c) reducing greenhouse gas emissions from transportation. Overall, the ‘Tel-O-Fun’ project scored highly as an environmentally sustainable and socially equitable infrastructure. The use of this practical evaluation framework also yielded various interesting insights into the shortcomings of the system and the characteristics of good solutions. This can contribute to the improvement of the project and may assist planners and operators of bike-sharing systems in developing sustainable, efficient, and reliable transportation infrastructure within smart cities.

Keywords: bike sharing, Envision™, sustainability rating system, sustainable infrastructure

Procedia PDF Downloads 340
69 A Case Report on Cognitive-Communication Intervention in Traumatic Brain Injury

Authors: Nikitha Francis, Anjana Hoode, Vinitha George, Jayashree S. Bhat

Abstract:

The interaction between cognition and language, referred to as cognitive-communication, is very intricate, involving several mental processes such as perception, memory, attention, lexical retrieval, decision making, motor planning, self-monitoring, and knowledge. Cognitive-communication disorders are difficulties in communicative competence that result from underlying cognitive impairments of attention, memory, organization, information processing, problem solving, and executive functions. Traumatic brain injury (TBI) is an acquired, non-progressive condition resulting in distinct deficits of cognitive-communication abilities such as naming, word-finding, self-monitoring, auditory recognition, attention, perception, and memory. Cognitive-communication intervention in TBI is individualized in order to enhance the person’s ability to process and interpret information for better functioning in family and community life. The present case report illustrates the cognitive-communicative behaviors and the intervention outcomes of an adult with TBI, who was brought to the Department of Audiology and Speech Language Pathology with cognitive and communicative disturbances consequent to a road traffic accident. On detailed assessment, she showed naming deficits along with perseverations and had severe difficulty recalling the details of the accident, her house address, places she had visited earlier, names of people known to her, and the activities she did each day, leading to severe breakdowns in her communicative abilities. She had difficulty initiating, maintaining, and following a conversation. She also lacked orientation to time and place. On administration of the Manipal Manual of Cognitive Linguistic Abilities (MMCLA), she exhibited poor performance on tasks related to visual and auditory perception, short-term memory, working memory, and executive functions.
She attended 20 sessions of cognitive-communication intervention, which followed a domain-general, adaptive training paradigm with tasks relevant to everyday cognitive-communication skills. Compensatory strategies, such as maintaining a diary with reminders of her daily routine, names of people, date, time, and place, were also recommended. MMCLA was re-administered, and her performance on the tasks showed significant improvement. The occurrence of perseverations and word-retrieval difficulties reduced. She developed interest in initiating her day-to-day activities at home independently, as well as in conversations with her family members. Though she lacked awareness of her deficits, she actively involved herself in all the therapy activities. Rehabilitation of patients with moderate to severe head injury can be done effectively through holistic cognitive retraining with a focus on different cognitive-linguistic domains. The selection of goals and activities should be relevant to the functional needs of each individual with TBI, as highlighted in the present case report.

Keywords: cognitive-communication, executive functions, memory, traumatic brain injury

Procedia PDF Downloads 347
68 The Development of Home-Based Long Term Care Model among Thai Elderly Dependent

Authors: N. Uaphongsathorn, C. Worawong, S. Thaewpia

Abstract:

Background and significance: As the population ages in Thai society, dependent elderly people are at great risk of various functional, psychological, and socio-economic problems, as well as reduced access to health care. They may require long-term care at home to maximize their functional abilities and activities of daily living and to improve their quality of life in old age. There is therefore a need to develop home-based long-term care to meet the long-term care needs of dependent elders. Methods: The purpose of the research was to develop a long-term care model for the dependent elderly in Chaiyaphum province, in the northeast region of Thailand. Action research, comprising planning, action, observation, and reflection phases, was used. The research was carried out over 12 months in all sub-districts of 6 districts in Chaiyaphum province. Participants (N = 1,010) in the model development process comprised three groups: a) 110 health care professionals, b) 600 health volunteers and family caregivers, and c) 300 dependent elderly people with chronic medical illnesses or disabilities. Descriptive statistics and content analysis were used to analyze the data. Findings: The results show that the most common health problems among dependent elders with physical disabilities limiting independent function were cardiovascular disease, dementia, and traffic injuries. The home-based long-term care model developed for dependent elders in Chaiyaphum province was composed of six key steps.
These are: a) initiating policies supporting formal and informal caregivers for the dependent elderly in all sub-districts, b) building a network and multidisciplinary team, c) developing a 3-day care manager training program and a 3-day care provider training program, d) training case managers and care providers for the dependent elderly through team and action learning, e) assessing, planning, and providing care based on the individual needs of the dependent elderly, and f) sharing experiences for good practice and innovation in long-term care at home in urban and rural areas of each district. Among all care managers and care providers, satisfaction with the training programs was high, with a mean score of 3.98 out of 5. The dependent elders and family caregivers stated that long-term care at home could contribute to improving daily activities, family relationships, health status, and quality of life. Family caregivers and volunteers reported a sense of personal satisfaction and found providing care and support for dependent elders meaningful. Conclusion: In conclusion, home-based long-term care is important to dependent Thai elders. Care managers and care providers bear a large role and responsibility in providing appropriate care to meet elders’ needs in both urban and rural areas of Thai society. Further research could rigorously study larger populations in similar socio-economic and cultural contexts.

Keywords: elderly people, care manager, care provider, long term care

Procedia PDF Downloads 302
67 Interactively Developed Capabilities for Environmental Management Systems: An Exploratory Investigation of SMEs

Authors: Zhuang Ma, Zihan Zhang, Yu Li

Abstract:

Environmental concerns from stakeholders (e.g., governments and customers) have pushed firms to integrate environmental management systems into business processes such as R&D, manufacturing, and marketing. Environmental management systems cover environmental risk management and pollution control (e.g., air pollution control, wastewater treatment, noise control, energy recycling, and solid waste treatment) through raw material management, the elimination and reduction of contaminants, and recycling and reuse in firms' operational processes. Despite increasing studies on firms' proactive adoption of environmental management, the focus is primarily on large corporations operating in developed economies. Investigations into the environmental management efforts of small and medium-sized enterprises (SMEs) are scarce. This is problematic because, unlike large corporations, SMEs have limited awareness, resources, and capabilities for adapting their operational routines to address environmental impacts. The purpose of this study is to explore how SMEs develop organizational capabilities through interactions with business partners (e.g., environmental management specialists and customers). Drawing on the resource-based view (RBV) and an organizational capabilities perspective, this study investigates the interactively developed capabilities that allow SMEs to adopt environmental management systems. Using an exploratory approach, the study includes 12 semi-structured interviews with senior managers from four SMEs, two environmental management specialists, and two customers in the pharmaceutical sector in Chongqing, China.
The findings of this study include three key organizational capabilities: 1) a ‘dynamic marketing’ capability, which allows SMEs to recoup their investments in environmental management systems by developing environmentally friendly products that address customers' ever-changing needs; 2) a ‘process improvement’ capability, which allows SMEs to select and adopt the latest technologies from the biology, chemistry, new materials, and new energy sectors into the production system for improved environmental performance and cost reductions; and 3) a ‘relationship management’ capability, which allows SMEs to improve their corporate image among the public, social media, government agencies, and customers, who in turn help SMEs to overcome their competitive disadvantages. These interactively developed capabilities help SMEs to counter larger competitors' foothold in the local market, reduce market constraints, and exploit competitive advantages in other regions of China (e.g., Guangdong and Jiangsu). These findings extend the RBV and the organizational capabilities perspective: SMEs can develop the essential resources and capabilities required for environmental management through interactions with upstream and downstream business partners. While a limited number of studies have highlighted the importance of interactions among SMEs, customers, suppliers, NGOs, industrial associations, and consulting firms, they did not explore the specific capabilities developed through these interactions. Additionally, the findings can explain how proactive adoption of environmental management systems helps some SMEs to overcome institutional and market restraints on their products, thereby springboarding into larger, more environmentally demanding, yet more profitable markets than their existing ones.

Keywords: capabilities, environmental management systems, interactions, SMEs

Procedia PDF Downloads 180
66 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach

Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman

Abstract:

Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using a differenced normalized burnt ratio method and spectral unmixing methods. The study area has rugged terrain with sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major causes of fire in this region are anthropogenic: human-induced fires set to obtain fresh leaves, fires to scare wild animals away from agricultural crops, grazing practices within reserved forests, and fires ignited for cooking and other purposes. The fires caused by the above practices affect a large area on the ground, necessitating precise estimation for further management and policy making. In the present study, two approaches have been used for burnt area analysis. The first uses the differenced normalized burnt ratio (dNBR) index, computed from burn ratio values generated using the Short-Wave Infrared (SWIR) and Near Infrared (NIR) bands of Sentinel-2 imagery. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It was found that the dNBR produces good results in fire-affected areas with a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain where the landscape is strongly shaped by topographical variation, vegetation type, and tree density, the results may be heavily influenced by the effects of topography, complexity of tree composition, fuel load composition, and soil moisture.
Hence, such variations in the factors influencing burnt area assessment may not be effectively handled by the dNBR approach commonly followed for burnt area assessment over large areas. Another approach attempted in the present study therefore utilizes a spectral unmixing method in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach utilizing Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, and are then used to generate outputs with machine learning tools. The analysis of the results indicates that fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve noise in the data and can classify individual pixels into the precise burnt/unburnt class.
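The dNBR step described above reduces to two simple per-pixel formulas: NBR = (NIR − SWIR) / (NIR + SWIR), and dNBR = NBR(pre-fire) − NBR(post-fire). A minimal sketch, using reflectance values and the commonly cited USGS severity thresholds rather than values from this study:

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectances."""
    return (nir - swir) / (nir + swir)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire NBR minus post-fire NBR."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

def severity(d):
    """Coarse burn-severity class using commonly cited USGS dNBR thresholds."""
    if d < 0.1:
        return "unburned"
    if d < 0.27:
        return "low severity"
    if d < 0.66:
        return "moderate severity"
    return "high severity"

# Healthy vegetation pre-fire (high NIR, low SWIR) vs. charred surface post-fire
d = dnbr(nir_pre=0.5, swir_pre=0.2, nir_post=0.2, swir_post=0.4)
print(round(d, 3), severity(d))
```

In the paper's workflow these formulas would be applied to the Sentinel-2 SWIR and NIR bands pixel by pixel, which is exactly where the topographic and fuel-load effects noted above enter the result.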

Keywords: categorical data, log linear modeling, neural network, shifting cultivation

Procedia PDF Downloads 55
65 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach

Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi

Abstract:

Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times, since this minimizes passengers' waiting times at stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information service to serve passengers better and attract more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. To address this issue, various models for predicting bus travel times have been developed recently, but most focus on smaller road networks because of their relatively subpar performance on vast networks in high-density urban areas. This paper develops a deep learning-based architecture using a single-step, multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network, using heterogeneous bus transit data collected from the GTFS database. Data were gathered from multiple bus routes in Saint Louis, Missouri, over one week. In this study, a Gated Recurrent Unit (GRU) neural network was used to predict mean vehicle travel times for different hours of the day at multiple stations along multiple routes. The historical time steps and prediction horizon were set to 5 and 1, respectively, meaning that five hours of historical average travel time data were used to predict the average travel time for the following hour. Spatial and temporal information and historical average travel times were extracted from the dataset as model inputs. Station distances and sequence numbers were used as adjacency matrices for the spatial inputs, and the time of day (hour) was used for the temporal inputs.
Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included in the model to make it more robust. The model's performance was evaluated using the mean absolute percentage error (MAPE). The observed prediction errors for various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model predicted travel times more accurately during peak traffic hours, with a MAPE of around 14%, and performed less accurately during the latter part of the day. In the context of a complicated transportation network in a high-density urban area, the model demonstrated its applicability to real-time travel time prediction for public transportation and the high quality of its predictions.
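The single-step setup described above (five historical hourly averages in, a one-hour horizon out, per station) can be sketched with a minimal GRU cell. This is an illustrative numpy reconstruction, not the authors' implementation: the hidden size, feature choice and weight names are assumptions, and the random weights are untrained.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU step: x is (stations, features), h is (stations, hidden)."""
    z = sigmoid(x @ p["Wz"] + h @ p["Uz"])            # update gate
    r = sigmoid(x @ p["Wr"] + h @ p["Ur"])            # reset gate
    h_tilde = np.tanh(x @ p["Wh"] + (r * h) @ p["Uh"])
    return (1 - z) * h + z * h_tilde

def predict_next_hour(history, p):
    """history: (5, stations, features) -> predicted travel time per station."""
    stations = history.shape[1]
    h = np.zeros((stations, p["Uz"].shape[0]))
    for t in range(history.shape[0]):                 # 5 historical time steps
        h = gru_step(history[t], h, p)
    return (h @ p["Wout"]).ravel()                    # 1-step-horizon readout

rng = np.random.default_rng(0)
hidden, feats = 8, 3          # e.g. avg travel time, station distance, hour
params = {k: rng.normal(scale=0.3, size=s) for k, s in {
    "Wz": (feats, hidden), "Uz": (hidden, hidden),
    "Wr": (feats, hidden), "Ur": (hidden, hidden),
    "Wh": (feats, hidden), "Uh": (hidden, hidden),
    "Wout": (hidden, 1)}.items()}
history = rng.normal(size=(5, 10, feats))             # 5 hours, 10 stations
pred = predict_next_hour(history, params)             # one value per station
```

In a trained version the weights would be fitted to the GTFS-derived averages, and the spatial adjacency information would enter through the feature vector.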

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction

Procedia PDF Downloads 72
64 Numerical Investigations of Unstable Pressure Fluctuations Behavior in a Side Channel Pump

Authors: Desmond Appiah, Fan Zhang, Shouqi Yuan, Wei Xueyuan, Stephen N. Asomani

Abstract:

The side channel pump has distinctive hydraulic performance characteristics compared with other vane pumps because it generates high pressure heads in only one impeller revolution. Hence, its utilization is soaring in the petrochemical, food processing, automotive and aerospace fuel-pumping fields, where high heads are required at low flows. The side channel pump is characterized by unstable flow: after fluid enters the impeller passage, it moves into the side channel, returns to the impeller, and then moves on to the next circulation, so the flow leaves the side channel pump following a helical path. However, the pressure fluctuation exhibited in the flow contributes greatly to the unwanted noise and vibration associated with it. In this paper, a side channel pump prototype was examined thoroughly through numerical calculations based on the SST k-ω turbulence model to ascertain the pressure fluctuation behavior. The pressure fluctuation intensity of the 3D unstable flow dynamics was carefully investigated under three working conditions: 0.8QBEP, 1.0QBEP and 1.2QBEP. The results showed that, at the impeller and side channel interface (z=0), the pressure fluctuation distribution around the pressure side of the blade is greater than that on the suction side for all three operating conditions. The part-load condition 0.8QBEP recorded the highest pressure fluctuation distribution because of the high circulation velocity, which causes an intense exchange flow between the impeller and side channel. Time- and frequency-domain spectra of the pressure fluctuation patterns in the impeller and the side channel were also analyzed at the best efficiency point, QBEP, using the solution from the numerical calculations.
The time-domain analysis showed that the pressure fluctuation in the impeller flow passage increased steadily until the flow reached the interrupter, which separates the low pressure at the inflow from the high pressure at the outflow. The pressure fluctuation amplitudes in the frequency-domain spectrum at the different monitoring points depicted a gently decreasing trend that was common to all operating conditions. The frequency domain also revealed that the main excitation frequencies occurred at 600 Hz, 1200 Hz and 1800 Hz, continuing at integer multiples of the rotating shaft frequency. The mass flow exchange plots indicated that the side channel pump is characterized by many vortex flows. Operating conditions 0.8QBEP and 1.0QBEP depicted fewer, similar vortex flows, while 1.2QBEP recorded many vortex flows around the inflow, middle and outflow regions. The results of the numerical calculations were finally verified experimentally. The performance characteristic curves from the simulated results showed that the 0.8QBEP working condition recorded a head increase of 43.03% and an efficiency decrease of 6.73% compared with 1.0QBEP. It can be concluded that, for industrial applications where high heads are mostly required, the side channel pump can be designed to operate at part-load conditions. This paper can serve as a source of information for optimizing reliable performance and widening the applications of side channel pumps.
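The frequency-domain step above (extracting excitation frequencies such as 600, 1200 and 1800 Hz from a monitoring-point pressure trace) can be sketched with a one-sided FFT. The synthetic signal below, with its sampling rate and component amplitudes, is an assumption for illustration, not the paper's CFD data.

```python
import numpy as np

fs = 20000                        # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)   # 1 s record gives 1 Hz frequency resolution
rng = np.random.default_rng(1)

# Synthetic monitoring-point pressure fluctuation: tones at the reported
# excitation frequencies plus weak broadband noise.
p = (1.00 * np.sin(2 * np.pi * 600 * t)
     + 0.50 * np.sin(2 * np.pi * 1200 * t)
     + 0.25 * np.sin(2 * np.pi * 1800 * t)
     + 0.05 * rng.normal(size=t.size))

spec = np.abs(np.fft.rfft(p)) * 2 / t.size       # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

top3 = freqs[np.argsort(spec)[-3:]]              # three dominant spectral peaks
```

For this constructed trace the three dominant bins fall at 600, 1200 and 1800 Hz, mirroring how the excitation frequencies and their gently decaying amplitudes are read off the monitoring-point spectra.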

Keywords: exchanged flow, pressure fluctuation, numerical simulation, side channel pump

Procedia PDF Downloads 136
63 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specially trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. This makes it possible to generate many reasonably accurate alternatives from which the witness can attempt to identify a suspect. It reduces subjectivity in decision making, both by the eyewitness and by the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. Using the publicly available CelebFaces Attributes dataset (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images.
Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the CelebA training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals, from Interpol or other law enforcement agencies, are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate their similarity. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics would demonstrate the accuracy of the approach, supporting its use as an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information.
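The conditioning pipeline described above (verbal description encoded to a vector, then concatenated with latent noise at the generator input) can be illustrated structurally. This is a toy numpy sketch under loud assumptions: the keyword vocabulary, the bag-of-words encoder standing in for the learned RNN, and the single-layer "generator" are all hypothetical and untrained.

```python
import numpy as np

rng = np.random.default_rng(2)

VOCAB = ["male", "female", "young", "beard", "glasses", "round_face"]

def encode_description(words):
    """Toy stand-in for the RNN text encoder: a bag-of-words vector over
    attribute keywords. The real system learns embeddings with an RNN."""
    v = np.zeros(len(VOCAB))
    for w in words:
        if w in VOCAB:
            v[VOCAB.index(w)] = 1.0
    return v

# Hypothetical one-layer "generator": latent noise z concatenated with the
# text embedding, mapped to a 64x64 grayscale image in [-1, 1].
Z_DIM, IMG = 16, 64
W = rng.normal(scale=0.1, size=(Z_DIM + len(VOCAB), IMG * IMG))

def generate(description_words):
    z = rng.normal(size=Z_DIM)                       # latent noise
    cond = np.concatenate([z, encode_description(description_words)])
    return np.tanh(cond @ W).reshape(IMG, IMG)

img = generate(["male", "beard", "glasses"])
```

The point of the sketch is the interface, not the quality: varying z with the description fixed yields the "many reasonably accurate alternatives" the abstract describes for the witness to review.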

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 162
62 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize the network, particularly with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally has to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows a large number of random filters to be used at the cost of only one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolution in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
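The core idea above (fixed random filters at several scales, each paired with a single learnable scalar that corresponds to the filter's standard deviation) can be sketched as a forward pass. This is a minimal numpy sketch under assumed layer sizes, not the authors' architecture; only the scalar weights would be trained.

```python
import numpy as np

rng = np.random.default_rng(3)

def conv2d_same(img, kernel):
    """Naive 'same' 2D convolution with zero padding."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

# Fixed (never trained) random filters at multiple scales ...
sizes = [3, 5, 7]
filters = [rng.normal(size=(k, k)) / k for k in sizes]
# ... and one learnable scalar per filter, initialized from its std dev.
weights = np.array([f.std() for f in filters])

img = rng.normal(size=(16, 16))
responses = np.stack([w * conv2d_same(img, f)
                      for w, f in zip(weights, filters)])
# Only len(filters) scalars are optimized, regardless of kernel size,
# which is why larger kernels add no back-propagation unknowns.
```

The multi-scale responses are then stacked for the next layer; in training, gradients flow only into `weights`, keeping the parameter count at one scalar per filter.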

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 290
61 Thulium Laser Design and Experimental Verification for NIR and MIR Nonlinear Applications in Specialty Optical Fibers

Authors: Matej Komanec, Tomas Nemecek, Dmytro Suslov, Petr Chvojka, Stanislav Zvanovec

Abstract:

Nonlinear phenomena in the near- and mid-infrared region are attracting scientific attention, mainly due to the possibilities of supercontinuum generation and its subsequent utilization for ultra-wideband applications such as absorption spectroscopy or optical coherence tomography. Thulium-based fiber lasers provide access to high-power ultrashort pump pulses in the vicinity of 2000 nm, which can easily be exploited for various nonlinear applications. The paper presents a simulation and experimental study of a pulsed thulium laser designed for near-infrared (NIR) and mid-infrared (MIR) nonlinear applications in specialty optical fibers. In the first part of the paper, the thulium laser is discussed. It is based on a gain-switched seed laser and a series of amplification stages for obtaining output peak powers on the order of kilowatts for pulses shorter than 200 ps at full-width at half-maximum. The pulsed thulium laser is first studied in simulation software, focusing on the seed-laser properties. Afterward, a thulium-based pre-amplification stage is discussed, with a focus on low-noise signal amplification, high signal gain, and the elimination of pulse distortions during propagation in the gain medium. Following the pre-amplification stage, a second gain stage incorporating a shorter thulium fiber with an increased rare-earth dopant concentration is evaluated. Lastly, a power-booster stage, in which kilowatt peak powers should be achieved, is analyzed. The analytical study is then validated by the experimental campaign, and the simulation model is corrected based on measured component parameters such as insertion losses, cross-talk and polarization dependence. The second part of the paper evaluates the utilization of nonlinear phenomena and their specific features in the vicinity of 2000 nm, compared with e.g. 1550 nm, and presents supercontinuum modelling based on the pulsed output of the thulium laser.
The supercontinuum generation simulation provides reasonably accurate results once the fiber dispersion profile is precisely defined and the fiber nonlinearity is known; the input pulse shape and peak power must also be known, which is assured by the experimental measurement of the studied thulium pulsed laser. The supercontinuum simulation model is put in relation to the designed and characterized specialty optical fibers, which are discussed in the third part of the paper. The focus is placed on silica and mainly on non-silica fibers (fluoride, chalcogenide, lead-silicate) in their conventional, microstructured or tapered variants. Parameters such as the dispersion profile and nonlinearity of the exploited fibers were characterized either with an accurate model developed in COMSOL software or by direct experimental measurement to achieve even higher precision. The paper then combines all three studied topics and presents a possible application of such a thulium pulsed laser system working with specialty optical fibers.
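The role of the dispersion profile and nonlinearity in such supercontinuum modelling can be illustrated with a minimal split-step Fourier sketch of self-phase-modulation-driven spectral broadening, the basic mechanism behind supercontinuum generation. Normalized units are used, and all pulse and fiber parameters below are illustrative assumptions, not the paper's thulium-system values.

```python
import numpy as np

n = 1024
t = np.linspace(-20, 20, n, endpoint=False)   # normalized time grid
dt = t[1] - t[0]
w = 2 * np.pi * np.fft.fftfreq(n, dt)         # angular-frequency grid

beta2, gamma = 1.0, 1.0                       # dispersion and Kerr terms
A = np.sqrt(10.0) / np.cosh(t)                # sech input pulse, P0 = 10

def rms_width(field):
    """RMS spectral width of the field."""
    s = np.abs(np.fft.fft(field)) ** 2
    return np.sqrt(np.sum(w ** 2 * s) / np.sum(s))

w_in = rms_width(A)
dz, steps = 0.005, 200                        # propagate to z = 1
for _ in range(steps):
    # linear (dispersive) step in the Fourier domain
    A = np.fft.ifft(np.exp(-0.5j * beta2 * w ** 2 * dz) * np.fft.fft(A))
    # nonlinear (Kerr) step in the time domain
    A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)
w_out = rms_width(A)
# The Kerr phase generates new frequencies, so w_out exceeds w_in.
```

A realistic model of the 2000 nm system would replace the scalar `beta2` with the measured dispersion profile of the specialty fiber and add higher-order terms (self-steepening, Raman), but the split-step structure is the same.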

Keywords: nonlinear phenomena, specialty optical fibers, supercontinuum generation, thulium laser

Procedia PDF Downloads 321
60 Office Workspace Design for Policewomen in Assam, India: Applications for Developing Countries

Authors: Shilpi Bora, Abhirup Chatterjee, Debkumar Chakrabarti

Abstract:

Organizations in all sectors around the world are increasingly revisiting their workplace strategies with due concern for the women working in them. Limited office space and rigid work arrangements contribute to lower job satisfaction and greater work impediments in any organization. Flexible workspace strategies are indispensable to accommodate the progressive rise of modular workstations and the involvement of women. Today's generation of employees deserves adaptable office environments with employee-friendly job conditions and strategies. The workplace nowadays rests on rapid organizational change in a progressive and flexible work culture, and occupational well-being practices need to keep pace with the rapid changes in office-based work. Working in an office with awkward postures or for long periods can cause pain, discomfort, and injury. The world is moving towards an era of globalization and progress. The 4000 women police personnel constitute less than one per cent of the total police strength of India. Many innovative fields are growing fast, and it is important that women be accommodated in those arenas. Outworn trends should be set aside to open up fresh opportunities and possibilities of development and success through greater involvement of women in the workplace. The notion of women policing is gaining ground throughout the world, and various countries are making earnest efforts to mainstream women in policing. As the role of women in policing grows, the accessibility of general police stations for women should also be considered. Accordingly, the impact of police station workspaces on employee productivity has been widely deliberated as a crucial contributor to employee satisfaction, leading to better functional motivation.
The present research therefore looked into the office workstation design of police stations with reference to womanhood-specific issues, in order to improve the occupational wellbeing of policewomen. Data were collected through personal interviews and a subjective assessment questionnaire administered to thirty women police personnel of different ranks posted in Guwahati, Assam, India, selected by purposive non-probability sampling, who also gave their views on these issues. Scrutiny of the collected data revealed that office design has a substantial impact on policewomen's job satisfaction in the police station. In this study, the workspace was designed so that the set of factors considered would act on the individual to ensure increased productivity. Office design factors such as furniture, noise, temperature, lighting and spatial arrangement were considered. The primary feature affecting the productivity of policewomen was the furniture used in the workspace, which was found to disturb their everyday and overall productivity. It was therefore recommended to introduce proper and adequate ergonomic design interventions to improve the office design for better performance. Such studies are urgently needed to empower women and enable their talent to come forward in service of the nation. Office workspace design is also of critical importance in several other occupations where workstations need further improvement.

Keywords: office workspace design, policewomen, womanhood concerns at workspace, occupational wellbeing

Procedia PDF Downloads 225
59 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency to the pre-determined signature. The matrix is then correlated, in real time, with the pre-determined stuck-pipe signature for the field. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant ones. The correlation output is interpreted as a probability curve for stuck pipe incident prediction in real time.
Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This detection accuracy could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting the stuck pipe problem requires a method that captures geological, geophysical and drilling data and recognizes the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach and its ability to produce such signatures and predict this NPT event.

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 230
58 Sustainable Pavements with Reflective and Photoluminescent Properties

Authors: A.H. Martínez, T. López-Montero, R. Miró, R. Puig, R. Villar

Abstract:

An alternative way to mitigate the heat island effect is to pave streets and sidewalks with pavements that reflect incident solar energy, keeping their surface temperature lower than that of conventional pavements. The “Heat island mitigation to prevent global warming by designing sustainable pavements with reflective and photoluminescent properties (RELUM) Project” has been carried out with this intention in mind. Its objective has been to develop bituminous mixtures for urban pavements that help in the fight against global warming and climate change, while improving the quality of life of citizens. The technology employed has focused on the use of reflective pavements, using bituminous mixes made with synthetic bitumens and light pigments that provide high solar reflectance. In addition to this advantage, the light surface colour achieved with these mixes can improve visibility, especially at night. In parallel, and building on this approach, an appropriate type of treatment has also been developed for bituminous mixtures to make them capable of illuminating at night, giving rise to photoluminescent applications, which can reduce energy consumption and increase road safety through improved night-time visibility. The work carried out consisted of designing different bituminous mixtures in which the nature of the aggregate was varied (porphyry, granite and limestone), as well as the colour of the mixture, which was lightened by adding pigments (titanium dioxide and iron oxide). The reflectance of each of these mixtures was measured, as well as the temperatures recorded throughout the day at different times of the year. The results obtained make it possible to propose bituminous mixtures whose characteristics can contribute to the reduction of urban heat islands.
Among the most outstanding results is the mixture made with synthetic bitumen, white limestone aggregate and a small percentage of titanium dioxide, which would be the most suitable for urban surfaces without road traffic, given its high reflectance and the greater temperature reduction it offers. With this solution, a surface temperature reduction of 9.7°C is achieved at the beginning of the night in the summer season, when radiation is highest. As for luminescent pavements, paints with different contents of strontium aluminate and glass microspheres have been applied to asphalt mixtures, and the luminance of all the applications designed has been measured by exciting them with electric bulbs that simulate the effect of sunlight. The results obtained at this stage confirm the ability of all the designed dosages to emit light for a certain time, varying according to the proportions used. Not only has the effect of the strontium aluminate and microsphere content been observed, but also the influence of the colour of the base on which the paint is applied: the lighter the base, the higher the luminance. Ongoing studies are focusing on the evaluation of the durability of the designed solutions in order to determine their lifetime.

Keywords: heat island, luminescent paints, reflective pavement, temperature reduction

Procedia PDF Downloads 30
57 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems

Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille

Abstract:

Developing better solutions for train rescheduling problems has drawn the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects; it focuses on timetables, rolling stock and crew duties, but does not take infrastructure limits into account. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to optimally share the available power among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as train speed profiles, voltage along the catenary lines, temperatures, etc. The optimization problem to be solved has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a sensitivity analysis phase in order to analyze the behavior of the system and assist the decision-making process and/or more precise optimization. This is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the variation of the outputs. Factor fixing then allows the input variables that do not influence the outputs to be calibrated. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing.
The approach is tested on a simple railway system, with nominal traffic running on a single-track line. The incident considered is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the trains' departure times, the train speed reduction at a given position, and the number of trains (with cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing to more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. A Pareto front is also built.
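The factor-prioritization step above (first-order Sobol indices identifying which inputs drive the output variation) can be sketched with a Saltelli-style pick-freeze estimator. The two-input linear surrogate below is a deliberately crude stand-in for the multiphysics railway simulation, and the variable names are assumptions for illustration.

```python
import numpy as np

def first_order_sobol(model, d, n=4096, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices for a
    model with d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S1 = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # resample only input i
        S1[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S1

# Toy surrogate: "departure spacing" (column 0) dominates the output,
# "speed reduction" (column 1) matters less. Illustrative only.
def surrogate(X):
    return 4.0 * X[:, 0] + 2.0 * X[:, 1]

S1 = first_order_sobol(surrogate, d=2)
# For this linear model the exact indices are S1 = [0.8, 0.2].
```

A dominant first index, as the paper reports for the spacing between departure times, is exactly what factor prioritization looks for; inputs with indices near zero are candidates for factor fixing.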

Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable

Procedia PDF Downloads 399
56 Technological Challenges for First Responders in Civil Protection; the RESPOND-A Solution

Authors: Georgios Boustras, Cleo Varianou Mikellidou, Christos Argyropoulos

Abstract:

Summer 2021 was marked by a number of prolific fires in the EU (Greece, Cyprus, France) as well as outside the EU (USA, Turkey, Israel). This series of dramatic events has stretched national civil protection systems and first responders in particular. Despite the introduction of national, regional and international frameworks (e.g. rescEU), a number of challenges have arisen, not all of them related to climate change. RESPOND-A (funded by the European Commission under Horizon 2020, Contract Number 883371) introduces a unique five-tier project architecture for associating modern telecommunications technology with novel practices that help First Responders save lives, while safeguarding themselves, more effectively and efficiently. The introduced architecture includes Perception, Network, Processing, Comprehension, and User Interface layers, which can be flexibly elaborated to support multiple levels and types of customization, so that the intended technologies and practices can adapt to any European Environment Agency (EEA)-type disaster scenario. During the preparation of the RESPOND-A proposal, some of our First Responder partners expressed the need for an information management system that could boost existing emergency response tools, while others envisioned a complete end-to-end network management system offering high Situational Awareness, Early Warning and Risk Mitigation capabilities. The intuition behind these needs and visions rests on the long-term experience of these responders, as well as their persistent worry that the evolving threat of climate change and the consequences of industrial accidents will become more frequent and severe. Three large-scale pilot studies are planned in order to illustrate the capabilities of the RESPOND-A system.
The first pilot study will focus on the deployment and operation of all available technologies for continuous communications, enhanced Situational Awareness and improved health and safety conditions for First Responders, in a big-fire scenario in a Wildland Urban Interface (WUI) zone. An important issue will be examined during the second pilot study. Unobstructed communication, in the form of the flow of information between the wider public and the first responders in both directions, is severely affected during a crisis. Call centers are flooded with requests, and communication is compromised or breaks down on many occasions, which in turn hampers the effort to build a common operational picture for all first responders. At the same time, the information that reaches the operational centers from the public is scarce, especially in the aftermath of an incident. Understandably, when traffic is disrupted, aerial means are often the only way to observe the scene and perform rapid area surveys. Results and work in progress will be presented in detail, and challenges in relation to civil protection will be discussed.

Keywords: first responders, safety, civil protection, new technologies

Procedia PDF Downloads 142
55 A Novel PWM/PFM Controller for PSR Fly-Back Converter Using a New Peak Sensing Technique

Authors: Sanguk Nam, Van Ha Nguyen, Hanjung Song

Abstract:

For low-power applications such as adapters for portable devices and USB chargers, the primary side regulation (PSR) fly-back converter is widely used in lieu of the conventional fly-back converter with an opto-coupler because of its simpler structure and lower cost. In the literature, there have been studies focusing on the design of PSR circuits; however, the conventional sensing method in PSR circuits, based on an RC delay, has lower accuracy than the conventional fly-back converter using an opto-coupler. In this paper, we propose a novel PWM/PFM controller using a new sensing technique for the PSR fly-back converter, which can control the output voltage accurately. The conventional PSR circuit senses the output voltage information from the auxiliary winding to regulate the duty cycle of the clock that controls the output voltage. In the sensing signal waveform, there are two transient points, at which the voltage equals Vout+VD and Vout, respectively. In order to sense the output voltage, the PSR circuit must detect the time at which the output diode current reaches zero. In the conventional PSR fly-back converter, the sensing signal at this time has a soft negative slope, which makes it difficult to detect the output voltage information, since a delay in the sensing signal or switching clock may exist, leading to unstable operation of the PSR fly-back converter. In this paper, instead of detecting the output voltage on a soft negative slope, a sharp positive slope is used to sense the proper output voltage information. The proposed PSR circuit consists of a saw-tooth generator, a summing circuit, a sample-and-hold circuit and a peak detector. There is also a start-up circuit which protects the chip from high surge currents when the converter is turned on. Additionally, to reduce the standby power loss, a second mode operating at low frequency is designed alongside the main high-frequency mode.
In general, the operation of the proposed PSR circuit can be summarized as follows. At the time the output information is sensed from the auxiliary winding, a saw-tooth signal is generated by the saw-tooth generator. Both signals are then summed using the summing circuit. After this process, the slope of the sensing signal at the time the diode current reaches zero becomes sharp and positive, making the peak easy to detect. The output of the summing circuit is then fed into the peak detector and the sample-and-hold circuit, so the output voltage can be properly sensed. In this way, more accurate output voltage information can be sensed, and the timing margin is extended even when the sensing signal is delayed or noise is present, using only a simple circuit structure compared with conventional circuits, while performance is sufficiently enhanced. Circuit verification was carried out using a 0.35 µm 700 V MagnaChip process. The simulation results show a maximum sensing-signal error of 5 mV under various load and line conditions, indicating stable converter operation. Compared with the conventional circuit, a very small error was achieved using only analog circuits. In summary, a PWM/PFM controller using a simple and effective sensing method for the PSR fly-back converter has been presented. The circuit structure is simple compared with conventional designs, and the simulation results confirm the validity of the design.
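The benefit of the saw-tooth summing step can be illustrated numerically. The sketch below is a toy model, not the authors' circuit: the signal shape, slopes, and knee time are invented assumptions. It only shows how adding a rising ramp to a waveform with a shallow negative-slope knee produces a local maximum at the knee that a simple peak detector (here, an argmax) can locate.

```python
import numpy as np

# Hypothetical sensing waveform: a flat plateau, then a shallow
# (hard-to-detect) negative slope after the diode current reaches zero.
t = np.linspace(0.0, 1.0, 1001)
t_knee = 0.6                       # assumed instant the diode current is zero
sense = np.where(t < t_knee, 1.0, 1.0 - 0.2 * (t - t_knee))

# Summing a rising saw-tooth (slope smaller than the post-knee fall rate)
# turns the shallow knee into a sharp peak that a peak detector can find.
saw = 0.1 * t
summed = sense + saw

detected = t[np.argmax(summed)]
print(detected)  # close to t_knee
```

The same idea in hardware is what the summing circuit plus peak detector accomplish: slope detection on the raw waveform is replaced by peak detection on the summed waveform.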

Keywords: primary side regulation, PSR, sensing technique, peak detector, PWM/PFM control, fly-back converter

Procedia PDF Downloads 338
54 Vehicle Timing Motion Detection Based on Multi-Dimensional Dynamic Detection Network

Authors: Jia Li, Xing Wei, Yuchen Hong, Yang Lu

Abstract:

Detecting vehicle behavior has always been a focus of intelligent transportation, but with the explosive growth in the number of vehicles and the complexity of the road environment, vehicle behavior videos captured by traditional surveillance are no longer sufficient for the study of vehicle behavior. The traditional method of manually labeling vehicle behavior is too time-consuming and labor-intensive, while existing object detection and tracking algorithms have poor practicability and a low behavioral-location detection rate. This paper proposes a vehicle behavior detection algorithm based on a dual-stream convolution network and a multi-dimensional video dynamic detection network. In the videos, straight-line driving is treated as background behavior; changing lanes, turning, and turning around are set as target behaviors. The purpose of this model is to automatically mark the target behaviors of vehicles in untrimmed videos. First, target behavior proposals in the long video are extracted through the dual-stream convolution network. The model uses the dual-stream convolutional network to generate a one-dimensional action score waveform, and then extracts segments with scores above a given threshold M as preliminary vehicle behavior proposals. Second, the preliminary proposals are pruned and identified using the multi-dimensional video dynamic detection network. Drawing on hierarchical reinforcement learning, the multi-dimensional network includes a Timer module and a Spacer module, where the Timer module mines temporal information in the video stream and the Spacer module extracts spatial information in the video frame. The Timer and Spacer modules are implemented with Long Short-Term Memory (LSTM) and start from an all-zero hidden state. The Timer module uses the Transformer mechanism to extract timing information from the video stream and extracts features by linear mapping and other methods.
Finally, the model fuses the temporal and spatial information and obtains the location and category of the behavior through a softmax layer. This paper uses recall and precision to measure the performance of the model. Extensive experiments show that, on the dataset of this paper, the proposed model has obvious advantages over existing state-of-the-art behavior detection algorithms. When the Temporal Intersection over Union (TIoU) threshold is 0.5, the Average Precision reaches 36.3% (compared with 21.5% for the baselines). In summary, this paper proposes a vehicle behavior detection model based on a multi-dimensional dynamic detection network, introducing spatial and temporal information to extract vehicle behaviors from long videos. Experiments show that the proposed algorithm is accurate in vehicle timing behavior detection. In the future, the focus will be on simultaneously detecting the timing behavior of multiple vehicles in complex traffic scenes (such as a busy street) while maintaining accuracy.
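The first-stage proposal extraction described above, thresholding a one-dimensional action score waveform at M, can be sketched as follows. This is a simplified illustration with made-up scores, not the authors' implementation:

```python
import numpy as np

def extract_proposals(scores, threshold):
    """Return (start, end) frame-index spans where the per-frame action
    score stays above the threshold M (end index is exclusive)."""
    above = scores > threshold
    # Rising/falling edges of the boolean mask mark segment boundaries.
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(scores)]
    return [(int(s), int(e)) for s, e in zip(starts, ends)]

scores = np.array([0.1, 0.2, 0.8, 0.9, 0.7, 0.2, 0.1, 0.6, 0.9, 0.3])
print(extract_proposals(scores, 0.5))  # [(2, 5), (7, 9)]
```

Each returned span would then be passed to the second-stage network for pruning and classification.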

Keywords: vehicle behavior detection, convolutional neural network, long short-term memory, deep learning

Procedia PDF Downloads 130
53 Multi-Modality Brain Stimulation: A Treatment Protocol for Tinnitus

Authors: Prajakta Patil, Yash Huzurbazar, Abhijeet Shinde

Abstract:

Aim: To develop a treatment protocol for the management of tinnitus through multi-modality brain stimulation. Methodology: The present study included 33 adults with unilateral (31 subjects) or bilateral (2 subjects) chronic tinnitus, with and/or without hearing loss, independent of etiology. The treatment protocol included 5 consecutive sessions with a follow-up of 6 months. Each session was divided into 3 parts: • Pre-treatment: a) Informed consent b) Pitch and loudness matching. • Treatment: Bimanual paper-pen task with tinnitus masking for 30 minutes. • Post-treatment: a) Pitch and loudness matching b) Directive counseling and obtaining feedback. The paper-pen task was performed bimanually and involved carrying out two different writing activities in different contexts; the level of difficulty of the activities was increased in successive sessions. Narrowband noise at the same frequency as the tinnitus was presented at 10 dB SL for 30 minutes in the ear with tinnitus. Result: The perception of tinnitus disappeared in 4 subjects, while in the remaining subjects it was reduced to an intensity whose perception no longer troubled them, without causing residual facilitation. Across subjects, the intensity of tinnitus decreased by an average of 45 dB, and in a few subjects by more than 45 dB. The approach resulted in statistically significant reductions in Tinnitus Functional Index and Tinnitus Handicap Inventory scores; pre- and post-treatment Tinnitus Handicap Inventory scores dropped from 90% to 0%. Discussion: Brain mapping (qEEG) studies report multiple parallel overlapping neural subnetworks in the non-auditory areas of the brain that exhibit abnormal, constant, and spontaneous neural activity involved in the perception of tinnitus, with each subnetwork and area reflecting a specific aspect of the tinnitus percept.
The paper-pen task and directive counseling are designed and delivered in a way assumed to induce normal, rhythmically constant, and premeditated neural activity and to mask the abnormal, constant, and spontaneous neural activity in the above-mentioned subnetworks and specific non-auditory areas. Counseling focused on breaking the vicious cycle causing and maintaining the presence of tinnitus. Diverting auditory attention alone is insufficient to reduce the perception of tinnitus; conscious awareness of tinnitus can be suppressed when individuals engage in cognitively demanding tasks of a non-auditory nature, such as the paper-pen task used in the present study. Carrying out this task requires selective, divided, sustained, simultaneous, and split attention acting cumulatively. The bimanual paper-pen task represents a top-down activity, relying on the brain's ability to selectively attend to the bimanual writing activity as the relevant stimulus and to ignore tinnitus as the irrelevant stimulus. Conclusion: The study suggests that this novel treatment approach is cost-effective, time-saving, and effective in eliminating tinnitus or reducing its intensity to a negligible level, thereby eliminating negative reactions toward tinnitus.

Keywords: multi-modality brain stimulation, neural subnetworks, non-auditory areas, paper-pen task, top-down activity

Procedia PDF Downloads 147
52 A Semi-supervised Classification Approach for Trend Following Investment Strategy

Authors: Rodrigo Arnaldo Scarpel

Abstract:

Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism, rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. Thus, in trend following one responds to market movements that have recently happened or are currently happening, rather than to what will happen. Optimally, the trend following strategy catches a bull market at its early stage, rides the trend, and liquidates the position at the first evidence of the subsequent bear market. To apply the trend following strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to identify short-, mid-, and long-term fluctuations and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules following the trend following philosophy. Recently, some works have applied machine learning techniques for trading rule discovery. In those works, rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of focusing on the use of machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach was used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated.
Semi-supervised learning is used for model training when only part of the data is labeled; semi-supervised classification aims to train a classifier from both the labeled and unlabeled data, such that it outperforms a supervised classifier trained only on the labeled data. To illustrate the proposed approach, daily trade information was employed, including the open, high, low, and closing values and volume, from January 1, 2000 to December 31, 2022, for the São Paulo Exchange Composite index (IBOVESPA). Over this period, consistent changes in price, upward or downward, were identified visually for assigning labels, leaving the rest of the days (when there is no consistent change in price) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators. In this learning strategy, the core idea is to use the unlabeled data to generate pseudo-labels for supervised training. For evaluating the achieved results, the annualized return and excess return, together with the Sortino and Sharpe indicators, were considered. Over the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
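The pseudo-label loop described above can be sketched generically: fit on the labeled days, predict the unlabeled days, keep only confident predictions as pseudo-labels, and refit on the union. The code below is a hedged illustration on synthetic data with a deliberately simple nearest-centroid classifier; the data, classifier, and confidence rule are all assumptions, not the authors' model or indicators.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D "indicator" features: two well-separated classes.
X_lab = np.r_[rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))]
y_lab = np.r_[np.zeros(20, int), np.ones(20, int)]   # 0=down-trend, 1=up-trend
X_unl = np.r_[rng.normal(-2, 0.5, (30, 2)), rng.normal(2, 0.5, (30, 2))]

def fit_centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(cents, X):
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])  # label, confidence

# 1) fit on labeled data; 2) pseudo-label the unlabeled days.
cents = fit_centroids(X_lab, y_lab)
pseudo, conf = predict(cents, X_unl)
keep = conf > 1.0                    # 3) keep confident pseudo-labels only
# 4) refit on labeled data plus confident pseudo-labeled data.
cents = fit_centroids(np.r_[X_lab, X_unl[keep]], np.r_[y_lab, pseudo[keep]])

truth = np.r_[np.zeros(30, int), np.ones(30, int)]
acc = (predict(cents, X_unl)[0] == truth).mean()
print(acc)
```

In the paper's setting, the features would be technical-analysis indicators per day and the labels the visually assigned trend classes.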

Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation

Procedia PDF Downloads 89
51 Geotechnical Evaluation and Sizing of the Reinforcement Layer on Soft Soil in the Construction of the North Triage Road Clover, in Brasilia Federal District, Brazil

Authors: Rideci Farias, Haroldo Paranhos, Joyce Silva, Elson Almeida, Hellen Silva, Lucas Silva

Abstract:

The constant growth of the vehicle fleet in big cities requires that engineering remain dynamic with respect to new solutions for traffic flow in general. In the Federal District (DF), Brazil, it is no different. The city of Brasilia, capital of Brazil and a UNESCO Cultural Heritage of Humanity site, was planned for 500 thousand inhabitants, yet today more than 3 million people circulate in the city, with a fleet of more than one vehicle for every two inhabitants. The growth of the city toward the North region required urban planning to present solutions for the constantly growing fleet. In this context, a complex of viaducts, road accesses, new rolling roads, and the duplication of the Bragueto bridge over Paranoá lake in the northern part of the city was designed, giving access to the BR-020 highway, denominated Clover of North Triage (TTN). In the geopedological context, the region is composed of hydromorphic soils, with the water table present at some times of the year. From the geotechnical point of view, these are soils with SPT < 4 and undrained shear strength Su < 50 kPa. According to urban planning in Brasília, elevated structures cannot rise in the urban landscape, contrasting with the urban characteristics set by the architects Lúcio Costa and Oscar Niemeyer, who were hired to design the new capital of Brazil. This urban criterion created a technical impasse, resulting in the need to 'bury' the structures and, in turn, the access ramps at different levels, in regions of low-bearing soil and outcropping water table, motivating this study and design. For the adoption of the appropriate solution, Standard Penetration Test (SPT), vane test, Dynamic Probing Light (DPL), and auger boring campaigns were carried out. By comparing the results of these tests, profiles of soil resistance and water levels were created for the studied sections.
Geometric factors such as existing sidewalks and the lack of elevation for discharging deep drainage water inhibited traditional techniques for total removal of the soft soils, thus avoiding the use of temporary drawdown and shoring of excavations. A structural layer was therefore designed to reinforce the subgrade by means of 'needling' the soft soil, without the need for longitudinal drains. In this context, the article presents not only the geological and geotechnical studies carried out but also the dimensioning of the reinforcement layer over the soft soil. The main objective of this solution is to allow execution of the civil works without interference with the roads in use, to permit execution of services in rainy periods, and to present a solution compatible with the drainage characteristics and the soft soil reinforcement.

Keywords: layer, reinforcement, soft soil, clover of north triage

Procedia PDF Downloads 226
50 Investigation of Yard Seam Workings for the Proposed Newcastle Light Rail Project

Authors: David L. Knott, Robert Kingsland, Alistair Hitchon

Abstract:

The proposed Newcastle Light Rail is a key part of the revitalisation of Newcastle, NSW, and will provide a frequent and reliable travel option throughout the city centre, running from Newcastle Interchange at Wickham to Pacific Park in Newcastle East, a total of 2.7 kilometres in length. Approximately one-third of the route, along Hunter and Scott Streets, is subject to potential shallow underground mine workings; the extent of mining and the seams mined are unclear. Convicts mined the Yard Seam and the overlying Dudley (Dirty) Seam in Newcastle sometime between 1800 and 1830. The Australian Agricultural Company mined the Yard Seam from about 1831 to the 1860s in the alignment area. The Yard Seam was about 3 feet (0.9 m) thick and was therefore known as the Yard Seam. Mine maps do not exist for the workings in the area of interest, and it was unclear whether one or both seams were mined. Information from 1830s geological mapping and other data showing shaft locations was used along Scott Street, and information from the 1908 Royal Commission was used along Hunter Street, to develop an investigation program. In addition, mining was encountered at several sites to the south of the alignment at depths of about 7 m to 25 m. Based on the anticipated depths of mining, it was considered prudent to assess the potential for sinkhole development on the proposed alignment and realigned underground utilities and to obtain approval for the work from Subsidence Advisory NSW (SA NSW). The assessment consisted of a desktop study, followed by a subsurface investigation. Four boreholes were drilled along Scott Street and three along Hunter Street using HQ coring techniques in the rock. The placement of boreholes was complicated by the presence of utilities in the roadway and by traffic constraints. All the boreholes encountered the Yard Seam, with conditions varying from unmined coal to an open void, indicating the presence of mining.
The geotechnical information obtained from the boreholes was expanded using various downhole techniques, including a borehole camera, borehole sonar, and downhole geophysical logging. The camera provided views of the rock and helped to explain zones of no recovery; in addition, timber props within the void were observed. Borehole sonar was performed in the void and provided an indication of room size as well as the presence of timber props within the room. Downhole geophysical logging was performed in the boreholes to measure density, natural gamma, and borehole deviation. The data helped confirm that all the mining was in the Yard Seam and that the overlying Dudley Seam had been eroded over much of the alignment. In summary, the assessment allowed the potential for sinkhole subsidence to be assessed and a mitigation approach to be developed, allowing conditional approval by SA NSW. It also confirmed the presence of mining in the Yard Seam, the depth to the seam and the mining conditions, and indicated that subsidence did not appear to have occurred in the past.

Keywords: downhole investigation techniques, drilling, mine subsidence, yard seam

Procedia PDF Downloads 314
49 E-Governance: A Key for Improved Public Service Delivery

Authors: Ayesha Akbar

Abstract:

Public service delivery has witnessed significant improvement with the integration of information and communication technology (ICT). ICT not only improves management structures with advanced technology for monitoring service delivery but also provides evidence for informed decisions and policy. Pakistan's public sector organizations have generally not been able to produce good results in ensuring service delivery. Notwithstanding, some public sector organizations in Pakistan have adopted modern technology and proved their credence by providing better service delivery standards. These good indicators provide a sound basis for integrating technology in public sector organizations and for a shift of policy towards evidence-based policy making. Rescue-1122 is a public sector organization that provides emergency services and has proved to be a successful model for service delivery, saving human lives and supporting human development in Pakistan. Information about the organization was gathered using a qualitative research methodology, drawing broadly on primary and secondary sources, including the Rescue-1122 website; official reports of organizations such as the UNDP (United Nations Development Programme) and WHO (World Health Organization); and 10 in-depth interviews with senior administrative staff working in the Lahore offices. The information received has been incorporated into the study for a better understanding of the organization and its management procedures. Rescue-1122 represents a successful model of delivering services efficiently in disaster management. The management of Rescue-1122 has strategized its policies and procedures to develop a comprehensive model with the integration of technology. This model provides efficient service delivery while maintaining the standards of the organization.
The service delivery model of Rescue-1122 works on two fronts: the front-office interface and the back-office interface. The back-office defines the procedures of operations and assures staff compliance, whereas the front-office, equipped with the latest technology and good infrastructure, handles emergency calls. Both ends are integrated with satellite-based vehicle tracking, a wireless system, a fleet monitoring system, and IP cameras that monitor every move of the staff to provide better services and to pinpoint shortfalls in the services. The standard time for reaching the emergency spot is 7 minutes, and while a case is being handled, the driver's behavior, the traffic volume, and the technical assistance being provided are monitored by the front-office. All information is then uploaded from the provincial offices to the main dashboard at the Lahore headquarters. Rescue-1122 deploys the latest technology to deliver efficient services, to investigate flaws where found, and to develop data for informed decision making. Other public sector organizations in Pakistan could develop similar models to integrate technology for improving service delivery and to develop evidence for informed decisions and policy making.

Keywords: data, e-governance, evidence, policy

Procedia PDF Downloads 247
48 Structural, Spectral and Optical Properties of Boron-Aluminosilicate Glasses with High Dy₂O₃ and Er₂O₃ Content for Faraday Rotator Operating at 2µm

Authors: Viktor D. Dubrovin, Masoud Mollaee, Jie Zong, Xiushan Zhu, Nasser Peyghambarian

Abstract:

Glasses doped with high concentrations of rare-earth (RE) elements have attracted considerable attention since the middle of the 20th century due to their particular magneto-optical properties. Such glasses exhibit the Faraday effect, in which the polarization plane of a linearly polarized light beam is rotated by the interaction between the incident light and the magneto-optical material. That effect finds application in optical isolators for laser systems, which prevent back reflection of light into lasers or optical amplifiers and reduce signal instability and noise. Glasses are of particular interest since they are cost-effective and can be formed into fibers, thus overcoming the limits of traditional bulk optics, which requires optical coupling for use with fiber-optic systems. The advent of high-power fiber lasers operating near 2 µm revealed a need for all-fiber isolators for this region. Among the RE ions, Ce³⁺, Pr³⁺, Dy³⁺, and Tb³⁺ make the largest contribution to the Verdet constant of optical materials. It is known that Pr³⁺ and Tb³⁺ ions have strong absorption bands near 2 µm, leaving Dy³⁺ and Ce³⁺ as the only prospective candidates for a fiber isolator operating in that region. Due to the high tendency of Ce³⁺ ions to oxidize to Ce⁴⁺ during synthesis, glasses with high cerium content usually suffer from Ce⁴⁺ absorption extending from the visible to the IR. Additionally, Dy³⁺ (⁶H₁₅/₂) ions, like Ho³⁺ (⁵I₈) ions, have the largest effective magnetic moment (µeff = 10.6 µB) among the RE ions, which starts to play the key role when the operating region is far from the 4fⁿ → 4fⁿ⁻¹5d¹ electric-dipole transition relevant to the Faraday effect. Considering the high effective magnetic moment of Er³⁺ ions (µeff = 9.6 µB), third after Dy³⁺/Ho³⁺ and Tb³⁺, it is reasonable to assume that Er³⁺-doped glasses should exhibit a Verdet constant near 2 µm comparable with that of Dy-doped glasses.
Thus, partial replacement of Dy³⁺ by Er³⁺ ions was performed, keeping the overall concentration of RE₂O₃ equal to 70 wt.% (30.6 mol.%). Al₂O₃-B₂O₃-SiO₂-30.6RE₂O₃ (RE = Er, Dy) glasses were synthesized, and their thermal, spectral, optical, structural, and magneto-optical properties were studied. Synthesis was conducted in Pt crucibles for 3 h at 1500 °C. The obtained melt was poured into a mold preheated to 400 °C and annealed from 800 °C to room temperature over 12 h with a 1 h dwell. The mass of the obtained glass samples was about 200 g. It was shown that the difference between the crystallization and glass transition temperatures is about 150 °C, even though the high content of RE₂O₃ leads to glass network depolymerization. The Verdet constant of the Al₂O₃-B₂O₃-SiO₂-30.6RE₂O₃ glasses at a wavelength of 1950 nm can exceed 5.9 rad/(T·m), which is among the highest values reported for a paramagnetic glass at this wavelength. The refractive index was found to be 1.7545 at 633 nm. Our experimental results show that Al₂O₃-B₂O₃-SiO₂-30.6RE₂O₃ glasses with high Dy₂O₃ content are promising materials for highly effective Faraday isolators and modulators of electromagnetic radiation in the 2 µm region.
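To put the reported Verdet constant in context, the Faraday rotation angle follows θ = V·B·L. The short sketch below uses the abstract's value V = 5.9 rad/(T·m) at 1950 nm to estimate the glass length needed for the 45° rotation a Faraday isolator requires; the 1 T axial field is an illustrative assumption, not a value from the abstract.

```python
import math

# Faraday rotation: theta = V * B * L
# V: Verdet constant, B: axial magnetic flux density, L: optical path length.
V = 5.9                 # rad/(T*m), reported at 1950 nm
B = 1.0                 # T, assumed axial field (illustrative only)
theta_45 = math.pi / 4  # a Faraday isolator needs 45 degrees of rotation

L = theta_45 / (V * B)  # required glass length, in metres
print(round(L * 100, 1))  # ~13.3 (cm)
```

A larger Verdet constant or a stronger field shortens the required path, which is why high-V glasses matter for compact fiber-compatible isolators.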

Keywords: oxide glass, magneto-optical, dysprosium, erbium, Faraday rotator, boron-aluminosilicate system

Procedia PDF Downloads 114
47 The Effect of Photochemical Smog on Respiratory Health Patients in Abuja Nigeria

Authors: Christabel Ihedike, John Mooney, Monica Price

Abstract:

Summary: This study aims to critically evaluate the effect of photochemical smog on respiratory health in Nigeria. A cohort of chronic obstructive pulmonary disease (COPD) patients was recruited from two large hospitals in Abuja, Nigeria. Respiratory health questionnaires, daily diaries, a dyspnoea scale, and lung function measurements were used to obtain health data and investigate the relationship with air quality data (principally ozone, NOx, and particulate pollution). Concentrations of air pollutants were higher than WHO and Nigerian air quality standards. The results suggest a correlation between measured air quality and exacerbation of respiratory illness. Introduction: Photochemical smog is a significant health challenge in most cities, and its effect on respiratory health is well acknowledged. This type of pollution is most harmful to the elderly, children, and those with underlying respiratory disease. This study aims to investigate the impact of increasing temperature and photochemically generated secondary air pollutants on respiratory health in Abuja, Nigeria. Method and Result: Health data were collected using spirometry to measure lung function on routine attendance at the clinic, daily diaries kept by patients, and a respiratory questionnaire. Questionnaire responses (obtained using an adapted and internally validated version of St George's Hospital Respiratory Questionnaire) show that 'time of wheeze' was associated with participants' activities: 30% had worse wheeze in the morning, 10% could not shop, 15% took a long time to get washed, 25% walked more slowly, 15% had to stop when hurrying, and 5% could not bathe. There was also a decrease in forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC), and the daily afternoon-morning change may be associated with pollutant concentration levels. On the dyspnoea scale, 60% of patients were grade 3, 25% grade 2, and 15% grade 1.
On a daily basis, 78% of patients in the cohort coughed or brought up sputum. Air pollution in the city is higher than Nigerian and WHO standards, with measured NOx and PM10 concentrations of 693.59 µg/m³ and 748 µg/m³, respectively. The results show that air pollution may increase the occurrence and exacerbation of respiratory disease. Conclusion: High temperature and local climatic conditions in urban Nigeria encourage the formation of ozone, the major constituent of photochemical smog, and of secondary air pollutants associated with health challenges. In this study, we confirm the likely potency of this pattern of secondary air pollution in exacerbating COPD symptoms in a vulnerable patient group in urban Nigeria. There is a need for better regulation and measures to reduce ozone, particularly when local climatic conditions favour the development of photochemical smog in such settings. Climate change and likely increasing temperatures add impetus and urgency for better air quality standards and measures (traffic restrictions and emissions standards) in developing-world settings such as Nigeria.

Keywords: Abuja-Nigeria, effect, photochemical smog, respiratory health

Procedia PDF Downloads 224
46 Prevalence of Occupational Asthma Diagnosed by Specific Challenge Test in 5 Different Working Environments in Thailand

Authors: Sawang Saenghirunvattana, Chao Saenghirunvattana, Maria Christina Gonzales, Wilai Srimuk, Chitchamai Siangpro, Kritsana Sutthisri

Abstract:

Introduction: Thailand is one of the fastest growing countries in Asia, having shifted from an agricultural to an industrialized economy. Workplaces have moved from farms to factories, offices, and streets, where employees are exposed to chemicals and pollutants causing occupational diseases, particularly asthma. Work-related diseases are a major concern, and many studies have been published demonstrating the professions and exposures that elevate the risk of asthma. Workers who exhibit coughing, wheezing, and difficulty breathing are brought to a healthcare setting, where a Pulmonary Function Test (PFT) is performed and, based on the results, they are diagnosed with asthma. These patients, known to have occupational asthma, eventually recover when removed from exposure to the environment. Our study focused on performing the PFT, or specific challenge test, to diagnose workers with occupational asthma, with the workers executing the test within their workplace, thereby maintaining the environment and their daily exposure to the usual levels of chemicals and pollutants. This has provided us with an understanding and a reliable diagnosis of occupational asthma. Objective: To identify the prevalence of Thai workers who develop asthma caused by exposure to pollutants and chemicals in their working environment, by conducting interviews and performing the PFT, or specific challenge test, in their workplaces. Materials and Methods: This study was performed from January to March 2015 in Bangkok, Thailand. The percentage of abnormal symptoms among 940 workers in 5 different areas (plastic, fertilizer, and animal food factories, offices, and streets) was collected through a questionnaire. Demographic information, occupational history, and state of health were determined using the questionnaire and checklists. The PFT was executed in the workplaces, and the results were measured and evaluated. Results: The pulmonary function test was performed by 940 participants.
The specific challenge test was done in plastic, fertilizer, and animal food factories, in office environments, and on the streets of Thailand. Of the 100 participants working in the plastic industry, 65% complained of respiratory symptoms; none had an abnormal PFT. Of the 200 participants who worked with fertilizers and were exposed to sulfur dioxide, 20% complained of symptoms and 8% had an abnormal PFT. Of the 300 subjects working with animal food, 45% complained of respiratory symptoms and 15% had abnormal PFT results. In the office environment, with indoor pollution, 7% of the 140 subjects had symptoms and 4% had an abnormal PFT. Of the 200 workers exposed to traffic pollution, 24% reported respiratory symptoms and 12% had an abnormal PFT. Conclusion: We were able to identify and diagnose participants with occupational asthma through abnormal lung function tests performed at their workplaces. The chemical agents and exposures were determined, so workers with occupational asthma could be advised to avoid further exposure for a better chance of recovery. Further studies identifying the risk factors and causative agents of asthma in workplaces should be developed to encourage interventional strategies and programs that prevent occupation-related diseases, particularly asthma.

Keywords: occupational asthma, pulmonary function test, specific challenge test, Thailand

Procedia PDF Downloads 304
45 Hydrodynamic Characterisation of a Hydraulic Flume with Sheared Flow

Authors: Daniel Rowe, Christopher R. Vogel, Richard H. J. Willden

Abstract:

The University of Oxford’s recirculating water flume is a combined wave and current test tank with a 1 m depth, 1.1 m width, and 10 m long working section, and is capable of flow speeds up to 1 m s⁻¹. This study documents the hydrodynamic characteristics of the facility in preparation for experimental testing of horizontal axis tidal stream turbine models. The turbine to be tested has a rotor diameter of 0.6 m and is a modified version of one of two model-scale turbines tested in previous experimental campaigns. An Acoustic Doppler Velocimeter (ADV) was used to measure the flow at high temporal resolution at various locations throughout the flume, enabling the spatial uniformity and turbulence flow parameters to be investigated. The mean velocity profiles exhibited high levels of spatial uniformity at the design speed of the flume, 0.6 m s⁻¹, with variations in the three-dimensional velocity components on the order of ±1% at the 95% confidence level, along with a modest streamwise acceleration through the measurement domain, a target 5 m working section of the flume. A high degree of uniformity was also apparent for the turbulence intensity, with values ranging between 1-2% across the intended swept area of the turbine rotor. The integral scales of turbulence exhibited a far higher degree of variation throughout the water column, particularly in the streamwise and vertical scales. This behaviour is believed to be due to the high noise content of the signal, leading to decorrelation in the sampling records. To achieve more realistic levels of vertical velocity shear in the flume, a simple procedure to practically generate target vertical shear profiles in open-channel flows is described. Here, the authors arranged a series of non-uniformly spaced parallel bars placed across the width of the flume and normal to the onset flow.
By adjusting the resistance grading across the height of the working section, the downstream profiles could be modified accordingly, characterised by changes in the velocity profile power-law exponent, 1/n. Considering the significant temporal variation in a tidal channel, the choice of exponent denominators n = 6 and n = 9 effectively provides an achievable range around the much-cited value of n = 7 observed at many tidal sites. The resulting flow profiles, which we intend to use in future turbine tests, have been characterised in detail. The results indicate non-uniform vertical shear across the survey area and reveal substantial corner flows, arising from the differential shear between the target vertical and cross-stream shear profiles throughout the measurement domain. In vertically sheared flow, the rotor-equivalent turbulence intensity ranges between 3.0-3.8% throughout the measurement domain for both bar arrangements, while the streamwise integral length scale grows from a characteristic dimension on the order of the bar width, similar to the flow downstream of a turbulence-generating grid. The experimental tests are well-defined and repeatable and serve as a reference for other researchers who wish to undertake similar investigations.
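The power-law shear profile and the turbulence intensity referred to above can be sketched in a few lines. A minimal Python illustration, assuming the flume depth h = 1 m and the 0.6 m s⁻¹ design speed as a nominal free-surface velocity (the abstract does not specify how the profiles are normalised, so this normalisation is an assumption for illustration only):

```python
import numpy as np

def power_law_profile(z, h=1.0, u_surface=0.6, n=7):
    """Power-law vertical shear profile u(z) = u_surface * (z / h)**(1 / n).

    z: height above the bed (m); h: water depth (m);
    u_surface: nominal free-surface velocity (m/s); n: power-law denominator.
    """
    return u_surface * (z / h) ** (1.0 / n)

def turbulence_intensity(u_samples):
    """Streamwise turbulence intensity, std(u) / mean(u), e.g. for ADV records."""
    u_samples = np.asarray(u_samples, dtype=float)
    return np.std(u_samples) / np.mean(u_samples)

# Compare the two generated profiles (n = 6 and n = 9) with the
# much-cited tidal-site value n = 7, at mid-depth (z = 0.5 m).
for n in (6, 7, 9):
    print(f"n = {n}: u(0.5 m) = {power_law_profile(0.5, n=n):.3f} m/s")
```

A larger n flattens the profile (less shear), so the n = 6 and n = 9 cases bracket the n = 7 reference from below and above at any given depth.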

Keywords: acoustic Doppler velocimeter, experimental hydrodynamics, open-channel flow, shear profiles, tidal stream turbines

Procedia PDF Downloads 86