Search results for: maximum operating speed
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8671

1351 Improving Predictions of Coastal Benthic Invertebrate Occurrence and Density Using a Multi-Scalar Approach

Authors: Stephanie Watson, Fabrice Stephenson, Conrad Pilditch, Carolyn Lundquist

Abstract:

Spatial data detailing both the distribution and density of functionally important marine species are needed to inform management decisions. Species distribution models (SDMs) have proven helpful in this regard; however, models often focus only on species occurrences derived from spatially expansive datasets and lack the resolution and detail required to inform regional management decisions. Boosted regression trees (BRT) were used to produce high-resolution SDMs (250 m) at two spatial scales predicting probability of occurrence, abundance (count per sample unit), density (count per km²) and uncertainty for seven coastal seafloor taxa that vary in habitat usage and distribution to examine prediction differences and implications for coastal management. We investigated whether small-scale, regionally focussed models (82,000 km²) can provide improved predictions compared to data-rich national-scale models (4.2 million km²). We explored the variability in predictions across model type (occurrence vs abundance) and model scale to determine whether specific taxa models or model types are more robust to geographical variability. National-scale occurrence models correlated well with broad-scale environmental predictors, resulting in higher AUC (area under the receiver operating characteristic curve) and deviance explained scores; however, they tended to overpredict in the coastal environment and lacked spatially differentiated detail for some taxa. Regional models had lower overall performance, but for some taxa, spatial predictions were more differentiated at a localised ecological scale. National density models were often spatially refined and highlighted areas of ecological relevance, producing more useful outputs than regional-scale models. A two-scale approach aids the selection of the optimal combination of models to create a spatially informative density model, as results contrasted for specific taxa between model type and scale. However, it is vital that robust predictions of occurrence and abundance are generated as inputs for the combined density model, because areas that do not spatially align between models are discarded. This study demonstrates the variability in SDM outputs created over different geographical scales and highlights implications and opportunities for managers utilising these tools for regional conservation, particularly in data-limited environments.
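A minimal sketch of the combined density step described above, assuming the occurrence and abundance predictions are gridded surfaces on the same 250 m resolution; the array names, the 0.5 occurrence threshold and the alignment rule are illustrative assumptions, not the authors' criteria:

```python
import numpy as np

# hypothetical 250 m grid surfaces standing in for the BRT model outputs
p_occ = np.random.rand(100, 100)                    # predicted probability of occurrence
abundance = np.random.gamma(2.0, 5.0, (100, 100))   # predicted count per sample unit

density = p_occ * abundance                         # expected density surface
# discard cells where the occurrence and abundance models do not spatially align
aligned = (p_occ >= 0.5) == (abundance >= np.median(abundance))
density = np.where(aligned, density, np.nan)
```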

Keywords: Benthic ecology, spatial modelling, multi-scalar modelling, marine conservation.

Procedia PDF Downloads 75
1350 Inertial Motion Capture System for Biomechanical Analysis in Rehabilitation and Sports

Authors: Mario Sandro F. Rocha, Carlos S. Ande, Anderson A. Oliveira, Felipe M. Bersotti, Lucas O. Venzel

Abstract:

Inertial motion capture (mocap) systems are among the most suitable tools for quantitative clinical analysis in rehabilitation and sports medicine. The inertial measuring units (IMUs), composed of accelerometers, gyroscopes, and magnetometers, are able to measure spatial orientations and calculate displacements with sufficient precision for applications in the biomechanical analysis of movement. Furthermore, this type of system is relatively affordable and has the advantages of portability and independence from external references. In this work, we present the latest version of our inertial motion capture system, based on the foregoing technology, with a Unity interface designed for rehabilitation and sports. In our hardware architecture, only one serial port is required. First, the client board must be connected to the computer by a USB cable. Next, an available serial port is configured and opened to establish the communication between the client and the application, and then the client starts scanning for the active MOCAP_S servers nearby. The servers play the role of the inertial measuring units that capture the movements of the body and send the data to the client, which in turn creates a packet composed of the server ID, the current timestamp, and the motion capture data defined in the client's pre-configuration of the capture session. In the current version, we can measure the game rotation vector (grv) and linear acceleration (lacc), and we also have a step detector that can be enabled or disabled. The grv data are processed and directly linked to the bones of the 3D model, and, along with the lacc and step detector data, they are also used to perform the calculations of displacements and other variables shown on the graphical user interface. Our user interface was designed to calculate and present variables that are important for rehabilitation and sports, such as cadence, speed, total gait cycle, gait cycle length, obliquity and rotation, and center of gravity displacement. Our goal is to present a low-cost, portable and wearable system with a friendly interface for application in biomechanics and sports, which also performs with high precision and low energy consumption.
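A hedged sketch of the packet layout described above (server ID, current timestamp, motion capture data); the field order, binary types and quaternion payload are assumptions made here for illustration, not the authors' actual wire format:

```python
import struct
import time

def build_packet(server_id: int, grv_quat: tuple, lacc: tuple, steps: int) -> bytes:
    # uint16 server ID | float64 timestamp | 4x float32 rotation quaternion (grv)
    # | 3x float32 linear acceleration (lacc) | uint32 step count
    return struct.pack("<Hd4f3fI", server_id, time.time(), *grv_quat, *lacc, steps)

packet = build_packet(server_id=1, grv_quat=(0.0, 0.0, 0.0, 1.0), lacc=(0.1, 0.0, 9.8), steps=42)
```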

Keywords: biomechanics, inertial sensors, motion capture, rehabilitation

Procedia PDF Downloads 139
1349 A Study of NT-ProBNP and ETCO2 in Patients Presenting with Acute Dyspnoea

Authors: Dipti Chand, Riya Saboo

Abstract:

OBJECTIVES: Early and correct diagnosis of patients presenting to the Emergency Department with acute dyspnoea can pose a significant clinical challenge. The common causes of acute dyspnoea and respiratory distress in the Emergency Department are Decompensated Heart Failure (HF), Chronic Obstructive Pulmonary Disease (COPD), Asthma, Pneumonia, Acute Respiratory Distress Syndrome (ARDS), Pulmonary Embolism (PE), and other causes such as anaemia. The aim of the study was to measure NT-pro Brain Natriuretic Peptide (NT-proBNP) and exhaled End-Tidal Carbon dioxide (ETCO2) in patients presenting with dyspnoea. MATERIAL AND METHODS: This prospective, cross-sectional and observational study was performed at the Government Medical College and Hospital, Nagpur, between October 2019 and October 2021 in patients admitted to the Medicine Intensive Care Unit. Three groups of patients were compared: (1) an HF-related acute dyspnoea group (n = 52), (2) a pulmonary (COPD/PE)-related acute dyspnoea group (n = 31) and (3) a sepsis with ARDS-related dyspnoea group (n = 13). All patients underwent an initial clinical examination with a recording of initial vital parameters along with on-admission ETCO2 measurement, NT-proBNP testing, arterial blood gas analysis, lung ultrasound examination, 2D echocardiography, chest X-rays, and other relevant diagnostic laboratory testing. RESULTS: 96 patients were included in the study. Median NT-proBNP was highest in the Heart Failure group (11,480 pg/ml), followed by the sepsis group (780 pg/ml), while the pulmonary group had an NT-proBNP of 231 pg/ml. The mean ETCO2 value was highest in the pulmonary group (48.610 mmHg), followed by the Heart Failure group (31.51 mmHg) and the sepsis group (19.46 mmHg). The results were found to be statistically significant (P < 0.05). CONCLUSION: NT-proBNP has high diagnostic accuracy in differentiating acute HF-related dyspnoea from pulmonary (COPD and ARDS)-related acute dyspnoea. Higher levels of ETCO2 help in diagnosing patients with COPD.

Keywords: NT PRO BNP, ETCO2, dyspnoea, lung USG

Procedia PDF Downloads 75
1348 A Photoredox (C)sp³-(C)sp² Coupling Method Comparison Study

Authors: Shasline Gedeon, Tiffany W. Ardley, Ying Wang, Nathan J. Gesmundo, Katarina A. Sarris, Ana L. Aguirre

Abstract:

Drug discovery and delivery involve drug targeting, an approach that helps find a drug against a chosen target through high-throughput screening and other methods by way of identifying the physical properties of the potential lead compound. The physical properties of potential drug candidates have been an imperative focus since the unveiling of Lipinski's Rule of 5 for oral drugs. Throughout a compound's journey from discovery, through clinical phase trials, to becoming a classified drug on the market, the desirable properties are optimized while toxicity and undesirable properties are minimized or eliminated. In the pharmaceutical industry, the ability to generate molecules in parallel with maximum efficiency is a substantial factor, achieved through sp²-sp² carbon coupling reactions, e.g., Suzuki coupling reactions. These reaction types allow aromatic fragments to be added to a compound. More recent literature has found benefits to decreasing aromaticity, calling for more sp³-sp² carbon coupling reactions instead. The objective of this project is to provide a comparison between various sp³-sp² carbon coupling methods and reaction conditions, collecting data on the production of the desired product. Four different coupling methods were tested amongst three cores and 4-5 installation groups per method; each method was run under three distinct reaction conditions. The tested methods include the Photoredox Decarboxylative Coupling, the Photoredox Potassium Alkyl Trifluoroborate (BF3K) Coupling, the Photoredox Cross-Electrophile (PCE) Coupling, and the Weix Cross-Electrophile (WCE) Coupling. The results showed that the Decarboxylative method had great difficulty yielding product despite the several literature conditions chosen. The BF3K and PCE methods produced competitive results. Amongst the two Cross-Electrophile coupling methods, the Photoredox method surpassed the Weix method on numerous counts. The results will be used to build future libraries.

Keywords: drug discovery, high throughput chemistry, photoredox chemistry, sp³-sp² carbon coupling methods

Procedia PDF Downloads 142
1347 Optimization of Titanium Leaching Process Using Experimental Design

Authors: Arash Rafiei, Carroll Moore

Abstract:

Leaching, as the first stage of hydrometallurgy, is a multidisciplinary system involving material properties, chemistry, reactor design, mechanics and fluid dynamics. Therefore, optimizing a leaching system by purely scientific methods requires a great deal of time and expense. In this work, a mixture of two titanium ores and one titanium slag is used for extracting titanium in the leaching stage of the TiO2 pigment production process. Optimum titanium extraction can be obtained from the following strategies: i) maximizing titanium extraction without selective digestion; and ii) optimizing selective titanium extraction by balancing maximum titanium extraction against minimum impurity digestion. The main difference between the two strategies lies in the process optimization framework. In the first strategy, the most important stage of the production process is treated as the main stage, and the remaining stages are adapted to it. The second strategy optimizes the performance of more than one stage at once. The second strategy has more technical complexity than the first, but it brings more economic and technical advantages for the leaching system. Obviously, each strategy has its own optimum operational zone, which differs from that of the other, and the best operational zone is chosen on the basis of the complexity and the economic and practical aspects of the leaching system. Experimental design was carried out using the Taguchi method. The most important advantages of this methodology are that it involves different technical aspects of the leaching process, minimizes the number of needed experiments as well as time and expense, and accounts for parameter interactions through the principles of multifactor-at-a-time optimization. Leaching tests were done at batch scale in the laboratory with appropriate temperature control. The leaching tank geometry was considered an important factor in providing comparable agitation conditions. Data analysis was done using reactor design and mass balancing principles. Finally, optimum zones for the operational parameters are determined for each leaching strategy and discussed with respect to their economic and practical aspects.
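For orientation, a minimal sketch of a Taguchi-style orthogonal array of the kind implied above; the factor names and level values are hypothetical placeholders, not the parameters actually used in the leaching tests:

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (coded 0/1/2)
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
factors = {                      # hypothetical leaching factors and levels
    "temperature_C":     [60, 75, 90],
    "acid_to_ore_ratio": [1.2, 1.5, 1.8],
    "stirring_rpm":      [200, 300, 400],
    "time_h":            [2, 4, 6],
}
runs = [{name: levels[i] for (name, levels), i in zip(factors.items(), row)} for row in L9]
for run in runs:
    print(run)   # each run is one leaching test condition
```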

Keywords: titanium leaching, optimization, experimental design, performance analysis

Procedia PDF Downloads 370
1346 Public Values in Service Innovation Management: Case Study in Elderly Care in Danish Municipality

Authors: Christian T. Lystbaek

Abstract:

Background: The importance of innovation management has traditionally been ascribed to private production companies; however, there is an increasing interest in public services innovation management. One of the major theoretical challenges arising from this situation is to understand the public values justifying public services innovation management. However, there is no single, stable definition of public value in the literature. The research question guiding this paper is: What is the supposed added value operating in the public sphere? Methodology: The study takes an action research strategy. This is a highly contextualized methodology, which is enacted within a particular set of social relations into which one expects to integrate the results. As such, this research strategy is particularly well suited for its potential to generate results that can be applied by managers. The aim of action research is to produce proposals with a creative dimension capable of compelling actors to act in a new and pertinent way in relation to the situations they encounter. The context of the study is a workshop on public services innovation within elderly care. The workshop brought together different actors, such as managers, personnel and two groups of user-citizens (elderly clients and their relatives). The process was designed as an extension of the co-construction methods inherent in action research. Scenario methods and focus groups were applied to generate dialogue. The main strength of these techniques is to gather and exploit as much data as possible by exposing the discourse of justification used by the actors to explain or justify their points of view when interacting with others on a given subject. The approach does not directly interrogate the actors on their values, but allows their values to emerge through debate and dialogue. Findings: The public values related to public services innovation management in elderly care were identified in two steps. In the first step, identification of values, values were identified in the discussions; through continuous analysis of the data, a network of interrelated values was developed. In the second step, tracking group consensus, we ascertained the degree to which the meaning attributed to each value was common to the participants, classifying the degree of consensus as high, intermediate or low. High consensus corresponds to strong convergence in meaning, intermediate to generally shared meanings between participants, and low to divergences regarding the meaning between participants. Only values with a high or intermediate degree of consensus were retained in the analysis. Conclusion: The study shows that the fundamental criterion for justifying public services innovation management is the capacity for actors to enact public values in their work. In the workshop, we identified two categories of public values, intrinsic values and behavioural values, and a list of more specific values.

Keywords: public services innovation management, public value, co-creation, action research

Procedia PDF Downloads 278
1345 Climate Change and Dengue Transmission in Lahore, Pakistan

Authors: Sadia Imran, Zenab Naseem

Abstract:

Dengue fever is one of the most alarming mosquito-borne viral diseases. Over the years, the dengue virus has spread exponentially throughout the tropical and sub-tropical regions of the world, particularly in the last ten years. Changing topography, climate change in the form of erratic seasonal trends, rainfall, early or late monsoons, and longer or shorter summers and winters, together with globalization, frequent travel throughout the world and viral evolution, have led to more severe forms of dengue. The global incidence of dengue infections per year has ranged between 50 million and 200 million; however, recent estimates using cartographic approaches suggest this number is closer to 400 million. In recent years, Pakistan experienced a deadly outbreak of the disease; the reason could be maximum exposure outdoors. Public organizations have observed that a changing climate, especially lower average summer temperatures, and increased vegetation have created tropical-like conditions in the city, which are suitable for dengue virus growth. We conducted a time-series analysis to study the interrelationship between dengue incidence and diurnal ranges of temperature and humidity in Pakistan, with Lahore being the main focus of our study. We used annual data from 2005 to 2015 and investigated the relationship between climatic variables and dengue incidence, using time-series analysis to describe temporal trends. The results show rising trends of dengue over the past 10 years along with the rise in temperature and rainfall in Lahore. Hence, this supports the widely held view that the world is suffering from climate change and global warming at different levels. Disease outbreaks are one of the most alarming indications of mankind heading towards destruction, and we need to think of mitigating measures to control epidemics from spreading and enveloping cities, countries and regions.

Keywords: Dengue, epidemic, globalization, climate change

Procedia PDF Downloads 232
1344 The Use of Additives to Prevent Fouling in Polyethylene and Polypropylene Gas and Slurry Phase Processes

Authors: L. Shafiq, A. Rigby

Abstract:

All polyethylene processes are highly exothermic, and the safe removal of the heat of reaction is a fundamental issue in the process design. In slurry and gas processes, the velocity of the polymer particles in the reactor and external coolers can be very high, and under certain conditions, this can lead to static charging of these particles. Such statically charged polymer particles may start building up on the reactor wall, limiting heat transfer and ultimately leading to severe reactor fouling and forced reactor shutdown. Statsafe™ is an FDA-approved anti-fouling additive currently used around the world in polyolefin production. The unique polymer chemistry aids static discharge, which prevents the build-up of charged polyolefin particles that could lead to fouling. Statsafe™ is being used and trialled in gas, slurry, and combinations of these technologies around the world. We will share data to demonstrate how the use of Statsafe™ allows more stable operation at higher solids levels by eliminating static, which would otherwise prevent closer packing of particles in the hydrocarbon slurry. Because static charge generation also depends on the concentration of polymer particles in the slurry, the maximum slurry concentration can be higher when using Statsafe™, leading to higher production rates. The elimination of fouling also leads to less downtime. Special focus will be placed on the impact anti-static additives have on catalyst performance within the polymerization process and how this has been measured. Lab-scale studies have investigated the effect on the activity of Ziegler-Natta catalysts when anti-static additives are used at various concentrations in gas and slurry polyethylene and polypropylene processes. An in-depth gas-phase study investigated the effect of additives on final polyethylene properties such as particle size, morphology, fines, bulk density, melt flow index, gradient density, and melting point.

Keywords: anti-static additives, catalyst performance, FDA approved anti-fouling additive, polymerisation

Procedia PDF Downloads 200
1343 Renewable Natural Gas Production from Biomass and Applications in Industry

Authors: Sarah Alamolhoda, Kevin J. Smith, Xiaotao Bi, Naoko Ellis

Abstract:

For millennia, biomass has been the most important source of fuel used to produce energy. Energy derived from biomass is renewable through the re-growth of biomass. Various technologies are used to convert biomass into potential renewable products, including combustion, gasification, pyrolysis and fermentation. Gasification is the incomplete combustion of biomass in a controlled environment that results in valuable products such as syngas, bio-oil and biochar. Syngas is a combustible gas consisting of hydrogen (H₂), carbon monoxide (CO), carbon dioxide (CO₂), and traces of methane (CH₄) and nitrogen (N₂). Cleaned syngas can be used as a turbine fuel to generate electricity, as a raw material for hydrogen and synthetic natural gas production, or as the anode gas of solid oxide fuel cells. In this work, syngas produced from woody biomass gasification in British Columbia, Canada, was introduced into two consecutive fixed bed reactors to perform a catalytic water gas shift reaction followed by a catalytic methanation reaction. The water gas shift reaction is a well-established industrial process and is used to increase the hydrogen content of the syngas before the methanation process. Catalysts were used in the process because both reactions are reversible and exothermic, thermodynamically preferred at lower temperatures while kinetically favored at elevated temperatures. The water gas shift reactor and the methanation reactor were packed with a Cu-based catalyst and a Ni-based catalyst, respectively. Simulated syngas with different percentages of CO, H₂, CH₄, and CO₂ was fed to the reactors to investigate the effect of operating conditions in the unit. The water gas shift experiments were done at temperatures of 150 ˚C to 200 ˚C and pressures of 550 kPa to 830 kPa. Similarly, methanation experiments were run at temperatures of 300 ˚C to 400 ˚C and pressures of 2340 kPa to 3450 kPa. The methanation reaction reached 98% CO conversion at 340 ˚C and 3450 kPa, at which more than half of the CO was converted to CH₄. Increasing the reaction temperature caused a reduction in the CO conversion and an increase in the CH₄ selectivity. The process was designed to be renewable and to release low greenhouse gas emissions. Syngas is a clean-burning fuel; furthermore, through the water gas shift reaction, toxic CO was removed and hydrogen was produced as a green fuel. Moreover, in the methanation process, the syngas energy was transformed into a fuel with higher energy density (per volume), leading to a reduction in the amount of fuel that must flow through the equipment and an improvement in process efficiency. Natural gas is about 3.5 times more efficient (energy per volume) than hydrogen and is easier to store and transport. When modification of the existing infrastructure is not practical, partial conversion of renewable hydrogen to natural gas (with up to 15% hydrogen content) would preserve the efficiency while eliminating the greenhouse gas emission footprint.
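The stoichiometry of the two catalytic steps described above takes the standard textbook form; the reaction enthalpies quoted here are approximate literature values, not figures taken from this work:

```latex
\begin{align*}
\text{Water gas shift:} \quad & \mathrm{CO + H_2O \rightleftharpoons CO_2 + H_2},
  & \Delta H^\circ \approx -41~\mathrm{kJ\,mol^{-1}} \\
\text{Methanation:} \quad & \mathrm{CO + 3\,H_2 \rightleftharpoons CH_4 + H_2O},
  & \Delta H^\circ \approx -206~\mathrm{kJ\,mol^{-1}}
\end{align*}
```

Both steps are exothermic and reversible, which is why the abstract notes that catalysts are needed to reach useful rates at the lower temperatures that thermodynamics favours.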

Keywords: renewable natural gas, methane, hydrogen, gasification, syngas, catalysis, fuel

Procedia PDF Downloads 116
1342 Demonstration Operation of Distributed Power Generation System Based on Carbonized Biomass Gasification

Authors: Kunio Yoshikawa, Ding Lu

Abstract:

Small-scale, distributed and low-cost biomass power generation technologies are highly required in modern society. There are big needs for these technologies in the disaster areas of developed countries and the un-electrified rural areas of developing countries. This work aims to present the technical feasibility of a portable, ultra-small power generation system based on the gasification of carbonized wood pellets/briquettes. Our project is designed to enable independent energy production from various kinds of biomass resources in the open field. The whole process consists mainly of two stages: biomass and waste pretreatment, and gasification and power generation. The first stage includes carbonization and densification (briquetting or pelletization), and the second includes updraft fixed bed gasification of carbonized pellets/briquettes, syngas purification, and power generation employing an internal combustion gas engine. A combined pretreatment process including carbonization without external energy and densification was adopted to deal with various biomass. Carbonized pellets showed a better gasification performance than carbonized briquettes and their mixture. The 100-hour continuous operation results indicated that pelletization/briquetting of carbonized fuel enabled the stable operation of an updraft gasifier, provided there were no blocking issues caused by the accumulation of tar. The cold gas efficiency and the carbon conversion during carbonized wood pellet gasification were about 49.2% and 70.5%, respectively, at an air equivalence ratio of around 0.32, and the corresponding overall efficiency of the gas engine was 20.3% during the stable stage. Moreover, the maximum output power was 21 kW at an air flow rate of 40 Nm³·h⁻¹. Therefore, the comprehensive system covering biomass carbonization, densification, gasification, syngas purification, and the engine system is feasible for portable, ultra-small power generation. This work has been supported by the Innovative Science and Technology Initiative for Security (Ministry of Defence, Japan).
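The reported performance figures follow the usual gasification definitions, stated here in their standard textbook form rather than reproduced from the paper, where LHV denotes the lower heating value:

```latex
\[
\eta_{\mathrm{CG}} = \frac{\dot{V}_{\mathrm{gas}}\,\mathrm{LHV}_{\mathrm{gas}}}{\dot{m}_{\mathrm{fuel}}\,\mathrm{LHV}_{\mathrm{fuel}}},
\qquad
\mathrm{ER} = \frac{\dot{m}_{\mathrm{air,\,actual}}}{\dot{m}_{\mathrm{air,\,stoich}}},
\qquad
X_{\mathrm{C}} = \frac{\text{carbon in product gas}}{\text{carbon in feed}}
\]
```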

Keywords: biomass carbonization, densification, distributed power generation, gasification

Procedia PDF Downloads 154
1341 An Overview of the Wind and Wave Climate in the Romanian Nearshore

Authors: Liliana Rusu

Abstract:

The goal of the proposed work is to provide a more comprehensive picture of the wind and wave climate in the Romanian nearshore, using the results provided by numerical models. The Romanian coastal environment is located on the western side of the Black Sea, the more energetic part of the sea, an area with heavy maritime traffic and various offshore operations. Information about the wind and wave climate in Romanian waters is mainly based on observations at the Gloria drilling platform (70 km from the coast). As regards the waves, the measurements of the wave characteristics are not very accurate due to the method used, and are also available only for a limited period. For this reason, wave simulations that cover large temporal and spatial scales represent an option to better describe the wave climate. To assess the wind climate in the target area spanning 1992–2016, data provided by NCEP-CFSR (U.S. National Centers for Environmental Prediction - Climate Forecast System Reanalysis), consisting of wind fields at 10 m above sea level, are used. The high spatial and temporal resolution of the wind fields is good enough to represent the wind variability over the area. For the same 25-year period as considered for the wind climate, this study characterizes the wave climate from a wave hindcast data set that uses NCEP-CFSR winds as input for a model system based on SWAN (Simulating WAves Nearshore). The wave simulation results, obtained with a two-level modelling scale, have been validated against both in situ measurements and remotely sensed data. The second level of the system, with a higher resolution in geographical space (0.02°×0.02°), is focused on the Romanian coastal environment. The main wave parameters simulated at this level are used to analyse the wave climate. The spatial distributions of the wind speed, wind direction and mean significant wave height have been computed as the average of the total data. As shown by this body of data, the target area presents a generally moderate wave climate that is affected by the storm events developing in the Black Sea basin. Both the wind and wave climates present high seasonal variability. All the results are computed as maps that help to identify the more dangerous areas. A local analysis has also been carried out at some key locations corresponding to highly sensitive areas, such as the main Romanian harbours.

Keywords: numerical simulations, Romanian nearshore, waves, wind

Procedia PDF Downloads 343
1340 Thorium Extraction with Cyanex272 Coated Magnetic Nanoparticles

Authors: Afshin Shahbazi, Hadi Shadi Naghadeh, Ahmad Khodadadi Darban

Abstract:

In the Magnetically Assisted Chemical Separation (MACS) process, tiny ferromagnetic particles coated with a solvent extractant are used to selectively separate radionuclides and hazardous metals from aqueous waste streams. The contaminant-loaded particles are then recovered from the waste solutions using a magnetic field. In the present study, Cyanex272 or C272 (bis (2,4,4-trimethylpentyl) phosphinic acid) coated magnetic particles are evaluated for possible application in the extraction of thorium (IV) from nuclear waste streams. The uptake behaviour of Th(IV) from nitric acid solutions was investigated by batch studies. Adsorption isotherm and adsorption kinetic studies of thorium (IV) onto the Cyanex272-coated nanoparticles were carried out in a batch system. The factors influencing thorium (IV) adsorption, such as initial pH value, contact time, adsorbent mass, and initial thorium (IV) concentration, were investigated and are described in detail. The MACS process adsorbent showed the best results for the fast adsorption of Th(IV) from aqueous solution at an aqueous-phase acidity of 0.5 molar. In addition, more than 80% of Th(IV) was removed within the first 2 hours, and the time required to achieve adsorption equilibrium was only 140 minutes. The Langmuir and Freundlich adsorption models were used for the mathematical description of the adsorption equilibrium. The equilibrium data agreed very well with the Langmuir model, with a maximum adsorption capacity of 48 mg·g⁻¹. The adsorption kinetics data were tested using pseudo-first-order, pseudo-second-order and intra-particle diffusion models. Kinetic studies showed that the adsorption followed a pseudo-second-order kinetic model, indicating that chemical adsorption was the rate-limiting step.
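The fitted models referred to above take their standard forms (textbook expressions, not the authors' notation), with the reported maximum capacity of about 48 mg·g⁻¹:

```latex
\[
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e} \quad \text{(Langmuir, } q_{\max} \approx 48~\mathrm{mg\,g^{-1}}\text{)},
\qquad
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e} \quad \text{(linearized pseudo-second-order kinetics)}
\]
```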

Keywords: Thorium (IV) adsorption, MACS process, magnetic nanoparticles, Cyanex272

Procedia PDF Downloads 336
1339 Metal Layer Based Vertical Hall Device in a Complementary Metal Oxide Semiconductor Process

Authors: Se-Mi Lim, Won-Jae Jung, Jin-Sup Kim, Jun-Seok Park, Hyung-Il Chae

Abstract:

This paper presents a current-mode vertical Hall device (VHD) structure using metal layers in a CMOS process. The proposed metal layer based vertical Hall device (MLVHD) utilizes vertical connections among metal layers (from M1 to the top metal) to facilitate the Hall effect. The vertical metal structure unit carries a bias current Ibias from top to bottom, and an external magnetic field changes the current distribution by the Lorentz force. The asymmetric current distribution can be detected by two differential-mode current outputs on each side at the bottom (M1), and each output sinks Ibias/2 ± Ihall. A single vertical metal structure generates only a small Hall signal Ihall due to the short length from M1 to the top metal as well as the low conductivity of the metal, and a series connection of thousands of vertical structure units can solve the problem by providing N×Ihall. The series connection between two units is another vertical metal structure carrying current in the opposite direction, which generates a negative Hall effect. To mitigate the negative Hall effect from the series connection, the differential current outputs at the bottom (M1) of one unit merge on the top metal level of the other unit. The proposed MLVHD is simulated in a 3-dimensional model in COMSOL Multiphysics, with 0.35 μm CMOS process parameters. The simulated MLVHD unit size is (W) 10 μm × (L) 6 μm × (D) 10 μm. In this paper, we use an MLVHD with 10 units; the overall Hall device size is (W) 10 μm × (L) 78 μm × (D) 10 μm. The COMSOL simulation result is as follows: the maximum Hall current is approximately 2 μA with a 12 μA bias current and a 100 mT magnetic field. This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R7117-16-0165, Development of Hall Effect Semiconductor for Smart Car and Device).
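Reading the quoted figures together gives a simple arithmetic illustration (not an analysis from the paper) of the per-unit contribution:

```latex
\[
I_{\mathrm{out},\pm} = \tfrac{1}{2} I_{\mathrm{bias}} \pm I_{\mathrm{hall}},
\qquad
I_{\mathrm{hall,total}} = N\, I_{\mathrm{hall,unit}}
\]
\[
N = 10,\; I_{\mathrm{bias}} = 12~\mu\mathrm{A},\; B = 100~\mathrm{mT}
\;\Rightarrow\;
I_{\mathrm{hall,unit}} \approx \frac{2~\mu\mathrm{A}}{10} = 0.2~\mu\mathrm{A}
\]
```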

Keywords: CMOS, vertical hall device, current mode, COMSOL

Procedia PDF Downloads 300
1338 Service Business Model Canvas: A Boundary Object Operating as a Business Development Tool

Authors: Taru Hakanen, Mervi Murtonen

Abstract:

This study aims to increase understanding of the transition of business models in servitization. The significance of service in all business has increased dramatically during the past decades. Service-dominant logic (SDL) describes this change in the economy and questions the goods-dominant logic on which business has primarily been based in the past. A business model canvas is one of the most cited and used tools in defining and developing business models. The starting point of this paper lies in the notion that the traditional business model canvas is inherently goods-oriented and best suited to product-based business. However, the basic differences between goods and services necessitate changes in business model representations when proceeding in servitization. Therefore, new knowledge is needed on how the conception of the business model and the business model canvas as its representation should be altered in servitized firms in order to better serve business developers and inter-firm co-creation. That is to say, compared to products, services are intangible and they are co-produced between the supplier and the customer. Value is always co-created in interaction between a supplier and a customer, and customer experience primarily depends on how well the interaction succeeds between the actors. The role of service experience is even stronger in service business compared to product business, as services are co-produced with the customer. This paper provides business model developers with a service business model canvas, which takes into account the intangible, interactive, and relational nature of service. The study employs a design science approach that contributes to theory development via design artifacts. This study utilizes qualitative data gathered in workshops with ten companies from various industries. In particular, key differences between goods-dominant logic (GDL)- and SDL-based business models are identified when an industrial firm proceeds in servitization. As a result of the study, an updated version of the business model canvas is provided based on service-dominant logic. The service business model canvas ensures a stronger customer focus and includes aspects salient for services, such as interaction between companies, service co-production, and customer experience. It can be used for the analysis and development of a company's current service business model or for designing a new business model. It facilitates customer-focused new service design and service development. It aids in the identification of development needs, and facilitates the creation of a common view of the business model. Therefore, the service business model canvas can be regarded as a boundary object, which facilitates the creation of a common understanding of the business model between the several actors involved. The study contributes to the business model and service business development disciplines by providing a managerial tool for practitioners in service development. It also provides research insight into how servitization challenges companies’ business models.

Keywords: boundary object, business model canvas, managerial tool, service-dominant logic

Procedia PDF Downloads 366
1337 Performance Tests of Wood Glues on Different Wood Species Used in Wood Workshops: Morogoro Tanzania

Authors: Japhet N. Mwambusi

Abstract:

Deforestation of high tropical forests for the solid wood furniture industry is among the agents contributing to climate change. This pressure is indirectly caused by furniture joint failures due to poor gluing technology, based on the improper matching of glues to wood species, which leads to low-quality, weak wood-glue joints. This study was carried out to run performance tests of wood glues on different wood species used in wood workshops in Morogoro, Tanzania, whereby three popular wood species, C. lusitanica, T. grandis and E. maidenii, were tested against five glues found on the market: Woodfix, Bullbond, Ponal, Fevicol and Coral. The findings were needed for developing a guideline on proper glue selection for joining a particular wood species. Random sampling was employed to interview carpenters while conducting a survey on their background, such as education level, and to determine the factors that influence their choice of glue. A Monsanto Tensiometer was used to determine the bonding strength of the identified wood glues on the different wood species, following the British Standard procedure for testing wood shear strength (BS EN 205). Data obtained from interviewing carpenters were analyzed with the Statistical Package for the Social Sciences (SPSS) to allow the comparison of different data, while laboratory data were compiled, related and compared using MS Excel worksheets as well as Analysis of Variance (ANOVA). Results revealed that, among all five wood glues tested in the laboratory on the three wood species, Coral performed much better, with average shear strengths of 4.18 N/mm², 3.23 N/mm² and 5.42 N/mm² for Cypress, Teak and Eucalyptus, respectively. This shows that, for a strong joint to be formed in all three wood species, both softwood and hardwood, Coral should be the first choice. The guideline table developed from this research can be useful to carpenters for proper glue selection for a particular wood species so as to meet the required glue-bond strength. This will secure the furniture market as well as reduce pressure on the forests for furniture production, because existing furniture remains in service thanks to its strong joints. Indeed, this can be a good strategy for reducing the speed of climate change in the tropics, which results from high deforestation of trees for furniture production.

Keywords: climate change, deforestation, gluing technology, joint failure, wood-glue, wood species

Procedia PDF Downloads 239
1336 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination

Authors: N. Santatriniaina, J. Deseure, T. Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana

Abstract:

Nowadays, with the increase in wafer size and the decrease in the critical dimensions of integrated circuit manufacturing in modern high-tech, the microelectronics industry needs to pay maximum attention to contamination control. The move to 300 mm is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafer transport and storage. In these pods, airborne cross contamination may occur between the wafers and the pods. A predictive approach using modeling and computational methods is a very powerful way to understand and qualify the AMCs cross contamination processes. This work investigates the numerical tools required to study the AMCs cross-contamination transfer phenomena between wafers and FOUPs. Numerical optimization and a finite element formulation in transient analysis were established. An analytical solution of the one-dimensional problem was developed and the calibration of the physical constants was performed. The least squares distance between the model (analytical 1D solution) and the experimental data is minimized. The behavior of the AMCs in transient analysis was determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also translated into boundary conditions using the Dirichlet-to-Neumann switch condition and the interface condition. The methodology is applied, first using optimization methods with the analytical solution to define the physical constants, and second using the finite element method including adsorption kinetics and the Dirichlet-to-Neumann switch condition.
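The transport model referred to above is of the classical convection-diffusion type; the generic forms below are standard equations, and the adsorption-type flux term is an illustrative boundary condition rather than the paper's exact formulation:

```latex
\[
\frac{\partial C}{\partial t} = \nabla \cdot (D \nabla C) - \mathbf{v} \cdot \nabla C,
\qquad
\mathbf{J} = -D \nabla C \quad \text{(Fick's law)}
\]
\[
\text{Dirichlet: } C = C_s \ \text{on } \Gamma_1,
\qquad
\text{Neumann (adsorption flux): } -D\,\frac{\partial C}{\partial n} = k_a C - k_d\,\theta \ \text{on } \Gamma_2
\]
```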

Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite elements methods, Fick’s law, optimization

Procedia PDF Downloads 504
1335 Effect of Information and Communication Technology (ICT) Usage by Cassava Farmers in Otukpo Local Government Area of Benue State, Nigeria

Authors: O. J. Ajayi, J. H. Tsado, F. Olah

Abstract:

The study analyzed the effect of information and communication technology (ICT) usage on cassava farmers in the Otukpo local government area of Benue State, Nigeria. Primary data were collected from 120 randomly selected cassava farmers using a multi-stage sampling technique. A structured questionnaire and interview schedule were employed to generate data. Data were analyzed using descriptive statistics (frequency, mean and percentage) and inferential statistics (OLS (ordinary least squares) regression and Chi-square). The results revealed that the majority (78.3%) were within the age range of 21-50 years, implying that the respondents were within the active age for maximum production. 96.8% of the respondents had one form of formal education or the other. The ICT facilities readily available in the area were radio (84.2%), television (64.2%) and mobile phone (90.8%), with the latter being the most relied upon for cassava farming. Most of the farmers were aware (98.3%) of and had access (95.8%) to these ICT facilities. The dependence on mobile phone and radio was highly relevant in cassava stem selection, land selection, land preparation, cassava planting technique, fertilizer application, and pest and disease management. The coefficient of determination (R²) indicated that 89.1% of the variation in the output of cassava farmers was explained by the inputs in the regression model, implying a positive and significant relationship between the inputs and output. The results also indicated that labour, fertilizer and farm size were significant at the 1% level of probability, while ICT use was significant at 10%. Further findings showed that finance (78.3%) was the major constraint associated with ICT use. Recommendations were made on strengthening the use of ICT, especially contemporary tools like the computer and internet, among farmers for easy information sourcing, which can boost agricultural production, improve livelihoods and subsequently food security. This may be achieved by providing credit or subsidies and information centres such as telecentres and cyber cafes through government assistance or partnership.
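An illustrative specification of the OLS model implied above; the log-log functional form and the variable symbols are assumptions made here for clarity, not the authors' exact model:

```latex
\[
\ln Y_i = \beta_0 + \beta_1 \ln L_i + \beta_2 \ln F_i + \beta_3 \ln A_i + \beta_4\,\mathrm{ICT}_i + \varepsilon_i,
\qquad R^2 = 0.891
\]
```

where Y denotes cassava output, L labour, F fertilizer, A farm size and ICT an index of ICT use.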

Keywords: ICT, cassava farmers, inputs, output

Procedia PDF Downloads 309
1334 Sensing Endocrine Disrupting Chemicals by Virus-Based Structural Colour Nanostructure

Authors: Lee Yujin, Han Jiye, Oh Jin-Woo

Abstract:

The adverse effects of endocrine disrupting chemicals (EDCs) have attracted considerable public interest. The benzene-like structure of EDCs mimics the mechanisms of hormones naturally occurring in vivo and alters the physiological function of the endocrine system. Although some of the most representative EDCs, such as polychlorinated biphenyls (PCBs) and phthalate compounds, have already been prohibited from production and use in many countries, PCBs and phthalates are still circulating nowadays in plastic products as flame retardants and plasticizers. EDCs can be released from these products during use and disposal, and this causes serious environmental and health issues. Here, we developed a virus-based structurally coloured nanostructure that can detect minute EDC concentrations sensitively and selectively. This structurally coloured nanostructure exhibits characteristic angle-independent colours due to the regular virus bundle structure formed through a simple pulling technique. A designed number of different colour bands can be formed by controlling the concentration of the virus solution and the pulling speed. The virus, M-13 bacteriophage, was genetically engineered to react with specific EDCs, typically PCBs and phthalates. The M-13 bacteriophage surface (pVIII major coat protein) was decorated with benzene-derivative-binding peptides (WHW) through a phage library method. In the initial assessment, the virus-based colour sensor was exposed to several organic chemicals including benzene, toluene, phenol, chlorobenzene, and phthalic anhydride. Along with the selectivity evaluation of the virus-based colour sensor, it was also tested for sensitivity. 10 to 300 ppm of phthalic anhydride and chlorobenzene were detected by the colour sensor, which showed significant sensitivity with a dissociation constant of about 90. Notably, all measurements were analyzed through principal component analysis (PCA) and linear discriminant analysis (LDA), and exhibited clear discrimination ability upon exposure to the two categories of EDCs (PCBs and phthalates). Because of its easy fabrication, high sensitivity, and superior selectivity, the M-13 bacteriophage-based colour sensor could be a simple and reliable portable sensing system for environmental monitoring, healthcare, social security, and so on.
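A hedged sketch (not the authors' pipeline) of the PCA-plus-LDA discrimination step described above; the synthetic feature matrix, its six-dimensional colour-shift features and the two class labels are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# synthetic colour-band shift features: 50 "phthalate" and 50 "PCB" exposures
X = np.vstack([rng.normal(0.0, 1.0, (50, 6)),
               rng.normal(1.5, 1.0, (50, 6))])
y = np.array([0] * 50 + [1] * 50)

scores = PCA(n_components=3).fit_transform(X)      # dimensionality reduction
lda = LinearDiscriminantAnalysis().fit(scores, y)  # discrimination of the two EDC categories
print(lda.score(scores, y))                        # classification accuracy on the same data
```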

Keywords: M-13 bacteriophage, colour sensor, genetic engineering, EDCs

Procedia PDF Downloads 242
1333 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many background subtraction approaches have been suggested, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and, mainly, focus on the ability to detect moving objects in dynamic scenes, for possible applications in the monitoring of complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
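A minimal sketch of the pipeline described above built from standard OpenCV blocks (CLAHE, a per-pixel Gaussian-mixture background model, morphological clean-up); the parameter values, the input file name and the use of the stock MOG2 subtractor in place of the authors' own GMM implementation are assumptions:

```python
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

cap = cv2.VideoCapture("scene.avi")   # hypothetical input sequence
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # limit per-tile contrast on the luminance channel to damp illumination changes
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])
    enhanced = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    # per-pixel mixture-of-Gaussians background model -> binary foreground mask
    mask = mog.apply(enhanced)
    # morphological erosion followed by dilation (opening) to suppress isolated noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cap.release()
```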

Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 255
1332 Dosimetric Application of α-Al2O3:C for Food Irradiation Using TA-OSL

Authors: A. Soni, D. R. Mishra, D. K. Koul

Abstract:

α-Al2O3:C has been reported to have deep traps at 600°C and 900°C. These traps have been reported to be accessed at relatively lower temperatures (122°C and 322°C, respectively) using thermally assisted OSL (TA-OSL). In this work, the dose response of α-Al2O3:C was studied in the dose range of 10 Gy to 10 kGy for its application in food irradiation in the low (up to 1 kGy) and medium (1 to 10 kGy) dose ranges. The TOL (thermo-optically stimulated luminescence) measurements were carried out on a Risø TL/OSL TL-DA-15 system having a blue light-emitting diode (λ = 470 ± 30 nm) stimulation source, with the power level set at 90% of the maximum stimulation intensity for the blue LEDs (40 mW/cm2). The observations were carried out on a commercial α-Al2O3:C phosphor. The TOL experiments were carried out with 300 active channels and 1 inactive channel. Using these settings, the sample is subjected to linear thermal heating and constant optical stimulation. The detection filter used in all observations was a Hoya U-340 (λp ~ 340 nm, FWHM ~ 80 nm). Irradiation of the samples was carried out using a 90Sr/90Y β-source housed in the system. A heating rate of 2 °C/s was preferred in the TL measurements so as to reduce the temperature lag between the heater plate and the samples. To study the dose response of the deep traps of α-Al2O3:C, samples were irradiated with various doses ranging from 10 Gy to 10 kGy. For each dose, three samples were irradiated. In order to record the TA-OSL, TL was initially recorded up to a temperature of 400°C to deplete the signal due to the 185°C main dosimetric TL peak in α-Al2O3:C, which is also associated with the basic OSL traps. After the TL readout, the sample was subsequently subjected to the TOL measurement. As a result, two well-defined TA-OSL peaks at 121°C and 232°C occur in the time as well as the temperature domain, and these are distinct from the main dosimetric TL peak, which occurs at ~185°C. The integrated TOL signal has been measured as a function of absorbed dose and found to be linear up to 10 kGy. Thus, it can be used over the low and intermediate dose ranges for application in food irradiation. The deep energy level defects of the α-Al2O3:C phosphor can be accessed using the TOL section of the Risø reader system.

Keywords: α-Al2O3:C, deep traps, food irradiation, TA-OSL

Procedia PDF Downloads 298
1331 Perception of Quality of Life and Self-Assessed Health in Patients Undergoing Haemodialysis

Authors: Magdalena Barbara Kaziuk, Waldemar Kosiba

Abstract:

Introduction: Despite the development of technologies and improvements in the interior of dialysis stations, dialysis remains an unpleasant procedure, difficult to accept by the patients (who undergo it 2 to 3 times a week, a single treatment lasting several hours). Haemodialysis is one of the renal replacement therapies, in Poland most commonly used in patients with chronic or acute kidney failure. Purpose: An attempt was made to evaluate the quality of life in haemodialysed patients using the WHOQOL-BREF questionnaire. Material and methods: The study covered 422 patients (200 women and 222 men, aged 60.5 ± 12.9 years) undergoing dialysis at three selected stations in Poland. The patients were divided into 2 groups, depending on the duration of their dialysis treatment. The evaluation was conducted with the WHOQOL-BREF questionnaire containing 26 questions analysing 4 areas of life, as well as the perception of the quality of life and health self-assessment. A 5-point scale is used to answer them. The maximum score in each area is 20 points. The results in individual areas have a positive direction. Results: In patients undergoing dialysis for more than 3 years, a reduction in the quality of life was found in the physical area and in their environment versus a group of patients undergoing dialysis for less than 3 years, where a reduced quality of life was found in the areas of social relations and mental well-being (p < 0.05). A significant correlation (p < 0.01) between the two groups was found in self-perceived general health, while no significant differences were observed in the general perception of the quality of life (p > 0.05). Conclusions: The study confirmed that in patients undergoing dialysis for more than three years, the quality of life is especially reduced in their environment (access to and quality of healthcare, financial resources, and mental and physical safety). The assessment of the quality of life should form a part of the therapeutic process, in which the role of the patient in chronic renal care should be emphasised, reflected in the quality of services provided by dialysis stations.

Keywords: haemodialysis, perception of quality of life, quality of services provided, dialysis station

Procedia PDF Downloads 261
1330 Verification Protocols for the Lightning Protection of a Large Scale Scientific Instrument in Harsh Environments: A Case Study

Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda

Abstract:

This paper is devoted to the study of the most suitable protocols to verify the lightning protection and ground resistance quality in a large-scale scientific facility located in a harsh environment. We illustrate this work by reviewing a case study: the largest telescopes of the Northern Hemisphere Cherenkov Telescope Array, CTA-N. This array hosts sensitive and high-speed optoelectronics instrumentation and sits on a clear, free from obstacle terrain at around 2400 m above sea level. The site offers a top-quality sky but also features challenging conditions for a lightning protection system: the terrain is volcanic and has resistivities well above 1 kOhm·m. In addition, the environment often exhibits humidities well below 5%. On the other hand, the high complexity of a Cherenkov telescope structure does not allow a straightforward application of lightning protection standards. CTA-N has been conceived as an array of fourteen Cherenkov Telescopes of two different sizes, which will be constructed in La Palma Island, Spain. Cherenkov Telescopes can provide valuable information on different astrophysical sources from the gamma rays reaching the Earth’s atmosphere. The largest telescopes of CTA are called LST’s, and the construction of the first one was finished in October 2018. The LST has a shape which resembles a large parabolic antenna, with a 23-meter reflective surface supported by a tubular structure made of carbon fibers and steel tubes. The reflective surface has 400 square meters and is made of an array of segmented mirrors that can be controlled individually by a subsystem of actuators. This surface collects and focuses the Cherenkov photons into the camera, where 1855 photo-sensors convert the light in electrical signals that can be processed by dedicated electronics. We describe here how the risk assessment of direct strike impacts was made and how down conductors and ground system were both tested. The verification protocols which should be applied for the commissioning and operation phases are then explained. We stress our attention on the ground resistance quality assessment.

Keywords: grounding, large scale scientific instrument, lightning risk assessment, lightning standards and safety

Procedia PDF Downloads 122
1329 Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques

Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng

Abstract:

The nature of tires makes them extremely challenging to recycle due to their chemically cross-linked polymer structure; they are therefore neither fusible nor soluble and, consequently, cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water and provides ideal breeding grounds for disease-carrying vermin. The thermal decomposition of tires by pyrolysis produces char, gases and oil. Oils derived from waste tires have properties in common with commercial diesel fuel. The problem associated with the light oil derived from the pyrolysis of waste tires is that it has a high sulfur content (> 1.0 wt.%) and therefore emits harmful sulfur oxide (SOx) gases to the atmosphere when combusted in diesel engines. Desulphurization of TPO is necessary due to increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for the removal of sulfur species from liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT) present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for the efficient removal of complex and non-complex sulfur species in TPO. This study focuses on optimizing the cleaning process (removal of impurities and asphaltenes) by varying the process parameters: temperature, stirring speed, acid/oil ratio and time. The treated TPO will then be sent for vacuum distillation to attain the desired diesel-like fuel. The effect of temperature, pressure and time will be determined for the vacuum distillation of both raw TPO and the acid-treated oil for comparison purposes. Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized predominantly to the corresponding sulfoxides and sulfones via a photo-catalyzed system using TiO2 as a catalyst and hydrogen peroxide as an oxidizing agent, and finally acetonitrile will be used as an extraction solvent. Adsorptive desulphurization will be used to adsorb traces of sulfurous compounds that remain after the photocatalytic desulphurization step. This desulphurization sequence is expected to give high desulphurization efficiency with reasonable oil recovery.

Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis

Procedia PDF Downloads 271
1328 Influence of the Induction Program on Novice Teacher Retention In One Specialized School in Nur-Sultan

Authors: Almagul Nurgaliyeva

Abstract:

The phenomenon of novice teacher attrition is an urgent issue. Effective mechanisms to increase the retention rate of novice teachers relate to the nature and level of support provided at the employing site. This study considered novice teacher retention as a motivation-based process grounded in the variety of support activities employed to satisfy novice teachers’ needs at an early career stage. The purpose of the study was to examine novice teachers’ perceptions of the effectiveness of the induction program and other support structures at a secondary school in Nur-Sultan. The study was guided by Abraham Maslow’s (1943) theory of motivation. Maslow’s hierarchy of needs was used as a theoretical framework to identify the novice teachers’ primary needs and the extent to which the induction programs and other support mechanisms provided by the school administrators fulfill those needs. One school supervisor and eight novice teachers (four current and four former novice teachers) with a maximum of four years of teaching experience took part in the study. To investigate the perspectives and experiences of the participants, online semi-structured interviews were used, and the responses were collected and analyzed. The study revealed four major challenges: educational, personal-psychological, sociological, and structural, which are seen as the main constraints during the adaptation period. Four induction activities emerged from the data as being carried out by the school to address novice teachers’ challenges: socialization activities, mentoring programs, professional development, and administrative support. These activities meet novice teachers’ needs and confront the challenges they face. Sufficient and adequate support structures provided to novice teachers during their first years of working experience are essential, as they may influence the decision to remain in the teaching profession, thereby reducing the attrition rate. The study provides recommendations for policymakers and school administrators about the structure and content of induction program activities.

Keywords: beginning teacher induction, induction programme, orientation programmes, adaptation challenges, novice teacher retention

Procedia PDF Downloads 86
1327 Prediction of Seismic Damage Using Scalar Intensity Measures Based on Integration of Spectral Values

Authors: Konstantinos G. Kostinakis, Asimina M. Athanatopoulou

Abstract:

A key issue in seismic risk analysis within the context of Performance-Based Earthquake Engineering is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. The assessment of seismic performance strongly depends on the choice of the seismic Intensity Measure (IM), which quantifies the characteristics of a ground motion that are important to the nonlinear structural response. Several conventional ground motion IMs have been used to estimate the damage potential of ground motions, yet none of them has been proved able to adequately predict seismic damage. Therefore, alternative scalar intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. Some of these IMs are based on integration of spectral values over a range of periods, in an attempt to account for the information provided by the shape of the acceleration, velocity, or displacement spectrum. The adequacy of a number of these IMs in predicting the structural damage of 3D R/C buildings is investigated in the present paper. The investigated IMs, some of which are structure-specific and some non-structure-specific, are defined via integration of spectral values. To this end, three R/C buildings, symmetric in plan, are studied. The buildings are subjected to 59 bidirectional earthquake ground motions, with the two horizontal accelerograms of each ground motion applied along the structural axes. The response is determined by nonlinear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of these seismic damage measures are correlated with seven scalar ground motion IMs. The comparative assessment of the results revealed that the structure-specific IMs present higher correlation with the seismic damage of the three buildings. However, the adequacy of the IMs for estimating the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
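The abstract does not enumerate the seven IMs; purely as an illustration of the spectrum-integration idea, the sketch below computes a Housner-type velocity spectrum intensity (non-structure-specific) and a band-averaged ordinate around the fundamental period T1 (structure-specific) from a precomputed response spectrum. The spectrum shape, period limits, and T1 are placeholder assumptions, not values from the study.

```python
import numpy as np

def integrated_im(periods, spectrum, t_low, t_high):
    """Integrate a response-spectrum ordinate over [t_low, t_high] by the trapezoidal rule."""
    mask = (periods >= t_low) & (periods <= t_high)
    x, y = periods[mask], spectrum[mask]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Placeholder pseudo-velocity spectrum: periods in s, ordinates in m/s (shape is illustrative only).
T  = np.linspace(0.05, 4.0, 400)
Sv = np.interp(T, [0.05, 0.5, 4.0], [0.05, 0.8, 0.2])

# Non-structure-specific IM: Housner-type spectrum intensity over 0.1-2.5 s.
SI = integrated_im(T, Sv, 0.1, 2.5)

# Structure-specific IM: spectrum ordinate averaged over a band around the fundamental period T1.
T1 = 0.8
Sv_avg = integrated_im(T, Sv, 0.2 * T1, 1.5 * T1) / (1.5 * T1 - 0.2 * T1)

print(f"Housner-type SI ≈ {SI:.3f} m, band-averaged ordinate ≈ {Sv_avg:.3f} m/s")
```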

Keywords: damage measures, bidirectional excitation, spectral based IMs, R/C buildings

Procedia PDF Downloads 326
1326 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine

Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski

Abstract:

The paper presents a description of geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine is a newly designed internal combustion Diesel engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output of 100 kW at a crankshaft rotational speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and the intake and exhaust ports in order to optimize the charge exchange process in the engine. The calculations determined a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports at a CA value of 280° for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure, and streamline contours were generated in important cross-sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while increasing the mass flow rate. To obtain the dimensionless swirl ratio, the tangential velocity v [m/s] and, from it, the angular velocity ω [rad/s] of the charge were first computed as means over the mesh elements. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine. Acknowledgement: This work has been realized in cooperation with the Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
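The abstract does not give the exact averaging procedure; the sketch below shows one common way to estimate a charge angular velocity and swirl ratio from per-cell CFD data (cell mass, radius from the swirl axis, tangential velocity). All arrays are placeholder inputs standing in for a solver export, not data from the engine described here.

```python
import numpy as np

def swirl_ratio(cell_mass, cell_radius, cell_vt, engine_rpm):
    """Estimate the swirl ratio from per-cell CFD data.

    cell_mass   -- cell masses [kg]
    cell_radius -- distance of each cell centroid from the swirl axis [m]
    cell_vt     -- tangential velocity in each cell [m/s]
    engine_rpm  -- crankshaft speed [rpm]

    The charge angular velocity is taken as the ratio of angular momentum to
    moment of inertia about the swirl axis; the swirl ratio is this angular
    velocity divided by the crankshaft angular velocity.
    """
    angular_momentum = np.sum(cell_mass * cell_radius * cell_vt)    # kg·m²/s
    moment_of_inertia = np.sum(cell_mass * cell_radius**2)          # kg·m²
    omega_charge = angular_momentum / moment_of_inertia             # rad/s
    omega_engine = 2.0 * np.pi * engine_rpm / 60.0                  # rad/s
    return omega_charge / omega_engine

# Placeholder data standing in for a FLUENT export.
rng = np.random.default_rng(0)
m  = rng.uniform(1e-7, 2e-7, 10_000)
r  = rng.uniform(0.005, 0.04, 10_000)
vt = 30.0 * r / 0.04                                                # roughly solid-body swirl
print(f"swirl ratio ≈ {swirl_ratio(m, r, vt, engine_rpm=3800):.2f}")
```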

Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine

Procedia PDF Downloads 196
1325 Cedrela Toona Roxb.: An Exploratory Study Describing Its Antidiabetic Property

Authors: Kinjal H. Shah, Piyush M. Patel

Abstract:

Diabetes mellitus is considered to be a serious endocrine syndrome. Synthetic hypoglycemic agents can produce serious side effects, including hematological effects, coma, and disturbances of the liver and kidney; in addition, they are not suitable for use during pregnancy. In recent years, there have been relatively few reports of short-term side effects or toxicity due to sulphonylureas. Published figures on the frequency of side effects in large series of patients range from about 1 to 5%, with symptoms severe enough to lead to withdrawal of the drug in fewer than 1 to 2% of cases. Adverse effects, in general, have been of the following types: allergic skin reactions, gastrointestinal disturbances, blood dyscrasias, hepatic dysfunction, and hypoglycemia. The disadvantages associated with insulin and oral hypoglycemic agents have stimulated the search for natural resources with antidiabetic activity and the exploration of traditional medicines with proper chemical and pharmacological profiles. A literature survey reveals that the inhabitants of Abbottabad district of Pakistan take the dried leaf powder orally with table salt and water for treating diabetes, skin allergy, and wounds, and as a blood purifier; the plant is known locally as ‘Nem’. A detailed phytochemical investigation of Cedrela toona Roxb. leaves for antidiabetic activity has not been documented; hence, such an investigation is needed. The work comprised the collection and authentication of fresh leaves, followed by successive extraction, phytochemical screening, and testing of antidiabetic activity. The blood glucose level was reduced most by the ethanol extract at the 5th and 7th hour after treatment. Blood glucose was depressed by 8.2% and 10.06% in alloxan-induced diabetic rats after treatment, which was comparable to the standard drug, glibenclamide. This may be due to the activation of the existing pancreatic cells in diabetic rats by the ethanolic extract.

Keywords: antidiabetic, Cedrela toona Roxb., phytochemical screening, blood glucose

Procedia PDF Downloads 258
1324 Evaluation of Commercial Back-analysis Package in Condition Assessment of Railways

Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman

Abstract:

Over the years, increased demands on railways, the emergence of high-speed trains and heavy axle loads, and the ageing and deterioration of existing tracks have imposed costly maintenance actions on the railway sector. Developing a fast and cost-efficient non-destructive assessment method for the structural evaluation of railway tracks is therefore critically important. The layer modulus is the main parameter used in the structural design and evaluation of the railway track substructure (foundation). Among many recently developed NDTs, the Falling Weight Deflectometer (FWD) test, widely used in pavement evaluation, has shown promising results for railway track substructure monitoring. The surface deflection data collected by the FWD are used to estimate the moduli of the substructure layers through back-analysis. Although various commercially available back-analysis programs are used for pavement applications, only a limited number of research-based techniques have so far been developed for railway track evaluation. In this paper, the suitability, accuracy, and reliability of the BAKFAA software are investigated. The main rationale for selecting BAKFAA is that it has a relatively straightforward user interface, is freely available, and is widely used in highway and airport pavement evaluation. As part of the study, a finite element (FE) model of a railway track section near Leominster station, Herefordshire, UK, subjected to the FWD test was developed and validated against available field data. A virtual experimental database (including 218 sets of FWD testing data) was then generated using the FE model and employed as the measured database for the BAKFAA software. This database was generated by varying the modulus of each track substructure layer over a predefined range. The BAKFAA predictions were compared against cone penetration test (CPT) data (available from the literature; conducted near Leominster station on the same section where the FWD test was performed). The results reveal that BAKFAA overestimates the moduli of each substructure layer. To adjust BAKFAA to the CPT data, this study introduces a correlation model that makes BAKFAA applicable in railway applications.
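BAKFAA’s internals are not described in the abstract; purely as a generic illustration of the deflection-basin back-analysis principle such tools implement, the sketch below fits layer moduli by minimizing the misfit between measured and computed deflections. The forward model is a deliberately simplified placeholder, and all geophone offsets and moduli are assumed values.

```python
import numpy as np
from scipy.optimize import least_squares

# Radial offsets of the FWD geophones from the load plate [m] (illustrative).
offsets = np.array([0.0, 0.3, 0.6, 0.9, 1.2, 1.5])

def forward_deflections(moduli, offsets):
    """Placeholder forward model: deflection basin as a function of two layer moduli.

    A real back-analysis couples the optimizer to a layered-elastic or FE
    solution; this toy model only mimics the qualitative shape of a basin.
    """
    e_ballast, e_subgrade = moduli
    return 1.0 / (e_ballast + e_subgrade * offsets**1.5 + 1e-9)

def backcalculate(measured, offsets, initial_guess=(50.0, 20.0)):
    """Least-squares fit of layer moduli to a measured deflection basin."""
    residuals = lambda m: forward_deflections(m, offsets) - measured
    result = least_squares(residuals, x0=initial_guess, bounds=(1.0, 1e4))
    return result.x

# Synthetic "measured" basin generated with known moduli, then recovered.
true_moduli = np.array([120.0, 45.0])
measured = forward_deflections(true_moduli, offsets)
print("recovered moduli:", backcalculate(measured, offsets))
```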

Keywords: back-analysis, BAKFAA, railway track substructure, falling weight deflectometer (FWD), cone penetration test (CPT)

Procedia PDF Downloads 128
1323 Three Types of Mud-Huts with Courtyards in Composite Climate: Thermal Performance in Summer and Winter

Authors: Janmejoy Gupta, Arnab Paul, Manjari Chakraborty

Abstract:

Jharkhand is a state located in the eastern part of India. The Tropic of Cancer (23.5 degree North latitude line) passes through Ranchi district in Jharkhand. Mud huts with burnt clay tiled roofs are an integral component of the state’s vernacular architecture. They come in various shapes, a number of them having a courtyard type of plan. In general, it has been stated that designing dwellings with courtyards is a climate-responsive strategy in a composite climate. The truth behind this hypothesis is investigated in this paper. Three types of mud huts with courtyards situated in Ranchi district are taken as case studies, and their thermal performance throughout the year is observed through temperature measurements in the south-side rooms and courtyards, supplemented by Autodesk Ecotect (Version 2011) software simulations. Temperature measurements are taken specifically during the peaks of summer and winter, and the average temperatures in the rooms and courtyards over seven-day periods at the peak of each season are plotted graphically. Thereafter, on the basis of the measurements and software simulations, the hypothesis is tested and the thermally better-performing dwelling types in summer and winter are identified among the three sub-types studied. Recommendations for increasing thermal comfort in courtyard-type mud huts in general are also made. It is found that not all courtyard-type dwellings show better thermal performance in summer and winter in a composite climate. The U-shaped dwelling with an open courtyard on the southern side offers the greatest thermal comfort inside the rooms in the hotter part of the year, while the square hut with a central courtyard enclosed on all sides shows superior thermal performance in winter. The courtyards in all three case studies are found to overheat during summer.

Keywords: courtyard, mud huts, simulations, temperature measurements, thermal performance

Procedia PDF Downloads 405
1322 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images

Authors: Xiang Shijie, Zhou Dong, Tian Dan

Abstract:

This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of improving segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources and thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. By incorporating the Guided Image Reconstruction Module, the dual-branch structure, and the Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
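The implementation details of the Guided Image Reconstruction Module are not given in the abstract; one common way to resample a high-resolution image into a set of low-resolution images without discarding any pixels is a space-to-depth rearrangement, sketched below purely as an illustration of that resampling idea. The function name, shapes, and factor are assumptions, not the paper’s actual design.

```python
import numpy as np

def space_to_depth(image, factor):
    """Resample one high-resolution image into factor**2 low-resolution images.

    image  -- array of shape (H, W, C); H and W must be divisible by factor
    factor -- spatial downsampling factor

    Each output image keeps every factor-th pixel at a different phase offset,
    so together the set preserves all pixels of the original while each
    individual image has factor**2 fewer pixels to process.
    """
    h, w, c = image.shape
    assert h % factor == 0 and w % factor == 0
    x = image.reshape(h // factor, factor, w // factor, factor, c)
    x = x.transpose(1, 3, 0, 2, 4)                       # (factor, factor, H/f, W/f, C)
    return x.reshape(factor * factor, h // factor, w // factor, c)

frame = np.random.rand(512, 1024, 3).astype(np.float32)  # placeholder road image
lowres_set = space_to_depth(frame, factor=2)
print(lowres_set.shape)  # (4, 256, 512, 3): four low-resolution views of one frame
```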

Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition

Procedia PDF Downloads 21