Search results for: judd-ofelt intensity parameters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10216


7036 Comparison of Whole-Body Vibration and Plyometric Exercises on Explosive Power in Non-Athlete Girl Students

Authors: Fereshteh Zarei, Mahdi Kohandel

Abstract:

The aim of this study was to investigate and compare the effects of plyometric and whole-body vibration exercises on muscle explosive power in non-athlete female students. For this purpose, 45 non-athlete female students were selected and divided into three groups: two experimental groups and one control group. All groups were pre-tested. Experimental group A performed whole-body vibration exercises, standing on a vibration machine at a frequency of 30 Hz and an amplitude of 10 mm in 5 different postures. Training in each posture lasted 40 seconds with 60 seconds of rest in between, and 5 seconds were added to the duration of each posture per session until the time reached 2 minutes per posture. Exercises were done three times a week for 2 months. Experimental group B performed plyometric exercises, including horizontal jumps, vertical jumps, and skipping, in 5 sets of 10 repetitions per session; intensity was increased by adding repetitions and sets. The control group was asked to maintain daily activity and to avoid strength and explosive power training. After the training period, the factors were measured again. One-way analysis of variance and paired t-tests were used to analyze the data. There was a significant difference in explosive power between the control and vibration groups (p = 0.048) and between the control and plyometric groups (p = 0.019), but no significant difference in explosive power was observed between the vibration and plyometric groups.
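The group comparison described above rests on a one-way ANOVA. As a minimal illustration (pure Python, with made-up scores rather than the study's data), the F-statistic can be computed as:

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F-statistic for a list of samples."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares (weighted by group size)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical post-test explosive-power scores (illustrative only)
control    = [30.1, 29.5, 31.0, 30.4]
vibration  = [33.2, 34.0, 32.8, 33.5]
plyometric = [33.0, 34.1, 33.4, 32.9]
f_stat = one_way_anova_f([control, vibration, plyometric])
```

With group means this far apart the F-statistic is large; the reported p-values would then follow from the F-distribution with (k-1, n-k) degrees of freedom.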

Keywords: vibration, plyometric, exercises, explosive power, non-athlete

Procedia PDF Downloads 447
7035 Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles

Authors: Zehai Yu, Hui Zhu, Linglong Lin, Huawei Liang, Biao Yu, Weixin Huang

Abstract:

With the development of intelligent vehicle systems, a high-precision road map is increasingly needed in many respects. The automatic extraction and modeling of lane lines are the most essential steps for the generation of a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, the multi-region Otsu thresholding method is applied, which finds the intensity threshold of the laser data that maximizes the variance between background and road markings. The extracted road marking points are then projected onto a raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangles. To ensure the storage efficiency of the map, the lane lines are approximated by cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested under urban and expressway conditions in Hefei, China. The experimental results on the datasets show that our method achieves an excellent extraction and clustering effect, and the fitted lines reach a high position accuracy with an error of less than 10 cm.
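The extraction step rests on Otsu thresholding, which picks the intensity value that maximizes the between-class variance. A minimal single-region sketch (the paper applies it per region; the laser-intensity samples below are synthetic):

```python
def otsu_threshold(intensities, levels=256):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist = [0] * levels
    for v in intensities:
        hist[v] += 1
    total = len(intensities)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_b += hist[t]                 # background weight (values <= t)
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mu_b = sum_b / w_b
        mu_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic sample: dark road surface vs. bright (retroreflective) markings
road     = [20, 25, 22, 30, 28] * 20
markings = [200, 210, 205, 198] * 5
t = otsu_threshold(road + markings)
```

The returned threshold separates the two intensity populations; points above it would be kept as road-marking candidates.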

Keywords: curve fitting, lane-level road map, line recognition, multi-thresholding, two-stage clustering

Procedia PDF Downloads 126
7034 The Influence of Physical Activity and Health Literacy on Depression Level of First and Second Turkish Generation Living in Germany

Authors: Ceren Akyüz, Ingo Froboese

Abstract:

Health literacy has gained importance with the worldwide spread of coronavirus disease (COVID-19) and has been associated with health status in various chronic diseases. Many studies indicate that mental health can be improved by low- or moderate-intensity activity, and several mechanisms have been proposed to explain the relationship between physical activity and mental health. The aim of the present study is to investigate the levels of physical activity, health literacy, and depression in first- and second-generation Turkish people in Germany. The sample consists of 434 participants (255 females, 179 males; age 38.09 ± 13.73). 40.8% of participants are married, and 59.2% are single. Education is mostly at the university level (54.8%), with 18.9% at the graduate level. 24.9% of the participants are second generation, and 75.1% are first generation. All analyses were stratified by gender, marital status, education, generation, and income status, and by five age categories (18–30, 31–40, 41–50, 51–60, and 61–79), which were defined to account for age-specific trends while maintaining sufficient cell size for statistical analysis. The correlation of depression with physical activity and health literacy levels in first- and second-generation Turks in Germany was evaluated in order to find out whether there are significant differences between the two populations and across demographic variables (gender, marital status, education, generation, income status), using the European Health Literacy Survey Questionnaire (HLS-EU-Q47), the International Physical Activity Questionnaire (IPAQ), and the Patient Health Questionnaire-9 (PHQ-9).

Keywords: health literacy, Turks in Germany, migrants, depression, physical activity

Procedia PDF Downloads 76
7033 Effect of Cap and Trade Policies for Carbon Emission Reduction on Delhi Households

Authors: Vikram Singh

Abstract:

This paper takes into account carbon tax and cap-and-trade legislation to manage Delhi's carbon emissions after a post-Kyoto treaty. It estimates the influence of carbon taxes and rebate/compensation costs at the household level. Three possible scenarios help to distinguish between a straightforward compensation/rebate and two clearly progressive formulas. The straightforward compensation essentially minimizes the regressive burden that the tax would impose. Both progressive formulas, on the other hand, generate extra revenue, which supports the adoption of more efficient vehicles, appliances, and buildings in low-income households. For the hypothetical case of a carbon price of $40/tonne, low-income households in urban and rural regions could experience a price burden of up to 5% and 9% of their income, respectively, compared with 3% and 7% for high-income households. The survey also showed that emissions of low-income households arise primarily from substantive requirements such as housing and transportation, whereas almost 40% of the emissions of high-income households are due to luxury and non-essential items. An equal distribution of revenue and incentives will not completely offset high-income households' spending on inessential items; it will merely help households invest their income in energy-efficient and less carbon-intensive items. Therefore, distributing the rebate on a per capita basis instead of per household would be more beneficial, especially for large low-income families.
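The household arithmetic sketched above can be illustrated directly; the household figures below are hypothetical stand-ins for illustration, not the paper's survey data:

```python
def price_burden(annual_emissions_t, carbon_price, income):
    """Carbon-tax burden as a fraction of annual household income."""
    return annual_emissions_t * carbon_price / income

def per_capita_rebate(total_revenue, population):
    """Revenue recycled equally per person rather than per household."""
    return total_revenue / population

carbon_price = 40.0  # $/tonne, the paper's hypothetical case
# Hypothetical households (illustrative numbers only)
low_income  = {"income": 4000.0,  "emissions_t": 4.5,  "members": 5}
high_income = {"income": 40000.0, "emissions_t": 30.0, "members": 3}

burden_low  = price_burden(low_income["emissions_t"], carbon_price, low_income["income"])
burden_high = price_burden(high_income["emissions_t"], carbon_price, high_income["income"])

revenue = (low_income["emissions_t"] + high_income["emissions_t"]) * carbon_price
rebate_each = per_capita_rebate(revenue, low_income["members"] + high_income["members"])
rebate_low  = rebate_each * low_income["members"]
```

Because the rebate is distributed per capita, the large low-income family recovers more than it pays in tax, which is the progressive effect the paper argues for.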

Keywords: household emission, carbon credit, carbon intensity, greenhouse gas emission, carbon generation based incentives

Procedia PDF Downloads 430
7032 Dual Set Point Governor Control Structure with Common Optimum Temporary Droop Settings for both Islanded and Grid Connected Modes

Authors: Deepen Sharma, Eugene F. Hill

Abstract:

For nearly 100 years, hydro-turbine governors have operated with only a frequency set point. This natural governor action means that the governor responds to disturbances in system frequency with changing megawatt output. More and more, power system managers are demanding that governors operate with constant megawatt output. One way of doing this is to introduce a second set point, called a power set point, into the control structure. The control structure investigated and analyzed in this paper is unique in that it utilizes a power reference set point in addition to the conventional frequency reference set point. An optimum set of temporary droop parameters, derived from the turbine-generator inertia constant and the penstock water start time for stable islanded operation, is shown to be equally applicable for a satisfactory rate of generator loading in grid-connected mode. A theoretical development shows why this is the case. The performance of the control structure has been investigated and established through a simulation study in MATLAB/Simulink as well as by testing the real-time controller performance on a 15 MW Kaplan turbine and generator. Recordings have been made using the LabVIEW data acquisition platform. The hydro-turbine governor control structure investigated in this paper thus eliminates the need for two separate sets of temporary droop parameters, one valid for islanded mode and the other for interconnected operation.
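The abstract does not give its derivation, but one widely cited classical starting point, Hovey's tuning rule, relates the temporary droop and reset time to exactly these two quantities: the penstock water start time Tw and the inertia constant H. The sketch below is illustrative of that classical rule, not the authors' optimum formula, and the machine values are assumed:

```python
def hovey_settings(Tw, H):
    """Classical Hovey tuning rule for a hydro governor:
    temporary droop = 2*Tw/Tm and reset time = 4*Tw,
    with mechanical starting time Tm = 2*H.
    Tw and H in seconds; droop is per-unit."""
    Tm = 2.0 * H
    temporary_droop = 2.0 * Tw / Tm
    reset_time = 4.0 * Tw
    return temporary_droop, reset_time

# Illustrative values for a medium Kaplan unit (not the paper's machine)
bt, Tr = hovey_settings(Tw=1.5, H=4.0)
```

The paper's contribution is showing that one such optimized pair of settings can serve both islanded and grid-connected modes.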

Keywords: frequency set point, hydro governor, interconnected operation, isolated operation, power set point

Procedia PDF Downloads 365
7031 Modified Mangrove Pens for Polyculture System in Mud Crab (Scylla serrata) and Milkfish (Chanos chanos) Production

Authors: Laurence G. Almoguera, Vitaliana U. Malamug, Armando N. Espino, Marvin M. Cinense

Abstract:

The mangrove pens were modified to produce mud crab (Scylla serrata) and milkfish (Chanos chanos) in a polyculture system. The modification was done by adding excavations inside the pens. The water quality parameters (dissolved oxygen, pH, salinity, and temperature) were monitored, and the recovery and production rate in each pen were evaluated. The experiment was conducted over a rearing period of 143 days in nine mangrove pens, each having an area of 32 m² with an average net enclosure height of 3 m from the soil surface. The three different pen designs (existing design with canal only, 43% excavation by area, and 54% excavation by area) were designated as T₁, T₂, and T₃, respectively. All experimental units were stocked with 31 crablets (average weight 33.3 g) and an additional 130 milkfish fingerlings (average weight 0.11 g). The water quality parameters recorded in the pens were favorable for the growth and recovery of the mud crab and milkfish, except for dissolved oxygen (DO), which was found to be the reason for the total mortality of the stocked milkfish. For mud crab, the highest mean recovery was recorded in T₂ (34.41%), followed by T₃ (26.91%), with the lowest in T₁ (21.50%). The production rate followed the same trend as the recovery, where T₂ (74.49 g/m²) obtained the highest, followed by T₃ (55 g/m²), and the lowest was in T₁ (34.87 g/m²). The statistical analysis revealed that the variations in mud crab recovery were not significant, while in terms of production rate, the modified mangrove pens were found to be more effective than the existing design. Due to the total mortality of the cultured milkfish, the current set-up of modified mangrove pens was found to be unsuitable for the polyculture of milkfish and mud crab.

Keywords: aquasilviculture, milkfish, modified mangrove pen, mud crab, polyculture, production rate

Procedia PDF Downloads 190
7030 Process Development of pVAX1/lacZ Plasmid DNA Purification Using Design of Experiment

Authors: Asavasereerat K., Teacharsripaitoon T., Tungyingyong P., Charupongrat S., Noppiboon S., Hochareon L., Kitsuban P.

Abstract:

The third generation of vaccines is based on gene therapy, where DNA is introduced into patients. The antigenic or therapeutic proteins encoded by the transgene DNA trigger an immune response to counteract various diseases. Moreover, DNA vaccines offer customizable protection and treatment with high stability, so their production has become of interest. According to USFDA guidance for industry, the recommended limit for host-cell impurities is lower than 1%, and the homogeneity of the active conformation, supercoiled DNA, should be more than 80%. Thus, a purification strategy using two-step chromatography has been established and verified for its robustness. Herein, pVAX1/lacZ, a pre-approved USFDA DNA vaccine backbone, was used and transformed into E. coli strain DH5α. Three purification process parameters, the sample-loading flow rate and the salt concentrations in the washing and eluting buffers, were studied, and the experiment was designed using the response surface method with a central composite face-centered (CCF) model. The designed range of the selected parameters was a 10% variation from the optimized set point as a safety factor. The purity, expressed as the percentage of supercoiled conformation obtained from each chromatography step (AIEX and HIC), was analyzed by HPLC. The response data were used to establish a regression model and were statistically analyzed, followed by Monte Carlo simulation using SAS JMP. The purity of the product obtained from AIEX and HIC is between 89.4 and 92.5% and between 88.3 and 100.0%, respectively. Monte Carlo simulation showed that the pVAX1/lacZ purification process is robust, with 0.90 confidence intervals of 90.18-91.00% and 95.88-100.00% for AIEX and HIC, respectively.
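The CCF design mentioned above combines two-level factorial corners, face-centred axial points (alpha = 1), and centre replicates. A small sketch of how such a design is enumerated in coded units (the factor names are this paper's three parameters; the enumeration itself is generic):

```python
from itertools import product

def ccf_design(n_factors, n_center=1):
    """Central composite face-centred (CCF) design in coded units (-1, 0, +1)."""
    # Full two-level factorial corners
    pts = [list(p) for p in product([-1, 1], repeat=n_factors)]
    # Face-centred axial points: alpha = 1, one factor varied at a time
    for i in range(n_factors):
        for a in (-1, 1):
            axial = [0] * n_factors
            axial[i] = a
            pts.append(axial)
    # Centre-point replicates
    pts += [[0] * n_factors for _ in range(n_center)]
    return pts

# Three factors: loading flow rate, wash-buffer salt, elution-buffer salt
design = ccf_design(3)
```

For three factors this gives 8 corner runs, 6 face-centred runs, and the centre point(s); each row would then be decoded to physical units within the ±10% range around the set point.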

Keywords: AIEX, DNA vaccine, HIC, purification, response surface method, robustness

Procedia PDF Downloads 201
7029 Influence of Loudness Compression on Hearing with Bone Anchored Hearing Implants

Authors: Anja Kurz, Marc Flynn, Tobias Good, Marco Caversaccio, Martin Kompis

Abstract:

Bone Anchored Hearing Implants (BAHI) are routinely used in patients with conductive or mixed hearing loss, e.g. if conventional air conduction hearing aids cannot be used. New sound processors and new fitting software now allow the separate adjustment of parameters such as loudness compression ratios or maximum power output. Today it is unclear how the choice of these parameters influences aided speech understanding in BAHI users. In this prospective experimental study, the effects of varying the compression ratio and lowering the maximum power output of a BAHI were investigated. Twelve experienced adult subjects with a mixed hearing loss participated in this study. Four different compression ratios (1.0; 1.3; 1.6; 2.0) were tested along with two different maximum power output settings, resulting in a total of eight different programs. Each participant tested each program for two weeks. A blinded Latin square design was used to minimize bias. For each of the eight programs, speech understanding in quiet and in noise was assessed. For speech in quiet, the Freiburg number test and the Freiburg monosyllabic word test at 50, 65, and 80 dB SPL were used. For speech in noise, the Oldenburg sentence test was administered. Speech understanding in quiet and in noise was improved significantly in the aided condition in every program when compared to the unaided condition. However, no significant differences were found between any of the eight programs. In contrast, on a subjective level there was a significant preference for medium compression ratios of 1.3 to 1.6 and for the higher maximum power output.
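A simple way to picture what the studied ratios do is a broadband compressor that is linear below a kneepoint and has an input/output slope of 1/ratio above it. The 65 dB kneepoint below is an assumption for illustration, not the fitting software's actual value:

```python
def compressed_output_db(input_db, kneepoint_db, ratio, linear_gain_db=0.0):
    """Output level of a simple static compressor:
    linear below the kneepoint, slope 1/ratio above it."""
    if input_db <= kneepoint_db:
        return input_db + linear_gain_db
    return kneepoint_db + (input_db - kneepoint_db) / ratio + linear_gain_db

# An 80 dB SPL input under the four studied compression ratios
outputs = {cr: compressed_output_db(80.0, 65.0, cr) for cr in (1.0, 1.3, 1.6, 2.0)}
```

Higher ratios squeeze loud inputs harder, which is why the choice interacts with the maximum power output setting.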

Keywords: bone anchored hearing implant, BAHA, compression, maximum power output, speech understanding

Procedia PDF Downloads 382
7028 Development of a Geomechanical Risk Assessment Model for Underground Openings

Authors: Ali Mortazavi

Abstract:

The main objective of this research project is to delve into the multitude of geomechanical risks associated with the various mining methods employed within the underground mining industry. Controlling geotechnical design parameters and operational factors affecting the selection of a suitable mining technique for a given underground mining condition is considered from a risk assessment point of view. Important geomechanical challenges are investigated as appropriate and relevant to the commonly used underground mining methods. Given the complicated nature of rock mass in-situ, and the complicated boundary conditions and operational complexities associated with various underground mining methods, the selection of a safe and economic mining operation is of paramount significance. Rock failure at varying scales within underground mining openings is always a threat to mining operations and causes human and capital losses worldwide. Geotechnical design is a major design component of all underground mines and essentially dominates the safety of an underground mine. With regard to the uncertainties that exist in rock characterization prior to mine development, there are always risks associated with inappropriate design as a function of the mining conditions and the selected mining method. Uncertainty often results from the inherent variability of rock masses, which in turn is a function of both the geological materials and the rock mass in-situ conditions. The focus of this research is on developing a methodology that enables a geomechanical risk assessment of given underground mining conditions. The outcome of this research is a geotechnical risk analysis algorithm, which can be used as an aid in selecting the appropriate mining method as a function of mine design parameters (e.g., rock in-situ properties, design method, and governing boundary conditions such as in-situ stress and groundwater).
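A common building block for such a risk analysis algorithm is a likelihood-consequence risk matrix. The scoring bands below are illustrative assumptions, not the project's calibrated model:

```python
def risk_rating(likelihood, consequence):
    """Qualitative risk score on a 5x5 likelihood-consequence matrix.
    Both inputs are ranks from 1 (lowest) to 5 (highest); the band
    thresholds here are illustrative."""
    score = likelihood * consequence
    if score >= 15:
        return score, "high"
    if score >= 6:
        return score, "moderate"
    return score, "low"

# Hypothetical hazard for a candidate mining method: likely rockburst
# (likelihood 3) with severe consequence (5)
rockburst = risk_rating(likelihood=3, consequence=5)
```

In the proposed methodology, each geomechanical hazard of a candidate mining method would receive such a rating, and the method with the acceptable aggregate risk would be preferred.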

Keywords: geomechanical risk assessment, rock mechanics, underground mining, rock engineering

Procedia PDF Downloads 141
7027 White Light Emitting Carbon Dots- Surface Modification of Carbon Dots Using Auxochromes

Authors: Manasa Perikala, Asha Bhardwaj

Abstract:

Fluorescent carbon dots (CDs), a young member of the carbon nanomaterial family, have gained a lot of research attention across the globe due to their highly luminescent and stable emission properties, non-toxic behavior, and zero re-absorption loss. These dots have the potential to replace traditional semiconductor quantum dots in light-emitting devices (LEDs, fiber lasers) and other photonic devices (temperature sensors, UV detectors). One major drawback of carbon dots, however, is that, to date, the actual mechanism of photoluminescence (PL) in carbon dots is still an open topic of discussion among researchers across the globe. PL mechanisms of CDs based on wide particle size distribution, the effect of surface groups, hybridization in carbon, and charge transfer have been proposed. Although these mechanisms explain the PL of CDs to an extent, no universally accepted mechanism that explains their complete PL behavior has been put forth. In our work, we report parameters affecting the size and surface of CDs, such as reaction time, synthesis temperature, and concentration of precursors, and their effects on the optical properties of the carbon dots. The effect of auxochromes on the emission properties and the re-modification of the carbon surface using an external surface functionalizing agent are discussed in detail. All explanations are supported by UV-Visible absorption and emission spectroscopies, Fourier transform infrared spectroscopy, transmission electron microscopy, and X-ray diffraction techniques. Once the origin of PL in CDs is understood, the parameters affecting PL centers can be modified to tailor the optical properties of these dots, which can enhance their application in the fabrication of LEDs and other photonic devices.

Keywords: carbon dots, photoluminescence, size effects on emission in CDs, surface modification of carbon dots

Procedia PDF Downloads 128
7026 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi Agent Approach

Authors: M. Taheri Tehrani, H. Ajorloo

Abstract:

In this paper, an intelligent multi-agent framework is developed for each router, in which agents have two vital functionalities, traffic shaping and buffer allocation, and are positioned at the ports of the routers. With the traffic-shaping functionality, agents shape the forwarded traffic by dynamic, real-time allocation of the token generation rate in a Token Bucket algorithm, and with the buffer-allocation functionality, agents share their buffer capacity with each other based on their needs and the conditions of the network. This dynamic and intelligent framework gives some ports the opportunity to work better under bursty and busier conditions. The agents work intelligently based on a Reinforcement Learning (RL) algorithm and consider the effective parameters in their decision process. As RL is limited in how many parameters it can consider in its decision process due to the volume of calculations, we utilize our novel method, which applies Principal Component Analysis (PCA) to the RL inputs and gives the algorithm the ability to consider as many parameters as needed in its decision process. When this implementation is compared to our previous work, where traffic shaping was done without any sharing or dynamic allocation of buffer size for each port, lower packet drop in the whole network, specifically in the source routers, can be seen. These methods are implemented in our previously proposed intelligent simulation environment to allow a better comparison of the performance metrics. The results obtained from this simulation environment show an efficient and dynamic utilization of resources in terms of the bandwidth and buffer capacities pre-allocated to each port.
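The traffic-shaping side is built on the Token Bucket algorithm. A minimal sketch with a fixed token rate (where the paper's agents would instead adjust the rate via RL):

```python
class TokenBucket:
    """Token-bucket traffic shaper: tokens accrue at `rate` per second
    up to `capacity`; a packet of `size` tokens is forwarded only if
    enough tokens are available at time `now`."""

    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, size, now):
        # Refill tokens for the elapsed time, capped at the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

# Fixed rate for illustration; an RL agent would tune `rate` per port
bucket = TokenBucket(rate=100.0, capacity=200.0)
sent = [bucket.allow(150, t) for t in (0.0, 0.1, 2.0)]
```

The second packet is dropped because the bucket has not refilled, which is exactly the burst-limiting behavior the per-port agents trade off against buffer sharing.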

Keywords: principal component analysis, reinforcement learning, buffer allocation, multi-agent systems

Procedia PDF Downloads 513
7025 Porous Bluff-Body Disc on Improving the Gas-Mixing Efficiency

Authors: Shun-Chang Yen, You-Lun Peng, Kuo-Ching San

Abstract:

A numerical study of a bluff-body structure with multiple holes was conducted using ANSYS Fluent computational fluid dynamics analysis. The effects of the hole number and jet inclination angle were considered under a fixed gas flow rate with a nonreactive gas. The bluff body with multiple holes can transform axial momentum into radial and tangential momentum and increase the swirl number (S). The concentration distribution in the mixing of a central carbon dioxide (CO2) jet and an annular air jet was used to analyze the mixing efficiency. Three bluff bodies with differing hole numbers (H = 3, 6, and 12) and three jet inclination angles (θ = 45°, 60°, and 90°) were designed for analysis. The Reynolds normal stress increases with the inclination angle. The Reynolds shear stress, average turbulence intensity, and average swirl number decrease with the inclination angle. For an unsymmetrical hole configuration (i.e., H = 3), the streamline patterns exhibited an unsymmetrical flow field. The highest mixing efficiency (i.e., the lowest integral gas fraction of CO2) occurred at H = 3. Furthermore, the highest swirl number coincided with the strongest effect on the mass fraction of CO2. Therefore, an unsymmetrical hole arrangement induced a high swirl flow behind the porous disc.
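The swirl number S quoted above is conventionally defined as the axial flux of tangential momentum divided by the axial flux of axial momentum times the nozzle radius. A discrete sketch of that definition on an assumed solid-body-rotation profile (illustrative, not the paper's CFD fields):

```python
def swirl_number(r, rho, u_axial, w_tangential, R):
    """Swirl number S = G_theta / (G_x * R), with the momentum fluxes
    integrated over discrete radial stations by the trapezoidal rule."""
    def trapz(y):
        return sum((y[i] + y[i + 1]) * (r[i + 1] - r[i]) / 2
                   for i in range(len(r) - 1))
    # Axial flux of tangential momentum: integral of rho*u*w*r^2 dr
    g_theta = trapz([rho * u * w * ri ** 2
                     for ri, u, w in zip(r, u_axial, w_tangential)])
    # Axial flux of axial momentum: integral of rho*u^2*r dr
    g_x = trapz([rho * u * u * ri for ri, u in zip(r, u_axial)])
    return g_theta / (g_x * R)

# Solid-body rotation w = Omega*r with uniform axial velocity (assumed)
r = [i * 0.01 for i in range(11)]        # radial stations, 0 to 0.1 m
u = [10.0] * 11                          # uniform axial velocity, m/s
w = [50.0 * ri for ri in r]              # Omega = 50 rad/s
S = swirl_number(r, rho=1.2, u_axial=u, w_tangential=w, R=0.1)
```

For this profile the analytic value is Omega*R/(2u) = 0.25, and the discrete integral lands close to it.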

Keywords: bluff body with multiple holes, computational fluid dynamics, swirl-jet flow, mixing efficiency

Procedia PDF Downloads 351
7024 Improving Diagnostic Accuracy of Ankle Syndesmosis Injuries: A Comparison of Traditional Radiographic Measurements and Computed Tomography-Based Measurements

Authors: Yasar Samet Gokceoglu, Ayse Nur Incesu, Furkan Okatar, Berk Nimetoglu, Serkan Bayram, Turgut Akgul

Abstract:

Ankle syndesmosis injuries pose a significant challenge in orthopedic practice due to their potential for prolonged recovery and chronic ankle dysfunction. Accurate diagnosis and management of these injuries are essential for achieving optimal patient outcomes. The use of radiological methods, such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), plays a vital role in the accurate diagnosis of syndesmosis injuries in the context of ankle fractures. Treatment options for ankle syndesmosis injuries vary, with surgical interventions such as screw fixation and suture-button implantation being commonly employed. The choice of treatment is influenced by the severity of the injury and the presence of associated fractures. Additionally, the mechanism of injury, such as pure syndesmosis injury or specific fracture types, can impact the stability and management of syndesmosis injuries. Ankle fractures with syndesmosis injury present a complex clinical scenario, requiring accurate diagnosis, appropriate reduction, and tailored management strategies. The interplay between the mechanism of injury, associated fractures, and treatment modalities significantly influences the outcomes of these challenging injuries. The long-term outcomes and patient satisfaction following ankle fractures with syndesmosis injury are crucial considerations in the field of orthopedics. Patient-reported outcome measures, such as the Foot and Ankle Outcome Score (FAOS), provide essential information about functional recovery and quality of life after these injuries. When diagnosing syndesmosis injuries, standard measurements, such as the medial clear space, tibiofibular overlap, tibiofibular clear space, anterior tibiofibular ratio (ATFR), and the anterior-posterior tibiofibular ratio (APTF), are assessed through radiographs and computed tomography (CT) scans. 
These parameters are critical in evaluating the presence and severity of syndesmosis injuries, enabling clinicians to choose the most appropriate treatment approach. Despite advancements in diagnostic imaging, challenges remain in accurately diagnosing and treating ankle syndesmosis injuries. Traditional diagnostic parameters, while beneficial, may not capture the full extent of the injury or provide sufficient information to guide therapeutic decisions. This gap highlights the need for exploring additional diagnostic parameters that could enhance the accuracy of syndesmosis injury diagnoses and inform treatment strategies more effectively. The primary goal of this research is to evaluate the usefulness of traditional radiographic measurements in comparison to new CT-based measurements for diagnosing ankle syndesmosis injuries. Specifically, this study aims to assess the accuracy of conventional parameters, including medial clear space, tibiofibular overlap, tibiofibular clear space, ATFR, and APTF, in contrast with the recently proposed CT-based measurements such as the delta and gamma angles. Moreover, the study intends to explore the relationship between these diagnostic parameters and functional outcomes, as measured by the Foot and Ankle Outcome Score (FAOS). Establishing a correlation between specific diagnostic measurements and FAOS scores will enable us to identify the most reliable predictors of functional recovery following syndesmosis injuries. This comparative analysis will provide valuable insights into the accuracy and dependability of CT-based measurements in diagnosing ankle syndesmosis injuries and their potential impact on predicting patient outcomes. The results of this study could greatly influence clinical practices by refining diagnostic criteria and optimizing treatment planning for patients with ankle syndesmosis injuries.
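As a toy illustration of how such radiographic parameters are combined into a screening decision (the 6 mm clear-space and 1 mm overlap cut-offs below are commonly quoted screening values and an assumption here, not this study's findings):

```python
def tibiofibular_screen(anterior_mm, posterior_mm, overlap_mm, clear_space_mm):
    """Illustrative helper: the anterior-posterior tibiofibular ratio (APTF)
    plus a simple flag for suspected syndesmosis widening based on
    commonly quoted radiographic cut-offs (assumed, not study-derived)."""
    aptf = anterior_mm / posterior_mm
    suspicious = clear_space_mm > 6.0 or overlap_mm < 1.0
    return aptf, suspicious

aptf, flag = tibiofibular_screen(anterior_mm=4.0, posterior_mm=8.0,
                                 overlap_mm=5.5, clear_space_mm=7.2)
```

The study's point is precisely that such ratio-based criteria, and the newer CT-based delta and gamma angles, need to be compared against functional outcomes before any cut-off is trusted.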

Keywords: ankle syndesmosis injury, diagnostic accuracy, computed tomography, radiographic measurements, tibiofibular syndesmosis distance

Procedia PDF Downloads 68
7023 Apparent Temperature Distribution on Scaffoldings during Construction Works

Authors: I. Szer, J. Szer, K. Czarnocki, E. Błazik-Borowa

Abstract:

People on construction scaffoldings work in a dynamically changing, often unfavourable climate. Additionally, this kind of work is performed on low-stiffness structures at considerable heights, which increases the risk of accidents. It is therefore desirable to define the parameters of the work environment that contribute to increasing the occupational safety level of construction workers. The aim of this article is to present how changes in microclimate parameters on scaffolding can impact the development of dangerous situations and accidents. For this purpose, indicators based on the human thermal balance were used. However, use of this model under construction conditions is often burdened by significant errors or is even impossible to implement due to the lack of precise data. Thus, in the target model, a modified parameter was used: the apparent environmental temperature. Apparent temperature in the proposed Scaffold Use Risk Assessment Model is the perceived outdoor temperature, caused by the combined effects of air temperature, radiative temperature, relative humidity, and wind speed (wind chill index, heat index). In the paper, correlations between the component factors and apparent temperature are presented for a facade scaffolding with a width of 24.5 m and a height of 42.3 m, located on the south-west side of a building. The distribution of factors on the scaffolding has been used to evaluate the fit of the microclimate model. The results of the studies indicate that the observed ranges of apparent temperature on the scaffolds frequently result in a worker's inability to adapt. This leads to reduced concentration and increased fatigue, adversely affects health, and consequently increases the risk of dangerous situations and accidental injuries.
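For the wind-chill branch of the apparent temperature, the standard JAG/TI (2001) wind-chill formula can be sketched directly (temperature in degrees Celsius, wind speed in km/h; outside its validity range the air temperature itself is returned):

```python
def wind_chill_c(temp_c, wind_kmh):
    """JAG/TI (2001) wind-chill index, as used by NWS and Environment
    Canada; valid for temp <= 10 C and wind >= 4.8 km/h."""
    if temp_c > 10.0 or wind_kmh < 4.8:
        return temp_c  # formula not applicable; no wind-chill correction
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * temp_c - 11.37 * v + 0.3965 * temp_c * v

# A cold, windy day on an exposed facade scaffold (illustrative values)
apparent = wind_chill_c(temp_c=0.0, wind_kmh=30.0)
```

At 0 C with a 30 km/h wind the perceived temperature drops to roughly -6.5 C, which illustrates how quickly exposed scaffold work leaves the comfort range.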

Keywords: apparent temperature, health, safety work, scaffoldings

Procedia PDF Downloads 176
7022 New Neuroplasmonic Sensor Based on Soft Nanolithography

Authors: Seyedeh Mehri Hamidi, Nasrin Asgari, Foozieh Sohrabi, Mohammad Ali Ansari

Abstract:

A new neuroplasmonic sensor based on a one-dimensional plasmonic nano-grating has been prepared. To record neural activity, the sample was exposed to different infrared lasers, and the response was then quantified through the ellipsometry parameters. Our results show efficient sensitivity to the different laser excitations.

Keywords: neural activity, plasmonic sensor, nanograting, gold thin film

Procedia PDF Downloads 392
7021 Seismic Behaviour of Bi-Symmetric Buildings

Authors: Yogendra Singh, Mayur Pisode

Abstract:

It is often observed that in multi-storeyed buildings the dynamic properties in the two directions are similar, due to which there may be coupling between the two orthogonal modes of the building. This is particularly observed in bi-symmetric buildings (buildings with structural properties and periods approximately equal in the two directions), where there is a swapping of vibrational energy between the modes in the two orthogonal directions. To avoid this coupling, the draft revision of IS:1893 proposes a minimum separation of more than 15% between the frequencies of the fundamental modes in the two directions. This study explores the seismic behaviour of bi-symmetric buildings under uniaxial and bi-axial ground motions. For this purpose, three different types of 8-storey buildings, symmetric in plan, are modelled. The first building has square columns, resulting in identical periods in the two directions. The second building, with rectangular columns, has a difference of 20% between the periods in the orthogonal directions, and the third building has half of the rectangular columns aligned in one direction and the other half aligned in the other direction. The numerical analysis of the seismic response of these three buildings is performed using a set of 22 ground motions from the PEER NGA database, scaled as per FEMA P695 guidelines to represent the same level of intensity, corresponding to the Design Basis Earthquake. The results are analyzed in terms of the displacement-time response of the buildings at roof level and the corresponding maximum inter-storey drift ratios.
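The 15% frequency-separation criterion from the draft IS:1893 revision can be checked directly from the two fundamental periods. The sketch below assumes the separation is measured relative to the larger frequency, which is a convention chosen here for illustration:

```python
def modes_well_separated(t1, t2, min_separation=0.15):
    """Check a 15%-style separation criterion between the fundamental-mode
    frequencies in the two plan directions, given the periods t1, t2 [s].
    Relative difference is taken with respect to the larger frequency
    (an assumed convention)."""
    f1, f2 = 1.0 / t1, 1.0 / t2
    return abs(f1 - f2) / max(f1, f2) > min_separation

# Building 1: identical periods (square columns)
coupled = modes_well_separated(0.80, 0.80)
# Building 2: 20% period difference (rectangular columns)
decoupled = modes_well_separated(0.80, 0.96)
```

The first building fails the criterion (modes can exchange energy), while the 20% period difference of the second satisfies it.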

Keywords: bi-symmetric buildings, design code, dynamic coupling, multi-storey buildings, seismic response

Procedia PDF Downloads 236
7020 Numerical Modelling of Dust Propagation in the Atmosphere of Tbilisi City in Case of Western Background Light Air

Authors: N. Gigauri, V. Kukhalashvili, A. Surmava, L. Intskirveli, L. Gverdtsiteli

Abstract:

Tbilisi, a large city of the South Caucasus, is a junction point connecting Asia and Europe, Russia and republics of the Asia Minor. Over the last years, its atmosphere has been experienced an increasing anthropogenic load. Numerical modeling method is used for study of Tbilisi atmospheric air pollution. By means of 3D non-linear non-steady numerical model a peculiarity of city atmosphere pollution is investigated during background western light air. Dust concentration spatial and time changes are determined. There are identified the zones of high, average and less pollution, dust accumulation areas, transfer directions etc. By numerical modeling, there is shown that the process of air pollution by the dust proceeds in four stages, and they depend on the intensity of motor traffic, the micro-relief of the city, and the location of city mains. In the interval of time 06:00-09:00 the intensive growth, 09:00-15:00 a constancy or weak decrease, 18:00-21:00 an increase, and from 21:00 to 06:00 a reduction of the dust concentrations take place. The highly polluted areas are located in the vicinity of the city center and at some peripherical territories of the city, where the maximum dust concentration at 9PM is equal to 2 maximum allowable concentrations. The similar investigations conducted in case of various meteorological situations will enable us to compile the map of background urban pollution and to elaborate practical measures for ambient air protection.

Keywords: air pollution, dust, numerical modeling, urban

Procedia PDF Downloads 178
7019 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers

Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek

Abstract:

Flow through granular materials is important to a vast array of industries, for instance in construction industry where granular layers are used for bulkheads and isolators, in chemical engineering and catalytic reactors where large surfaces of packed granular beds intensify chemical reactions, or in energy production systems, where granulates are promising materials for heat storage and heat transfer media. Despite the common usage of granulates and extensive research performed in this field, phenomena occurring between granular solid elements or between solids and fluid are still not fully understood. In the present work we analyze the heat exchange process between the flowing medium (gas, liquid) and solid material inside the granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by shapes of particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), its spatial distribution or effective characteristic dimension (total volume or surface area). We will analyze to what extent alteration of these parameters influences on flow characteristics (turbulent intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fibber probes) inside the layers is practically impossible, whereas the use of probes (e.g. thermocouples, Pitot tubes) requires drilling of holes inside the solid material. Hence, measurements of the flow inside granular layers are usually performed using for instance advanced X-ray tomography. In this respect, theoretical or numerical analyses of flow inside granulates seem crucial. 
Application of discrete element methods in combination with the classical finite volume/finite difference approaches is problematic as a mesh generation process for complex granular material can be very arduous. A good alternative for simulation of flow in complex domains is an immersed boundary-volume penalization (IB-VP) in which the computational meshes have simple Cartesian structure and impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows and also for flows in wavy channels, wavy pipes and over various shape obstacles. In these cases a formal order of approximation turned out to be in between 1 and 2, depending on the test case. The current research concentrates on analyses of the flows in dense granular layers with elements distributed in a deterministic regular manner and validation of the results obtained using LES-IB method and body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, number of elements and their distribution have huge impact on the obtained results. Ordering of the granular elements (or lack of it) affects both the pressure drop and efficiency of the heat transfer as it significantly changes mixing process.

Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations

Procedia PDF Downloads 131
7018 Sustainable Use of Laura Lens during Drought

Authors: Kazuhisa Koda, Tsutomu Kobayashi

Abstract:

Laura Island, which is located about 50 km away from downtown, is a source of water supply in Majuro atoll, which is the capital of the Republic of the Marshall Islands. Low and flat Majuro atoll has neither river nor lake. It is very important for Majuro atoll to ensure the conservation of its water resources. However, up-coning, which is the process of partial rising of the freshwater-saltwater boundary near the water-supply well, was caused by the excess pumping from it during the severe drought in 1998. Up-coning will make the water usage of the freshwater lens difficult. Thus, appropriate water usage is required to prevent up-coning in the freshwater lens because there is no other water source during drought. Numerical simulation of water usage applying SEAWAT model was conducted at the central part of Laura Island, including the water-supply well, which was affected by up-coning. The freshwater lens was created as a result of infiltration of consistent average rainfall. The lens shape was almost the same as the one in 1985. 0 of monthly rainfall and variable daily pump discharge were used to calculate the sustainable pump discharge from the water-supply well. Consequently, the total amount of pump discharge was increased as the daily pump discharge was increased, indicating that it needs more time to recover from up-coning. Thus, a pump standard to reduce the pump intensity is being proposed, which is based on numerical simulation concerning the occurrence of the up-coning phenomenon in Laura Island during the drought.

Keywords: freshwater lens, islands, numerical simulation, sustainable water use

Procedia PDF Downloads 288
7017 An Analysis of LoRa Networks for Rainforest Monitoring

Authors: Rafael Castilho Carvalho, Edjair de Souza Mota

Abstract:

As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of all the world's flora. Recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those using technologies such as Low Power Wide Area Networks (LPWAN). Promising, reliable, secure and with low energy consumption, LPWAN can connect thousands of IoT devices, and in particular, LoRa is considered one of the most successful solutions to facilitate forest monitoring applications. Despite this, the forest environment, in particular the Amazon Rainforest, is a challenge for these technologies, requiring work to identify and validate the use of technology in a real environment. To investigate the feasibility of deploying LPWAN in remote water quality monitoring of rivers in the Amazon Region, a LoRa-based test bed consisting of a Lora transmitter and a LoRa receiver was set up, both parts were implemented with Arduino and the LoRa chip SX1276. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the uni. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern since, in the real application, the device must run without maintenance for long periods of time. 
With these constraints in mind, parameters such as Spreading Factor (SF) and Coding Rate (CR), different antenna heights, and distances were tuned to better the connectivity quality, measured with RSSI and loss rate. A handheld spectrum analyzer RF Explorer was used to get the RSSI values. Distances exceeding 200 m have soon proven difficult to establish communication due to the dense foliage and high humidity. The optimal combinations of SF-CR values were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, with a signal strength of approximately -120 dBm, these being the best settings for this study so far. The rains and climate changes imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa configuration must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.

Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest

Procedia PDF Downloads 77
7016 Influence of Brazing Process Parameters on the Mechanical Properties of Nickel Based Superalloy

Authors: M. Zielinska, B. Daniels, J. Gabel, A. Paletko

Abstract:

A common nickel based superalloy Inconel625 was brazed with Ni-base braze filler material (AMS4777) containing melting-point-depressants such as B and Si. Different braze gaps, brazing times and forms of braze filler material were tested. It was determined that the melting point depressants B and Si tend to form hard and brittle phases in the joint during the braze cycle. Brittle phases significantly reduce mechanical properties (e. g. tensile strength) of the joint. Therefore, it is important to define optimal process parameters to achieve high strength joints, free of brittle phases. High ultimate tensile strength (UTS) values can be obtained if the joint area is free of brittle phases, which is equivalent to a complete isothermal solidification of the joint. Isothermal solidification takes place only if the concentration of the melting point depressant in the braze filler material of the joint is continuously reduced by diffusion into the base material. For a given brazing temperature, long brazing times and small braze filler material volumes (small braze gaps) are beneficial for isothermal solidification. On the base of the obtained results it can be stated that the form of the braze filler material has an additional influence on the joint quality. Better properties can be achieved by the use of braze-filler-material in form of foil instead of braze-filler-material in form of paste due to a reduced amount of voids and a more homogeneous braze-filler-material-composition in the braze-gap by using foil.

Keywords: diffusion brazing, microstructure, superalloy, tensile strength

Procedia PDF Downloads 358
7015 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots

Authors: Mrinalini Ranjan, Sudheesh Chethil

Abstract:

Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP) to capture this complex behavior of EEG signals. DFA considers fluctuation from local linear trends. Scale invariance of these signals is well captured in the multifractal characterisation using detrended fluctuation analysis (DFA). Analysis of long-range correlations is vital for understanding the dynamics of EEG signals. Correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in the epileptic EEG signals which quantify short and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure free) datasets of different patients in different channels. We compute the short term and long scaling exponents and report a decrease in short range scaling exponent during seizure as compared to pre-seizure and a subsequent increase during post-seizure period, while the long-term scaling exponent shows an increase during seizure activity. Our calculation of long-term scaling exponent yields a value between 0.5 and 1, thus pointing to power law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour. We find an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution in different channels can help in better identification of areas in brain most affected during seizure activity. The nature of epileptic seizures varies from patient-to-patient. 
To illustrate this, we report an increase in long-term scaling exponent for some patients which is also complemented by the recurrence plots (RP). RP is a graph that shows the time index of recurrence of a dynamical state. We perform Recurrence Quantitative analysis (RQA) and calculate RQA parameters like diagonal length, entropy, recurrence, determinism, etc. for ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that RQA parameters are higher during seizure period as compared to post seizure values, whereas for some patients post seizure values exceeded those during seizure. We attribute this to varying nature of seizure in different patients indicating a different route or mechanism during the transition. Our results can help in better understanding of the characterisation of epileptic EEG signals from a nonlinear analysis.

Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots

Procedia PDF Downloads 171
7014 Determining Design Parameters for Sizing of Hydronic Heating Systems in Concrete Thermally Activated Building Systems

Authors: Rahmat Ali, Inamullah Khan, Amjad Naseer, Abid A. Shah

Abstract:

Hydronic Heating and Cooling systems in concrete slab based buildings are increasingly becoming a popular substitute to conventional heating and cooling systems. In exploring the materials, techniques employed, and their relative performance measures, a fair bit of uncertainty exists. This research has identified the simplest method of determining the thermal field of a single hydronic pipe when acting as a part of a concrete slab, based on which the spacing and positioning of pipes for a best thermal performance and surface temperature control are determined. The pipe material chosen is the commonly used PEX pipe, which has an all-around performance and thermal characteristics with a thermal conductivity of 0.5W/mK. Concrete Test samples were constructed and their thermal fields tested under varying input conditions. Temperature sensing devices were embedded into the wet concrete at fixed distances from the pipe and other touch sensing temperature devices were employed for determining the extent of the thermal field and validation studies. In the first stage, it was found that the temperature along a specific distance was the same and that heat dissipation occurred in well-defined layers. The temperature obtained in concrete was then related to the different control parameters including water supply temperature. From the results, the temperature of water required for a specific temperature rise in concrete is determined. The thermally effective area is also determined which is then used to calculate the pipe spacing and positioning for the desired level of thermal comfort.

Keywords: thermally activated building systems, concrete slab temperature, thermal field, energy efficiency, thermal comfort, pipe spacing

Procedia PDF Downloads 329
7013 Microsimulation of Potential Crashes as a Road Safety Indicator

Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale

Abstract:

Traffic microsimulation has been used extensively to evaluate consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and every other common measured performance while at the same time traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques that were introduced at intersections in the early traffic researches carried out at the General Motor laboratory in the USA and in the Swedish traffic conflict manual have been applied to vehicles trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a base for calculating the number of conflicts that will define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures that can improve safety. Unfortunately, most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers. A great number of vehicle crashes take place with roadside objects or obstacles. Only some recent proposed indicators have been trying to address this issue. This paper introduces a new procedure based on the simulation of potential crash events for the evaluation of safety levels in microsimulation traffic scenarios, which takes into account also potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates with the random perturbation of vehicle trajectories a set of potential crashes which can be evaluated accurately in terms of DeltaV, the energy of the impact, and/or expected number of injuries or casualties. The procedure can also be applied to real trajectories giving birth to new surrogate safety performance indicators, which can be considered as “simulation-based”. 
The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles. The procedure produces results that are expressed in the same unity of measure for both vehicle to vehicle and vehicle to roadside object conflicts. The total energy for a square meter of all generated crash can be used and is shown on the map, for the test network, after the application of a threshold to evidence the most dangerous points. Without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values have been used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that results are not dependent on the different energy thresholds and different parameters of the procedure. This paper introduces a specific new procedure and the implementation in the form of a software package that is able to assess road safety, also considering potential conflicts with roadside objects. Some of the principles that are at the base of this specific model are discussed. The procedure can be applied on common microsimulation packages once vehicle trajectories and the positions of roadside barriers and obstacles are known. The procedure has many calibration parameters and research efforts will have to be devoted to make confrontations with real crash data in order to obtain the best parameters that have the potential of giving an accurate evaluation of the risk of any traffic scenario.

Keywords: road safety, traffic, traffic safety, traffic simulation

Procedia PDF Downloads 132
7012 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator

Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib

Abstract:

Introduction: The use of incubator equipped with time-lapse technology in Artificial Reproductive Technology (ART) allows a continuous surveillance. With morphocinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the different proposed time-lapse algorithms do not take account the missing data, and then some embryos could not be classified. The aim of this work is to construct a predictive model even in the case of missing data. Materials and methods: Patients: A retrospective study was performed, in biology laboratory of reproduction at the hospital ‘Femme Mère Enfant’ (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n= 557) obtained from couples (n=108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphocinetic parameters obtained during the three first days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time. N = a. T² + b. T + c. N: number of cells at T time (T in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA), a program with Visual Basic for Application (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to find a value that allows to predict the blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to appreciate the performance of the regression coefficients and the synthetize value. A cut-off value has been calculated for each regression coefficient and for the synthetize value to obtain two groups where the difference of blastocyst formation rate according to the cut-off values was maximal. The data were analyzed with SPSS (IBM, Il, Chicago, USA). Results: Among the 557 embryos, 79.7% had reached the blastocyst stage. 
The synthetize value corresponds to the value calculated with time value equal to 99, the highest AUC was then obtained. The AUC for regression coefficient ‘a’ was 0.648 (p < 0.001), 0.363 (p < 0.001) for the regression coefficient ‘b’, 0.633 (p < 0.001) for the regression coefficient ‘c’, and 0.659 (p < 0.001) for the synthetize value. The results are presented as follow: blastocyst formation rate under cut-off value versus blastocyst rate formation above cut-off value. For the regression coefficient ‘a’ the optimum cut-off value was -1.14.10-3 (61.3% versus 84.3%, p < 0.001), 0.26 for the regression coefficient ‘b’ (83.9% versus 63.1%, p < 0.001), -4.4 for the regression coefficient ‘c’ (62.2% versus 83.1%, p < 0.001) and 8.89 for the synthetize value (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression allows to predict the outcome of an embryo even in case of missing data. Three regression coefficients and a synthetize value could represent the identity card of an embryo. ‘a’ regression coefficient represents the acceleration of cells division, ‘b’ regression coefficient represents the speed of cell division. We could hypothesize that ‘c’ regression coefficient could represent the intrinsic potential of an embryo. This intrinsic potential could be dependent from oocyte originating the embryo. These hypotheses should be confirmed by studies analyzing relationship between regression coefficients and ART parameters.

Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model

Procedia PDF Downloads 305
7011 A Study of Preliminary Findings of Behavioral Patterns under Captive Conditions in Chinkara (Gazella bennettii) with Prospects for Future Conservation

Authors: Muhammad Idnan, Arshad Javid, Muhammad Nadeem

Abstract:

The present study was conducted from April 2013 to March 2014 to observe the behavioral parameters of Chinkara (Gazella bennettii) under captive conditions by comparing the captive-born and wild-caught animals for conservation strategies. Understanding the behavioral conformations plays a significant role in captive management. Due to human population explosion and mechanized hunting, the captive breeding seems to be the best way for sports hunting, bush meat, for leather industry and horns for traditional medicinal usage. Primarily, captive management has been used on trial and error basis due to deficiency of ethology of this least concerned species. Behavior of [(20 wild-caught (WC) and 10 captive-bred (CB)] adult Chinkara was observed at captive breeding facilities for ungulates at Ravi Campus, University of Veterinary and Animal Sciences at Kasur district which is situated on southeast side of Lahore. The average annual rainfall is about 650 mm, with frequent raining during monsoon. A focal sample was used to observe the various behavioral patterns for CB and WC chinkara. A similarity was observed in behavioral parameters in WC and CB animals, however, when the differences were considered, WC male deer showed a significantly higher degree of agonistic interaction as compared to the CB male chinkara. These findings suggest that there is no immediate impact of captivity on behavior of chinkara nevertheless 10 generations of captivity. It is suggested that the Chinkara is not suitable for domestication and for successful deer farming, a further study is recommended for ethology of chinkara.

Keywords: Chinkara (Gazella bennettii), domestication, deer farming, ex-situ conservation

Procedia PDF Downloads 157
7010 Blood Flow Simulations to Understand the Role of the Distal Vascular Branches of Carotid Artery in the Stroke Prediction

Authors: Muhsin Kizhisseri, Jorg Schluter, Saleh Gharie

Abstract:

Atherosclerosis is the main reason of stroke, which is one of the deadliest diseases in the world. The carotid artery in the brain is the prominent location for atherosclerotic progression, which hinders the blood flow into the brain. The inclusion of computational fluid dynamics (CFD) into the diagnosis cycle to understand the hemodynamics of the patient-specific carotid artery can give insights into stroke prediction. Realistic outlet boundary conditions are an inevitable part of the numerical simulations, which is one of the major factors in determining the accuracy of the CFD results. The Windkessel model-based outlet boundary conditions can give more realistic characteristics of the distal vascular branches of the carotid artery, such as the resistance to the blood flow and compliance of the distal arterial walls. This study aims to find the most influential distal branches of the carotid artery by using the Windkessel model parameters in the outlet boundary conditions. The parametric study approach to Windkessel model parameters can include the geometrical features of the distal branches, such as radius and length. The incorporation of the variations of the geometrical features of the major distal branches such as the middle cerebral artery, anterior cerebral artery, and ophthalmic artery through the Windkessel model can aid in identifying the most influential distal branch in the carotid artery. The results from this study can help physicians and stroke neurologists to have a more detailed and accurate judgment of the patient's condition.

Keywords: stroke, carotid artery, computational fluid dynamics, patient-specific, Windkessel model, distal vascular branches

Procedia PDF Downloads 209
7009 Modeling Depth Averaged Velocity and Boundary Shear Stress Distributions

Authors: Ebissa Gadissa Kedir, C. S. P. Ojha, K. S. Hari Prasad

Abstract:

In the present study, the depth-averaged velocity and boundary shear stress in non-prismatic compound channels with three different converging floodplain angles ranging from 1.43ᶱ to 7.59ᶱ have been studied. The analytical solutions were derived by considering acting forces on the channel beds and walls. In the present study, five key parameters, i.e., non-dimensional coefficient, secondary flow term, secondary flow coefficient, friction factor, and dimensionless eddy viscosity, were considered and discussed. An expression for non-dimensional coefficient and integration constants was derived based on the boundary conditions. The model was applied to different data sets of the present experiments and experiments from other sources, respectively, to examine and analyse the influence of floodplain converging angles on depth-averaged velocity and boundary shear stress distributions. The results show that the non-dimensional parameter plays important in portraying the variation of depth-averaged velocity and boundary shear stress distributions with different floodplain converging angles. Thus, the variation of the non-dimensional coefficient needs attention since it affects the secondary flow term and secondary flow coefficient in both the main channel and floodplains. The analysis shows that the depth-averaged velocities are sensitive to a shear stress-dependent model parameter non-dimensional coefficient, and the analytical solutions are well agreed with experimental data when five parameters are included. It is inferred that the developed model may facilitate the interest of others in complex flow modeling.

Keywords: depth-average velocity, converging floodplain angles, non-dimensional coefficient, non-prismatic compound channels

Procedia PDF Downloads 70
7008 Enhancing the Luminescence of Alkyl-Capped Silicon Quantum Dots by Using Metal Nanoparticles

Authors: Khamael M. Abualnaja, Lidija Šiller, Ben R. Horrocks

Abstract:

Metal enhanced luminescence of alkyl-capped silicon quantum dots (C11-SiQDs) was obtained by mixing C11-SiQDs with silver nanoparticles (AgNPs). C11-SiQDs have been synthesized by galvanostatic method of p-Si (100) wafers followed by a thermal hydrosilation reaction of 1-undecene in refluxing toluene in order to extract alkyl-capped silicon quantum dots from porous Si. The chemical characterization of C11-SiQDs was carried out using X-ray photoemission spectroscopy (XPS). C11-SiQDs have a crystalline structure with a diameter of 5 nm. Silver nanoparticles (AgNPs) of two different sizes were synthesized also using photochemical reduction of silver nitrate with sodium dodecyl sulphate. The synthesized Ag nanoparticles have a polycrystalline structure with an average particle diameter of 100 nm and 30 nm, respectively. A significant enhancement up to 10 and 4 times in the luminescence intensities was observed for AgNPs100/C11-SiQDs and AgNPs30/C11-SiQDs mixtures, respectively using 488 nm as an excitation source. The enhancement in luminescence intensities occurs as a result of the coupling between the excitation laser light and the plasmon bands of Ag nanoparticles; thus this intense field at Ag nanoparticles surface couples strongly to C11-SiQDs. The results suggest that the larger Ag nanoparticles i.e.100 nm caused an optimum enhancement in the luminescence intensity of C11-SiQDs which reflect the strong interaction between the localized surface plasmon resonance of AgNPs and the electric field forming a strong polarization near C11-SiQDs.

Keywords: silicon quantum dots, silver nanoparticles (AgNPs), luminescence, plasmon

Procedia PDF Downloads 374
7007 1-g Shake Table Tests to Study the Impact of PGA on Foundation Settlement in Liquefiable Soil

Authors: Md. Kausar Alam, Mohammad Yazdi, Peiman Zogh, Ramin Motamed

Abstract:

The liquefaction-induced ground settlement has caused severe damage to structures in the past decades. However, the amount of building settlement caused by liquefaction is directly proportional to the intensity of the ground shaking. To reduce this soil liquefaction effect, it is essential to examine the influence of peak ground acceleration (PGA). Unfortunately, limited studies have been carried out on this issue. In this study, a series of moderate scale 1g shake table experiments were conducted at the University of Nevada Reno to evaluate the influence of PGA with the same duration in liquefiable soil layers. The model is prepared based on a large-scale shake table with a scaling factor of N = 5, which has been conducted at the University of California, San Diego. The model ground has three soil layers with relative densities of 50% for crust, 30% for liquefiable, and 90% for dense layer, respectively. In addition, a shallow foundation is seated over an unsaturated crust layer. After preparing the model, the input motions having various peak ground accelerations (i.e., 0.16g, 0.25g, and 0.37g) for the same duration (10 sec) were applied. Based on the experimental results, when the PGA increased from 0.16g to 0.37g, the foundation increased from 20 mm to 100 mm. In addition, the expected foundation settlement based on the scaling factor was 25 mm, while the actual settlement for PGA 0.25g for 10 seconds was 50 mm.

Keywords: foundation settlement, liquefaction, peak ground acceleration, shake table test

Procedia PDF Downloads 73