Search results for: tele-seismic magnitude
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 817

757 Geoplanology Modeling and Earth Engineering Applications in Spatial Planning Related to Geological Hazards in Cilegon, Banten, Indonesia

Authors: Muhammad L. A. Dwiyoga

Abstract:

The condition of spatial land use in an industrial park needs special attention and deeper study. Geoplanology modeling can help arrange an area according to its land capability. The research method combines remote sensing analysis, Geographic Information System (GIS) analysis, and a more comprehensive analysis to determine the geological characteristics and land capability of the research area and their relation to geological disasters. Cilegon is part of Banten province, located in western Java, bounded to the north by the Strait of Borneo and to the south by the Indian Ocean. The morphology of the study area ranges from highlands to lowlands. Potential landslide-prone zones were identified in the highlands, whereas the low-lying areas are prone to flooding. Moreover, the study area is prone to earthquakes, owing to its proximity to Mount Krakatau and the subduction zone. The results show that the study area is susceptible to landslides around the Waringinkurung District, while potential flood areas lie in the Cilegon District and its surroundings. Based on the seismic data, this area falls within zones of magnitude 1.5 to 5.5 events at depths of 1 to 60 km. As for land capability, the analyses and studies carried out indicate the need to update the existing Spatial Plan map, considering the fairly rapid development of the Cilegon area.

Keywords: geoplanology, spatial plan, geological hazard, Cilegon, Indonesia

Procedia PDF Downloads 477
756 Anisotropic Behavior of Sand Stabilized with Colloidal Silica

Authors: Eleni Maria Pavlopoulou, Vasiliki N. Georgiannou, Filippos C. Chortis

Abstract:

The response of M31 sand stabilized with colloidal silica (CS) aqueous gel is investigated in the laboratory. CS is introduced into the water regime, forming a hydrosol. The low-viscosity hydrosol thickens in a controllable manner to form a stable, non-toxic gel; the gel fills the pore space, retains the pore water, and supports the grain structure. The role of colloidal silica in subsequent sand behavior is examined with the aid of direct shear, triaxial, and normal compression tests. Under the examined loading modes, while the strength of the treated sand is enhanced, its stiffness may reduce and its compressibility increase. However, in most geotechnical problems, the loading conditions are complex, involving changes in both stress magnitude and direction. Rotation of the principal stresses (σ1, σ2, σ3) in varying amounts, expressed as an angle α (from α=0° to 90°), in concurrence with increasing shear stress loading is commonly encountered in soil structures such as foundations, embankments, and underwater slopes. To assess the influence of anisotropy on the response of sands before and after their stabilization, hollow cylinder tests were performed. The behavior of the stabilized sand is compared with the characteristic sand behavior, i.e., a reduction in peak stress ratio associated with a softer stress-strain response with increasing angle α. The influence of the magnitude of the intermediate principal stress (σ2) on the mechanical response of treated and untreated sand is also examined.

Keywords: anisotropy, colloidal silica, laboratory tests, sands, soil stabilization

Procedia PDF Downloads 110
755 A Study of Plaque Inhibition Through a Stenosed Bifurcated Artery Considering Biomagnetic Blood Flow and Elastic Walls

Authors: M. A. Anwar, K. Iqbal, M. Razzaq

Abstract:

Background and Objectives: This numerical study reflects the magnetic field's effect on the reduction of plaque formation due to stenosis in a stenosed bifurcated artery. The entire artery wall is assumed to be linearly elastic, and blood flow is modeled as a Newtonian, viscous, steady, incompressible, laminar, biomagnetic fluid. Methods: An Arbitrary Lagrangian-Eulerian (ALE) technique is employed to formulate the hemodynamic flow in a bifurcated artery under the effect of an asymmetric magnetic field, with two-way fluid-structure interaction coupling. A stable P2P1 finite element pair is used to discretize the nonlinear system of partial differential equations. The resulting nonlinear system of algebraic equations is solved by the Newton-Raphson method. Results: The numerical results for displacement, velocity magnitude, pressure, and wall shear stresses for Reynolds numbers Re = 500, 1000, 1500, 2000 in the presence of magnetic fields are presented graphically. Conclusions: The numerical results show that the presence of the magnetic field influences the displacement and flow velocity magnitude considerably. The magnetic field reduces the flow separation and the recirculation area adjacent to the stenosis, and gives rise to wall shear stress.
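
As an illustration of the solution step only (not the authors' ALE/P2P1 implementation), a minimal Newton-Raphson iteration with a finite-difference Jacobian for a toy two-equation nonlinear system might look like this in Python; the residual function is a stand-in for the discretized equations:

    import numpy as np

    def newton_raphson(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
        """Solve residual(x) = 0 with a finite-difference Jacobian."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = residual(x)
            if np.linalg.norm(r) < tol:
                break
            n = x.size
            J = np.empty((n, n))
            for j in range(n):  # build the Jacobian column by column
                dx = np.zeros(n)
                dx[j] = eps
                J[:, j] = (residual(x + dx) - r) / eps
            x = x - np.linalg.solve(J, r)  # Newton update
        return x

    f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
    print(newton_raphson(f, [1.0, 1.0]))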

Keywords: bifurcation, elastic walls, finite element, wall shear stress

Procedia PDF Downloads 140
754 Applications of Space Technology in Flood Risk Mapping in Parts of Haryana State, India

Authors: B. S. Chaudhary

Abstract:

The severity and frequency of different disasters around the globe are increasing in recent years. India also faces disasters in the form of droughts, cyclones, earthquakes, landslides, and floods. One of the major causes of disasters in northern India is flooding, which brings great losses and extensive damage to agricultural crops, property, and human and animal life, causing environmental imbalances in places. The annual global losses due to floods run to over 2 billion dollars. India is a vast country with wide variations in climate and topography. Due to widespread and heavy rainfall during the monsoon months, floods of varying magnitude occur all over the country from June to September. The magnitude depends upon the intensity of rainfall, its duration, and the ground conditions at the time of rainfall. Haryana, one of the agriculturally dominated northern states, also suffers from a number of disasters such as floods, desertification, soil erosion, and land degradation. Earthquakes also occur frequently but are of small magnitude, so they do not cause much concern or damage. Most of the damage in Haryana is due to floods, which have occurred in 1978, 1988, 1993, 1995, 1998, and 2010, to mention a few. The present paper deals with Remote Sensing and GIS applications in preparing flood risk maps in parts of Haryana State, India. Satellite data of various years have been used for mapping flood-affected areas. The flooded areas have been interpreted both visually and digitally, and two classes, flooded and receded-water/wet areas, have been identified for each year. These have been analyzed in a GIS environment to prepare the risk maps, which show areas of high, moderate, and low risk depending on the frequency of flooding witnessed. The floods leave a trail of suffering in the form of unhygienic conditions due to improper sanitation, waterlogging, filth littered in the area, degradation of materials, and unsafe drinking water, making people prone to many types of diseases in the short and long run. Attempts have also been made to enumerate the causes of floods. Suggestions are given for mitigating the fury of floods and for proper management issues related to evacuation and safe places nearby.
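
A minimal sketch of the frequency-based risk classification described above, assuming co-registered binary flood masks, one per year; the arrays, year list, and class breaks are illustrative, not from the paper:

    import numpy as np

    # stand-in rasters: 1 = flooded or receded-water/wet pixel, 0 = dry
    years = ["1993", "1995", "1998", "2010"]
    rng = np.random.default_rng(0)
    masks = rng.integers(0, 2, size=(len(years), 200, 200))

    frequency = masks.sum(axis=0)           # times each pixel was flooded
    # 0 floods -> low (0), 1-2 -> moderate (1), 3+ -> high (2)
    risk = np.digitize(frequency, bins=[1, 3])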

Keywords: flood mapping, GIS, Haryana, India, remote sensing, space technology

Procedia PDF Downloads 186
753 Design of Digital IIR Filter Using Opposition Learning and Artificial Bee Colony Algorithm

Authors: J. S. Dhillon, K. K. Dhaliwal

Abstract:

In almost all digital filtering applications, digital infinite impulse response (IIR) filters are preferred over finite impulse response (FIR) filters because they provide much better performance, lower computational cost, and smaller memory requirements for similar magnitude specifications. However, digital IIR filter design problems are generally multimodal with respect to the filter coefficients, and therefore reliable methods that can provide globally optimal solutions are required. The artificial bee colony (ABC) algorithm is one such recently introduced meta-heuristic optimization algorithm. But in some cases it searches the solution space insufficiently, resulting in a weak exchange of information, and hence is not able to return better solutions. To overcome this deficiency, an opposition-based learning strategy is incorporated into ABC, and a modified version called the oppositional artificial bee colony (OABC) algorithm is proposed in this paper. Duplication of members is avoided during the run, which also augments the exploration ability. The developed algorithm is then applied to the design of optimal and stable digital IIR filter structures, where the design of low-pass (LP) and high-pass (HP) filters is carried out. Fuzzy theory is applied to maximize satisfaction of the minimum magnitude error and stability constraints. To check the effectiveness of OABC, the results are compared with some well-established filter design techniques, and it is observed that in most cases OABC returns better or at least comparable results.
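
A minimal sketch of the opposition-based learning step that distinguishes OABC from plain ABC: for each food source x in [lb, ub], the opposite point lb + ub - x is also evaluated, and the fitter half of the union is kept. The low-pass template, bounds, and filter order are illustrative, and stability handling is omitted for brevity:

    import numpy as np
    from scipy.signal import freqz

    def magnitude_error(coeffs, w, h_desired, order=2):
        """Squared magnitude error of an IIR filter against a template."""
        b, a = coeffs[:order + 1], np.r_[1.0, coeffs[order + 1:]]
        _, h = freqz(b, a, worN=w)
        return np.sum((np.abs(h) - h_desired) ** 2)

    rng = np.random.default_rng(0)
    lb, ub, dim, n = -2.0, 2.0, 5, 20          # bounds, coefficients, colony size
    pop = rng.uniform(lb, ub, (n, dim))
    opp = lb + ub - pop                         # opposition-based counterparts

    w = np.linspace(1e-3, np.pi, 128)
    h_lp = (w <= 0.4 * np.pi).astype(float)     # ideal low-pass template
    errors = np.array([magnitude_error(x, w, h_lp)
                       for x in np.vstack([pop, opp])])
    pop = np.vstack([pop, opp])[np.argsort(errors)[:n]]  # keep the fitter half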

Keywords: digital infinite impulse response filter, artificial bee colony optimization, opposition based learning, digital filter design, multi-parameter optimization

Procedia PDF Downloads 445
752 Effect of Malnutrition at Admission on Length of Hospital Stay among Adult Surgical Patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia: Prospective Cohort Study, 2022

Authors: Yoseph Halala Handiso, Zewdi Gebregziabher

Abstract:

Background: Malnutrition in hospitalized patients remains a major public health problem in both developed and developing countries. Despite the fact that malnourished patients are more prone to stay longer in hospital, there is limited data regarding the magnitude of malnutrition and its effect on length of stay among surgical patients in Ethiopia, while nutritional assessment is also often a neglected component of health service practice. Objective: This study aimed to assess the prevalence of malnutrition at admission and its effect on the length of hospital stay among adult surgical patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia, 2022. Methods: A facility-based prospective cohort study was conducted among 398 adult surgical patients admitted to the hospital. Participants were chosen using a convenience sampling technique. Subjective Global Assessment (SGA) was used to determine, within 48 hours of admission, the nutritional status of patients with a minimum stay of 24 hours. Data were collected using the Open Data Kit (ODK) version 2022.3.3 software, while Stata version 14.1 software was employed for statistical analysis. The Cox regression model was used to determine the effect of malnutrition on the length of hospital stay (LOS) after adjusting for several potential confounders taken at admission. The adjusted hazard ratio (AHR) with a 95% confidence interval was used to show the effect of malnutrition. Results: The prevalence of hospital malnutrition at admission was 64.32% (95% CI: 59%-69%) according to the SGA classification. Adult surgical patients who were malnourished at admission had a higher median LOS (12 days; 95% CI: 11-13) as compared to well-nourished patients (8 days; 95% CI: 8-9), meaning that patients malnourished at admission were at higher risk of a reduced chance of discharge with improvement (prolonged LOS) (AHR: 0.37, 95% CI: 0.29-0.47) as compared to well-nourished patients. Presence of comorbidity (AHR: 0.68, 95% CI: 0.50-0.90), polymedication (AHR: 0.69, 95% CI: 0.55-0.86), and history of admission within the previous five years (AHR: 0.70, 95% CI: 0.55-0.87) were found to be significant covariates of LOS. Conclusion: The magnitude of hospital malnutrition at admission was found to be high. Malnourished patients at admission had a higher risk of prolonged LOS as compared to well-nourished patients. The presence of comorbidity, polymedication, and history of admission were significant covariates of LOS. All stakeholders should give attention to reducing the magnitude of malnutrition and its covariates to reduce the burden of prolonged LOS.
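
A minimal sketch of the Cox model step using the lifelines library, with hypothetical column names (los_days, discharged, malnourished, comorbidity, polymedication, prior_admission); hazard ratios below 1 correspond to the reduced chance of discharge reported above:

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("surgical_cohort.csv")  # hypothetical data file

    cph = CoxPHFitter()
    cph.fit(df[["los_days", "discharged", "malnourished", "comorbidity",
                "polymedication", "prior_admission"]],
            duration_col="los_days", event_col="discharged")
    cph.print_summary()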

Keywords: effect of malnutrition, length of hospital stay, surgical patients, Ethiopia

Procedia PDF Downloads 22
751 Optical Variability of Faint Quasars

Authors: Kassa Endalamaw Rewnu

Abstract:

The variability properties of a quasar sample, spectroscopically complete to magnitude J = 22.0, are investigated on a time baseline of 2 years using three different photometric bands (U, J and F). The original sample was obtained using a combination of different selection criteria: colors, slitless spectroscopy, and variability, based on a time baseline of 1 yr. The main goals of this work are two-fold: first, to derive the percentage of variable quasars on a relatively short time baseline; secondly, to search for new quasar candidates missed by the other selection criteria and thus to estimate the completeness of the spectroscopic sample. In order to achieve these goals, we have extracted all the candidate variable objects from a sample of about 1800 stellar or quasi-stellar objects with limiting magnitude J = 22.50 over an area of about 0.50 deg². We find that > 65% of all the objects selected as possible variables are either confirmed quasars or quasar candidates on the basis of their colors. This percentage increases even further if we exclude from our lists of variable candidates a number of objects equal to that expected on the basis of 'contamination' induced by our photometric errors. The percentage of variable quasars in the spectroscopic sample is also high, reaching about 50%. On the basis of these results, we can estimate that the incompleteness of the original spectroscopic sample is < 12%. We conclude that variability analysis of data with small photometric errors can be successfully used as an efficient and independent (or at least auxiliary) selection method in quasar surveys, even when the time baseline is relatively short. Finally, when corrected for the different intrinsic time lags corresponding to a fixed observed time baseline, our data do not show a statistically significant correlation between variability and either absolute luminosity or redshift.
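
A minimal sketch of one standard way to select variable candidates while accounting for photometric errors: a chi-square test of each light curve against the constant-brightness hypothesis (the magnitudes, errors, and significance level are illustrative):

    import numpy as np
    from scipy.stats import chi2

    def is_variable(mags, sigmas, alpha=0.01):
        """Reject constant brightness if the weighted scatter is too large."""
        mags, sigmas = np.asarray(mags), np.asarray(sigmas)
        mean = np.average(mags, weights=1.0 / sigmas**2)
        stat = np.sum(((mags - mean) / sigmas) ** 2)
        return chi2.sf(stat, df=mags.size - 1) < alpha

    print(is_variable([21.90, 22.10, 21.75], [0.05, 0.05, 0.05]))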

Keywords: nuclear activity, galaxies, active quasars, variability

Procedia PDF Downloads 46
750 Survey of Hawke's Bay Tourism-Based Businesses: Tsunami Understanding and Preparation

Authors: V. A. Ritchie

Abstract:

The loss of life and livelihood experienced after the magnitude 9.3 Sumatra earthquake and tsunami on 26 December 2004 and the magnitude 9 earthquake and tsunami in northeastern Japan on 11 March 2011 has raised global awareness and brought tsunami phenomenology, nomenclature, and representation into sharp focus. At the same time, travel and tourism continue to increase, accounting for around 1 in 11 jobs worldwide. This increase in tourism is especially true for coastal zones, placing pressure on decision-makers to downplay tsunami risks and at the same time provide adequate tsunami warnings so that holidaymakers will feel confident enough to visit places of high tsunami risk. This study investigates how well tsunami preparedness messages are getting through to tourism-based businesses in Hawke's Bay, New Zealand, a region of frequent seismic activity and a high probability of experiencing a nearshore tsunami. The aim of this study is to investigate whether tourism-based businesses are well informed about tsunamis, how well they understand that information, and to what extent their clients are included in awareness-raising and evacuation processes. In high-risk tsunami zones such as Hawke's Bay, tourism-based businesses face competitive tension between short-term business profitability and longer-term reputational issues related to preventable loss of life from natural hazards such as tsunamis. This study will address ways to accommodate culturally and linguistically relevant tourist awareness measures without discouraging tourists or being too costly to implement.

Keywords: tsunami risk and response, travel and tourism, business preparedness, cross-cultural knowledge transfer

Procedia PDF Downloads 124
749 Determination of Alkali Treatment Conditions Effects That Influence the Variability of Kenaf Fiber Mean Cross-Sectional Area

Authors: Mohd Yussni Hashim, Mohd Nazrul Roslan, Shahruddin Mahzan Mohd Zin, Saparudin Ariffin

Abstract:

The fiber cross-sectional area is a crucial factor in determining the strength properties of natural fiber. Furthermore, unlike synthetic fiber, the diameter and cross-sectional area of natural fiber vary widely along and between fibers. This study aims to determine the main and interaction effects of alkali treatment conditions that influence the kenaf bast fiber mean cross-sectional area. Three alkali treatment conditions at two different levels were selected: alkali concentration at 2 and 10 w/v %; fiber immersion temperature at room temperature and at 100°C; and fiber immersion duration of 30 and 480 minutes. Untreated kenaf fiber was used as a control unit. Kenaf bast fiber bundle mounting tabs were prepared according to ASTM C1557-03. The cross-sectional area was measured using a Leica video analyzer. The results show that the kenaf fiber bundle mean cross-sectional area was reduced by 6.77% to 29.88% after alkali treatment. The analysis of variance shows that the interaction of alkali concentration and immersion time has a higher magnitude, at 0.1619, than the interaction of alkali concentration and immersion temperature, at 0.0896. For the main effects, the alkali concentration factor contributes the highest magnitude, at 0.1372, indicating decreasing variability as the level changed from low to high. It was followed by immersion temperature at 0.1261 and immersion time at 0.0696.
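
A minimal sketch of how main and interaction effects are computed in such a two-level factorial design, shown for the concentration x time pair (the four cell means are made-up numbers, not the paper's data):

    # mean cross-sectional areas at the four (concentration, time) combinations
    y_ll, y_hl = 0.95, 0.80   # (2 w/v %, 30 min), (10 w/v %, 30 min)
    y_lh, y_hh = 0.88, 0.70   # (2 w/v %, 480 min), (10 w/v %, 480 min)

    main_conc = ((y_hl + y_hh) - (y_ll + y_lh)) / 2    # concentration main effect
    main_time = ((y_lh + y_hh) - (y_ll + y_hl)) / 2    # immersion-time main effect
    interaction = ((y_hh - y_lh) - (y_hl - y_ll)) / 2  # concentration x time
    print(main_conc, main_time, interaction)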

Keywords: natural fiber, kenaf bast fiber bundles, alkali treatment, cross-sectional area

Procedia PDF Downloads 400
748 Investigation of Overarching Effects of Artificial Intelligence Implementation into Education Through Research Synthesis

Authors: Justin Bin

Abstract:

Artificial intelligence (AI) has been rapidly rising in usage recently and is already active in the daily lives of millions, from prominent AIs like the popular ChatGPT or Siri to more obscure, inconspicuous AIs like those used in social media or internet search engines. As upcoming generations grow immersed in emerging technology, AI will play a vital role in their development. In particular, the education sector, an influential part of a person's early life as a student, faces a vast ocean of possibilities concerning the implementation of AI. The main purpose of this study is to analyze the effect that AI will have on the future of the educational field. More particularly, this study delves into the following three categories: school admissions, the productivity of students, and ethical concerns (the role of human teachers, the purpose of schooling itself, and the significance of diplomas). This study synthesizes research and data on the current effects of AI on education from various published literature sources and journals, as well as estimates of further AI potential, in order to determine the main, overarching effects it will have on the future of education. For this study, a systematic organization of data in terms of type (quantitative vs. qualitative), the magnitude of the effect implicated, and other similar factors was implemented within each area of significance. The results of the study suggest that AI stands to change all the aforementioned subgroups. However, its specific effects vary in magnitude and favorability (beneficial or harmful) and will be further discussed. The results will reveal to those affiliated with the education field, such as teachers, counselors, or even parents of students, valuable information on not just the projected possibilities of AI in education but also the effects of those changes moving forward.

Keywords: artificial intelligence, education, schools, teachers

Procedia PDF Downloads 483
747 Spectroscopic Relation between Open Clusters and Globular Clusters

Authors: Robin Singh, Mayank Nautiyal, Priyank Jain, Vatasta Koul, Vaibhav Sharma

Abstract:

The curiosity to investigate space and its mysteries has always been the main impetus of human interest, since living matter has existed from the beginning of the Universe. The keen drive to uncover the secrets of stars and their unusual behavior has always ignited stellar investigation. As humankind lives in civilizations and states, stars likewise live in colonies named 'clusters'. Clusters are divided into two types: open clusters and globular clusters. An open cluster is a group of up to a thousand stars that formed from the same giant molecular cloud and, for the most part, contains Population I (very metal-rich) and Population II (mildly metal-rich) stars, whereas a globular cluster is a roughly spherical grouping of more than thirty thousand stars that orbits a galactic center and primarily contains Population III (extremely metal-poor) stars. The future scope of this paper lies in the spectroscopic investigation of globular clusters like M92 and NGC 419 and open clusters like M34 and IC 2391 in different color bands, using software like the VIREO virtual observatory, Aladin, CMUNIWIN, and MS Excel. The resulting Hertzsprung-Russell (HR) diagram is assessed against classic cosmological models like the Einstein model, the de Sitter model, and the Planck survey model for a better age estimation of the respective clusters. Color-magnitude diagrams of these clusters were obtained by photometric analysis in the g and r bands, which were then transformed into B and V bands to reveal the nature of the stars present in the individual clusters.
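
A minimal sketch of the g, r to B, V conversion step, assuming the commonly used stellar transformation of Jester et al. (2005); the abstract does not state which transformation was applied, and the input magnitudes are made up:

    def gr_to_BV(g, r):
        """Approximate SDSS g, r to Johnson B, V (Jester et al. 2005, stars)."""
        B = g + 0.39 * (g - r) + 0.21
        V = g - 0.59 * (g - r) - 0.01
        return B, V

    B, V = gr_to_BV(15.2, 14.8)   # one cluster member's photometry
    print(B, V, B - V)            # B-V colour for the colour-magnitude diagram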

Keywords: color magnitude diagram, globular clusters, open clusters, Einstein model

Procedia PDF Downloads 200
746 Design of Two-Channel Quadrature Mirror Filter Banks Using a Transformation Approach

Authors: Ju-Hong Lee, Yi-Lin Shieh

Abstract:

Two-dimensional (2-D) quadrature mirror filter (QMF) banks have been widely considered for high-quality coding of image and video data at low bit rates. For subband coding, a 2-D QMF bank is required to have an exactly linear-phase response without magnitude distortion, i.e., the perfect reconstruction (PR) characteristics. The design problem of 2-D QMF banks with PR characteristics has been considered in the literature for many years. This paper presents a transformation approach for designing 2-D two-channel QMF banks. Under a suitable one-dimensional (1-D) to two-dimensional (2-D) transformation with a specified decimation/interpolation matrix, the analysis and synthesis filters of the QMF bank are composed of 1-D causal and stable digital allpass filters (DAFs) and possess the 2-D doubly complementary half-band (DC-HB) property. This facilitates the design problem of the two-channel QMF banks by finding the real coefficients of the 1-D recursive DAFs. The design problem is formulated based on the minimax phase approximation for the 1-D DAFs. A novel objective function is then derived to obtain an optimization for 1-D minimax phase approximation. As a result, the problem of minimizing the objective function can be simply solved by using the well-known weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The novelty of the proposed design method is that the design procedure is very simple, and the designed 2-D QMF bank achieves perfect magnitude response and possesses a satisfactory phase response. Simulation results show that the proposed design method provides much better design performance and much less design complexity as compared with existing techniques.
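
A minimal numerical check of the property the design targets: for a QMF pair with H1(z) = H0(-z), freedom from magnitude distortion means |H0|² + |H1|² is flat across frequency. The toy prototype below (not an optimized DAF-based design) happens to satisfy it exactly:

    import numpy as np
    from scipy.signal import freqz

    h0 = np.array([0.0, 0.5, 0.5, 0.0])       # toy low-pass prototype
    h1 = h0 * (-1.0) ** np.arange(h0.size)    # QMF mirror: H1(z) = H0(-z)

    w, H0 = freqz(h0, worN=512)
    _, H1 = freqz(h1, worN=512)
    recon = np.abs(H0) ** 2 + np.abs(H1) ** 2  # flat iff no magnitude distortion
    print(recon.min(), recon.max())            # both ~1.0 here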

Keywords: Quincunx QMF bank, doubly complementary filter, digital allpass filter, WLS algorithm

Procedia PDF Downloads 207
745 Seismic Behavior of Existing Reinforced Concrete Buildings in California under Mainshock-Aftershock Scenarios

Authors: Ahmed Mantawy, James C. Anderson

Abstract:

Numerous cases of earthquakes (main-shocks) followed by aftershocks have been recorded in California. In 1992, a pair of strong earthquakes occurred within three hours of each other in Southern California. The first shock occurred near the community of Landers and was assigned a magnitude of 7.3; the second shock occurred near the city of Big Bear, about 20 miles west of the initial shock, and was assigned a magnitude of 6.2. In the same year, a series of three earthquakes occurred over two days in the Cape Mendocino area of Northern California. The main-shock was assigned a magnitude of 7.0, while the second and third shocks were both assigned a value of 6.6. This paper investigates the effect of a main-shock accompanied by aftershocks of significant intensity on reinforced concrete (RC) frame buildings, examining nonlinear behavior using the PERFORM-3D software. A 6-story building in San Bruno and a 20-story building in North Hollywood were selected for the study, as both have RC moment-resisting frame systems. The buildings are also instrumented at multiple floor levels as part of the California Strong Motion Instrumentation Program (CSMIP). Both buildings have recorded responses during past events such as the Loma Prieta and Northridge earthquakes, which were used to verify the response parameters of the numerical models in PERFORM-3D. The verification of the numerical models shows good agreement between the calculated and recorded response values. Then, different scenarios of a main-shock followed by a series of aftershocks from real cases in California were applied to the building models in order to investigate the structural behavior of the moment-resisting frame system. The behavior was evaluated in terms of lateral floor displacements, ductility demands, and inelastic behavior at critical locations. The analysis results showed that permanent displacements may occur due to plastic deformation during the main-shock, which can lead to larger displacements during aftershocks. Also, the inelastic response at plastic hinges during the main-shock can change the hysteretic behavior during the aftershocks. Higher ductility demands can also occur when buildings are subjected to trains of ground motions compared to the case of individual ground motions. A general conclusion is that the occurrence of aftershocks following an earthquake can lead to increased damage within the elements of RC frame buildings. Current code provisions for seismic design do not consider the probability of significant aftershocks when designing a new building in zones of high seismic activity.

Keywords: reinforced concrete, existing buildings, aftershocks, damage accumulation

Procedia PDF Downloads 258
744 Finite Element Modeling of the Effects of Loss of Rigid Pavement Slab Support Due to Built-In Curling

Authors: Ali Ashtiani, Cesar Carrasco

Abstract:

Accurate determination of the thermo-mechanical responses of jointed concrete pavement slabs is essential to implement an effective mechanistic design. Temperature-induced curling of concrete slabs can produce premature top-down cracking in rigid pavements. Curling of concrete slabs can result from daily temperature variation through the slab thickness. Slab curling can also result from temperature gradients due to hot-weather construction, drying shrinkage, and creep that are permanently built into the slabs. The existence of permanent curling implies that concrete slabs are not flat at zero temperature gradient. In this case, slabs may not be in full contact with the underlying base layer when subjected to traffic. Built-in curling can be a major factor producing loss of slab support. The magnitude of the stresses induced in the slabs is influenced by the stiffness of the underlying foundation layers and the contact condition along the slab-foundation interface. An approach for finite element modeling of the effect of loss of slab support due to built-in curling is presented in this paper. A series of parametric studies is carried out for a pavement system loaded with a combination of traffic and thermal loads, considering different levels of built-in curling and different foundation rigidities. The results explain the effect of loss of support on the magnitude of stresses produced in concrete slabs. The results of the parametric study can also be used to evaluate whether the governing equations that are used to idealize the behavior of jointed concrete pavements and the effect of loss of support have been accurately selected and implemented in the finite element model.

Keywords: built-in curling, finite element modeling, loss of slab support, rigid pavement

Procedia PDF Downloads 126
743 Quantifying Fatigue during Periods of Intensified Competition in Professional Ice Hockey Players: Magnitude of Fatigue in Selected Markers

Authors: Eoin Kirwan, Christopher Nulty, Declan Browne

Abstract:

The professional ice hockey season consists of approximately 60 regular season games, with periods of fixture congestion occurring several times in the average season. These periods of congestion provide limited time for recovery, exposing the athletes to the risk of competing whilst not fully recovered. Although a body of research is growing with respect to monitoring fatigue, particularly during periods of congested fixtures in team sports such as rugby and soccer, it has received little to no attention thus far in ice hockey athletes. Consequently, there is limited knowledge on monitoring tools that might effectively detect a fatigue response and the magnitude of fatigue that can accumulate when recovery is limited by competitive fixtures. The benefit of quantifying and establishing fatigue status is the ability to optimise training and provide pertinent information on player health, injury risk, availability, and readiness. Some commonly used methods to assess the fatigue and recovery status of athletes include perceived fatigue and wellbeing questionnaires, tests of muscular force, and ratings of perceived exertion (RPE). These measures are widely used in popular team sports such as soccer and rugby and show promise as assessments of fatigue and recovery status for ice hockey athletes. As part of a larger study, this study explored the magnitude of changes in adductor muscle strength after game play and throughout a period of fixture congestion, and examined the relationship of internal game load and perceived wellbeing with adductor muscle strength. Methods: 8 professional ice hockey players from a British Elite League club volunteered to participate (age = 29.3 ± 2.49 years, height = 186.15 ± 6.75 cm, body mass = 90.85 ± 8.64 kg). Prior to and after competitive games, each player performed trials of the adductor squeeze test at 0° hip flexion, with the lead investigator using hand-held dynamometry. Rating of perceived exertion was recorded for each game, and individual session RPE was calculated from total ice time data, as sketched below. After each game, players completed a 5-point questionnaire to assess perceived wellbeing. Data were collected from six competitive games, 1 practice, and 36 hours post the final game, over a 10-day period. Results: Pending final data collection in February. Conclusions: Pending final data collection in February.
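
The session RPE computation referred to above is, in the usual convention, the exertion rating multiplied by the playing time; a minimal sketch (the numbers are illustrative):

    def srpe_load(rpe, ice_time_min):
        """Session RPE load (arbitrary units) = RPE x minutes of ice time."""
        return rpe * ice_time_min

    print(srpe_load(7, 22))   # RPE 7 over 22 min of ice time -> 154 a.u.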

Keywords: congested fixtures, fatigue monitoring, ice hockey, readiness

Procedia PDF Downloads 110
742 Analysis and Quantification of Historical Drought for Basin Wide Drought Preparedness

Authors: Joo-Heon Lee, Ho-Won Jang, Hyung-Won Cho, Tae-Woong Kim

Abstract:

Drought is a recurrent climatic feature that occurs in virtually every climatic zone around the world. Korea experiences drought at the regional scale almost every year, mainly during the winter and spring seasons. Moreover, extremely severe droughts at the national scale have occurred at a frequency of six to seven years. Various drought indices have been developed as tools to quantitatively monitor different types of droughts and are utilized in the field of drought analysis. Since drought is closely related to the climatological and topographic characteristics of drought-prone areas, basins where droughts frequently occur need separate drought preparedness and contingency plans. In this study, an analysis using statistical methods was carried out for the historical droughts that occurred in the five major river basins in Korea so that drought characteristics could be quantitatively investigated. It was also aimed at providing information with which differentiated and customized drought preparedness plans can be established based on basin-level analysis results. Conventional methods quantify drought by applying various drought indices. However, the evaluation results for the same drought event differ according to the analysis technique; in particular, the evaluation of a drought event differs depending on how we view the severity or duration of drought in the evaluation process. Therefore, a drought history was drawn for the most severely affected five major river basins of Korea by investigating a magnitude of drought that can simultaneously consider severity, duration, and damaged area, applying drought run theory to the SPI (Standardized Precipitation Index), which efficiently quantifies meteorological drought. Further, a quantitative analysis of the historical extreme droughts from various viewpoints, such as average severity, duration, and magnitude of drought, was attempted. At the same time, the historical drought events were analyzed quantitatively by estimating return periods from SDF (severity-duration-frequency) curves derived for the five major river basins through parametric regional drought frequency analysis. The analysis results showed that the extremely severe drought years were 1962, 1988, 1994, and 2014 in the Han River basin. Extreme droughts occurred in 1982 and 1988 in the Nakdong River basin, in 1994 in the Geum River basin, in 1988 and 1994 in the Youngsan River basin, and in 1988, 1994, 1995, and 2000 in the Seomjin River basin, while the extremely severe drought years at the national level in the Korean Peninsula were 1988 and 1994. The most damaging droughts were in 1981-1982 and 1994-1995, lasting longer than two years. The return period of the most severe drought at each river basin turned out to be 50 to 100 years.
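
A minimal sketch of run-theory event extraction from an SPI series, assuming the common convention that a drought run is an unbroken spell below a truncation level, its severity the cumulative deficit, and its mean intensity the severity divided by duration (the threshold and sample series are illustrative):

    import numpy as np

    def drought_runs(spi, threshold=-1.0):
        """Return (duration, severity, mean intensity) for runs below threshold."""
        events, run = [], []
        for value in spi:
            if value < threshold:
                run.append(value)
            elif run:
                severity = -sum(run)               # cumulative deficit
                events.append((len(run), severity, severity / len(run)))
                run = []
        if run:                                    # series ends inside a run
            severity = -sum(run)
            events.append((len(run), severity, severity / len(run)))
        return events

    monthly_spi = np.array([0.3, -1.2, -1.8, -0.4, -1.1, -1.6, -2.0, 0.5])
    print(drought_runs(monthly_spi))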

Keywords: drought magnitude, regional frequency analysis, SPI, SDF (severity-duration-frequency) curve

Procedia PDF Downloads 375
741 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA that uses catalogues to develop area or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions, where such data are scarce, this is especially challenging. Integrating faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates; the simplest moment-balancing conversion is sketched below. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected magnitude threshold may potentially occur in the background and not only at the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, could rupture during a single fault-to-fault rupture. It is therefore essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur randomly in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard, and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeastern France) where the fault is assumed to have a low strain rate.
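
A minimal sketch of the simplest such conversion, plain moment balancing with the Hanks-Kanamori moment-magnitude relation and an assumed crustal shear modulus; this is not the SHERIFS algorithm, which spreads the slip-rate budget over many single-fault and fault-to-fault ruptures, and the fault dimensions and rates below are illustrative:

    MU = 3.0e10  # shear modulus, Pa (common crustal assumption)

    def seismic_moment(mw):
        """Hanks & Kanamori (1979): M0 in N*m."""
        return 10.0 ** (1.5 * mw + 9.1)

    def annual_rate(area_m2, slip_rate_mm_yr, mw):
        """Event rate if the whole slip budget goes into magnitude mw ruptures."""
        moment_rate = MU * area_m2 * slip_rate_mm_yr * 1e-3   # N*m per year
        return moment_rate / seismic_moment(mw)

    # a 40 km x 12 km fault slipping 0.1 mm/yr, released only as Mw 6.5 events
    rate = annual_rate(40e3 * 12e3, 0.1, 6.5)
    print(rate, 1.0 / rate)   # events per year, mean recurrence in years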

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 29
740 Scalable and Accurate Detection of Pathogens from Whole-Genome Shotgun Sequencing

Authors: Janos Juhasz, Sandor Pongor, Balazs Ligeti

Abstract:

Next-generation sequencing, especially whole genome shotgun sequencing, is becoming a common approach to gain insight into microbiomes in a culture-independent way, even in clinical practice. It not only gives us information about the species composition of an environmental sample but also opens the possibility of detecting antimicrobial resistance and novel, or currently unknown, pathogens. Accurately and reliably detecting the microbial strains is a challenging task. Here we present a sensitive approach for detecting pathogens in metagenomic samples, with special regard to detecting novel variants of known pathogens. We have developed a pipeline that uses fast short-read aligner programs (e.g., Bowtie2/BWA) and comprehensive nucleotide databases. Taxonomic binning is based on the lowest common ancestor (LCA) principle; each read is assigned to the taxon covering its most significantly hit taxa. This approach helps in balancing between sensitivity and running time. The program was tested on both experimental and synthetic data. The results indicate that our method performs as well as the state-of-the-art BLAST-based ones; furthermore, in some cases it even proves to be better, while running two orders of magnitude faster. It is sensitive and capable of identifying taxa present only in small abundance. Moreover, it needs two orders of magnitude fewer reads to complete the identification than MetaPhlAn2 does. We analyzed an experimental anthrax dataset (B. anthracis strain BA104). The majority of the reads (96.50%) were classified as Bacillus anthracis; a small portion, 1.2%, was classified as other species from the Bacillus genus. We demonstrate that the evaluation of high-throughput sequencing data is feasible in a reasonable time with good classification accuracy.
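
A minimal sketch of the LCA assignment step, assuming the taxonomy is held as a child-to-parent mapping; the toy tree is illustrative, not the pipeline's database:

    def lowest_common_ancestor(taxa, parent):
        """LCA of taxon IDs in a tree given as a child -> parent mapping."""
        def lineage(t):
            path = [t]
            while t in parent:        # walk up to the root
                t = parent[t]
                path.append(t)
            return path

        common = set(lineage(taxa[0]))
        for t in taxa[1:]:
            common &= set(lineage(t))
        # deepest shared node = first lineage entry common to every taxon
        return next(node for node in lineage(taxa[0]) if node in common)

    parent = {"B. anthracis": "Bacillus", "B. cereus": "Bacillus",
              "Bacillus": "Bacillaceae", "Bacillaceae": "root"}
    print(lowest_common_ancestor(["B. anthracis", "B. cereus"], parent))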

Keywords: metagenomics, taxonomy binning, pathogens, microbiome, B. anthracis

Procedia PDF Downloads 108
739 A Step Magnitude Haptic Feedback Device and Platform for a Better Way to Review Kinesthetic Vibrotactile 3D Design in Professional Training

Authors: Biki Sarmah, Priyanko Raj Mudiar

Abstract:

In the modern world of remotely interactive virtual reality-based learning and teaching, including professional skill-building training and acquisition practices as well as data acquisition and robotic systems, the application of field-programmable neurostimulator aids and first-hand interactive sensitisation techniques in 3D holographic audio-visual platforms has been a coveted dream of many scholars, professionals, scientists, and students. Integration of kinaesthetic vibrotactile haptic perception along with actuated step-magnitude contact profiloscopy in augmented reality-based learning platforms and professional training can be implemented by using carefully calculated and well-coordinated image telemetry, including remote data mining and control techniques. A real-time, computer-aided (PLC-SCADA) field calibration algorithm must be designed for the purpose. Most importantly, in order to actually realise and interact with 3D holographic models displayed over a remote screen using remote laser image telemetry and control, all spatio-physical parameters, such as cardinal alignment, gyroscopic compensation, surface profile, and thermal composition, must be handled using zero-order type 1 actuators (or transducers), because they provide zero hysteresis, zero backlash, and low dead time, as well as linear, absolutely controllable, intrinsically observable, and smooth performance with the least amount of error compensation, while ensuring the best possible ergonomic comfort for the users.

Keywords: haptic feedback, kinaesthetic vibrotactile 3D design, medical simulation training, piezo diaphragm based actuator

Procedia PDF Downloads 126
738 Determinants of Quality of Life in Patients with Atypical Parkinsonian Syndromes: 1-Year Follow-Up Study

Authors: Tatjana Pekmezovic, Milica Jecmenica-Lukic, Igor Petrovic, Vladimir Kostic

Abstract:

Background: The group of atypical parkinsonian syndromes (APS) includes a variety of rare neurodegenerative disorders characterized by reduced life expectancy, increasing disability, and considerable impact on health-related quality of life (HRQoL). Aim: In this study we wanted to answer two questions: a) which demographic and clinical factors are the main contributors to HRQoL in our cohort of patients with APS, and b) how does the quality of life of these patients change over a 1-year follow-up period. Patients and Methods: We conducted a prospective cohort study in a hospital setting. The initial study comprised all consecutive patients who were referred to the Department of Movement Disorders, Clinic of Neurology, Clinical Centre of Serbia, Faculty of Medicine, University of Belgrade (Serbia), from January 31, 2000 to July 31, 2013, with the initial diagnoses of 'Parkinson's disease', 'parkinsonism', 'atypical parkinsonism' and 'parkinsonism plus' during the first 8 months from the appearance of the first symptom(s). The patients were afterwards regularly followed at 4-6 month intervals, and eventually the diagnoses were established for 46 patients fulfilling the criteria for clinically probable progressive supranuclear palsy (PSP) and 36 patients for probable multiple system atrophy (MSA). Health-related quality of life was assessed using the SF-36 questionnaire (Serbian translation). Hierarchical multiple regression analysis was conducted to identify predictors of the composite scores of the SF-36. The significance of changes in quality of life scores of patients with APS between baseline and the follow-up time-point was quantified using the Wilcoxon Signed Ranks Test. The magnitude of any difference in quality of life changes was calculated as an effect size (ES). Results: The final models of the hierarchical regression analysis showed that apathy, measured by the Apathy Evaluation Scale (AES) score, accounted for 59% of the variance in the Physical Health Composite Score of the SF-36 and 14% of the variance in the Mental Health Composite Score of the SF-36 (p<0.01). The changes in HRQoL were assessed in 52 patients with APS who completed the 1-year follow-up period. The analysis of the magnitude of changes in HRQoL during the one-year follow-up period has shown sustained medium ES (0.50-0.79) for both the Physical and Mental Health composite scores, total quality of life, as well as for Physical Health, Vitality, Role Emotional, and Social Functioning. Conclusion: This study provides insight into new potential predictors of HRQoL and its changes over time in patients with APS. Additionally, the identification of both prognostic markers of a poor HRQoL and the magnitude of its changes should be considered when developing comprehensive treatment-related strategies and health care programs aimed at improving HRQoL and well-being in patients with APS.
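
A minimal sketch of one common paired effect size computation consistent with the conventions used here (mean change divided by the baseline standard deviation, with 0.50-0.79 read as a medium effect); the scores are made up:

    import numpy as np

    def effect_size(baseline, followup):
        """Paired ES: mean change / SD of baseline scores."""
        baseline, followup = np.asarray(baseline), np.asarray(followup)
        return (followup - baseline).mean() / baseline.std(ddof=1)

    # hypothetical SF-36 physical composite scores at the two time points
    print(effect_size([42, 38, 45, 50, 36, 41], [35, 33, 40, 44, 30, 37]))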

Keywords: atypical parkinsonian syndromes, follow-up study, quality of life, APS

Procedia PDF Downloads 279
737 Non-Linear Velocity Fields in Turbulent Wave Boundary Layer

Authors: Shamsul Chowdhury

Abstract:

The objective of this paper is to present a detailed analysis of the turbulent wave boundary layer produced by progressive finite-amplitude wave theory. Most previous work on mass transport in the turbulent boundary layer has assumed that the eddy viscosity is not time-varying and that sediment movement is induced by the mean velocity. Near the ocean bottom, waves produce a thin turbulent boundary layer where the flow is highly rotational, and the shear stress associated with the fluid motion cannot be neglected. The magnitude and predominant direction of sediment transport near the bottom are known to be closely related to the flow in the wave-induced boundary layer. The magnitude of the water particle velocity at the crest phase differs from that at the trough phase due to the non-linearity of the waves, which plays an important role in determining sediment movement. The non-linearity of the waves becomes predominant in the surf zone, where sediment movement occurs vigorously. Therefore, in order to describe the flow near the bottom and the relationship between the flow and the movement of the sediment, the analysis was done using the non-linear boundary layer equations, and finite-amplitude wave theory was applied to represent the velocity fields in the turbulent wave boundary layer. At first, the calculation was done for the turbulent wave boundary layer with a two-dimensional model that is non-linear throughout, while a Stokes second-order wave profile is adopted at the upper boundary. The calculated profile was compared with experimental data. Finally, the calculation was done based on various modes of the velocity and turbulent energy. The mean velocity is found to differ depending on the relative depth and the roughness. It is also found that, due to non-linearity, the absolute values of velocity and turbulent energy, as well as the Reynolds stress, are asymmetric. The mean velocity of the laminar boundary layer is always positive, but in the turbulent boundary layer it behaves in a much more complicated manner.
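
For reference, the horizontal orbital velocity under a second-order Stokes wave in its standard textbook form (assumed here to be the kind of profile imposed at the upper boundary; H is the wave height, T the period, L the wavelength, h the depth, k = 2π/L, ω = 2π/T, and z is measured upward from the still water level):

    u(z,t) = \frac{\pi H}{T}\,\frac{\cosh k(z+h)}{\sinh kh}\,\cos(kx-\omega t)
           + \frac{3}{4}\,\frac{\pi H}{T}\,\frac{\pi H}{L}\,
             \frac{\cosh 2k(z+h)}{\sinh^{4} kh}\,\cos 2(kx-\omega t)

The second-harmonic term adds to the first at the crest and subtracts from it at the trough, which produces the crest-trough velocity asymmetry discussed above.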

Keywords: wave boundary, mass transport, mean velocity, shear stress

Procedia PDF Downloads 237
736 Development of an Electronic Waste Management Framework at the College of Engineering, Design, Art and Technology

Authors: Wafula Simon Peter, Kimuli Nabayego Ibtihal, Nabaggala Kimuli Nashua

Abstract:

The worldwide use of information and communications technology (ICT) equipment and other electronic equipment is growing, and consequently there is a growing amount of equipment that becomes waste after its time in use. This growth is expected to accelerate, since equipment lifetimes decrease over time and consumption grows. As a result, e-waste is one of the fastest-growing waste streams globally. The United Nations University (UNU) calculates in its second Global E-waste Monitor that 44.7 million metric tonnes (Mt) of e-waste were generated globally in 2016. This research was carried out to investigate the problem of e-waste and come up with a framework to improve e-waste management. The study population was 80 respondents, from which a sample of 69 respondents was selected using simple and purposive sampling techniques. The objective of the study was to develop a framework for improving e-waste management at the College of Engineering, Design, Art and Technology (CEDAT). This was achieved by breaking it down into specific objectives, which included establishing the policy and other regulatory frameworks used in e-waste management at CEDAT, determining the effectiveness of the e-waste management practices at CEDAT, establishing the critical challenges constraining e-waste management at the College, and developing a framework for e-waste management. The study reviewed the e-waste regulatory framework used at the College and then collected data, which was used to come up with a framework. The study also established that a weak policy and regulatory framework, lack of proper infrastructure, improper disposal of e-waste, and a general lack of awareness of e-waste and the magnitude of the problem are the critical challenges of e-waste management. In conclusion, the policy and regulatory framework should be revised, localized, and strengthened to contextually address the problem. Awareness campaigns, the development of proper infrastructure, and extensive research to establish the volumes and magnitude of the problem will come in handy. The study recommends a framework for the improvement of e-waste management.

Keywords: e-waste, treatment, disposal, computers, model, management policy and guidelines

Procedia PDF Downloads 52
735 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gas Emissions

Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires

Abstract:

Historically, human development has been based on economic gains associated with energy-intensive activities, which are often exhaustive in the emission of Greenhouse Gases (GHGs). This requires the establishment of targets for the mitigation of GHGs in order to disassociate human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is critically important to discuss such reductions in an intra-national framework, with the objective of distributional equity, to explore its full mitigation potential without compromising the development of less developed societies. This research presents some incipient considerations about which of Brazil's micro-regions should reduce emissions, when the reductions should be initiated, and what their magnitude should be. We started with the methodological assumption that human development and GHG emissions will evolve in the future as they behaved in the past. Furthermore, we assume that once a micro-region becomes developed, it is able to maintain gains in human development without the need for further growth in GHG emission rates. The human development index and the carbon dioxide equivalent emissions (CO2e) were extrapolated to the year 2050, which allowed us to calculate when each micro-region will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they develop and 250 Gt will be released after development. We also determined national mitigation targets and structured reduction schemes in which only the developed micro-regions would be required to reduce emissions. The micro-region of São Paulo, the most developed in the country, should also be the one that reduces emissions the most, emitting 90% less in 2050 than the value observed in 2010. On the other hand, less developed micro-regions will be responsible for less impactful reductions; i.e., Vale do Ipanema will emit in 2050 only 10% below the value observed in 2010. This methodological assumption would lead the country to emit 56.5% less in 2050 than observed in 2010, so that cumulative emissions between 2011 and 2050 would be reduced by 130 Gt CO2e relative to the initial projection. Associating the magnitude of the reductions with the level of human development of the micro-regions encourages the adoption of policies that favor both variables, as the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national levels, the country is closer to the scenario in which it emits more than to the one in which it mitigates emissions. The research highlighted the importance of considering heterogeneity in determining individual mitigation targets and also ratified the theoretical and methodological feasibility of allocating a larger share of the contribution to those who have historically emitted more. It is understood that the proposals and discussions presented should be considered in mitigation policy formulation in Brazil regardless of the adopted reduction target.

Keywords: greenhouse gases, human development, mitigation, intensive energy activities

Procedia PDF Downloads 294
734 Seismic Hazard Assessment of Tehran

Authors: Dorna Kargar, Mehrasa Masih

Abstract:

Due to its special geological and geographical conditions, Iran has always been exposed to various natural hazards. The earthquake is one natural hazard, random in nature, that can cause significant financial damage and casualties. This is a serious threat, especially in areas with active faults. Therefore, considering the population density in some parts of the country, locating and zoning high-risk areas are necessary and significant. In the present study, a seismic hazard assessment via probabilistic and deterministic methods has been carried out for Tehran, the capital of Iran, which is located in the Alborz-Azerbaijan seismotectonic province. The seismicity study covers a range of 200 km around the north of Tehran (X=35.74° and Y=51.37° in the LAT-LONG coordinate system) to identify the seismic sources and seismicity parameters of the study region. In order to identify the seismic sources, geological maps at the scale of 1:250,000 are used. In this study, we used Kijko-Sellevoll's method (1992) to estimate the seismicity parameters: the maximum likelihood estimation of earthquake hazard parameters (maximum regional magnitude Mmax, activity rate λ, and the Gutenberg-Richter parameter b) from incomplete data files, extended to the case of uncertain magnitude values. By combining the seismicity and seismotectonic studies of the site, the acceleration that may occur with a specified probability during the useful life of the structure is calculated with probabilistic and deterministic methods. Applying the results of the seismicity and seismotectonic studies in the project and proper weights in the attenuation relationships used, the maximum horizontal and vertical accelerations for return periods of 50, 475, 950 and 2475 years are calculated. The horizontal peak ground accelerations on the seismic bedrock for the 50, 475, 950 and 2475 year return periods are 0.12g, 0.30g, 0.37g and 0.50g, and the vertical peak ground accelerations for the same return periods are 0.08g, 0.21g, 0.27g and 0.36g.
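
For orientation, such return periods follow from a Poisson exceedance model, T = -t / ln(1 - P), for an exceedance probability P in a t-year window; a quick check (illustrative, not from the paper):

    import numpy as np

    def return_period(p_exceed, window_years):
        """Poisson model: mean return period for probability P in t years."""
        return -window_years / np.log(1.0 - p_exceed)

    # 10% in 50 yr -> ~475 yr, 10% in 100 yr -> ~949 yr, 2% in 50 yr -> ~2475 yr
    for p, t in ((0.10, 50), (0.10, 100), (0.02, 50)):
        print(f"P={p:.0%} in {t} yr -> T = {return_period(p, t):.0f} yr")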

Keywords: peak ground acceleration, probabilistic and deterministic, seismic hazard assessment, seismicity parameters

Procedia PDF Downloads 43
733 Effect of Duration and Frequency on Ground Motion: Case Study of Guwahati City

Authors: Amar F. Siddique

Abstract:

Guwahati city is one of the fastest growing cities of the north-eastern region of India; situated on the south bank of the Brahmaputra River, it falls in the highest seismic zone, level V. The city has witnessed many high-magnitude earthquakes in past decades. The Assam earthquake of August 15, 1950 (moment magnitude 8.7, epicentered near Rima, Tibet) was one of the major earthquakes, causing serious structural damage and widespread soil liquefaction in and around the region. Hence, the study of the ground motion characteristics of Guwahati city is very essential. In the present work, 1D equivalent linear ground response analysis (GRA) has been carried out using the DEEPSOIL software. The analysis has been done for two typical sites, namely Panbazar and Azara, comprising a total of four borehole locations in Guwahati city, India. GRA of the sites is carried out using input motions recorded at Nongpoh station (recorded PGA 0.048g) and Nongstoin station (recorded PGA 0.047g) during the 1997 Indo-Burma earthquake. In comparison to the motion recorded at Nongpoh, different amplifications of bedrock peak ground acceleration (PGA) are obtained for all the boreholes with the motion recorded at Nongstoin station, although the Fourier amplitude ratios (FAR) and fundamental frequencies remain almost the same. The differences in recorded duration and frequency content of the two motions mainly influence the amplification, yielding different surface PGAs and amplification factors for a constant bedrock PGA. The response spectra show that at periods of less than 0.2 s, the ground motion recorded at Nongpoh station will impose a higher spectral acceleration (SA) on structures than that recorded at Nongstoin station; conversely, for periods greater than 0.2 s, the ground motion recorded at Nongstoin station will impose a higher SA than that at Nongpoh station.
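
A minimal sketch of the kind of site amplification 1D analysis captures, using the closed-form transfer function of a uniform damped soil layer on rigid rock, |F| = 1/sqrt(cos²(ωH/vs) + (ξωH/vs)²); the layer thickness, shear-wave velocity, and damping are illustrative, while DEEPSOIL itself solves the layered, strain-compatible problem:

    import numpy as np

    def amplification(freq_hz, H=30.0, vs=300.0, damping=0.05):
        """|F| for a uniform damped soil layer over rigid bedrock."""
        kH = 2.0 * np.pi * freq_hz * H / vs
        return 1.0 / np.sqrt(np.cos(kH) ** 2 + (damping * kH) ** 2)

    f0 = 300.0 / (4.0 * 30.0)      # fundamental site frequency vs/(4H) = 2.5 Hz
    print(f0, amplification(f0))   # peak amplification ~ 2/(pi*damping) ~ 12.7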

Keywords: Fourier amplitude ratio, ground response analysis, peak ground acceleration, spectral acceleration

Procedia PDF Downloads 152
732 Software Engineering Revolution Driven by Complexity Science

Authors: Jay Xiong, Li Lin

Abstract:

This paper introduces a new software engineering paradigm based on complexity science, called NSE (Nonlinear Software Engineering paradigm). The purpose of establishing NSE is to help software development organizations double their productivity, halve their cost, and improve the quality of their products by several orders of magnitude, simultaneously. NSE complies with the essential principles of complexity science and brings revolutionary changes to almost all aspects of software engineering. NSE has been fully implemented with its support platform, Panorama++.

Keywords: complexity science, software development, software engineering, software maintenance

Procedia PDF Downloads 240
731 Finite Element-Based Stability Analysis of Roadside Settlements Slopes from Barpak to Yamagaun through Laprak Village of Gorkha, an Epicentral Location after the 7.8Mw 2015 Barpak, Gorkha, Nepal Earthquake

Authors: N. P. Bhandary, R. C. Tiwari, R. Yatabe

Abstract:

This research employs the finite element method to evaluate the stability of roadside settlement slopes from Barpak to Yamagaun through Laprak village of Gorkha, Nepal, after the Mw 7.8 2015 Barpak, Gorkha, Nepal earthquake. It covers three major villages of Gorkha, i.e., Barpak, Laprak, and Yamagaun, that were devastated by the 2015 Gorkha earthquake. The road-head distances from Barpak to Laprak and from Laprak to Yamagaun are about 14 and 29 km, respectively. The epicenters of the main shock of magnitude 7.8 and the aftershock of magnitude 6.6 were respectively 7 and 11 km south-east of Barpak village, closer to Laprak and Yamagaun. It is also believed that the epicenter of the main shock was not in Barpak village, as reported until now, but somewhere near Yamagaun village; the shaking experienced in Yamagaun during the earthquake was much stronger than in Barpak. In this context, we carried out a detailed study of the stability of the Yamagaun settlement slope as a case study, where ground fissures, ground settlement, multiple cracks, and toe failures are the most severe. The stability of the existing settlements and the proposed road alignment on the Yamagaun village slope, which is surrounded by many newly activated landslides, is addressed. Given the importance of this issue, a field survey was carried out to understand the behavior of the ground fissures and the multiple failure characteristics of the slopes. The results suggest that the Yamagaun slope at Profiles 2-2, 3-3, and 4-4 is not safe enough for infrastructure development even under normal soil slope conditions as per material models 2, 3, and 4, whereas the slope at Profile 1-1 appears quite safe for all four material models. The results also indicate that the first three profiles are only marginally safe under material models 2, 3, and 4, respectively. Profile 4-4 is not safe for any of the four material models and thus needs special care to make the slope stable.
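The paper's safety comparisons come from a finite element analysis, which cannot be condensed here. As a rough illustration of the quantity being compared across profiles, the sketch below instead uses the much simpler limit-equilibrium infinite-slope model with a pseudo-static seismic coefficient; this is a swapped-in textbook method, not the paper's FEM approach, and every parameter value is hypothetical:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, k=0.0):
    """Pseudo-static factor of safety of a dry infinite slope.
    c: cohesion (kPa), phi_deg: friction angle, gamma: unit weight (kN/m^3),
    z: depth of the slip plane (m), beta_deg: slope angle, k: horizontal
    seismic coefficient (k = 0 recovers the static case FS = tan(phi)/tan(beta)
    when c = 0)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    # Normal and shear stresses on the slip plane, including the horizontal
    # inertia force k*W resolved parallel and normal to the slope.
    sigma_n = gamma * z * (math.cos(beta) ** 2 - k * math.sin(beta) * math.cos(beta))
    tau = gamma * z * (math.sin(beta) * math.cos(beta) + k * math.cos(beta) ** 2)
    return (c + sigma_n * math.tan(phi)) / tau

# Hypothetical slope parameters -- not the Yamagaun profile data.
for k in (0.0, 0.15, 0.30):
    fs = infinite_slope_fs(c=10.0, phi_deg=30.0, gamma=19.0, z=3.0,
                           beta_deg=35.0, k=k)
    print(f"k = {k:.2f}: FS = {fs:.2f}{'  (unstable)' if fs < 1.0 else ''}")
```

Even this crude model reproduces the qualitative point of the abstract: a slope that is marginally safe statically (FS just above 1) can drop below FS = 1 once seismic loading is applied.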

Keywords: earthquake, finite element method, landslide, stability

Procedia PDF Downloads 317
730 Non Performing Asset Variations across Indian Commercial Banks: Some Findings

Authors: Sanskriti Singh, Ankit Tomar

Abstract:

Banks are instruments of a country's growth. Banks mobilize the savings of the public in the form of deposits and channel them as advances for the various activities required for the development of society at large. An advance that remains unpaid for a certain period is called a Non Performing Asset (NPA) of the bank. The study attempts to bring out the magnitude of NPAs and their impact on profits and advances. An attempt is also made to bring out the challenges NPAs pose to banks, along with suggestions for overcoming and managing NPAs effectively.

Keywords: India, NPAs, private banks, public banks

Procedia PDF Downloads 258
729 Pattern of Anisometropia, Management and Outcome of Anisometropic Amblyopia

Authors: Husain Rajib, T. H. Sheikh, D. G. Jewel

Abstract:

Background: Amblyopia is a frequent cause of monocular blindness in children. It can be a unilateral or bilateral reduction of best-corrected visual acuity associated with decrements in visual processing, accommodation, motility, spatial perception, or spatial projection. Anisometropia is an important risk factor for amblyopia: it develops when unequal refractive error causes the image to be blurred during the critical developmental period, leading to central inhibition of the visual signal originating from the affected eye, and it is associated with significant visual problems including aniseikonia, strabismus, and reduced stereopsis. Methods: This is a prospective hospital-based study of newly diagnosed amblyopia seen at the pediatric clinic of the Chittagong Eye Infirmary & Training Complex. Fifty subjects with anisometropic amblyopia were examined, and a questionnaire was piloted. Included were all patients diagnosed with refractive amblyopia between 3 and 13 years of age, without previous amblyopia treatment, and whose parents agreed to participate in the study. Patients diagnosed with strabismic amblyopia were excluded. Patients were first given the best refractive correction for a month. When the visual acuity (VA) in the amblyopic eye did not improve over that month, occlusion treatment was started. Occlusion was done daily for 6-8 hours (full time), together with vision therapy, and was carried out for 3 months. Results: In this study, about 8% of subjects had anisometropia from myopia, 18% from hyperopia, and 74% from astigmatism. The initial mean visual acuity was 0.74 ± 0.39 logMAR, and after the amblyopia therapy with active vision therapy the mean visual acuity was 0.34 ± 0.26 logMAR. About 94% of subjects improved by at least two lines. The depth of amblyopia was associated with the type of anisometropic refractive error and the magnitude of anisometropia (p<0.005). In this study, 10% of cases were mild amblyopia, 64% moderate, and 26% severe. Binocular function also decreased with the magnitude of anisometropia. Conclusion: Anisometropic amblyopia is a most important condition in the pediatric age group because it can lead to visual impairment. Occlusion therapy with at least one instructed hour of active visual activity practiced outside school hours was effective in anisometropic amblyopes diagnosed at the age of 8 years and older, and the patients complied well with the treatment.

Keywords: refractive error, anisometropia, amblyopia, strabismic amblyopia

Procedia PDF Downloads 251
728 FEM Simulation of Tool Wear and Edge Radius Effects on Residual Stress in High Speed Machining of Inconel718

Authors: Yang Liu, Mathias Agmell, Aylin Ahadi, Jan-Eric Stahl, Jinming Zhou

Abstract:

Tool wear and tool geometry have significant effects on the residual stresses in components produced by high-speed machining. In this paper, a Coupled Eulerian-Lagrangian (CEL) model is adopted to investigate the residual stress in high-speed machining of Inconel718 with a CBN170 cutting tool. The results show that the mesh with the smallest element size of 5 µm yields cutting forces and chip morphology in close agreement with the experimental data. Analyses of the thermal and mechanical loading are performed to study the effect of segmented chip morphology on the machined surface topography and the residual stress distribution. The effects of cutting edge radius and flank wear on residual stress formation and distribution in the workpiece were also investigated. It is found that the temperature within 100 µm of the machined surface increases drastically because more friction heat is generated as the tool-workpiece contact area grows with a larger edge radius and flank wear. With further increasing depth, the temperature drops rapidly in all cases owing to the low conductivity of Inconel718. Consequently, higher and deeper tensile residual stress is generated in the superficial layer. Furthermore, an increased depth of plastic deformation and of compressive residual stress is noticed in the subsurface, which is attributed to the reduction of the yield strength under the thermal effect. Besides, the ploughing effect produced by a larger tool edge radius contributes more than flank wear. The variations in magnitude of the compressive residual stress caused by the different edge radii and flank wear follow opposite trends, depending on the magnitudes of the ploughing and friction pressures acting on the machined surface.

Keywords: Coupled Eulerian-Lagrangian, segmented chip, residual stress, tool wear, edge radius, Inconel718

Procedia PDF Downloads 117