Search results for: kernel principal component analysis

28060 Analysis of Automotive Sensor for Engine Knock System

Authors: Miroslav Gutten, Jozef Jurcik, Daniel Korenciak, Milan Sebok, Matej Kuceraa

Abstract:

This paper deals with undesirable detonation (knock) combustion in internal combustion engines. The engine control unit monitors these detonations using piezoelectric knock sensors, which allows the detonations to be measured objectively outside the vehicle. If such a sensor delivers only a small output-voltage amplitude, knock combustion may occur undetected in parts of the engine. The paper therefore presents the design of a simple device for detecting this fault: a testing device for the knock sensor suitable for diagnosing knock combustion in internal combustion engines. The output signal of the presented sensor is described by Bessel functions. Using the first voltage extremes of the characteristic, a reference can be created for evaluating the polynomial residue. It should be taken into account that the velocity of sound in air is approximately 330 m/s; this sound impinges on the walls of the combustion chamber and is detected by the sensor. The resonant frequency of engine knock usually lies in the range from 5 kHz to 15 kHz, whereas the sensor operated in a band up to 37 kHz, so its own resonance must be taken into account.
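
The abstract states that the sensor output can be described by Bessel functions and that knock resonances lie between 5 and 15 kHz. As an illustration only (the decaying Bessel-type model, the 10 kHz frequency, and the damping constant below are assumptions, not values from the paper), a short Python sketch of such a signal and the location of its first voltage extremes might look like this:

import numpy as np
from scipy.special import j0            # zeroth-order Bessel function of the first kind
from scipy.signal import argrelextrema

# Hypothetical knock-sensor response: a damped Bessel-function oscillation.
fs = 200_000                            # sampling rate [Hz]
t = np.arange(0, 5e-3, 1 / fs)          # 5 ms window
f_knock = 10_000                        # assumed knock resonance within the 5-15 kHz band
signal = np.exp(-1500 * t) * j0(2 * np.pi * f_knock * t)

# The first voltage extremes can serve as a reference for further evaluation.
maxima = argrelextrema(signal, np.greater)[0]
minima = argrelextrema(signal, np.less)[0]
print("first maximum at %.3f ms" % (t[maxima[0]] * 1e3))
print("first minimum at %.3f ms" % (t[minima[0]] * 1e3))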

Keywords: diagnostics, knock sensor, measurement, testing device

Procedia PDF Downloads 441
28059 Estimation of Eucalyptus Wood Calorific Potential for Energy Recovering

Authors: N. Ouslimani, N. Hakimi, H. Aksas

Abstract:

The decline of world oil reserves is leading many countries to study and use local, renewable energy sources. For this purpose, wood energy is a material of choice. The energy produced is primarily thermal and corresponds to comfort heating, whether auxiliary or principal. Wood is generally conditioned in the form of logs, pellets, or chips. In Algeria, this form of energy recovery could contribute both to protecting the environment and to valorizing forestry residues (branches, bark, and the various wastes generated at each processing step). This work falls within the general framework of the search for new energy sources based on the recovery of lignocellulosic matter. In this direction, we examined various products (biomass, by-products, and residues) of the Eucalyptus species that could be developed, and carried out the preliminary physicochemical study necessary for developing densified products with high calorific value.

Keywords: biomass, calorific value, combustion, energy recovery

Procedia PDF Downloads 280
28058 Theoretical Approach and Proof of Concept Implementation of Adaptive Partition Scheduling Module for Linux

Authors: Desislav Andreev, Veselin Stanev

Abstract:

The Linux operating system continues to gain popularity with every passing year, owing to its open-source license and the large number of distributions covering users' needs. At first glance it seems that Linux can be integrated into every type of system: it is already present in personal computers, smartphones, and even in embedded systems like the Raspberry Pi. However, Linux still does not meet the performance and security requirements needed to run effectively in a real-time system. Real-time systems are strictly time-constrained: their processes have to execute and finish within strict time intervals. The Completely Fair Scheduler present in Linux does not have such scheduling capabilities and cannot ensure that time-critical processes will execute on time. One way to solve this problem is to implement an Adaptive Partition Scheduler similar to the one in the QNX Neutrino operating system. This type of scheduling divides the CPU into multiple adaptive partitions, where each partition holds a percentage of CPU usage called a budget; this allows optimal usage of CPU resources and also provides protection against cyber attacks such as Denial of Service. The approach also benefits systems where functional safety is highly demanded, such as instrument clusters in the automotive industry. The purpose of this paper is to present a concept for an Adaptive Partition Scheduler designed for Linux-based operating systems.
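
To make the budget idea concrete, here is a minimal, hypothetical Python sketch of adaptive-partition selection. It is neither the authors' kernel module nor QNX's actual algorithm: each partition is guaranteed a share of a sliding CPU window, and a partition that has exhausted its budget only runs when spare capacity is available.

from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    budget: float                 # guaranteed CPU share, e.g. 0.4 = 40%
    used: float = 0.0             # CPU time consumed in the current window
    runnable: bool = True

def pick_partition(partitions, window=100.0):
    """Choose the next partition to run within a sliding window of CPU time.

    Partitions still under budget are preferred (the one furthest below its
    guaranteed share wins); if all runnable partitions have exhausted their
    budgets, spare capacity goes to the least-overrun one.
    """
    runnable = [p for p in partitions if p.runnable]
    if not runnable:
        return None
    under_budget = [p for p in runnable if p.used < p.budget * window]
    pool = under_budget or runnable
    return min(pool, key=lambda p: p.used / (p.budget * window))

# Toy usage: a critical partition gets 40%, UI 30%, background 30%.
parts = [Partition("critical", 0.4), Partition("ui", 0.3), Partition("background", 0.3)]
for tick in range(10):
    p = pick_partition(parts)
    p.used += 10.0                # pretend the chosen partition ran for 10 time units
    print(tick, p.name)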

Keywords: adaptive partitions, Linux kernel modules, real-time systems, scheduling

Procedia PDF Downloads 94
28057 The Role of Environmental Analysis in Managing Knowledge in Small and Medium Sized Enterprises

Authors: Liu Yao, B. T. Wan Maseri, Wan Mohd, B. T. Nurul Izzah, Mohd Shah, Wei Wei

Abstract:

Effectively managing knowledge has become a vital weapon for businesses to survive and succeed in an increasingly competitive market. But do firms perform environmental analysis when managing knowledge, and if so, at what level and with what significance? This paper establishes a conceptual framework covering the basic knowledge management activities (KMA) and examines their contribution to organizational performance (OP). Environmental analysis (EA) is then investigated from both internal and external aspects to identify its effects on that contribution. Data were collected from 400 Chinese SMEs by questionnaire, and Cronbach's α and factor analysis were conducted. Regression results show that external analysis is practiced at a higher level than internal analysis. However, internal analysis mediates the effect of external analysis on the KMA-OP relation and plays a more significant role in that relation than external analysis. Firms should therefore improve their environmental analysis, especially internal analysis, to enhance their KM practices.
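
The mediation result described above can be checked with a standard regression-based (Baron and Kenny style) procedure. The sketch below is illustrative only: the variable names EA_ext, EA_int, and KMA_OP are placeholders, and the data are simulated rather than the authors' 400-firm survey.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
EA_ext = rng.normal(size=n)                                        # external environmental analysis
EA_int = 0.6 * EA_ext + rng.normal(scale=0.8, size=n)              # internal analysis (mediator)
KMA_OP = 0.2 * EA_ext + 0.5 * EA_int + rng.normal(scale=0.8, size=n)   # KMA contribution to OP

# Step 1: total effect of external analysis on the outcome.
total = sm.OLS(KMA_OP, sm.add_constant(EA_ext)).fit()
# Step 2: external analysis predicting the mediator (internal analysis).
a_path = sm.OLS(EA_int, sm.add_constant(EA_ext)).fit()
# Step 3: outcome on both; the external coefficient shrinks if mediation holds.
X = sm.add_constant(np.column_stack([EA_ext, EA_int]))
direct = sm.OLS(KMA_OP, X).fit()

print("total effect of external analysis :", round(total.params[1], 3))
print("direct effect controlling internal:", round(direct.params[1], 3))
print("indirect (mediated) effect        :", round(a_path.params[1] * direct.params[2], 3))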

Keywords: knowledge management, environmental analysis, performance, mediating, small sized enterprises, medium sized enterprises

Procedia PDF Downloads 608
28056 Evolving Convolutional Filter Using Genetic Algorithm for Image Classification

Authors: Rujia Chen, Ajit Narayanan

Abstract:

Convolutional neural networks (CNN), as typically applied in deep learning, use layer-wise backpropagation (BP) to construct filters and kernels for feature extraction. Such filters are 2D or 3D groups of weights for constructing feature maps at subsequent layers of the CNN and are shared across the entire input. BP, as a gradient descent algorithm, has well-known problems of getting stuck at local optima. The use of genetic algorithms (GAs) for evolving weights between layers of standard artificial neural networks (ANNs) is a well-established area of neuroevolution. In particular, the use of crossover techniques when optimizing weights can help to overcome problems of local optima. However, the application of GAs to evolving the weights of filters and kernels in CNNs is not yet an established area of neuroevolution. In this paper, a GA-based filter development algorithm is proposed. The results of the proof-of-concept experiments described in this paper show that the proposed GA can find filter weights through evolutionary techniques rather than BP learning. For some simple classification tasks, like geometric shape recognition, the proposed algorithm achieves 100% accuracy. The results for MNIST classification, while not as good as those obtainable through standard filter learning by BP, show that filter and kernel evolution warrants further investigation as a new subarea of neuroevolution for deep architectures.
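
A minimal sketch of the idea: evolving a single 3x3 filter with a genetic algorithm instead of backpropagation. This is not the authors' algorithm; the toy fitness function, population size, crossover, and mutation scheme below are assumptions chosen only to make the example self-contained.

import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(1)

# Toy task: recover a vertical-edge filter that maps an input image to a target map.
image = rng.random((16, 16))
true_filter = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
target = correlate2d(image, true_filter, mode="valid")

def fitness(f):
    out = correlate2d(image, f, mode="valid")
    return -np.mean((out - target) ** 2)            # higher is better

pop = [rng.normal(size=(3, 3)) for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    while len(children) < 30:
        a, b = rng.choice(10, size=2, replace=False)
        mask = rng.random((3, 3)) < 0.5             # uniform crossover between two parents
        child = np.where(mask, parents[a], parents[b])
        child = child + rng.normal(scale=0.1, size=(3, 3))   # Gaussian mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best fitness:", fitness(best))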

Keywords: neuroevolution, convolutional neural network, genetic algorithm, filters, kernels

Procedia PDF Downloads 180
28055 Reaching the Goals of Routine HIV Screening Programs: Quantifying and Implementing an Effective HIV Screening System in Northern Nigeria Facilities Based on Optimal Volume Analysis

Authors: Folajinmi Oluwasina, Towolawi Adetayo, Kate Ssamula, Penninah Iutung, Daniel Reijer

Abstract:

Objective: Routine HIV screening has been promoted as an essential component of efforts to reduce incidence, morbidity, and mortality. The objectives of this study were to identify the optimal annual volume needed to realize the public health goals of HIV screening in AIDS Healthcare Foundation supported hospitals and to establish an implementation process to realize that optimal annual volume. Methods: Starting in 2011, a program was established to routinize HIV screening within communities and government hospitals. In 2016, five years of HIV screening data were reviewed to identify the optimal annual proportions of age-eligible patients screened to realize the public health goals of reducing new diagnoses and ending late-stage diagnosis (tracked as concurrent HIV/AIDS diagnosis). Analysis demonstrated that rates of new diagnoses level off when 42% of age-eligible patients are screened, providing a baseline for routine screening efforts, and that concurrent HIV/AIDS diagnoses reach statistical zero at screening rates of 70%. Annual facility-based targets were restructured to meet these new target volumes. Restructuring efforts focused on right-sizing HIV screening programs to align and transition them to integrated HIV screening within standard medical care and treatment. Results: Over one million patients were screened for HIV during the five years; 16,033 new HIV diagnoses were made, access to care and treatment was established for 82% (13,206) of them, and the concurrent diagnosis rate went from 32.26% to 25.27%. While screening rates increased by 104.7% over the five years, volume analysis demonstrated that rates need to increase by a further 62.52% to reach the desired 20% baseline and more than double to reach the optimal annual screening volume. In 2011, facility targets for HIV screening were increased to reflect the volume analysis, and in the third year, 12 of the 19 facilities reached or exceeded the new baseline targets. Conclusions and Recommendation: Quantifying targets against routine HIV screening goals identified the optimal annual screening volume and allowed facilities to scale their program size and allocate resources accordingly. The program transitioned from non-evidence-based annual volume increases to annual targets based on optimal volume analysis, allowing efforts to be evaluated on their ability to realize quantified goals related to the public health value of HIV screening. Optimal volume analysis helps to determine the size of an HIV screening program; it is a public health tool, not a tool to determine whether an individual patient should receive screening.

Keywords: HIV screening, optimal volume, HIV diagnosis, routine

Procedia PDF Downloads 258
28054 Comparative Analysis of Automation Testing Tools

Authors: Amit Bhanushali

Abstract:

In the ever-changing landscape of software development, automated software testing has emerged as a critical component of the Software Development Life Cycle (SDLC). This research undertakes a comparative study of three major automated testing tools (UFT, Selenium, and RPA), evaluating them on usability, maintenance, and effectiveness. Leveraging existing JAVA-based applications as test cases, the study aims to guide testers in selecting the optimal tool for specific applications. By exploring key features such as source and licensing, testing expenses, object repositories, usability, and language support, the research provides practical insights into UFT, Selenium, and RPA. Acknowledging the pivotal role of these tools in streamlining testing processes amid time constraints and resource limitations, the study assists professionals in making informed choices aligned with their organizational needs.

Keywords: software testing tools, software development lifecycle (SDLC), test automation frameworks, automated software, JAVA-based, UFT, selenium and RPA (robotic process automation), source and licensing, object repository

Procedia PDF Downloads 87
28053 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
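
The simulation experiment described above can be reproduced in outline with a few lines of scikit-learn. This is a hedged sketch of the general setup (noisy training labels, clean evaluation labels), not the authors' clinical data or exact protocol; the error rate and sample size are illustrative.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Simulated data with clean ("gold standard") labels.
X, y_clean = make_classification(n_samples=4000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train_clean, y_test = train_test_split(X, y_clean, test_size=0.5, random_state=0)

# Inject 40% label noise into the training set only (weak supervision).
flip = rng.random(len(y_train_clean)) < 0.40
y_train_noisy = np.where(flip, 1 - y_train_clean, y_train_clean)

model = SVC(kernel="linear", probability=True).fit(X_train, y_train_noisy)
scores = model.predict_proba(X_test)[:, 1]

print("training-label accuracy vs truth:", accuracy_score(y_train_clean, y_train_noisy))
print("model accuracy on clean test set:", accuracy_score(y_test, model.predict(X_test)))
print("model AUC on clean test set     :", roc_auc_score(y_test, scores))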

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 190
28052 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Proper seismic evaluation of Non-Structural Components (NSCs) mandates an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting the higher-mode effect, the tuning effect, and the NSC damping effect, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of the code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e., design spectra for structural components). A database of 27 Reinforced Concrete (RC) buildings in which Ambient Vibration Measurements (AVM) had been conducted is used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building in both horizontal directions, considering 4 different damping ratios of NSCs (i.e., 2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically. These parameters comprise NSC damping ratios, tuning of the NSC natural period with one of the natural periods of the supporting structure, higher modes of the supporting structure, and location of the NSC. The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5%-damped UHS, and a procedure is proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-critical buildings which have to remain functional even after an earthquake and cannot tolerate any damage to NSCs.
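
For readers unfamiliar with floor response spectra, the sketch below shows the basic computation behind an FRS: a set of single-degree-of-freedom oscillators with a chosen damping ratio is subjected to an acceleration history, and the peak pseudo-acceleration of each oscillator is recorded. This is a generic illustration (Newmark average-acceleration integration, synthetic input motion), not the procedure or data of the paper.

import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration spectrum of an acceleration record at damping ratio zeta.

    Newmark average-acceleration (beta=1/4, gamma=1/2) integration of a unit-mass
    SDOF oscillator: u'' + 2*zeta*w*u' + w^2*u = -ag(t).
    """
    beta, gamma = 0.25, 0.5
    Sa = np.zeros(len(periods))
    for j, T in enumerate(periods):
        w = 2 * np.pi / T
        k, c = w ** 2, 2 * zeta * w
        khat = k + gamma * c / (beta * dt) + 1.0 / (beta * dt ** 2)
        A = 1.0 / (beta * dt) + gamma * c / beta
        B = 1.0 / (2 * beta) + dt * c * (gamma / (2 * beta) - 1)
        u = v = 0.0
        a = -ag[0]                      # initial acceleration (u = v = 0)
        umax = 0.0
        for i in range(len(ag) - 1):
            dp = -(ag[i + 1] - ag[i]) + A * v + B * a
            du = dp / khat
            dv = gamma * du / (beta * dt) - gamma * v / beta + dt * a * (1 - gamma / (2 * beta))
            da = du / (beta * dt ** 2) - v / (beta * dt) - a / (2 * beta)
            u, v, a = u + du, v + dv, a + da
            umax = max(umax, abs(u))
        Sa[j] = w ** 2 * umax           # pseudo-acceleration = omega^2 * peak displacement
    return Sa

# Illustrative synthetic record and a 5%-damped spectrum between 0.05 s and 3 s.
dt = 0.01
t = np.arange(0, 10, dt)
ag = 0.3 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.2 * t)      # [g], assumed input motion
periods = np.linspace(0.05, 3.0, 60)
Sa = response_spectrum(ag, dt, periods)
print("peak Sa = %.2f g at T = %.2f s" % (Sa.max(), periods[np.argmax(Sa)]))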

Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design

Procedia PDF Downloads 235
28051 Thermal Simulation for Urban Planning in Early Design Phases

Authors: Diego A. Romero Espinosa

Abstract:

Thermal simulations are used to evaluate the comfort and energy consumption of buildings. However, the performance of different urban forms cannot be assessed precisely when an environmental control system and user schedules have to be considered. The outcome of such an analysis would lead to conclusions that combine building use, operation, services, envelope, orientation, and the density of the urban fabric. The influence of these factors varies during the life cycle of a building. The orientation, as well as the surroundings, can be considered constant during the lifetime of a building. The structure impacts the thermal inertia and has the longest lifespan of all the building components. On the other hand, the building envelope is the most frequently renovated component of a building, since it has a great impact on energy performance and comfort. Building services have a shorter lifespan and are replaced regularly. In order to address the performance of an urban form with a specific orientation and density, a thermal simulation method was developed. Solar irradiation is taken into consideration depending on the outdoor temperature: incoming irradiation at low temperatures has a positive impact, increasing the indoor temperature. Consequently, overheating would be the combination of high outdoor temperature and high irradiation at the façade. On this basis, the indoor temperature is simulated for a specific orientation of the evaluated urban form. Thermal inertia and building envelope performance are additionally considered as the materiality of the building. The results of the different thermal zones are summarized using the degree-day method for cooling and heating. During the early phase of a design process, such as a masterplan, conclusions regarding urban form, density, and materiality can be drawn by means of this analysis.
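
The degree-day method mentioned above sums, over the year, how far the daily mean temperature falls below a heating base temperature or rises above a cooling base temperature. The base temperatures of 18 °C and 24 °C and the synthetic temperature series in this sketch are common but assumed values, not taken from the paper.

import numpy as np

def degree_days(daily_mean_temp, heating_base=18.0, cooling_base=24.0):
    """Heating and cooling degree days from a series of daily mean temperatures [deg C]."""
    t = np.asarray(daily_mean_temp, dtype=float)
    hdd = np.clip(heating_base - t, 0, None).sum()
    cdd = np.clip(t - cooling_base, 0, None).sum()
    return hdd, cdd

# Synthetic year of daily mean temperatures for a temperate climate (illustrative only).
days = np.arange(365)
temps = 12 + 10 * np.sin(2 * np.pi * (days - 100) / 365)
hdd, cdd = degree_days(temps)
print("heating degree days: %.0f, cooling degree days: %.0f" % (hdd, cdd))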

Keywords: building envelope, density, masterplanning, urban form

Procedia PDF Downloads 138
28050 Forecasting Unemployment Rate in Selected European Countries Using Smoothing Methods

Authors: Ksenija Dumičić, Anita Čeh Časni, Berislav Žmuk

Abstract:

The aim of this paper is to select the most accurate forecasting method for predicting future values of the unemployment rate in selected European countries. To do so, several forecasting techniques adequate for time series with a trend component were selected, namely double exponential smoothing (also known as Holt's method) and the Holt-Winters method, which accounts for trend and seasonality. The results of the empirical analysis showed that the optimal model for forecasting the unemployment rate in Greece was the Holt-Winters additive method. In the case of Spain, according to MAPE, the optimal model was the double exponential smoothing model. Furthermore, for Croatia and Italy the best forecasting model for the unemployment rate was the Holt-Winters multiplicative model, whereas in the case of Portugal the best model was the double exponential smoothing model. Our findings are in line with European Commission unemployment rate estimates.
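
As a sketch of the comparison procedure (run on a synthetic series rather than the unemployment data used in the paper), the two candidate methods and a MAPE-based comparison can be set up with statsmodels as follows.

import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly "unemployment rate" with trend and mild seasonality (illustrative).
rng = np.random.default_rng(7)
idx = pd.date_range("2005-01-01", periods=120, freq="MS")
series = pd.Series(10 + 0.03 * np.arange(120)
                   + 0.5 * np.sin(2 * np.pi * np.arange(120) / 12)
                   + rng.normal(scale=0.2, size=120), index=idx)
train, test = series.iloc[:-12], series.iloc[-12:]

def mape(actual, forecast):
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

# Double exponential smoothing (Holt's method): additive trend, no seasonality.
holt = ExponentialSmoothing(train, trend="add").fit()
# Holt-Winters: additive trend and additive seasonality with a 12-month period.
hw = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=12).fit()

print("Holt         MAPE: %.2f%%" % mape(test, holt.forecast(12)))
print("Holt-Winters MAPE: %.2f%%" % mape(test, hw.forecast(12)))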

Keywords: European Union countries, exponential smoothing methods, forecast accuracy, unemployment rate

Procedia PDF Downloads 365
28049 Transformative Leadership and Learning Management Systems Implementation: Leadership Practices in Instructional Design for Online Learning

Authors: Felix Brito

Abstract:

With the growth of online learning, several higher education institutions have attempted to incorporate technology into their curricula. Successful technology implementation projects rely on technology infrastructure and on education professionals' acceptance of innovation. This research study aims to illustrate the relevance of the human component in technology implementation projects in higher education by describing a Learning Management System implementation project executed by instructional designers working for a higher education institution in the southeastern United States. An analysis of Transformative Leadership Theory, the Technology Acceptance Model, and the Diffusion of Innovation Process provides the support for a solid understanding of this issue and yields recommendations for future technology implementation projects in higher education institutions.

Keywords: diffusion of innovation process, instructional design, leadership, learning management systems, online learning, technology acceptance model, transformative leadership theory

Procedia PDF Downloads 319
28048 Time and Cost Efficiency Analysis of Quick Die Change System on Metal Stamping Industry

Authors: Rudi Kurniawan Arief

Abstract:

Manufacturing cost and setup time are hot topics for improvement in the metal stamping industry, because material and component prices keep rising while customers require the component price to be cut down year by year. The Single Minute Exchange of Die (SMED) is one of many methods to reduce waste in the stamping industry. The Japanese Quick Die Change (QDC) die system is one of the SMED systems that can reduce both setup time and manufacturing cost. However, this system is rarely used in stamping industries. This paper analyzes how far the QDC die system can reduce setup time and manufacturing cost. The research was conducted by direct observation, simulation, and comparison of the QDC die system with a conventional die system. In this research, we found that the QDC die system could save up to 35% of manufacturing cost and reduce setup times by 70%. The simulation proved that the QDC die system is effective for cost reduction but must be applied in several parallel production processes.

Keywords: press die, metal stamping, QDC system, single minute exchange die, manufacturing cost saving, SMED

Procedia PDF Downloads 165
28047 Theoretical Analysis of Mechanical Vibration for Offshore Platform Structures

Authors: Saeed Asiri, Yousuf Z. AL-Zahrani

Abstract:

A new class of support structures, called periodic structures, is introduced in this paper as a viable means of isolating the vibration transmitted from sea waves to offshore platform structures through their legs. A passive approach to reducing the transmitted vibration generated by waves is presented. The approach utilizes the property of periodic structural components that creates stop and pass bands. The stop-band regions can be tailored to correspond to regions of the frequency spectrum that contain harmonics of the wave frequency, attenuating the response in those regions. A periodic structural component is comprised of a repeating array of cells, which are themselves an assembly of elements. The elements may have differing material properties as well as geometric variations. For the purpose of this research, only geometric and material variations are considered, and each cell is assumed to be identical. A periodic leg is designed in order to reduce the transmitted vibration of sea waves. The effectiveness of the periodicity on the vibration levels of the platform is demonstrated theoretically. The theory governing the operation of this class of periodic structures is introduced using the transfer matrix method. The unique filtering characteristics of periodic structures are demonstrated as functions of their design parameters for structures with geometrical and material discontinuities. The propagation factor is determined using spectral finite element analysis, and the effectiveness of the leg design is demonstrated by changing the ratio of step lengths and the interface area between the materials, in order to obtain the propagation factor and frequency response.
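
The transfer matrix idea can be illustrated on the simplest possible periodic system: longitudinal waves in a rod whose unit cell is two segments of different materials. The propagation behaviour follows from the eigenvalues of the cell transfer matrix; frequencies where an eigenvalue magnitude exceeds 1 lie in a stop band. The materials and segment dimensions below are assumed for illustration and are not the platform-leg design of the paper.

import numpy as np

def rod_segment_T(omega, L, E, rho, A):
    """Transfer matrix of a uniform rod segment for longitudinal waves.

    State vector is (axial displacement, internal axial force).
    """
    c = np.sqrt(E / rho)              # longitudinal wave speed
    k = omega / c
    EA = E * A
    return np.array([[np.cos(k * L), np.sin(k * L) / (EA * k)],
                     [-EA * k * np.sin(k * L), np.cos(k * L)]])

# Assumed two-material unit cell: steel and aluminium segments of equal length.
steel = dict(E=210e9, rho=7850.0)
alu = dict(E=70e9, rho=2700.0)
L, A = 0.5, 0.01                      # segment length [m], cross-section [m^2]

freqs = np.linspace(10, 5000, 2000)   # [Hz]
attenuation = []
for f in freqs:
    w = 2 * np.pi * f
    Tcell = rod_segment_T(w, L, **alu, A=A) @ rod_segment_T(w, L, **steel, A=A)
    lam = np.linalg.eigvals(Tcell)
    attenuation.append(np.max(np.abs(lam)))   # > 1 means the wave decays per cell (stop band)

attenuation = np.array(attenuation)
stop = freqs[attenuation > 1.001]
if stop.size:
    print("lowest stop-band frequency about %.0f Hz" % stop.min())
else:
    print("no stop band found in the scanned range")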

Keywords: vibrations, periodic structures, offshore, platforms, transfer matrix method

Procedia PDF Downloads 283
28046 Physico-Chemical Characterization of Vegetable Oils from Oleaginous Seeds (Croton megalocarpus, Ricinus communis L., and Gossypium hirsutum L.)

Authors: Patrizia Firmani, Sara Perucchini, Irene Rapone, Raffella Borrelli, Stefano Chiaberge, Manuela Grande, Rosamaria Marrazzo, Alberto Savoini, Andrea Siviero, Silvia Spera, Fabio Vago, Davide Deriu, Sergio Fanutti, Alessandro Oldani

Abstract:

According to the Renewable Energy Directive II, the use of palm oil in diesel will be gradually reduced from 2023 and should reach zero in 2030 due to the deforestation caused by its production. Eni aims at finding alternative feedstocks for its biorefineries in order to eliminate the use of palm oil by 2023. The ideal vegetable oils to be used in bio-refineries are therefore those obtainable from plants that grow on marginal lands and have a low impact on the food-and-feed chain; hence, Eni research is studying the possibility of using oleaginous seeds, such as castor, croton, and cotton, to extract oils to be exploited as feedstock in bio-refineries. To verify their suitability for the upgrading processes, an analytical protocol for their characterization has been drawn up and applied. The analytical characterization includes determination of water and ash content, elemental analysis (CHNS analysis, X-Ray Fluorescence, Inductively Coupled Plasma - Optical Emission Spectroscopy, ICP-Mass Spectrometry), and total acid number determination. Gas chromatography coupled to a flame ionization detector (GC-FID) is used to quantify the lipid content in terms of free fatty acids, mono-, di- and triacylglycerols, and fatty acid composition. Finally, Nuclear Magnetic Resonance and Fourier Transform-Infrared spectroscopies are exploited, together with GC-MS and Fourier Transform-Ion Cyclotron Resonance, to study the composition of the oils. This work focuses on the GC-FID analysis of the lipid fraction of these oils, as the main constituent and the one of greatest interest for bio-refinery processes. Specifically, the lipid component of the extracted oil was quantified after sample silanization and transmethylation: silanization allows the elution of high-boiling compounds and is useful for determining the quantity of free acids and glycerides in the oils, while transmethylation leads to a mixture of fatty acid esters and glycerol, thus allowing the composition of the glycerides to be evaluated in terms of Fatty Acid Methyl Esters (FAME). Cotton oil was extracted from cotton oilcake, croton oil was obtained by seed pressing and by ASE extraction of seeds and oilcake, while castor oil comes from seed pressing (not performed in Eni laboratories). GC-FID analyses showed that the cotton oil is constituted of about 90% triglycerides and about 6% diglycerides, while free fatty acids account for about 2%. In terms of FAME, C18 acids make up 70% of the total, and linoleic acid is the major constituent. Palmitic acid is present at 17.5%, while the other acids are in low concentration (<1%). Both analyses show the presence of non-gas-chromatographable compounds. Croton oils from seed pressing and extraction mainly contain triglycerides (98%). Concerning FAME, the main component is linoleic acid (approx. 80%). Oilcake croton oil shows a higher abundance of diglycerides (6% vs. ca. 2%) and a lower content of triglycerides (38% vs. 98%) compared with the previous oils. Finally, castor oil is mostly constituted of triacylglycerols (about 69%), followed by diglycerides (about 10%). About 85.2% of the total FAME is ricinoleic acid, as a constituent of triricinolein, the most abundant triglyceride of castor oil. Based on the analytical results, these oils represent feedstocks of interest for possible exploitation as advanced biofuels.

Keywords: analytical protocol, biofuels, biorefinery, gas chromatography, vegetable oil

Procedia PDF Downloads 140
28045 Flexible Programmable Circuit Board Electromagnetic 1-D Scanning Micro-Mirror Laser Rangefinder by Active Triangulation

Authors: Vixen Joshua Tan, Siyuan He

Abstract:

Scanners have been implemented within single point laser rangefinders, to determine the ranges within an environment by sweeping the laser spot across the surface of interest. The research motivation is to exploit a smaller and cheaper alternative scanning component for the emitting portion within current designs of laser rangefinders. This research implements an FPCB (Flexible Programmable Circuit Board) Electromagnetic 1-Dimensional scanning micro-mirror as a scanning component for laser rangefinding by means of triangulation. The prototype uses a laser module, micro-mirror, and receiver. The laser module is infrared (850 nm) with a power output of 4.5 mW. The receiver consists of a 50 mm convex lens and a 45mm 1-dimensional PSD (Position Sensitive Detector) placed at the focal length of the lens at 50 mm. The scanning component is an elliptical Micro-Mirror attached onto an FPCB Structure. The FPCB structure has two miniature magnets placed symmetrically underneath it on either side, which are then electromagnetically actuated by small solenoids, causing the FPCB to mechanically rotate about its torsion beams. The laser module projects a laser spot onto the micro-mirror surface, hence producing a scanning motion of the laser spot during the rotational actuation of the FPCB. The receiver is placed at a fixed distance from the micro-mirror scanner and is oriented to capture the scanning motion of the laser spot during operation. The elliptical aperture dimensions of the micro-mirror are 8mm by 5.5 mm. The micro-mirror is supported by an FPCB with two torsion beams with dimensions of 4mm by 0.5mm. The overall length of the FPCB is 23 mm. The voltage supplied to the solenoids is sinusoidal with an amplitude of 3.5 volts and 4.5 volts to achieve optical scanning angles of +/- 10 and +/- 17 degrees respectively. The operating scanning frequency during experiments was 5 Hz. For an optical angle of +/- 10 degrees, the prototype is capable of detecting objects within the ranges from 0.3-1.2 meters with an error of less than 15%. As for an optical angle of +/- 17 degrees the measuring range was from 0.3-0.7 meters with an error of 16% or less. Discrepancy between the experimental and actual data is possibly caused by misalignment of the components during experiments. Furthermore, the power of the laser spot collected by the receiver gradually decreased as the object was placed further from the sensor. A higher powered laser will be tested to potentially measure further distances more accurately. Moreover, a wide-angled lens will be used in future experiments when higher scanning angles are used. Modulation within the current and future higher powered lasers will be implemented to enable the operation of the laser rangefinder prototype without the use of safety goggles.
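
The range computation in active triangulation follows from the geometry of the baseline, the lens focal length, and the laser-spot position measured on the PSD. The sketch below reuses the 50 mm focal length quoted in the abstract, but the baseline length, the pinhole model, and the example range are assumptions for illustration; it is not the calibration of the prototype.

import numpy as np

def range_from_psd(x_psd, scan_angle_rad, focal_len=0.050, baseline=0.100):
    """Distance along the receiver optical axis from a 1-D PSD spot position.

    Pinhole model: the beam leaves the scanner (offset by `baseline` from the lens)
    at `scan_angle_rad` from the optical-axis direction, so a target at distance z
    images at x = f*(baseline + z*tan(angle))/z.  Solving for z gives:
    """
    return focal_len * baseline / (x_psd - focal_len * np.tan(scan_angle_rad))

# Forward simulation of a scan over +/- 10 degrees optical angle at 0.8 m range.
f, b, z_true = 0.050, 0.100, 0.8
for angle_deg in (-10, -5, 0, 5, 10):
    a = np.radians(angle_deg)
    x = f * (b + z_true * np.tan(a)) / z_true        # where the spot lands on the PSD [m]
    print("angle %+3d deg, PSD position %+7.4f m, recovered range %.3f m"
          % (angle_deg, x, range_from_psd(x, a, f, b)))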

Keywords: FPCB electromagnetic 1-D scanning micro-mirror, laser rangefinder, position sensitive detector, PSD, triangulation

Procedia PDF Downloads 132
28044 Language Factor in the Formation of National and Cultural Identity of Kazakhstan

Authors: Andabayeva Dina, Avakova Raushangul, Kortabayeva Gulzhamal, Rakhymbay Bauyrzhan

Abstract:

This article attempts to give an overview of the language situation and language planning in Kazakhstan. Statistical data are given, and an excursion into the history of languages in Kazakhstan is made. Particular emphasis is placed on the national-cultural component of the Kazakh people, namely the impact of the specificity of the Kazakh language on ethnic identity. Language is one of the basic aspects of national identity. Recently, purposeful work on language development has been conducted in the Republic of Kazakhstan. An optimal solution of language problems is a factor in harmonizing interethnic relations, strengthening and consolidating the peoples, and maintaining public consent. The development of languages is one of the important directions of state policy in the Republic of Kazakhstan. The problem of the state language, as part of national (civil) identification, plays a huge role in the successful integration of Kazakh society, and it is reasonable to assume that one of the foundations of a new civic identity is knowledge of the Kazakh language by all citizens of Kazakhstan. The article is an analysis of the language situation in Kazakhstan in close connection with the peculiarities of cultural identity.

Keywords: Kazakhstan, mentality, language policy, ethnolinguistics, language planning, language personality

Procedia PDF Downloads 628
28043 Crime Victim Support Services in Bangladesh: An Analysis

Authors: Mohammad Shahjahan, Md. Monoarul Haque

Abstract:

In this research work, information and data were collected from both direct and indirect sources. Numerical, qualitative, and participatory analysis methods were followed. There were two principal sources of information and data. The first was the data provided by the service recipients (300 women and child victims) at the Victim Support Centre and by the service-providing police officers, executives, and staff (60 persons). The second was data collected from specialists, criminologists, and sociologists involved in victim support services through consultative interviews, KIIs, case studies, FGDs, etc. The initial data collection was completed with the help of questionnaires, prepared according to the strategic variations, and with the help of guidelines. The main objective of this research was to determine whether the services provided to victims (facilities, treatment/medication, and rehabilitation) by different government and non-government organizations were genuine at all. At the same time, the socio-economic background and demographic characteristics of the victims were also revealed through this research. The results of the study show that although the number of victims has increased gradually due to socio-economic, political, and cultural realities in Bangladesh, the number of victim support centres has not increased as expected. Awareness among victims of the effectiveness of the 8 centres working in this regard is also not up to the mark. Two-thirds of the victims coming for services were not aware of victim support services at all before receiving them. Most of those who were finally able to come under the services of the Victim Support Centre through various means received shelter (15.5%), medical services (13.32%), counselling services (13.10%), and legal aid (12.66%). The opportunity to stay in secure custody and psycho-physical services were also notable. Usually, women and children from relatively poor and marginalized families come to the victim support centre for services. Among the women, young unmarried women are the biggest victims of crime. Women and children employed as domestic workers are also more affected. A number of serious negative impacts fall on the lives of the victims, among them deprivation of employment opportunities (26.62%), psycho-somatic disorders (20.27%), and sexually transmitted diseases (13.92%). It seems necessary to urgently enact distinct legislation, increase the number of Victim Support Centres, expand the area and purview of services, and take initiatives to increase public awareness and create a mass movement.

Keywords: crime, victim, support, Bangladesh

Procedia PDF Downloads 84
28042 The Richtmyer-Meshkov Instability Impacted by the Interface with Different Components Distribution

Authors: Sheng-Bo Zhang, Huan-Hao Zhang, Zhi-Hua Chen, Chun Zheng

Abstract:

In this paper, the Richtmyer-Meshkov instability caused by the interaction between a shock wave and a helium circular light-gas cylinder with different component distributions is studied numerically, using a high-resolution Roe scheme based on the two-dimensional unsteady Euler equations. The numerical results further describe the deformation process of the gas cylinder and the wave structure of the flow field, and quantitatively analyse the characteristic dimensions of the cylinder (length, height, and central axial width) and its volume compression ratio over time. In addition, the flow mechanism of shock-driven interface gas mixing is analysed from multiple perspectives by combining it with the flow-field pressure, velocity, circulation, and gas mixing rate. The effects of different initial component distributions on the interface instability are then investigated. The results show that as the diffuse interface transitions to a sharp interface, the reflection coefficient gradually increases on both sides of the interface. When the incident shock wave interacts with the cylinder, the transmission of the shock wave transitions from conventional to unconventional transmission. At the same time, the reflected shock wave is gradually strengthened and the transmitted shock wave is gradually weakened, which leads to an increase in the Richtmyer-Meshkov instability. Moreover, the Atwood number on both sides of the interface also increases as the diffuse interface transitions to a sharp interface, which leads to an increase in the Rayleigh-Taylor and Kelvin-Helmholtz instabilities. The increase in instability therefore leads to an increase in circulation, resulting in an increase in the growth rate of the gas mixing rate.
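
Two of the quantities discussed above have simple closed forms worth making explicit: the Atwood number across the interface, A = (rho2 - rho1)/(rho2 + rho1), and the acoustic (impedance-based) reflection coefficient, R = (Z2 - Z1)/(Z2 + Z1) with Z = rho*c. The densities and sound speeds below are rough ambient textbook values for air and helium, used only to illustrate sign and magnitude; they are not the simulation conditions of the paper.

def atwood(rho1, rho2):
    """Atwood number across an interface between fluids of density rho1 and rho2."""
    return (rho2 - rho1) / (rho2 + rho1)

def reflection_coefficient(rho1, c1, rho2, c2):
    """Acoustic reflection coefficient for a sharp interface (impedance mismatch)."""
    z1, z2 = rho1 * c1, rho2 * c2
    return (z2 - z1) / (z2 + z1)

# Rough ambient values: air outside the cylinder, helium inside (illustrative only).
rho_air, c_air = 1.18, 343.0
rho_he, c_he = 0.166, 1007.0

print("Atwood number (air -> helium):      %+.3f" % atwood(rho_air, rho_he))
print("reflection coefficient (air -> He): %+.3f" % reflection_coefficient(rho_air, c_air, rho_he, c_he))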

Keywords: shock wave, He light cylinder, Richtmyer-Meshkov instability, Gaussian distribution

Procedia PDF Downloads 70
28041 Separating Permanent and Induced Magnetic Signature: A Simple Approach

Authors: O. J. G. Somsen, G. P. M. Wagemakers

Abstract:

Magnetic signature detection provides sensitive detection of metal objects, especially in the natural environment. Our group is developing a tabletop setup for measuring the magnetic signatures of various small and model objects. A particular issue is the separation of permanent and induced magnetization. While the latter depends only on the composition and shape of the object, the former also depends on the magnetization history. With common deperming techniques, a significant permanent signature may still remain, which confuses measurements of the induced component. We investigate a basic technique for separating the two. Measurements were done by moving the object along an aluminum rail while the three field components were recorded by a detector attached near the center. This is done first with the rail parallel to the Earth's magnetic field and then in the anti-parallel orientation. The reversal changes the sign of the induced, but not the permanent, magnetization, so that the two can be separated. Our preliminary results on a small iron block show excellent reproducibility. A considerable permanent magnetization was indeed present, resulting in a complex asymmetric signature. After separation, a much more symmetric induced signature was obtained that can be studied in detail and compared with theoretical calculations.
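
The separation itself is a one-line calculation: reversing the rail with respect to the Earth field flips the sign of the induced contribution, so the permanent part is the half-sum and the induced part the half-difference of the two recorded signatures. A minimal sketch with made-up field profiles (not the group's measurement data) is given below.

import numpy as np

def separate_signatures(parallel_run, antiparallel_run):
    """Split a magnetic signature into permanent and induced components.

    parallel_run / antiparallel_run: arrays of field samples recorded with the
    rail parallel and anti-parallel to the Earth field.  The reversal flips the
    sign of the induced part only.
    """
    parallel_run = np.asarray(parallel_run, dtype=float)
    antiparallel_run = np.asarray(antiparallel_run, dtype=float)
    permanent = 0.5 * (parallel_run + antiparallel_run)
    induced = 0.5 * (parallel_run - antiparallel_run)
    return permanent, induced

# Illustrative "measured" profiles along the rail position x.
x = np.linspace(-0.5, 0.5, 101)
true_perm = 2.0 * np.exp(-(x - 0.05) ** 2 / 0.01)     # offset, asymmetric permanent signature
true_ind = 5.0 * np.exp(-x ** 2 / 0.02)               # symmetric induced signature
run_par = true_perm + true_ind
run_anti = true_perm - true_ind

perm, ind = separate_signatures(run_par, run_anti)
print("max error permanent: %.2e, induced: %.2e"
      % (np.max(np.abs(perm - true_perm)), np.max(np.abs(ind - true_ind))))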

Keywords: magnetic signature, data analysis, magnetization, deperming techniques

Procedia PDF Downloads 447
28040 Measurement and Simulation of Axial Neutron Flux Distribution in Dry Tube of KAMINI Reactor

Authors: Manish Chand, Subhrojit Bagchi, R. Kumar

Abstract:

A new dry tube (DT) has been installed in the tank of the KAMINI research reactor, Kalpakkam, India. This tube will be used for neutron activation analysis of small to large samples and for testing neutron detectors. The DT is 375 cm in height and 7.5 cm in diameter, located 35 cm away from the core centre. The experimental thermal flux at various axial positions inside the tube has been measured by irradiating a flux monitor (¹⁹⁷Au) at 20 kW reactor power. The measured activity of ¹⁹⁸Au and the thermal cross section of the ¹⁹⁷Au(n,γ)¹⁹⁸Au reaction were used for the experimental thermal flux measurement. The flux inside the tube varies from 10⁹ to 10¹⁰ n cm⁻²s⁻¹, and the maximum flux was (1.02 ± 0.023) x 10¹⁰ n cm⁻²s⁻¹ at 36 cm from the bottom of the tube. Au and Zr foils, without and with a cadmium cover of 1 mm thickness, were irradiated at the maximum-flux position in the DT to determine irradiation-specific input parameters such as the sub-cadmium to epithermal neutron flux ratio (f) and the epithermal neutron flux shape factor (α). The f value was 143 ± 5, indicating about a 99.3% thermal neutron component, and the α value was -0.2886 ± 0.0125, indicating a hard epithermal neutron spectrum due to insufficient moderation. The measured flux profile has been validated against a theoretical model of the KAMINI reactor using the Monte Carlo N-Particle Code (MCNP). In MCNP, the complex geometry of the entire reactor is modelled in 3D, ensuring minimum approximations for all the components. Continuous-energy cross-section data from ENDF/B-VII.1 as well as S(α, β) thermal neutron scattering functions are considered. The neutron flux has been estimated at the corresponding axial locations of the DT using a mesh tally. The thermal flux obtained from the experiment shows good agreement with the values predicted theoretically by MCNP, within ± 10%. It can be concluded that this MCNP model can be utilized for calculating other important parameters like neutron spectra, dose rate, etc., and that multi-elemental analysis can be carried out by irradiating samples at the maximum-flux position using the measured f and α parameters with k₀-NAA standardization.
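
For orientation, the thermal flux behind a ¹⁹⁷Au(n,γ)¹⁹⁸Au activation measurement follows from the saturation-corrected activity, φ ≈ A / [N σ_th (1 − e^(−λ t_irr))]. The foil mass, irradiation time, and activity in the sketch below are invented for illustration, and the formula neglects epithermal activation, self-shielding, and decay or counting corrections, so it is a simplified check rather than the k₀-NAA procedure of the paper.

import numpy as np

# Commonly quoted nuclear data for 197Au(n,gamma)198Au.
SIGMA_TH = 98.65e-24             # thermal cross section [cm^2]
HALF_LIFE = 2.695 * 24 * 3600    # 198Au half-life [s]
N_A = 6.022e23
M_AU = 196.97                    # g/mol

def thermal_flux(activity_bq, foil_mass_g, t_irr_s):
    """Simplified thermal flux [n cm^-2 s^-1] from the end-of-irradiation activity."""
    n_atoms = foil_mass_g / M_AU * N_A
    lam = np.log(2) / HALF_LIFE
    saturation = 1.0 - np.exp(-lam * t_irr_s)
    return activity_bq / (n_atoms * SIGMA_TH * saturation)

# Invented example: a 10 mg foil irradiated for 1 hour giving 2.5e5 Bq of 198Au.
phi = thermal_flux(activity_bq=2.5e5, foil_mass_g=0.010, t_irr_s=3600.0)
print("thermal flux ~ %.2e n cm^-2 s^-1" % phi)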

Keywords: neutron flux, neutron activation analysis, neutron flux shape factor, MCNP, Monte Carlo N-Particle Code

Procedia PDF Downloads 151
28039 Petrology and Finite Strain of the Al Amar Region, Northern Ar-Rayn Terrane, Eastern Arabian Shield, Saudi Arabia

Authors: Lami Mohammed, Hussain J. Al Faifi, Abdel Aziz Al Bassam, Osama M. K. Kassem

Abstract:

The Neoproterozoic basement rocks of the Ar Rayn terrane have been identified as part of the Eastern Arabian Shield. This study focuses on petrological and finite strain properties to constrain the tectonic setting of the Al Amar suture for highly deformed volcanic and granitoid rocks. The volcanic rocks are classified into two major series: the eastern side cycle, which includes dacite, rhyodacite, rhyolite, and ignimbrites, and the western side cycle, which includes andesite and pyroclastics. The granitoid rocks contain monzodiorite, tonalite, granodiorite, and alkali-feldspar granite. To evaluate the proportions of shear contributions in penetratively deformed rocks, asymmetrical porphyroclasts and sigmoidal structural markers along the strike of the suture, namely the Al Amar, are expected to reveal strain factors. The Rf/phi and Fry techniques are used to characterize quartz and feldspar porphyroclasts and biotite and hornblende grains in the Abt schist, highly deformed volcanic rocks, and granitoids. The findings revealed that these rocks had experienced shape flattening, finite strain accumulation, and overall volume loss. The magnitude of the strain appears to increase across the nappe contacts with neighboring lithologies. Subhorizontal foliation likely developed in tandem with thrusting and nappe stacking, almost parallel to the tectonic contacts. The ductile strain accumulation that occurred during thrusting along the Al Amar suture mostly includes a considerable pure shear component. Progressive thrusting with overlaid transpression and oblique convergence is shown by stacked nappes and diagonal stretching lineations along the thrust axes. The subhorizontal lineation might be the result of the suture's most recent activity. The current study's findings contradict the widely accepted model that links orogen-scale structures in the Arabian Shield to oblique convergence with dominant simple shear deformation. A significant pure shear component/crustal thickening increment should have played a significant role in the evolution of the suture and thus in the Shield's overall deformation history. This foliation was primarily generated by thrusting the nappes together, showing that nappe stacking was linked to substantial vertical shortening induced on a large scale by the active Al Amar suture.

Keywords: petrology, finite strain analysis, al amar region, ar-rayn terrane, Arabian shield

Procedia PDF Downloads 116
28038 ScRNA-Seq RNA Sequencing-Based Program-Polygenic Risk Scores Associated with Pancreatic Cancer Risks in the UK Biobank Cohort

Authors: Yelin Zhao, Xinxiu Li, Martin Smelik, Oleg Sysoev, Firoj Mahmud, Dina Mansour Aly, Mikael Benson

Abstract:

Background: Early diagnosis of pancreatic cancer is clinically challenging due to vague or absent symptoms and a lack of biomarkers. Polygenic risk scores (PRS) may provide a valuable tool to assess increased or decreased risk of PC. This study aimed to develop such a PRS by filtering genetic variants identified by GWAS using transcriptional programs identified by single-cell RNA sequencing (scRNA-seq). Methods: ScRNA-seq data from 24 pancreatic ductal adenocarcinoma (PDAC) tumor samples and 11 normal pancreases were analyzed to identify differentially expressed genes (DEGs) in tumor and microenvironment cell types compared to healthy tissues. Pathway analysis showed that the DEGs were enriched for hundreds of significant pathways. These were clustered into 40 “programs” based on gene similarity, using the Jaccard index. Published genetic variants associated with PDAC were mapped to each program to generate program PRSs (pPRSs). These pPRSs, along with five previously published PRSs (PGS000083, PGS000725, PGS000663, PGS000159, and PGS002264), were evaluated in a European-origin population from the UK Biobank, consisting of 1,310 PDAC participants and 407,473 non-pancreatic cancer participants. Stepwise Cox regression analysis was performed to determine associations between pPRSs and the development of PC, with adjustment for sex and principal components of genetic ancestry. Results: The PDAC genetic variants were mapped to 23 programs and used to generate pPRSs for these programs. Four distinct pPRSs (P1, P6, P11, and P16) and two published PRSs (PGS000663 and PGS002264) were significantly associated with an increased risk of developing PC. Among these, P6 exhibited the greatest hazard ratio (adjusted HR[95% CI] = 1.67[1.14-2.45], p = 0.008). In contrast, P10 and P4 were associated with a lower risk of developing PC (adjusted HR[95% CI] = 0.58[0.42-0.81], p = 0.001, and adjusted HR[95% CI] = 0.75[0.59-0.96], p = 0.019). By comparison, two of the five published PRSs exhibited an association with PDAC onset (PGS000663: adjusted HR[95% CI] = 1.24[1.14-1.35], p < 0.001, and PGS002264: adjusted HR[95% CI] = 1.14[1.07-1.22], p < 0.001). Conclusion: Compared to published PRSs, scRNA-seq-based pPRSs may be used to assess not only increased but also decreased risk of PDAC.
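
The association testing described above corresponds to a standard Cox proportional-hazards fit with sex and ancestry principal components as covariates. The sketch below shows the general shape of such a fit using the lifelines package on simulated data; the column names and the simulated effect size are placeholders, not the UK Biobank variables.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 5000

# Simulated cohort: one program PRS (standardized), sex, and two ancestry PCs.
df = pd.DataFrame({
    "pPRS_P6": rng.normal(size=n),
    "sex": rng.integers(0, 2, size=n),
    "PC1": rng.normal(size=n),
    "PC2": rng.normal(size=n),
})
# Exponential event times whose hazard rises with the PRS (assumed effect, ~HR 1.5 per SD).
hazard = 0.01 * np.exp(0.4 * df["pPRS_P6"].to_numpy())
time_to_event = rng.exponential(1.0 / hazard)
censor_time = rng.uniform(5, 15, size=n)
df["duration"] = np.minimum(time_to_event, censor_time)
df["event"] = (time_to_event <= censor_time).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.summary[["exp(coef)", "p"]])      # hazard ratios and p-values per covariate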

Keywords: cox regression, pancreatic cancer, polygenic risk score, scRNA-seq, UK biobank

Procedia PDF Downloads 91
28037 The Role of Structure Input in Pi in the Acquisition of English Relative Clauses by L1 Saudi Arabic Speakers

Authors: Faraj Alhamami

Abstract:

Research on the effects of classroom input through structured input activities has addressed two main lines of inquiry: (1) measuring the effects of structured input activities as a possible causative factor of PI and (2) comparing structured input practice with other types of instruction or with no-training controls. Within this line of research, the main purpose of this classroom-based study was to establish which type of activity is most effective in processing instruction: the explicit information component with referential activities only, the explicit information component with affective activities only, or a combination of the two. The instruments were: a) a grammaticality judgment task, b) a picture-cued task, and c) a translation task, administered as pre-tests, post-tests, and delayed post-tests seven weeks after the intervention. While testing is ongoing, preliminary results show that all five groups - the processing instruction group with both activities (RA), the traditional group (TI), the referential group (R), the affective group (A), and the control group - performed at a comparable chance or baseline level across the three outcome measures at pre-test. However, at the post-test stage, the RA, TI, R, and A groups demonstrated significant improvement compared to the control group in all tasks. Furthermore, a significant difference was observed among the PI groups (RA, R, and A) at post-test and delayed post-test on some of the tasks when compared to the traditional group. The findings therefore suggest that the sole application and/or the combination of structured input activities has succeeded in helping Saudi learners of English make initial form-meaning connections and acquire RRCs in both the short and the long term.

Keywords: input processing, processing instruction, MOGUL, structure input activities

Procedia PDF Downloads 71
28036 The Impact of Total Quality Management Practices on Innovation: An Empirical Study

Authors: Oumayma Tajouri

Abstract:

The relationship between total quality management (TQM) practices and innovation is contested. Some scholars suggest that TQM leads only to incremental improvement and would not foster innovation and creativity. The purpose of this paper is to analyse the association between TQM and different types of innovation. Our goal is to examine to what extent the implementation of TQM practices supports innovation in Tunisian ISO 9001-certified industries. Using a self-administered survey of ISO 9001-certified industrial companies, this study examines five hypotheses and tests the relation between TQM practices and innovation. The principal finding is that TQM has significant and positive effects on innovation in the Tunisian context. The results support the view that TQM influences incremental, radical, and administrative innovation.

Keywords: total quality management, incremental innovation product and/service, radical innovation product/service, incremental innovation process, radical innovation process, administrative innovation

Procedia PDF Downloads 150
28035 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study

Authors: Jitendra Pratap, Jonathan Sivyer

Abstract:

Introduction: Dual-energy/spectral CT scanning permits the simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual-energy scanning (i.e., dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solution were made using Optiray 350 contrast medium diluted in sterile water. The iodine concentrations ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations of 5.0 mg/ml, 7 mg/ml, 10 mg/ml, and 15 mg/ml. The vials were scanned using the dual-energy scan mode on a Siemens Somatom Force at the 80 kV/Sn150 kV and 100 kV/Sn150 kV kilovoltage pairings. The same vials were scanned using the spectral scan mode on a Philips iQon at 120 kVp and 140 kVp. The images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing of the dual-energy data was performed on vendor-specific Siemens Syngo VIA (VB40), and of the spectral data on Philips Intellispace Portal (ver. 12). For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the dual-layer CT scanner. Although the dual-source images showed a greater degree of deviation in measured iodine density for all vials, the dataset acquired at 80 kV/Sn150 kV had higher accuracy. Conclusion: Spectral CT scanning with the dual-layer technique has higher accuracy for quantitative measurements of iodine density compared to the dual-source technique.

Keywords: CT, iodine density, spectral, dual-energy

Procedia PDF Downloads 115
28034 Geospatial Analysis of Hydrological Response to Forest Fires in Small Mediterranean Catchments

Authors: Bojana Horvat, Barbara Karleusa, Goran Volf, Nevenka Ozanic, Ivica Kisic

Abstract:

Forest fire is a major threat in many regions of Croatia, especially in coastal areas. Although fires are sometimes caused by natural processes, the most common cause is the human factor, intentional or unintentional. Forest fires drastically transform landscapes and influence natural processes. The main goal of the presented research is to analyse and quantify the impact of a forest fire on hydrological processes and to propose the model that best describes the changes in hydrological patterns in the analysed catchments. Keeping in mind the spatial component of these processes, geospatial analysis is performed to gain better insight into the spatial variability of the hydrological response to disastrous events. In that respect, two catchments that experienced a severe forest fire were delineated, and various hydrological and meteorological data, both attribute and spatial, were collected. The major drawback is certainly the lack of hydrological data, common in small torrential karstic streams; hence, modelling results should be validated against data collected in a catchment with similar characteristics and established hydrological monitoring. The event chosen for the modelling is the forest fire that occurred in July 2019 and burned nearly 10% of the analysed area. Surface (land use/land cover) conditions before and after the event were derived from two Sentinel-2 images. The mapping of the burnt area is based on a comparison of the Normalized Burn Ratio (NBR) computed from both images. To estimate and compare hydrological behaviour before and after the event, curve number (CN) values are assigned to the land use/land cover classes derived from the satellite images. Hydrological modelling resulted in surface runoff generation and hence prediction of the hydrological responses of the catchments to a forest fire event. The research was supported by the Croatian Science Foundation through the project 'Influence of Open Fires on Water and Soil Quality' (IP-2018-01-1645).
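
The burn-area mapping step reduces to computing the Normalized Burn Ratio, NBR = (NIR - SWIR)/(NIR + SWIR), for the pre- and post-fire images and thresholding their difference (dNBR). The sketch below uses random arrays in place of the Sentinel-2 bands (assumed here to be B8A/B12) and a commonly cited dNBR threshold; both are assumptions, not values from the study.

import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared reflectance."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir + 1e-12)

# Placeholder reflectance arrays standing in for Sentinel-2 NIR and SWIR bands.
rng = np.random.default_rng(5)
pre_nir, pre_swir = rng.uniform(0.3, 0.5, (100, 100)), rng.uniform(0.1, 0.2, (100, 100))
post_nir, post_swir = pre_nir.copy(), pre_swir.copy()
post_nir[40:60, 40:60] -= 0.2          # simulate a burnt patch: NIR drops,
post_swir[40:60, 40:60] += 0.15        # SWIR rises after the fire

dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
burnt = dnbr > 0.27                    # a commonly cited moderate-severity threshold
print("burnt fraction of the scene: %.1f%%" % (100 * burnt.mean()))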

Keywords: Croatia, forest fire, geospatial analysis, hydrological response

Procedia PDF Downloads 129
28033 Isolated and Combined Effects of Multimedia Computer Assisted Coaching and Traditional Coaching on Motor Ability Component and Physiological Variables among Sports School Basketball Players

Authors: Biju Lukose

Abstract:

The objective of the study was to identify the isolated and combined effects of multimedia computer-assisted coaching and traditional coaching on selected motor ability and physiological variables among sports school basketball players. Forty male basketball players aged between 14 and 18 years were selected randomly. They were divided into four groups: three experimental and one control. Isolated multimedia computer-assisted coaching, isolated traditional coaching, and combined coaching (multimedia computer-assisted coaching plus traditional coaching) were the three experimental groups. All three experimental groups were given coaching for 24 weeks, and the control group was not allowed to participate in any coaching programme. The subjects were tested on the dependent variables, speed and cardiovascular endurance, at the beginning (pre-test), in the middle at 12 weeks (mid-test), and after 24 weeks of coaching (post-test). The coaching schedule lasted 24 weeks; data were collected two days before and after the coaching schedule, and the mid-test was conducted after 12 weeks of the schedule. The data were analysed by applying ANCOVA and Scheffe's post hoc test. The results showed that there were significant changes in the dependent variables speed and cardiovascular endurance. The results of the study showed that combined coaching (multimedia computer-assisted coaching plus traditional coaching) is superior to the traditional coaching and multimedia computer-assisted coaching groups, and that there was no significant change in speed in the case of the isolated multimedia computer-assisted coaching group.

Keywords: computer, computer-assisted coaching, multimedia coaching, traditional coaching

Procedia PDF Downloads 449
28032 Genome-Wide Homozygosity Analysis of the Longevous Phenotype in the Amish Population

Authors: Sandra Smieszek, Jonathan Haines

Abstract:

Introduction: Numerous research efforts have focused on searching for ‘longevity genes’. However, attempts to decipher the genetic component of the longevous phenotype have met with limited success, and the mechanisms governing longevity remain to be explained. We conducted a genome-wide homozygosity analysis (GWHA) of the founder population of the Amish community in central Ohio. While genome-wide association studies using unrelated individuals have revealed many interesting longevity-associated variants, these variants are typically of small effect and cannot explain the observed patterns of heritability for this complex trait. The Amish provide a large cohort of extended kinships allowing for in-depth analysis via a family-based approach. Heritability of longevity increases with age, with a significant genetic contribution being seen in individuals living beyond 60 years of age. In our present analysis we show that the heritability of longevity is estimated to increase with age, particularly on the paternal side. Methods: The present analysis integrated both phenotypic and genotypic data and led to the discovery of a series of variants, distinct for populations stratified across ages and distinct for paternal and maternal cohorts. Specifically, 5437 subjects were analyzed, and a subset of 893 successfully genotyped individuals was used to assess chip heritability. We conducted the homozygosity analysis to examine whether homozygosity is associated with increased odds of living beyond 90. We analyzed the Amish cohort genotyped for 614,957 SNPs. Results: We delineated 10 significant regions of homozygosity (ROH) specific to the age group of interest (>90). Of particular interest was an ROH on chromosome 13, P < 0.0001; the lead SNPs rs7318486 and rs9645914 point to COL4A2 and COL25A1. COL4A2 encodes one of the six subunits of type IV collagen; the C-terminal portion of the protein, known as canstatin, is an inhibitor of angiogenesis and tumor growth. COL4A2 mutations have been reported with a broader spectrum of cerebrovascular, renal, ophthalmological, cardiac, and muscular abnormalities. The second region of interest points to IRS2. Furthermore, we built a classifier using the SNPs obtained from the significant ROH regions, with an AUC of 0.945, giving the ability to discriminate those living to 90 years of age and beyond. Conclusion: Our results suggest that a history of longevity does indeed contribute to increasing the odds of individual longevity. Preliminary results are consistent with the conjecture that the heritability of longevity is substantial when we look at the oldest fifth and smaller percentiles of survival, specifically in males. We will validate all the candidate variants in independent cohorts of centenarians to test whether they are robustly associated with human longevity. The regions of interest identified via ROH analysis could be of profound importance for understanding the genetic underpinnings of longevity.

Keywords: regions of homozygosity, longevity, SNP, Amish

Procedia PDF Downloads 229
28031 A Histopathological Study on Leech (Hirudo medicinalis) Application in the Management of Vicarcikā (Eczema)

Authors: K. M. Pratap Shankar, Dattatreya Rao, Sai Prasad

Abstract:

Background: Skin diseases are among the most common health problems worldwide and are associated with a considerable burden. Eczema is one such skin ailment, which causes psychological, social, and financial burdens for patients and their families. Management of eczema with antibiotics, antihistamines, steroids, etc., is available, but even after their use, relapses, recurrences, and other complications are very common. Aim: The aim of this study was to assess the efficacy of leech application in the management of vicarcikā (eczema) using a histopathological study. Methods: For the present study, 10 patients with the classical symptoms of vicarcikā were randomly selected, as per the inclusion and exclusion criteria, from the O.P.D. and I.P.D. sections of the Śalya department, S.V. Āyurvedic Hospital, Tirupati. A minimum of 4 sittings of leech application was carried out at seven-day intervals. The total duration of treatment was 6 weeks. Biopsy samples were collected from the lesion site before and after treatment, and histopathological examination was done by the pathologist. Results: In eczema (dermatitis), leech application therapy gives an excellent response by reducing the inflammatory component, hyperkeratosis, spongiosis, and irregular acanthosis, and by evoking a granulation tissue response in the dermis, with complete recovery from the lesion in most cases. Most of the cases in the study were chronic dermatitis and seborrheic keratosis; almost all local/focal pigmented lesions were totally relieved by leech therapy, especially in cases of seborrheic keratosis. Conclusion: In the present study, it was found that leech application evokes significant changes at the histological level, specifically a reduction of the inflammatory component, hyperkeratosis, spongiosis, and irregular acanthosis. It was also found that there was considerable formation of granulation tissue, which helps in the formation of healthy new tissue.

Keywords: acanthosis, eczema, hyperkeratosis, leech application, spongiosis

Procedia PDF Downloads 294