Search results for: hypotenuse leg difference method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21967

12907 Classification Using Worldview-2 Imagery of Giant Panda Habitat in Wolong, Sichuan Province, China

Authors: Yunwei Tang, Linhai Jing, Hui Li, Qingjie Liu, Xiuxia Li, Qi Yan, Haifeng Ding

Abstract:

The giant panda (Ailuropoda melanoleuca) is an endangered species living mainly in central China, where bamboos act as the main food source of wild giant pandas. Knowledge of the spatial distribution of bamboo therefore becomes important for identifying the habitat of giant pandas. There have been ongoing studies for mapping bamboos and other tree species using remote sensing. WorldView-2 (WV-2) is the first high resolution commercial satellite with eight Multi-Spectral (MS) bands. Recent studies demonstrated that WV-2 imagery has a high potential in classification of tree species. Advanced classification techniques are important for utilising high spatial resolution imagery. It is generally agreed that object-based image analysis is a more desirable method than pixel-based analysis in processing high spatial resolution remotely sensed data. Classifiers that use spatial information combined with spectral information are known as contextual classifiers. It is suggested that contextual classifiers can achieve greater accuracy than non-contextual classifiers. Thus, spatial correlation can be incorporated into classifiers to improve classification results. The study area is located at the Wuyipeng area in Wolong, Sichuan Province. The complex environment makes information extraction difficult since bamboos are sparsely distributed, mixed with brushes, and covered by other trees. Extensive fieldwork in Wuyipeng was carried out twice. The first campaign was on 11th June, 2014, aiming at sampling feature locations for geometric correction and collecting training samples for classification. The second fieldwork was on 11th September, 2014, for the purpose of testing the classification results. In this study, spectral separability analysis was first performed to select appropriate MS bands for classification. Also, the reflectance analysis provided information for expanding sample points when only a few were known. Then, a spatially weighted object-based k-nearest neighbour (k-NN) classifier was applied to the selected MS bands to identify seven land cover types (bamboo, conifer, broadleaf, mixed forest, brush, bare land, and shadow), accounting for spatial correlation within classes using geostatistical modelling. The spatially weighted k-NN method was compared with three alternatives: the traditional k-NN classifier, the Support Vector Machine (SVM) method and the Classification and Regression Tree (CART). Through field validation, it was shown that the classification result obtained using the spatially weighted k-NN method has the highest overall classification accuracy (77.61%) and Kappa coefficient (0.729); the producer’s accuracy and user’s accuracy reach 81.25% and 95.12% for the bamboo class, respectively, also higher than the other methods. Photos of tree crowns were taken at sample locations using a fisheye camera, so the canopy density could be estimated. It was found that it is difficult to identify bamboo in areas with a large canopy density (over 0.70); it is possible to extract bamboo in areas with a medium canopy density (from 0.2 to 0.7) and in a sparse forest (canopy density less than 0.2). In summary, this study explores the ability of WV-2 imagery for bamboo extraction in a mountainous region in Sichuan. The study successfully identified the bamboo distribution, providing supporting knowledge for assessing the habitats of giant pandas.
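
A minimal sketch of the spatially weighted k-NN idea described above: the k nearest training samples are found in spectral space, and each neighbour's vote is weighted by a spatial correlation term from a geostatistical covariance model. The one-band feature space, the exponential covariance model, its range and all names below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def spatial_weight(d_geo, rng=50.0):
    # Exponential covariance model: geographically nearer training samples
    # are assumed more informative (illustrative choice of model and range).
    return np.exp(-d_geo / rng)

def spatially_weighted_knn(x, xy, train_feat, train_xy, train_lab, k=5):
    # x: spectral vector of the pixel/object, xy: its map coordinates
    d_feat = np.linalg.norm(train_feat - x, axis=1)   # spectral distance
    idx = np.argsort(d_feat)[:k]                      # k nearest in feature space
    votes = {}
    for i in idx:
        d_geo = np.linalg.norm(train_xy[i] - xy)      # geographic distance
        w = spatial_weight(d_geo)                     # geostatistical weight
        votes[train_lab[i]] = votes.get(train_lab[i], 0.0) + w
    return max(votes, key=votes.get)

# Tiny synthetic example: two classes separable in one band
train_feat = np.array([[0.2], [0.25], [0.8], [0.85]])
train_xy   = np.array([[0, 0], [10, 0], [100, 100], [110, 100]])
train_lab  = np.array(['bamboo', 'bamboo', 'conifer', 'conifer'])
print(spatially_weighted_knn(np.array([0.3]), np.array([5, 5]),
                             train_feat, train_xy, train_lab, k=3))
```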

Keywords: bamboo mapping, classification, geostatistics, k-NN, worldview-2

Procedia PDF Downloads 299
12906 The Effect of Implant Design on the Height of Inter-Implant Bone Crest: A 10-Year Retrospective Study of the Astra Tech Implant and Branemark Implant

Authors: Daeung Jung

Abstract:

Background: In patients with missing teeth, multiple implant restoration has been widely used and is often unavoidable. To increase its survival rate, it is important to understand the influence of different implant designs on inter-implant crestal bone resorption. There are several implant systems designed to minimize loss of crestal bone, and the Astra Tech and Brånemark implants are two of them. Aim/Hypothesis: The aim of this 10-year study was to compare the height of the inter-implant bone crest in two implant systems: the Astra Tech and the Brånemark implant system. Material and Methods: In this retrospective study, 40 consecutively treated patients were included; 23 patients with 30 sites for the Astra Tech system and 17 patients with 20 sites for the Brånemark system. The implant restorations comprised splinted crowns in partially edentulous patients. Radiographs were taken immediately after the first surgery, at impression making, at prosthesis setting, and annually after loading. The lateral distance from implant to bone crest and the inter-implant distance were measured, and crestal bone height was measured from the implant shoulder to the first bone contact. Calibrations were performed in ImageJ using the known thread pitch length for vertical measurements and the known diameter of the abutment or fixture for horizontal measurements. Results: After 10 years, patients treated with the Astra Tech implant system demonstrated less inter-implant crestal bone resorption when implants had a distance of 3 mm or less between them. In cases of implants with a distance greater than 3 mm between them, however, there appeared to be no statistically significant difference in crestal bone loss between the two systems. Conclusion and clinical implications: For partially edentulous patients planning to have more than two implants, the inter-implant distance is one of the most important factors to be considered. If sufficient inter-implant distance cannot be ensured, implants with a smaller microgap at the fixture-abutment junction, a less traumatic second-stage surgical approach, and an adequate surface topography would be appropriate options to minimize inter-implant crestal bone resorption.

Keywords: implant design, crestal bone loss, inter-implant distance, 10-year retrospective study

Procedia PDF Downloads 145
12905 Calculation of Fractal Dimension and Its Relation to Some Morphometric Characteristics of Iranian Landforms

Authors: Mitra Saberi, Saeideh Fakhari, Amir Karam, Ali Ahmadabadi

Abstract:

Geomorphology is the scientific study of the characteristics of the form and shape of the Earth's surface. The existence of different types of landforms and their variation is mainly controlled by changes in the shape and position of land and topography. In fact, the interest in and application of fractal concepts in geomorphology arise because many geomorphic landforms have fractal structures and their formation and transformation can be explained by mathematical relations. The purpose of this study is to identify and analyze the fractal behavior of landforms of the macro-geomorphologic regions of Iran, as well as to study and analyze topographic and landform characteristics based on fractal relationships. In this study, using the Iranian digital elevation model in the form of slopes, coefficients of deposition and alluvial fans, the fractal dimensions of the curves were calculated through the box counting method. The morphometric characteristics of the landforms and their fractal dimension were then calculated for four criteria (height, slope, profile curvature and planimetric curvature) and three indices (maximum, average, standard deviation) using ArcMap software separately. After investigating their correlation with the fractal dimension, two-way regression analysis was performed and the relationship between the fractal dimension and the morphometric characteristics of the landforms was investigated. The results show that the fractal dimension at different pixel sizes of 30, 90 and 200 m for the topographic curves of the different landform units of Iran, including mountain, hill, plateau and plain, ranges from 1.06 in alluvial fans to 1.17 in the mountains. Generally, for all pixel sizes, the fractal dimension decreases from mountain to plain. The fractal dimension has the highest correlation coefficient with the slope criterion and the standard deviation index and the lowest correlation coefficient with the profile curvature and the mean index, and as the pixels become larger, the correlation coefficient between the indices and the fractal dimension decreases.
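
A minimal sketch of the box counting estimate used above: occupied boxes are counted at several box sizes, and the fractal dimension is the slope of log(count) against log(1/size). The raster, the box sizes and the diagonal-line test case are illustrative choices, not the study's data.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary curve raster by counting
    occupied boxes at several box sizes."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if mask[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # slope of log(count) vs log(1/size) is the box-counting dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Example: a straight diagonal line should give a dimension close to 1
img = np.zeros((64, 64), dtype=bool)
np.fill_diagonal(img, True)
print(round(box_counting_dimension(img), 2))
```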

Keywords: box counting method, fractal dimension, geomorphology, Iran, landform

Procedia PDF Downloads 74
12904 Photocatalytic Properties of Pt/Er-KTaO3

Authors: Anna Krukowska, Tomasz Klimczuk, Adriana Zaleska-Medynska

Abstract:

Photoactive materials have attracted attention due to their potential application in the degradation of environmental pollutants to non-hazardous compounds in an eco-friendly route. Among semiconductor photocatalysts, tantalates such as potassium tantalate (KTaO3) are excellent functional photomaterials. However, tantalate-based materials are less active under visible-light irradiation; the photoactivity could be improved by modifying the opto-electronic properties of KTaO3 through doping with a rare earth metal (Er) and further photodeposition of noble metal nanoparticles (Pt). Inclusion of a rare earth element in the orthorhombic structure of the tantalate can generate one high-energy photon by absorbing two or more incident low-energy photons, which converts visible and infrared light into the ultraviolet light required by KTaO3 photocatalysts. On the other hand, noble metal nanoparticles deposited on the surface of the semiconductor strongly absorb visible light due to their surface plasmon resonance, in which their conducting electrons undergo a collective oscillation induced by the electric field of visible light. Furthermore, the high dispersion of the Pt nanoparticles obtained by the photodeposition process is an additional important factor for improving the photocatalytic activity. The present work aims to study the photocatalytic behaviour of the prepared Er-doped KTaO3 and the effect of further incorporation of Pt nanoparticles by photodeposition. Moreover, the research also examines correlations between the photocatalytic activity and the physico-chemical properties of the obtained Pt/Er-KTaO3 samples. The Er-doped KTaO3 microcomposites were synthesized by a hydrothermal method. A photodeposition method was then used for Pt loading over Er-KTaO3. The structural and optical properties of the Pt/Er-KTaO3 photocatalysts were characterized using scanning electron microscopy (SEM), X-ray diffraction (XRD), the volumetric adsorption method (BET), UV-Vis absorption measurement, Raman spectroscopy and luminescence spectroscopy. The photocatalytic properties of the Pt/Er-KTaO3 microcomposites were investigated by degradation of phenol in the aqueous phase as a model pollutant under visible and ultraviolet-light irradiation. Results of this work show that all the prepared photocatalysts exhibit a low BET surface area, although doping of the bare KTaO3 with the rare earth element (Er) produces a slight increase in this value. The crystalline structure of the Pt/Er-KTaO3 powders exhibited nearly identical positions for the main peak at about 22.8°, and the XRD pattern could be assigned to an orthorhombically distorted perovskite structure. The Raman spectra of the obtained semiconductors confirmed the perovskite-like structure. The optical absorption spectra of the Pt nanoparticles exhibited plasmon absorption bands with main peaks at about 216 and 264 nm. The addition of Pt nanoparticles increased the photoactivity compared to Er-KTaO3 and pure KTaO3. In summary, the optical properties of KTaO3 change with Er doping and further photodeposition of Pt nanoparticles.

Keywords: heterogeneous photocatalytic, KTaO3 photocatalysts, Er3+ ion doping, Pt photodeposition

Procedia PDF Downloads 351
12903 The Contribution of Sanitation Practices to Marine Pollution and the Prevalence of Water-Borne Diseases in Prampram Coastal Area, Greater Accra-Ghana

Authors: Precious Roselyn Obuobi

Abstract:

Background: In Ghana, water-borne diseases remain a public health concern due to their impact. While marine pollution has been linked to outbreaks of disease, especially in communities along the coast, associated risks such as oil spillage, marine debris, erosion, and improper waste disposal and management practices persist. Objective: The study seeks to investigate sanitation practices that contribute to marine pollution in Prampram and the prevalence of selected water-borne diseases (diarrhea and typhoid fever). Method: This study used a descriptive cross-sectional design, employing a mixed-methods (qualitative and quantitative) approach. Twenty-two (22) participants were selected and a semi-structured questionnaire was administered to them. Additionally, interviews were conducted to collect more information. Further, an observation checklist was used to aid the data collection process. Secondary data comprising information on water-borne diseases in the district were acquired from the district health directorate to determine the prevalence of selected water-borne diseases in the community. Data Analysis: The qualitative data were analyzed using NVIVO® software by adapting the six-step thematic analysis of Braun and Clarke, while STATA® version 16 was used to analyze the secondary data collected from the district health directorate. Descriptive statistics (mean, standard deviation, frequencies and proportions) were used to summarize the results. Results: The results showed that open defecation and indiscriminate waste disposal were the main practices contributing to marine pollution in Prampram and its effects on public health. Conclusion: These findings have implications for public health and the environment; thus, efforts need to be stepped up in educating the community on best sanitation practices.

Keywords: environment, sanitation, marine pollution, water-borne diseases

Procedia PDF Downloads 56
12902 Micromechanical Modelling of Ductile Damage with a Cohesive-Volumetric Approach

Authors: Noe Brice Nkoumbou Kaptchouang, Pierre-Guy Vincent, Yann Monerie

Abstract:

The present work addresses the modelling and simulation of crack initiation and propagation in ductile materials which fail by void nucleation, growth, and coalescence. One of the current research frameworks on crack propagation is the use of a cohesive-volumetric approach, where the crack growth is modelled as a decohesion of two surfaces in a continuum material. In this framework, the material behavior is characterized by two constitutive relations: the volumetric constitutive law relating stress and strain, and a traction-separation law across a two-dimensional surface embedded in the three-dimensional continuum. Several cohesive models have been proposed for the simulation of crack growth in brittle materials. On the other hand, the application of cohesive models to crack growth in ductile materials is still a relatively open field. One idea developed in the literature is to identify the traction-separation law for ductile materials based on the behavior of a continuously-deforming unit cell failing by void growth and coalescence. Following this method, the present study proposes a semi-analytical cohesive model for ductile materials based on a micromechanical approach. The strain localization band prior to ductile failure is modelled as a cohesive band, and the Gurson-Tvergaard-Needleman (GTN) plasticity model is used to describe the behavior of the cohesive band and to derive a corresponding traction-separation law. The numerical implementation of the model is realized using the non-smooth contact dynamics (NSCD) method, where cohesive models are introduced as mixed boundary conditions between each volumetric finite element. The present approach is applied to the simulation of crack growth in a nuclear ferritic steel. The model provides an alternative way to simulate crack propagation, combining the numerical efficiency of cohesive models with a traction-separation law directly derived from a porous continuum model.

Keywords: ductile failure, cohesive model, GTN model, numerical simulation

Procedia PDF Downloads 133
12901 Application of a Lighting Design Method Using Mean Room Surface Exitance

Authors: Antonello Durante, James Duff, Kevin Kelly

Abstract:

The visual needs of people in modern work-based buildings are changing. Self-illuminated screens of computers, televisions, tablets and smart phones have changed the relationship between people and the lit environment. In the past, lighting design practice was primarily based on providing uniform horizontal illuminance on the working plane, but this has failed to ensure good quality lit environments. Lighting standards of today continue to be set based upon a 100-year-old approach that, at its core, considers the task illuminance of the utmost importance, with this task typically being located on a horizontal plane. An alternative method focused on appearance has been proposed, as opposed to the traditional performance-based approach. Mean Room Surface Exitance (MRSE) and Target-Ambient Illuminance Ratio (TAIR) are two new metrics proposed to assess illumination adequacy in interiors. The hypothesis is that these factors will be superior to the existing metrics used, which are horizontal illuminance led. For the past six years, research within the Dublin Institute of Technology has examined this, with a view to determining the suitability of this approach for application to general lighting practice. Since the start of this research, a number of key findings have been produced that centre on how occupants react to various levels of MRSE. This paper provides a broad update on how this research has progressed. More specifically, this paper will: i) Demonstrate how MRSE can be measured using HDR imaging technology, ii) Illustrate how MRSE can be calculated using scripting and an open source lighting computation engine, iii) Describe experimental results that demonstrate how occupants have reacted to various levels of MRSE within experimental office environments.
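
As a rough illustration of point ii), MRSE can be computed from per-surface quantities. The sketch below uses the published formulation MRSE = FRF / (A(1 - r)), where FRF is the first reflected flux, A the total room surface area and r the area-weighted mean reflectance; treat the formula and the numerical surface values as assumptions, not the authors' script or data.

```python
def mrse(direct_illuminances, reflectances, areas):
    """Mean Room Surface Exitance (lm/m^2) from per-surface direct
    illuminance (lux), reflectance and area (m^2), assuming
    MRSE = FRF / (A_total * (1 - mean reflectance))."""
    frf = sum(e * rho * a for e, rho, a in
              zip(direct_illuminances, reflectances, areas))   # first reflected flux
    a_total = sum(areas)
    r_mean = sum(rho * a for rho, a in zip(reflectances, areas)) / a_total
    return frf / (a_total * (1.0 - r_mean))

# Small room: floor, ceiling, four walls (illustrative values)
E   = [200, 50, 80, 80, 80, 80]
rho = [0.2, 0.8, 0.5, 0.5, 0.5, 0.5]
A   = [12, 12, 7.5, 7.5, 10, 10]
print(round(mrse(E, rho, A), 1), "lm/m^2")
```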

Keywords: illumination hierarchy (IH), mean room surface exitance (MRSE), perceived adequacy of illumination (PAI), target-ambient illumination ratio (TAIR)

Procedia PDF Downloads 169
12900 Optimum Dimensions of Hydraulic Structures Foundation and Protections Using Coupled Genetic Algorithm with Artificial Neural Network Model

Authors: Dheyaa W. Abbood, Rafa H. AL-Suhaili, May S. Saleh

Abstract:

A model using artificial neural networks and the genetic algorithm technique is developed for obtaining the optimum dimensions of the foundation length and protections of small hydraulic structures. The procedure involves optimizing an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of impervious layer and degree of anisotropy. The optimization was carried out subject to constraints that ensure a safe structure against the uplift pressure force and a sufficient protection length at the downstream side of the structure to overcome an excessive exit gradient. The Geo-Studio software was used to analyze 1200 different cases. For each case the length of protection and volume of structure required to satisfy the safety factors mentioned previously were estimated. An ANN model was developed and verified using these cases' input-output sets as its database. A MATLAB code was written to perform a genetic algorithm optimization coupled with this ANN model using a formulated optimization model. A sensitivity analysis was done for selecting the cross-over probability, the mutation probability and level, the population size, the position of the crossover and the weights distribution for all the terms of the objective function. Results indicate that the factor that most affects the optimum solution is the population size required. The minimum value of this parameter that gives a stable global optimum solution is 30,000, while the other variables have little effect on the optimum solution.
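
A compact sketch of this kind of coupling: a genetic algorithm searches the design space while a surrogate function (standing in for the trained ANN) supplies the objective value for each candidate. The variable count, operators and parameter values below are illustrative assumptions, not the authors' MATLAB implementation.

```python
import random

def surrogate_objective(x):
    # Stand-in for the trained ANN mapping normalized design variables
    # (cutoff lengths, inclinations, foundation and protection lengths)
    # to the weighted objective; a simple quadratic placeholder here.
    return sum((xi - 0.3 * (i + 1)) ** 2 for i, xi in enumerate(x))

def genetic_minimize(f, n_vars=5, pop=60, gens=100, cx=0.8, mut=0.05):
    P = [[random.random() for _ in range(n_vars)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        elite = P[:pop // 2]                          # truncation selection
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_vars) if random.random() < cx else 0
            child = a[:cut] + b[cut:]                 # one-point crossover
            child = [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
                     if random.random() < mut else g for g in child]  # mutation
            children.append(child)
        P = elite + children
    return min(P, key=f)

best = genetic_minimize(surrogate_objective)
print([round(v, 2) for v in best])
```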

Keywords: inclined cutoff, optimization, genetic algorithm, artificial neural networks, geo-studio, uplift pressure, exit gradient, factor of safety

Procedia PDF Downloads 311
12899 The Impact of Urbanisation on Sediment Concentration of Ginzo River in Katsina City, Katsina State, Nigeria

Authors: Ahmed A. Lugard, Mohammed A. Aliyu

Abstract:

This paper studied the influence of urban development and its accompanying land surface transformation on the sediment concentration of the naturally flowing Ginzo River across the city of Katsina. A twin river known as the Tille River, which is less urbanized, was used for comparison with the sediment concentration of the Ginzo River in order to ascertain the consequences of the urban area on sediment concentration. A USP-61 point-integrating cableway sampler, described by Gregory and Walling (1973), was used to collect the suspended sediment samples in the wet season months of June, July, August and September. The results obtained show that only the sample collected at the peripheral site of the city, which is mostly farmland, resembles the results at the four sites of the Tille River, the reference stream in the study; the values differ by only about ±10%. At the other three sites of the Ginzo, which are highly urbanized, the disparity ranges from 35-45% less than what is obtained at the four sites of the Tille River. In the generalized assessment, the t-test applied to the two sets of data shows that there is a significant difference between the sediment concentration of the urbanized Ginzo River and that of the less urbanized Tille River. The study further found that the lower sediment concentration in the urbanized Ginzo River is attributable to the concretization of surfaces, tarred roads, concrete channelling of segments of the river including the river bed, and reserved open grassland areas, all within the catchment. The study therefore concludes that urbanization affects not only the hydrology of an urbanized river basin, but also the sediment concentration, which is a significant aspect of its geomorphology. This will certainly affect the floodplain of the basin at some point, which might be suitable land for cultivation. It is recommended that further studies on the impact of urbanization on river basins should focus on all elements of geomorphology as they have on hydrology. This would make the work more complete, as the two disciplines are inseparable from each other. The authorities concerned should also implement more appropriate environmental and land use management policies to arrest the menace of land degradation and related episodic events.
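
The significance test mentioned above can be reproduced schematically with a two-sample t-test; the concentration values below are invented for illustration and are not the study's data.

```python
from scipy import stats

# Hypothetical suspended-sediment concentrations (mg/L) at the urbanized
# (Ginzo) and less urbanized (Tille) sites -- illustrative numbers only.
ginzo = [310, 295, 330, 305, 290, 315, 300, 320]
tille = [520, 505, 540, 560, 530, 515, 550, 545]

t, p = stats.ttest_ind(ginzo, tille, equal_var=False)   # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")   # p < 0.05 -> significant difference
```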

Keywords: environment, infiltration, river, urbanization

Procedia PDF Downloads 302
12898 Investigating the Relationship Between Alexithymia and Mobile Phone Addiction Along with the Mediating Role of Anxiety, Stress and Depression: A Path Analysis Study and Structural Model Testing

Authors: Pouriya Darabiyan, Hadis Nazari, Kourosh Zarea, Saeed Ghanbari, Zeinab Raiesifar, Morteza Khafaie, Hanna Tuvesson

Abstract:

Introduction: Since research on mobile phone addiction began, alexithymia, depression, anxiety and stress have been identified as risk factors for Internet addiction, so this study was conducted with the aim of investigating the relationship between alexithymia and mobile phone addiction along with the mediating role of anxiety, stress and depression. Materials and methods: In this descriptive-analytical and cross-sectional study in 2022, 412 students of the School of Nursing & Midwifery of Ahvaz Jundishapur University of Medical Sciences were included using a convenience sampling method. The data collection tools were the Demographic Information Questionnaire, the Toronto Alexithymia Scale (TAS-20), the Depression, Anxiety, Stress Scale (DASS-21) and the Mobile Phone Addiction Index (MPAI). Frequencies, the Pearson correlation coefficient test and linear regression were used to describe and analyze the data. Also, structural equation models and the path analysis method were used to investigate the direct and indirect effects, as well as the total effect, of each dimension of alexithymia on mobile phone addiction with the mediating role of stress, depression and anxiety. Statistical analysis was done with SPSS version 22 and Amos version 16 software. Results: Alexithymia was a predictive factor for mobile phone addiction. Also, alexithymia had a positive and significant effect on depression, anxiety and stress. Depression, anxiety and stress had a positive and significant effect on mobile phone addiction. The depression, anxiety and stress variables played the role of a relative mediating variable between alexithymia and mobile phone addiction. Alexithymia, through depression, anxiety and stress, also has an indirect effect on Internet addiction. Conclusion: Alexithymia is a predictive factor for mobile phone addiction, and the variables of depression, anxiety and stress play the role of a relative mediating variable between alexithymia and mobile phone addiction.
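
A bare-bones sketch of the mediation logic behind such a path analysis: the indirect effect of alexithymia on addiction through a mediator is the product of the X-to-mediator coefficient and the mediator-to-Y coefficient (controlling for X). The simulated data and the single mediator are assumptions for illustration only; the study itself used SPSS/Amos with three mediators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 412
alexithymia = rng.normal(size=n)
anxiety = 0.5 * alexithymia + rng.normal(size=n)                     # a path
addiction = 0.3 * anxiety + 0.2 * alexithymia + rng.normal(size=n)   # b and c' paths

def ols(y, X):
    # Ordinary least squares with an intercept column
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(anxiety, [alexithymia])[1]               # X -> mediator
coefs = ols(addiction, [anxiety, alexithymia])   # mediator and X -> Y
b, c_prime = coefs[1], coefs[2]
print(f"indirect effect a*b = {a*b:.3f}, direct effect c' = {c_prime:.3f}")
```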

Keywords: alexithymia, mobile phone, depression, anxiety, stress

Procedia PDF Downloads 78
12897 Shear Strength and Consolidation Behavior of Clayey Soil with Vertical and Radial Drainage

Authors: R. Pillai Aparna, S. R. Gandhi

Abstract:

Soft clay deposits having low strength and high compressibility are found all over the world. Preloading with vertical drains is a widely used method for improving such soils. The coefficient of consolidation, irrespective of the drainage type, plays an important role in the design of vertical drains, and it controls accurate prediction of the rate of consolidation of soil. Also, the increase in shear strength of soil with consolidation is another important factor considered in preloading or staged construction. To the best of our knowledge, no clear guidelines are available to estimate the increase in shear strength for a particular degree of consolidation (U) at various stages during construction. Various methods are available for finding the consolidation coefficient. This study mainly focuses on the variation of the consolidation coefficient, determined using different methods, and of shear strength with pressure intensity. The variation of shear strength with the degree of consolidation was also studied. The consolidation tests were done using two types of highly compressible clays with vertical, radial and, in a few cases, combined drainage. The tests were carried out at different pressure intensities, and for each pressure intensity, once the target degree of consolidation was achieved, a vane shear test was done at different locations in the sample in order to determine the shear strength. The shear strength of clayey soils under the application of vertical stress with vertical and radial drainage, with target U values of 70% and 90%, was studied. It was found that there is not much variation in the cv or cr value beyond 80 kPa pressure intensity. Correlations were developed between the shear strength ratio and the consolidation pressure based on laboratory testing under controlled conditions. It was observed that the shear strength of the sample with a target U value of 90% is about 1.4 to 2 times that of the 70% consolidated sample. Settlement analysis was done using Asaoka’s and the hyperbolic methods. The variation of strength with respect to the depth of the sample was also studied using a large-scale consolidation test. It was found, based on the present study, that the gain in strength is greater in the top half of the clay layer, and also that the shear strength of the sample with radial drainage is slightly higher than that with vertical drainage.
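
Asaoka's settlement analysis mentioned above reduces to a one-line regression: settlements recorded at equal time intervals are regressed on the previous reading, and the fitted line's intersection with the 45-degree line gives the predicted final settlement. The readings below are hypothetical, not the study's measurements.

```python
import numpy as np

def asaoka_ultimate_settlement(settlements):
    """Asaoka's method: fit s_i = b0 + b1 * s_(i-1) to settlements taken at
    equal time intervals; the ultimate settlement is b0 / (1 - b1)."""
    s_prev = np.asarray(settlements[:-1], dtype=float)
    s_curr = np.asarray(settlements[1:], dtype=float)
    b1, b0 = np.polyfit(s_prev, s_curr, 1)
    return b0 / (1.0 - b1)

# Hypothetical settlement readings (mm) at equal time steps
s = [12, 21, 28, 33.5, 37.8, 41.1, 43.7]
print(round(asaoka_ultimate_settlement(s), 1), "mm (predicted final settlement)")
```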

Keywords: consolidation coefficient, degree of consolidation, PVDs, shear strength

Procedia PDF Downloads 217
12896 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures

Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani

Abstract:

Our study is about elaborating a potential solution for a search engine that involves semantic technology to retrieve information and display it meaningfully. Semantic search engines are not used widely over the web, as the majority are still in the beta stage or under construction. Many problems face current applications in semantic search; the major problem is to analyze and calculate the meaning of a query in order to retrieve relevant information. Another problem is the ontology-based index and its updates. Ranking results according to concept meaning and its relation to the query is another challenge. In this paper, we offer a light meta-engine (QESM) which uses Google search, and therefore Google’s index, with some adaptations to its returned results by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. At the beginning, the engine finds synonyms of each query term entered by the user based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences. These are generated randomly by combining the found synonyms and the original query terms. Our model suggests the use of semantic similarity measures between two sentences. Practically, we used this method to calculate the semantic similarity between each query and the description of each page’s content generated by Google. The generated sentences are sent to the Google engine one by one, and all the returned results are then re-ranked together with the adapted ranking method (QESM). Finally, our system places Google pages with higher similarities at the top of the results. We conducted experiments with 6 different queries. We observed that most results ranked with QESM were reordered with respect to Google’s originally generated pages. With the tested queries, QESM frequently achieves better accuracy than Google. In the worst cases, it behaves like Google.
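
A schematic sketch of the expansion-and-re-ranking idea (not the actual QESM code): synonyms from a toy lexicon generate analogous queries, and returned page descriptions are re-ranked by their best similarity to any expanded query. A simple Jaccard word overlap stands in here for the semantic sentence-similarity measure; the lexicon, queries and pages are made up.

```python
from itertools import product

SYNONYMS = {"cheap": ["inexpensive", "affordable"], "laptop": ["notebook"]}  # toy lexicon

def expand(query):
    """Generate semantically analogous queries by swapping in synonyms."""
    options = [[w] + SYNONYMS.get(w, []) for w in query.split()]
    return [" ".join(combo) for combo in product(*options)]

def jaccard(a, b):
    A, B = set(a.split()), set(b.split())
    return len(A & B) / len(A | B)

def rerank(query, pages):
    """pages: list of (url, description) as if returned by the search engine.
    Score each description against every expanded query and keep the best score."""
    expanded = expand(query)
    scored = [(max(jaccard(q, desc) for q in expanded), url) for url, desc in pages]
    return [url for _, url in sorted(scored, reverse=True)]

pages = [("p1", "affordable notebook deals"), ("p2", "cheap flights to rome")]
print(rerank("cheap laptop", pages))
```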

Keywords: semantic search engine, Google indexing, query expansion, similarity measures

Procedia PDF Downloads 413
12891 Hypoglycemic Activity Studies on Root Extracts of Sansevieria liberica in Streptozotocin-Induced Diabetic Rats

Authors: Omowunmi Amao

Abstract:

Sansevieria liberica belongs to the family Agavaceae (Ruscaceae or Dracaenaceae). It is widely distributed throughout the tropics. Literature review suggests that in Nigeria, the leaves and roots of Sansevieria liberica are used in traditional medicine for the treatment of asthma, abdominal pains, colic, diarrhea, eczema, gonorrhea, hemorrhoids, hypertension, menorrhagia, piles, sexual weakness, snake bites, and wounds of the foot. In this context, the standardized methanolic extract of the roots of Sansevieria liberica was evaluated for hypoglycemic activity. Material and Methods: Inbred adult male Sprague-Dawley albino rats were used in the experiment. A suspension of the standardized methanolic extract (ME) of Sansevieria liberica was tested for hypoglycemic activity using the oral glucose tolerance test (OGTT) method. The suspension of the standardized methanolic extract (ME) of Sansevieria liberica was also tested for hypoglycemic activity in streptozotocin-induced diabetic rats. Results: The methanolic extract (ME) of Sansevieria liberica root (100 mg/kg, 200 mg/kg, and 400 mg/kg) showed potential hypoglycemic activity in diabetic rats, and further in the OGTT method. Furthermore, the methanolic extract of Sansevieria liberica root showed a significant (P<0.05) increase in final body weight, total hemoglobin, insulin, albumin and high-density lipoprotein levels, and a decrease in fluid intake, glycosylated hemoglobin, urea, creatinine, total cholesterol, triglyceride and low-density lipoprotein levels. Additionally, it improved oxidative stress in terms of reducing lipid peroxidase and superoxide dismutase, and elevating catalase activity. Conclusions: These findings suggest that the methanolic extract of Sansevieria liberica root has potential hypoglycemic activity and would be a promising candidate for the treatment of diabetes.

Keywords: diabetes, Sansevieria liberica, hypoglycemic activity, diabetes and metabolism

Procedia PDF Downloads 352
12894 Sustainable Technologies for Decommissioning of Nuclear Facilities

Authors: Ahmed Stifi, Sascha Gentes

Abstract:

The German nuclear industry, while implementing German policy, believes that the journey towards the green field, namely the phasing out of nuclear energy, should be achieved through green techniques. The most important techniques required for the wide range of decommissioning activities are decontamination techniques, cutting techniques, radioactivity measuring techniques, remote control techniques, techniques for worker and environmental protection, and techniques for treating, preconditioning and conditioning nuclear waste. Many decontamination techniques are used for removing contamination from metal, concrete or other surfaces, such as the scales inside pipes. As the pipeline system is one of the important components of nuclear power plants, the process of decontamination of tubing is of particular significance. The development of energy sectors such as the oil, gas and nuclear sectors since the middle of the 20th century has expanded the pipeline industry, and research on the decontamination of tubing in each sector serves the others. The extraction of natural products and materials through pipelines can result in scale formation. These scales can become radioactively contaminated through an accumulation process, especially in the petrochemical industry when oil and gas are extracted from the underground reservoir. The radioactivity measured in these scales can be significantly high and pose a great threat to people and the environment. At present, the decontamination process involves using high-pressure water jets with or without abrasive material, and this technology produces a large amount of secondary waste. In order to overcome this, the research team at the Karlsruhe Institute of Technology developed a new sustainable method to carry out the decontamination of tubing without producing any secondary waste. This method is based on a vibration technique which removes scales and does not require any auxiliary materials. The outcome of the research project shows that the vibration technique used for the decontamination of tubing is environmentally friendly, in other words a sustainable technique.

Keywords: sustainable technologies, decontamination, pipeline, nuclear industry

Procedia PDF Downloads 292
12893 Degree of Approximation of Functions by Product Means

Authors: Hare Krishna Nigam

Abstract:

In this paper, for the first time, the (E,q)(C,2) product summability method is introduced, and two new results on the degree of approximation of a function f belonging to the Lip(alpha, r) class and the W(L(r), xi(t)) class by the (E,q)(C,2) product means of its Fourier series are obtained.
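
For readability, the standard definitions of the means involved are recalled below as a sketch in conventional notation (the paper's own notation may differ), with s_k denoting the partial sums of the Fourier series of f.

```latex
% (C,2) Cesàro mean of the partial sums s_k:
\sigma_n^{(2)} = \frac{2}{(n+1)(n+2)} \sum_{k=0}^{n} (n-k+1)\, s_k ,
% (E,q) Euler mean:
E_n^{q} = \frac{1}{(1+q)^{n}} \sum_{k=0}^{n} \binom{n}{k} q^{\,n-k}\, s_k ,
% (E,q)(C,2) product mean: the (E,q) transform applied to the (C,2) means:
(EC)_n^{q} = \frac{1}{(1+q)^{n}} \sum_{k=0}^{n} \binom{n}{k} q^{\,n-k}\, \sigma_k^{(2)} ,
% and the degree of approximation measures how fast (EC)_n^{q} converges to f.
```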

Keywords: Degree of approximation, (E, q)(C, 2) means, Fourier series, Lebesgue integral, Lip (alpha, r)class, W(L(r), xi(t))class of functions

Procedia PDF Downloads 499
12892 In vitro α-Amylase and α-Glucosidase Inhibitory Activities of Bitter Melon (Momordica charantia) with Different Stage of Maturity

Authors: P. S. Percin, O. Inanli, S. Karakaya

Abstract:

Bitter melon (Momordica charantia) is a medicinal vegetable which is traditionally used as a remedy for diabetes. Bitter melon contains several classes of primary and secondary metabolites. In traditional Turkish medicine, bitter melon is used for wound healing and the treatment of peptic ulcers. Nowadays, bitter melon is used for the treatment of diabetes and ulcerative colitis in many countries. The main constituents of bitter melon which are responsible for the anti-diabetic effects are triterpene, protein, steroid, alkaloid and phenolic compounds. In this study, the total phenolics, total carotenoids and β-carotene contents of mature and immature bitter melons were determined. In addition, the in vitro α-amylase and α-glucosidase inhibitory activities of mature and immature bitter melons were studied. The total phenolic contents of immature and mature bitter melon were 74 and 123 mg CE/g bitter melon, respectively. Although the total phenolic content of mature bitter melon was higher than that of immature bitter melon, this difference was not statistically significant (p > 0.05). Carotenoids, a diverse group of more than 600 naturally occurring red, orange and yellow pigments, play important roles in many physiological processes in both plants and humans. The total carotenoid content of mature bitter melon was 4.36-fold higher than that of immature bitter melon. The compounds responsible for the hypoglycaemic effect of bitter melon are steroidal saponins known as charantin, insulin-like peptides and alkaloids. α-Amylase is one of the main enzymes in humans responsible for the breakdown of starch into simpler sugars. Therefore, inhibitors of this enzyme can delay carbohydrate digestion and reduce the rate of glucose absorption. The immature bitter melon extract showed α-amylase and α-glucosidase inhibitory activities in vitro. The α-amylase inhibitory activity was higher than the α-glucosidase inhibitory activity when IC50 values were compared. In conclusion, the present results provide evidence that an aqueous extract of bitter melon may have an inhibitory effect on carbohydrate-breakdown enzymes.

Keywords: bitter melon, in vitro antidiabetic activity, total carotenoids, total phenols

Procedia PDF Downloads 230
12891 Comparison between the Roller-Foam and Neuromuscular Facilitation Stretching on Flexibility of Hamstrings Muscles

Authors: Paolo Ragazzi, Olivier Peillon, Paul Fauris, Mathias Simon, Raul Navarro, Juan Carlos Martin, Oriol Casasayas, Laura Pacheco, Albert Perez-Bellmunt

Abstract:

Introduction: Stretching techniques are frequently and widely used in the sports world for their many effects. One of the main benefits is the gain in flexibility and range of motion and the facilitation of sporting performance. Recently, the use of the Roller-Foam (RF) has spread in sports practice at both elite and recreational level, because its benefits are said to be similar to those observed with stretching. The objective of the following study is to compare the results of the Roller-Foam with proprioceptive neuromuscular facilitation (PNF) stretching (one of the stretching techniques with the most evidence) on the hamstring muscles. Study design: The design of the study is a single-blind, randomized controlled trial, and the participants are 40 healthy volunteers. Intervention: The subjects were distributed randomly into one of the following groups; stretching (PNF) intervention group: 4 repetitions of PNF stretching (5 seconds of contraction, 5 seconds of relaxation, 20 seconds of stretch); Roller-Foam intervention group: 2 minutes of Roller-Foam applied to the hamstring muscles. Main outcome measures: Hamstring muscle flexibility was assessed at the beginning, during (30'' of intervention) and at the end of the session by using the Modified Sit and Reach test (MSR). Results: The baseline data in both groups are comparable to each other. The PNF group obtained an increase in flexibility of 3.1 cm at 30 seconds (first series) and of 5.1 cm at 2 minutes (the last of all series). The RF group obtained a 0.6 cm difference at 30 seconds and 2.4 cm after 2 minutes of application of the Roller-Foam. The results were statistically significant when compared within groups but not between groups. Conclusions: Despite the fact that the use of the Roller-Foam is spreading in the sports and rehabilitation fields, the results of the present study suggest that the gain of flexibility in the hamstrings is greater if PNF-type stretches are used instead of RF. These results may be due to the fact that the Roller-Foam acts more on the fascial tissue, while the stretches act more on the myotendinous unit. Future studies are needed, increasing the sample size and diversifying the types of stretching.

Keywords: hamstring muscle, stretching, neuromuscular facilitation stretching, roller foam

Procedia PDF Downloads 177
12890 Procedure to Optimize the Performance of Chemical Laser Using the Genetic Algorithm Optimizations

Authors: Mohammedi Ferhate

Abstract:

This work presents details of the study of the entire flow inside the facility, in which the exothermic chemical reaction process in the chemical laser cavity is analyzed. In this paper we describe the principles of chemical lasers, in which the population inversion is produced by chemical reactions. We explain the device for converting chemical potential energy into laser energy. We show that the phenomenon has an explosive trend. Finally, the feasibility and effectiveness of the proposed method are demonstrated by computer simulation.

Keywords: genetic, lasers, nozzle, programming

Procedia PDF Downloads 79
12889 Ethanol in Carbon Monoxide Intoxication: Focus on Delayed Neuropsychological Sequelae

Authors: Hyuk-Hoon Kim, Young Gi Min

Abstract:

Background: In carbon monoxide (CO) intoxication, the pathophysiology of delayed neurological sequelae (DNS) is very complex and remains poorly understood. Predicting whether patients whose acute symptoms have resolved have escaped DNS or will go on to experience it represents a very important clinical issue. Brain magnetic resonance (MR) imaging has been used to assess the severity of brain damage as an objective method of predicting prognosis. Co-ingestion of a second poison occurs in almost one-half of patients with intentional CO poisoning, and among patients with co-ingestions, 66% ingested ethanol. We assessed the effect of ethanol on the prevalence of neurologic sequelae in acute CO intoxication, defined by abnormal lesions on brain MR. Method: This study was conducted retrospectively by collecting data for patients who visited an emergency medical center over a period of 5 years. The enrollment criteria were a diagnosis of acute CO poisoning, measurement of the serum ethanol level, and a brain MR examination during the admission period. Official readings by a radiologist were used to decide whether an abnormal lesion was present. The enrolled patients were divided into two groups: patients with and without abnormal lesions on brain MR. A standardized extraction using medical records was performed; the Mann-Whitney U test and logistic regression analysis were performed. Result: A total of 112 patients were enrolled, and 68 patients presented abnormal brain lesions on MR. The abnormal brain lesion group had a lower serum ethanol level (mean, 20.14 vs 46.71 mg/dL) (p-value < 0.001). In addition, univariate logistic regression analysis showed that the serum ethanol level (OR, 0.99; 95% CI, 0.98-1.00) was independently associated with the development of abnormal lesions on brain MR. Conclusion: Ethanol could have a neuroprotective effect in acute CO intoxication through a sedative effect in stressful situations and a mitigating effect on the neuro-inflammatory reaction.

Keywords: carbon monoxide, delayed neuropsychological sequelae, ethanol, intoxication, magnetic resonance

Procedia PDF Downloads 243
12888 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System

Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii

Abstract:

Today, dairy farm experts and farmers have well recognized the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production, to manage the feeding system, as an indicator of abnormality in health, and even to manage for healthy calving times and processes. Traditionally, BCS measures are done by animal experts or trained technicians based on visual observations focusing on pin bones, the pin, thurl and hook area, tail head shapes, hook angles, and the short and long ribs. Since the traditional technique is very manual and subjective, it can lead to different scores and is not cost effective. Thus this paper proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points in the cow image region are automatically extracted by using image processing techniques. To be specific, there are 23 anatomical points in the regions of the ribs, hook bones, pin bone, thurl and tail head. These points are extracted by using block-region-based vertical and horizontal histogram methods. According to animal experts, the body condition scores depend mainly on the shape structure of these regions. Therefore the second module investigates some algebraic and geometric properties of the extracted anatomical points. Specifically, a second-order polynomial regression is applied to a subset of anatomical points to produce the regression coefficients, which are utilized as a part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained by using a Markov classification process to assign a BCS to individual cows. The assigned BCS are then revised by using a multiple regression method to produce the final BCS for the dairy cows. In order to confirm the validity of the proposed method, a monitoring video camera was set up at the rotary milking parlor to take top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vectors for each individual cow. Then the multiple regression calculator and Markov chain classification process are utilized to produce the estimated body condition score for each cow. The experimental results, tested on 100 dairy cows from a self-collected dataset and a public benchmark dataset, are very promising, with an accuracy of 98%.
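
A small sketch of the second module's feature construction: a second-order polynomial is fitted through extracted anatomical points, and its coefficients, together with an angle at a central point, form part of the feature vector. The point coordinates and the specific angle choice below are illustrative assumptions, not the authors' data.

```python
import numpy as np

def shape_features(points):
    """Fit y = a*x^2 + b*x + c through anatomical points (x: position along
    the region, y: height in the image) and return the coefficients plus the
    angle at the central point as a small feature vector."""
    x, y = points[:, 0], points[:, 1]
    coeffs = np.polyfit(x, y, 2)                  # [a, b, c]
    mid = len(points) // 2
    v1 = points[mid - 1] - points[mid]
    v2 = points[mid + 1] - points[mid]
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.concatenate([coeffs, [angle]])

# Hypothetical hook-bone region points from a top-view image (pixels)
pts = np.array([[0, 10], [5, 6], [10, 5], [15, 7], [20, 11]], dtype=float)
print(np.round(shape_features(pts), 3))
```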

Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression

Procedia PDF Downloads 144
12887 Thermosonic Devulcanization of Waste Ground Rubber Tires by Quaternary Ammonium-Based Ternary Deep Eutectic Solvents and the Effect of α-Hydrogen

Authors: Ricky Saputra, Rashmi Walvekar, Mohammad Khalid

Abstract:

Landfills, water contamination, and toxic gas emission are a few of the environmental impacts caused by the increasing number of waste rubber tires (WRT). In spite of such a concerning issue, only minimal efforts are taken to reclaim or recycle these wastes, as their products are generally not profitable for companies. Unlike the typical reclamation process, devulcanization is a method to selectively cleave the sulfidic bonds within vulcanizates and avoid the polymeric scissions that compromise an elastomer's mechanical and tensile properties. The process also produces devulcanizates that are re-processable, similarly to virgin rubber. Often, a devulcanizing agent is needed. In the current study, novel and sustainable ammonium chloride-based ternary deep eutectic solvents (TDES), with different numbers of α-hydrogens, were utilised to devulcanize ground rubber tire (GRT) as an effort to implement green chemistry to tackle this issue. 40-mesh GRT was soaked for 1 day with the different TDESs, sonicated at 37-80 kHz for 60-120 mins and heated at 100-140°C for 30-90 mins. Devulcanizates were then filtered, dried, and evaluated for the degree of devulcanization by means of the Flory-Rehner calculation and the swelling index. The result shows that an increasing number of α-Hs increases the degree of devulcanization, and the value achieved was around eighty percent, thirty percent higher than the typical industrial autoclave method. The resulting bonding of the devulcanizates was also analysed by Fourier transform infrared spectrometer (FTIR), Horikx fitting, and a thermogravimetric analyser (TGA). The former two confirm that only sulfidic scissions were experienced by the GRT through the treatment, while the latter proves the absence or negligibility of carbon-chain scission.
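
The Flory-Rehner evaluation mentioned above can be sketched as follows: crosslink density is estimated from the rubber volume fraction in the swollen gel, and the degree of devulcanization is the relative drop in crosslink density after treatment. The interaction parameter, solvent molar volume and swelling values below are typical assumed numbers, not the paper's measurements.

```python
import math

def crosslink_density(v_r, chi=0.38, v_s=106.2e-6):
    """Flory-Rehner estimate of crosslink density (mol/m^3) from the rubber
    volume fraction v_r in the swollen gel; chi is the polymer-solvent
    interaction parameter and v_s the solvent molar volume (m^3/mol)."""
    num = -(math.log(1.0 - v_r) + v_r + chi * v_r ** 2)
    den = v_s * (v_r ** (1.0 / 3.0) - v_r / 2.0)
    return num / den

nu_before = crosslink_density(0.30)   # vulcanized GRT (hypothetical swelling data)
nu_after = crosslink_density(0.12)    # devulcanizate (hypothetical swelling data)
degree = (1.0 - nu_after / nu_before) * 100.0
print(f"degree of devulcanization ~ {degree:.0f}%")
```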

Keywords: ammonium, sustainable, deep eutectic solvent, α-hydrogen, waste rubber tire

Procedia PDF Downloads 109
12886 Measurement of CES Production Functions Considering Energy as an Input

Authors: Donglan Zha, Jiansong Si

Abstract:

Because of its flexibility, the CES production function attracts much interest in economic growth and programming models, and in macroeconomic or micro-macro models. This paper focuses on the development of, and estimation methods for, the CES production function considering energy as an input. We leave for future research the relaxation of the assumption of constant returns to scale, the introduction of potential input factors, and the generalization of the optimal nested form of multi-factor production functions.
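
As a point of reference, a three-factor CES production function with energy as an input can be written and evaluated as below; the functional form is standard, while the parameter values are purely illustrative.

```python
def ces_output(K, L, E, A=1.0, deltas=(0.4, 0.4, 0.2), rho=0.5):
    """Three-factor CES production function with energy as an input:
    Q = A * (d_K*K^-rho + d_L*L^-rho + d_E*E^-rho)^(-1/rho),
    with elasticity of substitution sigma = 1 / (1 + rho)."""
    d_k, d_l, d_e = deltas
    inner = d_k * K ** (-rho) + d_l * L ** (-rho) + d_e * E ** (-rho)
    return A * inner ** (-1.0 / rho)

print(round(ces_output(K=100.0, L=80.0, E=50.0), 2))
print("sigma =", 1.0 / (1.0 + 0.5))
```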

Keywords: bias of technical change, CES production function, elasticity of substitution, energy input

Procedia PDF Downloads 268
12885 Change in Self-Reported Personality in Students of Acting

Authors: Nemanja Kidzin, Danka Puric

Abstract:

Recently, the field of personality change has received an increasing amount of attention. Previously under-researched variables, such as the intention to change or taking on new social roles (in a working environment, education, family, etc.), have been shown to be relevant for personality change. Following this line of research, our study aimed to determine whether the process of acting can bring about personality changes in students of acting and, if so, in which way. We hypothesized that there would be a significant difference between the self-reported personality traits of students of acting at the beginning and at the end of preparing for a role. Additionally, as potential moderator variables, we measured the reported personality traits of the roles the students were acting, as well as empathy, disintegration, and years of formal education. The sample (N = 47) was composed of students of acting from the Faculty of Dramatic Arts (first- to fourth-year) and the Faculty of Modern Arts (first-year students only). Participants' mean age was 20.2 (SD = 1.47), and 64% were female. The procedure included two waves of testing (T1 at the beginning and T2 at the end of the semester), and the students' acting exercises and character immersion comprised the pseudo-experimental procedure. Students' personality traits (HEXACO-60, self-report version), empathy (Questionnaire of Cognitive and Affective Empathy, QCAE), and disintegration (DELTA9, 10-item version) were measured at both T1 and T2, while the personality of the role (HEXACO-60 observer version) was measured at T2. Responses to all instruments were given on a 5-point Likert scale. A series of repeated-measures t-tests showed significant differences in emotionality (t(46) = 2.56, p = 0.014) and conscientiousness (t(46) = -2.39, p = 0.021) between T1 and T2. Moreover, an index of absolute personality change was significantly different from 0 for all traits (range .34 to .53, t(46) = 4.20, p < .001 for the lowest index). The average test-retest correlation for HEXACO traits was 0.57, which is lower than values reported in other similar research. As for the moderator variables, neither the personality of the role nor empathy or disintegration explained the change in students' personality traits. The magnitude of personality change was highest in fourth-year students, with no significant differences between the remaining three years of studying. Overall, our results seem to indicate some personality changes in students of acting. However, these changes cannot be unequivocally related to the process of preparing for a role. Further and methodologically stricter research is needed to unravel the role of acting in personality change.

Keywords: theater, personality change, acting, HEXACO

Procedia PDF Downloads 160
12884 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models

Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach

Abstract:

In the paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on the statistical homogenization method – the method of conditional moments – combined with the recently introduced notion of the energy-equivalent inhomogeneity which, in this paper, is extended to include thermal effects. After exposition of the general principles, the approach is applied in the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and the interphase surrounding it by a single equivalent inhomogeneity of constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows one to analyze composites with interphases using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill’s energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows one to consider inhomogeneities of various shapes and various models of interphases. This is illustrated considering spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the radius of the nanoparticle (for a fixed volume fraction of nanoparticles) for the different interphase models is compared to and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.

Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model

Procedia PDF Downloads 173
12883 Passenger Preferences on Airline Check-In Methods: Traditional Counter Check-In Versus Common-Use Self-Service Kiosk

Authors: Cruz Queen Allysa Rose, Bautista Joymeeh Anne, Lantoria Kaye, Barretto Katya Louise

Abstract:

The study presents the preferences of passengers regarding the quality of service provided by the two airline check-in methods currently present in airports: traditional counter check-in and common-use self-service kiosks. A previous study has shown that airlines perceive self-service kiosks alone as sufficient to ensure adequate service and customer satisfaction, while, in contrast, agents and passengers stated that kiosks alone are not enough and that human interaction is essential. In reference to former studies that established opposing ideas about the more favorable airline check-in method to employ, it is the purpose of this study to present a recommendation that shall somehow fill in the gap between the conflicting ideas by comparing the perceived quality of service through the RATER model. Furthermore, this study discusses the major competencies present in each method, which are supported by two theories: the FIRO Theory of Needs, upholding the importance of inclusion, control and affection, and Queueing Theory, which points out the discipline of passengers and the length of the queue line as important factors affecting quality of service. The findings of the study were based on data gathered by the researchers from selected Thomasian third-year and fourth-year college students currently enrolled in the first semester of the academic year 2014-2015, who had already experienced both airline check-in methods, selected through stratified probability sampling. The statistical treatments applied in order to interpret the data were the mean, frequency, standard deviation, t-test, logistic regression and chi-square test. The study finally revealed a greater effect on passenger preference of the satisfaction experienced with common-use self-service kiosks in comparison with traditional counter check-in.

Keywords: traditional counter check-in, common-use self-service kiosks, airline check-in methods

Procedia PDF Downloads 397
12882 E-Learning Platform for School Kids

Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.

Abstract:

E-learning is a crucial component of intelligent education, and even in the midst of a pandemic it is becoming increasingly important in the educational system. Several e-learning programs are accessible to students; here, we decided to create an e-learning framework for children. We have identified a few issues that teachers face with their online classes. When there are numerous students in an online classroom, how does a teacher recognize each student's focus on academics and below-the-surface behaviors? Some children are not paying attention in class, and others are napping; the teacher is unable to keep track of every student. A key challenge in e-learning is online examinations, because students can cheat easily during online exams, so exam proctoring is needed. Here we propose an automated online exam cheating detection method using a web camera. The purpose of this project is to present an e-learning platform for math education that includes games for kids as an alternative teaching method for math students. The games will be accessible via a web browser, with imagery drawn in a cartoonish style, helping students learn math through play. Everything in this day and age is moving towards automation; however, automatic answer evaluation is currently available only for MCQ-based questions. As a result, checkers have a difficult time evaluating theory answers: the current approach requires more manpower, takes a long time, and can mark two identical responses differently with two different grades. This application therefore employs machine learning techniques to provide automatic evaluation of subjective responses based on keywords supplied to the computer, with the student's answer as input, resulting in a fair distribution of marks. In addition, it will save time and manpower. We used deep learning, machine learning, image processing and natural language processing technologies to develop these research components.
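
As a rough illustration of the keyword-based evaluation of subjective responses described above, the following Python sketch awards marks in proportion to the expected keywords found in an answer. The rubric, weights and sample answer are hypothetical, and a production system would add the NLP components (synonym handling, stemming, semantic matching) implied by the abstract.

```python
# Minimal sketch of keyword-based scoring of a subjective answer; rubric and answer are invented.
import re

def score_answer(answer, keywords, max_marks=10.0):
    """Award a share of max_marks for every expected keyword found in the answer."""
    tokens = set(re.findall(r"[a-z]+", answer.lower()))
    total_weight = sum(keywords.values())
    earned = sum(weight for word, weight in keywords.items() if word.lower() in tokens)
    return round(max_marks * earned / total_weight, 2)

# Hypothetical rubric for a short geometry answer, with per-keyword weights
expected_keywords = {"hypotenuse": 2.0, "square": 1.0, "sum": 1.0, "sides": 1.0}
student_answer = ("The square of the hypotenuse equals the sum of the "
                  "squares of the other two sides.")
print(score_answer(student_answer, expected_keywords))   # all keywords present -> 10.0
```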

Keywords: math, education games, e-learning platform, artificial intelligence

Procedia PDF Downloads 139
12881 Comparison of Susceptibility to Measles in Preterm Infants versus Term Infants

Authors: Joseph L. Mathew, Shourjendra N. Banerjee, R. K. Ratho, Sourabh Dutta, Vanita Suri

Abstract:

Background: In India and many other developing countries, a single dose of measles vaccine is administered to infants at 9 months of age. This is based on the assumption that maternal, transplacentally transferred antibodies will protect infants until that age. However, our previous data showed that most infants lose maternal anti-measles antibodies before 6 months of age, making them susceptible to measles before vaccination at 9 months. Objective: This prospective study was designed to compare susceptibility in pre-term versus term infants at different time points. Material and Methods: Following Institutional Ethics Committee approval and a formal informed consent process, venous blood was drawn from a cohort of 45 consecutive term infants and 45 consecutive pre-term infants (both groups delivered by the vaginal route) at birth, 3 months, 6 months and 9 months (prior to measles vaccination). Serum was separated and anti-measles IgG antibody levels were measured by quantitative ELISA kits (with sensitivity and specificity > 95%). Susceptibility to measles was defined as an antibody titre < 200 mIU/ml. The mean antibody levels were compared between the two groups at the four time points. Results: The mean gestation of term babies was 38.5±1.2 weeks and of pre-term babies 34.7±2.8 weeks; the respective mean birth weights were 2655±215 g and 1985±175 g. A reliable maternal vaccination record was available for only 7 of the 90 mothers. Mean anti-measles IgG antibody (±SD) in term babies was 3165±533 mIU/ml at birth, 1074±272 mIU/ml at 3 months, 314±153 mIU/ml at 6 months, and 68±21 mIU/ml at 9 months. The corresponding levels in pre-term babies were 2875±612, 948±377, 265±98, and 72±33 mIU/ml (p > 0.05 for all inter-group comparisons). The proportion of susceptible term infants at birth, 3 months, 6 months and 9 months was 0%, 16%, 67% and 96%, respectively; the corresponding proportions in pre-term infants were 0%, 29%, 82%, and 100% (p > 0.05 for all inter-group comparisons). Conclusion: The majority of infants are susceptible to measles before 9 months of age, suggesting the need to bring measles vaccination forward, but there was no statistically significant difference between the proportions of susceptible term and pre-term infants at any of the four time points. A larger study is required to confirm these findings and to compare sero-protection if vaccination is brought forward to between 6 and 9 months.
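
As a hedged reconstruction of the group comparison described above (not the authors' exact computation), the following Python sketch back-calculates approximate susceptible counts from the reported percentages and compares term and pre-term infants at each time point, falling back to Fisher's exact test where a cell count is zero.

```python
# Hedged reconstruction of the term vs pre-term comparison from the reported percentages
# (45 infants per group; susceptibility = titre < 200 mIU/ml); not the authors' computation.
from scipy import stats

n = 45
susceptible = {                      # (term, pre-term) counts back-calculated from the stated percentages
    "birth": (0, 0),
    "3 months": (round(0.16 * n), round(0.29 * n)),
    "6 months": (round(0.67 * n), round(0.82 * n)),
    "9 months": (round(0.96 * n), n),
}

for time_point, (term_sus, preterm_sus) in susceptible.items():
    table = [[term_sus, n - term_sus],
             [preterm_sus, n - preterm_sus]]
    if min(min(row) for row in table) == 0:   # use Fisher's exact test when a cell is zero
        _, p = stats.fisher_exact(table)
        test = "Fisher exact"
    else:
        _, p, _, _ = stats.chi2_contingency(table)
        test = "chi-square"
    print(f"{time_point:>8}: term {term_sus}/{n}, pre-term {preterm_sus}/{n}, {test} p = {p:.3f}")
```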

Keywords: measles, preterm, susceptibility, term infant

Procedia PDF Downloads 256
12880 Customer Segmentation Revisited: The Case of the E-Tailing Industry in Emerging Market

Authors: Sanjeev Prasher, T. Sai Vijay, Chandan Parsad, Abhishek Banerjee, Sahakari Nikhil Krishna, Subham Chatterjee

Abstract:

With the rapid rise of internet retailing, the industry is set for a major implosion. Because competitors differ little from one another, companies find it difficult to segment and target the right shoppers. The objective of the study is to segment Indian online shoppers on the basis of two sets of factors: website characteristics and shopping values. Together, these cover the extrinsic and intrinsic factors that affect shoppers as they visit web retailers. Data were collected using a questionnaire from 319 Indian online shoppers, and factor analysis was used to confirm the factors influencing shoppers in their selection of web portals. Thereafter, cluster analysis was applied and different segments of shoppers were identified. The relationship between income groups and online shopper segments was examined using correspondence analysis. Significant findings include that web entertainment and informativeness together contribute more than fifty percent of the total influence on web shoppers. Contrary to the general perception that shoppers seek utilitarian benefits, the present study highlights a preference for fun, excitement and entertainment while browsing a website. Four segments, namely Information Seekers, Utility Seekers, Value Seekers and Core Shoppers, were identified and profiled. Value Seekers emerged as the most dominant segment, with two-fifths of the respondents drawn to hedonic as well as utilitarian shopping values. With overlap among the segments, utilitarian shopping value gained prominence, accounting for more than fifty-eight percent of the total respondents. Moreover, a strong relationship was established between income levels and the segments of Indian online shoppers: as income levels increase, web shoppers' motives shift from utility seeking to information seeking, core shopping and, finally, value seeking. Companies can strategically use this information for target marketing and align their web portals accordingly. This study can further be used to develop models revolving around satisfaction, trust and customer loyalty.
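
The segmentation workflow described above (factor analysis followed by cluster analysis) can be sketched in Python as follows; the survey items, the choice of four factors and four clusters, and the randomly generated ratings are placeholders standing in for the 319 questionnaire responses.

```python
# Hedged sketch of the segmentation workflow (factor analysis, then cluster analysis);
# the 7-point ratings, 12 items, 4 factors and 4 clusters are placeholder assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
responses = rng.integers(1, 8, size=(319, 12)).astype(float)   # stand-in for the questionnaire data

# Step 1: factor analysis to recover latent factors such as web informativeness,
# web entertainment, utilitarian value and hedonic value (four assumed factors)
standardized = StandardScaler().fit_transform(responses)
factor_scores = FactorAnalysis(n_components=4, random_state=0).fit_transform(standardized)

# Step 2: cluster shoppers on their factor scores into four segments
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factor_scores)
for segment in range(4):
    members = factor_scores[labels == segment]
    print(f"segment {segment}: {len(members)} shoppers, mean factor scores = {members.mean(axis=0).round(2)}")
```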

Keywords: online shopping, shopping values, effectiveness of information content, web informativeness, web entertainment, information seekers, utility seekers, value seekers, core shoppers

Procedia PDF Downloads 182
12879 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks

Authors: Van Trieu, Shouhuai Xu, Yusheng Feng

Abstract:

Tracking attack trajectories can be difficult when only limited information about the nature of the attack is available, and even more so when the attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly caused another event to happen. It is therefore important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, or would cost expert human analysts significant time if they could be obtained at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends: more than 85% of causal pairs have an average time difference between the causal and effect events, in both computed and observed data, of within 5 minutes. This result can be used as a preventive measure against future attacks; although the forecast horizon is short, from 0.24 seconds to 5 minutes, it is long enough to design a prevention protocol to block those attacks.
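
A simplified Python sketch of the pairwise step described above is given below: timestamped IDS alerts are scanned for event types that follow one another within a 5-minute window, yielding candidate causal pairs. The alert data are invented, and the actual conditional independence tests on features such as port number, as well as the two-level graph construction, are omitted.

```python
# Simplified sketch of the pairwise step: count event types that follow one another within
# a 5-minute window. The alerts are invented; the conditional independence tests on features
# such as port number and the two-level graph construction are not shown.
from collections import Counter
from datetime import datetime, timedelta
from itertools import permutations

WINDOW = timedelta(minutes=5)

alerts = [  # (timestamp, event type): hypothetical IDS alerts
    (datetime(2020, 1, 1, 10, 0, 5), "port_scan"),
    (datetime(2020, 1, 1, 10, 2, 30), "brute_force"),
    (datetime(2020, 1, 1, 10, 6, 0), "port_scan"),
    (datetime(2020, 1, 1, 10, 8, 10), "brute_force"),
    (datetime(2020, 1, 1, 10, 20, 0), "exfiltration"),
]

follows = Counter()
for (t1, cause), (t2, effect) in permutations(alerts, 2):
    if cause != effect and timedelta(0) < t2 - t1 <= WINDOW:
        follows[(cause, effect)] += 1

for (cause, effect), count in follows.most_common():
    print(f"candidate causal pair: {cause} -> {effect} ({count} occurrence(s) within 5 minutes)")
```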

Keywords: causality, multilevel graph, cyber-attacks, prediction

Procedia PDF Downloads 147
12878 Modelling of Exothermic Reactions during Carbon Fibre Manufacturing and Coupling to Surrounding Airflow

Authors: Musa Akdere, Gunnar Seide, Thomas Gries

Abstract:

Carbon fibres are fibrous materials with a carbon content of more than 90%. They combine excellent mechanical properties with a very low density; thus carbon fibre reinforced plastics (CFRP) are very often used in lightweight design and construction. The precursor material is usually polyacrylonitrile (PAN) based and wet-spun. During the production of carbon fibre, the precursor has to be stabilized thermally to withstand the high temperatures of up to 1500 °C which occur during carbonization. Even though carbon fibre has been used in aerospace applications since the late 1970s, there is still no general method available to find the optimal production parameters, and the trial-and-error approach is most often the only recourse. To gain better insight into the process, the chemical reactions during stabilization have to be analyzed in detail. Therefore, a model of the chemical reactions (cyclization, dehydration, and oxidation) based on the research of Dunham and Edie has been developed. With the presented model, it is possible to perform a complete simulation of the fibre passing through all zones of stabilization. The fibre bundle is modelled as several circular fibres with a layer of air in between. Two thermal mechanisms are considered to be the most important: the exothermic reactions inside the fibre and the convective heat transfer between the fibre and the air. The exothermic reactions inside the fibres are modelled as a heat source, and differential scanning calorimetry measurements have been performed to estimate the heat of reaction. To shorten the required simulation time, the number of fibres is reduced by similitude theory. To validate the simulated fibre temperatures during stabilization, experiments were conducted on a pilot-scale stabilization oven, and a new method was developed to measure the fibre bundle temperature. The comparison of the results shows that the developed simulation model gives good approximations of the temperature profile of the fibre bundle during the stabilization process.
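
The two thermal mechanisms named above can be sketched as a lumped energy balance for the fibre bundle, rho*cp*V*dT/dt = q(T)*V - h*A*(T - T_air), integrated in Python below. The Arrhenius-type heat release and every parameter value (including the effective bundle diameter) are assumptions for illustration, not the Dunham-and-Edie kinetics or the DSC data used in the paper.

```python
# Lumped energy balance for the fibre bundle: rho*cp*V*dT/dt = q(T)*V - h*A*(T - T_air).
# The Arrhenius-type heat release q(T) and all parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

rho, cp = 1180.0, 1500.0           # kg/m^3, J/(kg K): PAN fibre, typical order of magnitude
d, L = 1.0e-3, 1.0                 # assumed effective bundle diameter (m) and considered length (m)
V, A = np.pi * d**2 / 4.0 * L, np.pi * d * L
h = 50.0                           # W/(m^2 K): assumed convective heat-transfer coefficient
T_air = 240.0 + 273.15             # stabilization-zone air temperature (K)

def q_reaction(T):
    """Hypothetical Arrhenius-type volumetric heat release of the exothermic reactions, W/m^3."""
    return 1.2e16 * np.exp(-1.2e4 / T)

def dT_dt(t, T):
    return [(q_reaction(T[0]) * V - h * A * (T[0] - T_air)) / (rho * cp * V)]

sol = solve_ivp(dT_dt, (0.0, 600.0), [298.15], max_step=1.0)
print(f"bundle temperature after 10 min: {sol.y[0, -1] - 273.15:.1f} °C "
      f"(air temperature: {T_air - 273.15:.0f} °C)")
```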

Keywords: carbon fibre, coupled simulation, exothermic reactions, fibre-air-interface

Procedia PDF Downloads 255