Search results for: Magnitude Classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1402

82 Designing a Rescue System for Earthquake-Stricken Area with the Aim of Facilitation and Accelerating Accessibilities (Case Study: City of Tehran)

Authors: Naeleh Motamedi, Masoud Mahmoudkhan Shirazi, Nima Nouraei

Abstract:

Natural disasters, including earthquakes, kill many people around the world every year. Rescue operations, which start after an earthquake and are abbreviated LAST, comprise locating, access, stabilization and transportation. In the present article, we have studied the process of gaining local access to the injured and transporting them to health care centers. Given the heavy traffic load caused by an earthquake, the destruction of connecting roads and bridges, and the heavy debris in alleys and streets, all of which endanger the lives of the injured and of people buried under the debris, accelerating rescue actions and facilitating access are obviously of great importance. Tehran, the capital of Iran, is among the most crowded cities in the world and is the center of extensive economic, political, cultural and social activities. Tehran has a population of about 9.5 million, which continues to grow because of immigration from the surrounding cities. Furthermore, considering that Tehran is located on two large and important faults, an earthquake of magnitude 6 on the Richter scale in this city could lead to one of the greatest catastrophes in human history. The present study is a review, and a major part of the information required for it has been obtained from libraries. Rescue vehicles from around the world, including rescue helicopters, ambulances, firefighting vehicles and rescue boats, their applied technologies, and the robots specifically designed for rescue systems, together with their advantages and disadvantages, have been investigated. The studies show a significant relationship between the rescue team's arrival time at the incident zone and the number of people saved: if the duration of burial under debris is 30 minutes, the probability of survival is 99.3%; after one day it is 81%, after two days 19%, and after five days 7.4%. The existing transport systems all have some defects; if these defects were removed, more people could be saved each hour and preparedness against natural disasters would increase. In this study, a transport system has been designed for the rescue team and the injured, which can carry the rescue team to the incident zone and the injured to health care centers. In addition, this system is able both to fly and to move on the ground, so that destroyed roads and heavy traffic cannot prevent the rescue team from arriving early at the incident zone. The system also carries the equipment required for debris removal, optimal transport of the injured and first aid.

Keywords: earthquake, accelerating, accessibilities, transportation, rescue system

81 Invasion of Pectinatella magnifica in Freshwater Resources of the Czech Republic

Authors: J. Pazourek, K. Šmejkal, P. Kollár, J. Rajchard, J. Šinko, Z. Balounová, E. Vlková, H. Salmonová

Abstract:

Pectinatella magnifica (Leidy, 1851) is an invasive freshwater animal that lives in colonies. A colony of Pectinatella magnifica (a gelatinous blob) can be up to several feet in diameter, and under favorable conditions it exhibits an extreme growth rate. Recently, invasions of Pectinatella magnifica have been confirmed in European countries along the Elbe, Oder, Danube, Rhine and Vltava rivers, including freshwater reservoirs in South Bohemia (Czech Republic). Our project (Czech Science Foundation, GAČR P503/12/0337) is focused on the biology and chemistry of Pectinatella magnifica. We have monitored the organism's occurrence in selected South Bohemian ponds and sandpits in recent years, collecting information about the physical properties of the surrounding water and sampling the colonies for various analyses (classification, maps of secondary metabolites, toxicity tests). Because the gelatinous matrix also hosts algae, bacteria and cyanobacteria (co-habitants) during the colony's lifetime, in this contribution we also applied a high performance liquid chromatography (HPLC) method for the determination of potentially present cyanobacterial toxins (microcystin-LR, microcystin-RR, nodularin). Results from the last three years of monitoring show that these toxins are below the limit of detection (LOD), so they do not yet represent a danger. The final goal of our study is to assess the toxicity risks related to freshwater resources invaded by Pectinatella magnifica, and to understand the process of invasion, which may make it possible to control it.

Keywords: Cyanobacteria, freshwater resources, Pectinatella magnifica invasion, toxicity monitoring.

80 Identifying Areas on the Pavement Where Rain Water Runoff Affects Motorcycle Behavior

Authors: Panagiotis Lemonakis, Theodoros Alimonakis, George Kaliabetsos, Nikos Eliou

Abstract:

It is well known that certain vertical and longitudinal slopes must be ensured in order to achieve adequate rainwater runoff from the pavement. The selection of longitudinal slopes between the turning points of the vertical curves that meet the aforementioned requirement does not by itself ensure adequate drainage, because the same condition must also be satisfied along the transition curves, so that none of the pavement edge slopes (or any other spot on the pavement) is opposite to the longitudinal slope of the rotation axis. Horizontal and vertical alignment must be properly combined in order to form a road whose resultant slope does not take small values; hence, checks must be performed at every cross section and every chainage of the road. The present research investigates rainwater runoff from the road surface in order to identify the conditions under which areas of inadequate drainage are created, to analyze the rainwater behavior in such areas, to provide design examples of good and bad drainage zones, and to identify motorcycle types that might encounter hazardous situations due to the presence of a water film between the pavement and both of their tires, resulting in loss of traction. Moreover, it investigates the combination of longitudinal and cross slope values in critical pavement areas. It should be pointed out that the drainage gradient is analytically calculated for the whole road width and not just as one oblique slope per chainage (the combination of longitudinal grade and cross slope). Various combinations of horizontal and vertical design are presented, indicating the crucial zones of poor pavement drainage. The key conclusion of the study is that any type of motorcycle will travel inside the area of improper runoff for a certain time, which depends on the speed and the trajectory that the rider chooses along the transition curve. Taking into account that on this section the rider has to lean the motorcycle, and hence reduce the contact area of the tire with the pavement, it is apparent that any variation in the friction value due to the presence of a water film may seriously compromise rider safety. Water runoff from the road pavement is improved when a crest rather than a sag curve is chosen between reverse longitudinal slopes, particularly when its edges coincide with the edges of the horizontal curve. Lastly, the results of the investigation show that varying the longitudinal slope shifts the center of the poor-runoff area vertically, and that the magnitude of this area increases as the length of the transition curve increases.
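
As a simple numerical companion to the drainage-gradient check described above, the resultant slope at a pavement point can be combined from the longitudinal grade and the cross slope. The sketch below is a minimal illustration in Python, assuming a root-sum-of-squares combination; the slope values and the minimum-drainage threshold are illustrative and not taken from the paper.

import math

def drainage_gradient(longitudinal_grade: float, cross_slope: float) -> float:
    """Resultant (oblique) slope in %, combining longitudinal grade and cross slope."""
    return math.hypot(longitudinal_grade, cross_slope)

# Illustrative check along a transition curve where the superelevation rotates
# from -2.5% to +2.5% while the longitudinal grade is a constant 0.3%.
MIN_DRAINAGE = 0.5  # assumed minimum acceptable resultant slope in %
for cross in [-2.5, -1.5, -0.5, 0.0, 0.5, 1.5, 2.5]:
    g = drainage_gradient(0.3, cross)
    flag = "POOR DRAINAGE" if g < MIN_DRAINAGE else "ok"
    print(f"cross slope {cross:+.1f}%  ->  resultant {g:.2f}%  {flag}")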

Keywords: Drainage, motorcycle safety, superelevation, transition curves, vertical grade.

79 An Optimal Unsupervised Satellite Image Segmentation Approach Based on Pearson System and k-Means Clustering Algorithm Initialization

Authors: Ahmed Rekik, Mourad Zribi, Ahmed Ben Hamida, Mohamed Benjelloun

Abstract:

This paper presents an optimal and unsupervised satellite image segmentation approach based on the Pearson system and K-Means clustering algorithm initialization. The method can be considered original in that it uses the K-Means clustering algorithm for an optimal initialisation of the image class number on the one hand, and exploits the Pearson system for an optimal assignment of statistical distributions to each considered class on the other. Satellite image exploitation requires the use of different approaches, especially those founded on the principle of unsupervised statistical segmentation. Such approaches necessitate the definition of several parameters, such as the image class number, the estimation of class variables and generalised mixture distributions. The use of statistical image attributes yields convincing and promising results, on condition of having an optimal initialisation step with an appropriate assignment of statistical distributions. The Pearson system, associated with a K-Means clustering algorithm and the Stochastic Expectation-Maximization (SEM) algorithm, can be adapted to this problem. For each image class, the Pearson system attributes one distribution type according to different parameters, especially the skewness (β1) and the kurtosis (β2). The adapted algorithms (the K-Means clustering algorithm, the SEM algorithm and the Pearson system algorithm) are then applied to the satellite image segmentation problem. The efficiency of the combined algorithms was validated firstly with a Mean Quadratic Error (MQE) evaluation, and secondly by visual inspection across several comparisons of the resulting unsupervised image segmentations.
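
A minimal sketch of the initialisation idea described above: K-Means proposes the classes, and each class is then assigned a distribution family from its sample skewness (β1) and kurtosis (β2). The decision rule below is a deliberately crude stand-in for the full Pearson-type criteria, and all names, data and thresholds are illustrative assumptions rather than the authors' implementation.

import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

def pearson_family(sample):
    """Crude stand-in for the Pearson system: pick a family from (beta1, beta2).
    The real system partitions the (beta1, beta2) plane into Pearson types I-VII."""
    b1 = stats.skew(sample) ** 2
    b2 = stats.kurtosis(sample, fisher=False)  # non-excess kurtosis
    if b1 < 0.1 and abs(b2 - 3.0) < 0.3:
        return "gaussian (normal limit of the Pearson system)"
    if b2 > 3.0 + 1.5 * b1:
        return "heavy-tailed (Pearson IV/VII region)"
    return "beta-like (Pearson I region)"

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(40, 5, 3000),      # e.g. a water class
                         rng.gamma(4.0, 12.0, 3000)])  # e.g. a vegetation class
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels.reshape(-1, 1))
for k in range(2):
    print(k, pearson_family(pixels[labels == k]))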

Keywords: Unsupervised classification, Pearson system, Satellite image, Segmentation.

78 Quantification of E-Waste: A Case Study in Federal University of Espírito Santo, Brazil

Authors: Andressa S. T. Gomes, Luiza A. Souza, Luciana H. Yamane, Renato R. Siman

Abstract:

The segregation of waste electrical and electronic equipment (WEEE) at the generating source, its quali-quantitative characterization and the identification of its origin, besides being integral parts of classification reports, are crucial steps for the success of its integrated management. The aim of this paper was to quantify the WEEE generated at the Federal University of Espírito Santo (UFES), Brazil, as well as to define sources, temporary storage sites, main transportation routes and destinations, the most generated types of WEEE and their recycling potential. The quantification of WEEE generated at the University between 2010 and 2015 was performed using data provided by UFES's sector of assets management, and information on the flow of EEE and WEEE on the campuses was obtained through questionnaires applied to University workers. A total of 6,028 units of WEEE from data processing equipment were recorded as disposed of by the University between 2010 and 2015. Among this waste, the most generated items were CRT screens, desktops, keyboards and printers. Furthermore, it was observed that these WEEE are temporarily stored in inappropriate places on the University campuses. In general, these WEEE units are donated to NGOs of the city or sold through auctions (2010 and 2013). As for the recycling potential, the primary processing and subsequent sale of printed circuit boards (PCB) from the computers could yield as much as US$ 27,839.23. The results highlight the importance of a WEEE management policy at the University.

Keywords: Solid waste, waste electrical and electronic equipment, waste management, institutional generation of solid waste.

77 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data; the challenge is that deep learning lacks semantic knowledge, which limits its practicability in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant medical entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning model. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets, with CNN, LSV-CNN and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets, with the best configuration yielding an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
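
A minimal sketch of the feature-concatenation step, assuming per-token word2vec embeddings and per-token Lexical Semantic Vectors are already available; the dimensions, layer sizes and class count are illustrative placeholders, not the authors' configuration.

import torch
import torch.nn as nn

class ConcatTextCNN(nn.Module):
    """Text CNN over tokens represented as [word2vec ; LSV] concatenations."""
    def __init__(self, emb_dim=100, lsv_dim=30, n_classes=4, n_filters=64, k=3):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim + lsv_dim, n_filters, kernel_size=k, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, w2v, lsv):
        # w2v: (batch, seq_len, emb_dim), lsv: (batch, seq_len, lsv_dim)
        x = torch.cat([w2v, lsv], dim=-1).transpose(1, 2)  # (batch, channels, seq)
        h = torch.relu(self.conv(x)).max(dim=2).values      # global max pooling
        return self.fc(h)                                   # class logits

model = ConcatTextCNN()
logits = model(torch.randn(8, 50, 100), torch.randn(8, 50, 30))
print(logits.shape)  # torch.Size([8, 4])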

Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record

76 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models were applied to predict one-year mortality, and a comprehensive feature space including demographic information, comorbidities, clinical procedures and laboratory tests was analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) was used, and the Model for End-Stage Liver Disease (MELD) prediction of mortality served as a comparator. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, because of the sparsity of the temporal information it requires; however, FSM together with an ensemble machine learning technique further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
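
As a hedged illustration of the kind of comparison reported above (not the authors' pipeline), the sketch below contrasts a single-score logistic baseline, standing in for the MELD score, with a gradient boosting ensemble over a wider feature space, using synthetic data and AUC as the metric.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: column 0 plays the role of a MELD-like summary score,
# the remaining columns mimic demographics, comorbidities and lab tests.
X, y = make_classification(n_samples=2322, n_features=40, n_informative=12,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

baseline = LogisticRegression().fit(Xtr[:, :1], ytr)          # score-only model
ensemble = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)

print("score-only AUC:", roc_auc_score(yte, baseline.predict_proba(Xte[:, :1])[:, 1]))
print("ensemble   AUC:", roc_auc_score(yte, ensemble.predict_proba(Xte)[:, 1]))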

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

75 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

Authors: Satyanadh Gundimada, Vijayan K Asari

Abstract:

A novel feature selection strategy to improve recognition accuracy on faces affected by non-uniform illumination, partial occlusions and varying expressions is proposed in this paper. The technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration, a property that makes lighting-invariant face recognition achievable. Phase congruency maps of the training samples are generated, and a novel modular feature selection strategy is implemented: smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, which are arranged in order of increasing distance between the sub-regions involved in merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a large improvement in classification performance compared with baseline algorithms.
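
The criterion function mentioned above is a Fisher-style variance ratio. The snippet below is a minimal sketch of ranking candidate feature sets by that ratio, assuming the features are already projected into the PCA domain; the data are random placeholders.

import numpy as np

def variance_ratio(feats, labels):
    """Between-class variance / within-class variance for one feature set."""
    overall = feats.mean(axis=0)
    between = within = 0.0
    for c in np.unique(labels):
        cls = feats[labels == c]
        between += len(cls) * np.sum((cls.mean(axis=0) - overall) ** 2)
        within += np.sum((cls - cls.mean(axis=0)) ** 2)
    return between / within

rng = np.random.default_rng(1)
labels = np.repeat([0, 1, 2], 20)                      # 3 subjects, 20 samples each
feature_sets = [rng.normal(size=(60, 8)) for _ in range(5)]
ranking = sorted(range(5), key=lambda i: variance_ratio(feature_sets[i], labels),
                 reverse=True)
print("feature sets by discriminating capability:", ranking)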

Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.

74 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages

Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall

Abstract:

Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. The models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour's tidal range is 1.5 m, and the January 2015 spring tides used in the modelling are close to the full tidal range. The modelled wave trains include hypothetical tsunamis generated by earthquakes of moment magnitude (Mw) 7.5, 8.0, 8.5 and 9.0 from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled with the peak wave coinciding with both a low tide and a high tide, and a single wave train, representing an Mw 9.0 earthquake at the Puysegur trench, is modelled with peak waves coinciding with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation and current speed. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when the peak wave coincides with a higher tide; however, hazardous inundation is only observed for the larger waves modelled (NH90high and P90high). The maximum and minimum depths are deeper on higher tides and shallower on lower tides, and the difference between them varies only slightly across different tidal phases. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. The maximum current speed hazard is greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.

Keywords: Emergency management, Sydney, tide-tsunami interaction, tsunami impact.

73 A Strategic Sustainability Analysis of Electric Vehicles in EU Today and Towards 2050

Authors: Sven Borén, Henrik Ny

Abstract:

Ambitions within the EU for moving towards sustainable transport include major emission reductions for fossil fuel road vehicles, especially buses, trucks and cars, and the electric driveline seems to be an attractive solution for such development. This study first applied the Framework for Strategic Sustainable Development to compare the sustainability effects of today's fossil fuel vehicles with those of electric vehicles that have batteries or hydrogen fuel cells, and then addressed a scenario in which electric vehicles might be in the majority in Europe by 2050. The methodology called Strategic Lifecycle Assessment was used first, in which each life cycle phase was assessed for violations of sustainability principles. This indicates where further analysis could be done to quantify the magnitude of each violation, and later to create alternative strategies and actions that lead towards sustainability. A Life Cycle Assessment of combustion engine cars, plug-in hybrid cars, battery electric cars and hydrogen fuel cell cars was then conducted to compare and quantify environmental impacts. The authors found major violations of sustainability principles, such as the use of fossil fuels, which contributes to the increase of emission-related impacts such as climate change, acidification, eutrophication, ozone depletion and particulate matter. Other violations were found, such as the use of scarce materials for batteries and fuel cells, and the use of fossil fuel vehicles for mining, production and transport in most life cycle phases of all vehicles. Still, the current battery and hydrogen fuel cell cars studied have less severe violations than fossil fuel cars. The life cycle assessment revealed that fossil fuel cars have considerably higher overall environmental impacts than electric cars, as long as the latter are powered by renewable electricity. By 2050 there will likely be even more sustainable alternatives than the studied electric vehicles, when the EU electricity mix should stem mainly from renewable sources, batteries should be recycled, fuel cells should be a mature technology for use in vehicles (containing no scarce materials), and electric drivelines should have replaced combustion engines in other sectors. An uncertainty for fuel cells in 2050 is whether the production of hydrogen will have had time to switch to renewable resources; if so, that would contribute even more to sustainable development. Besides being adopted in the GreenCharge roadmap, the results can, the authors suggest, contribute to planning in the upcoming decades for a sustainable increase of EVs in Europe, and potentially serve as an inspiration for other smaller or larger regions. Further studies could map the environmental effects in the LCA in more detail and include other road vehicles, to give a more precise perception of how much they could affect sustainable development.

Keywords: Strategic, electric vehicles, fuel cell, LCA, sustainability.

72 Innovative Fabric Integrated Thermal Storage Systems and Applications

Authors: Ahmed Elsayed, Andrew Shea, Nicolas Kelly, John Allison

Abstract:

In northern European climates, domestic space heating and hot water represent a significant proportion of total primary energy use, and meeting these demands from a national electricity grid supplied by renewable energy sources provides an opportunity for a significant reduction in EU CO2 emissions. However, in order to adapt to the intermittent nature of renewable energy generation and to avoid coincident peak electricity usage from consumers that may exceed current capacity, the demand for heat must be decoupled from its generation. Storing heat within the fabric of dwellings for use some hours, or days, later provides a route to complete decoupling of demand from supply and facilitates a greatly increased use of renewable generation in a local or national electricity network. The integration of thermal energy storage into the building fabric for retrieval at a later time requires the evaluation of many competing thermal, physical and practical considerations, such as the profile and magnitude of heat demand, the duration of storage, charging and discharging rates, storage media and space allocation. In this paper, the authors report investigations of thermal storage in building fabric using concrete and present an evaluation of several factors that affect performance, including heating pipe layout, heating fluid flow velocity, storage geometry and thermo-physical material properties, together with an investigation of alternative storage materials and alternative heat transfer fluids. Reducing the heating pipe spacing from 200 mm to 100 mm enhances the stored energy by 25%, and high-performance vacuum insulation results in a heat loss flux of less than 3 W/m2, compared with 22 W/m2 for the more conventional EPS insulation. Dense concrete achieved the greatest storage capacity relative to the medium- and light-weight alternatives, although a material thickness of 100 mm required more than 5 hours to charge fully. Layers of 25 mm and 50 mm thickness can be charged in 2 hours or less, facilitating a fast response that, aggregated across multiple dwellings, could provide a significant and valuable reduction in demand for grid-generated electricity in expected periods of high demand and potentially eliminate the need for additional new generating capacity from conventional sources such as gas, coal or nuclear.
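
For a rough sense of the storage magnitudes involved, the sensible heat stored per unit area of fabric follows Q = ρ·c·d·ΔT. The figures below assume typical handbook values for dense concrete and an illustrative 30 K temperature swing, neither of which is taken from the paper.

# Sensible heat stored in a concrete layer, per square metre of fabric.
rho = 2400.0   # kg/m^3, assumed density of dense concrete
c = 880.0      # J/(kg K), assumed specific heat capacity
dT = 30.0      # K, illustrative charge/discharge temperature swing

for d in (0.025, 0.050, 0.100):  # layer thicknesses in metres
    q_joules = rho * c * d * dT
    print(f"{d*1000:.0f} mm layer: {q_joules/3.6e6:.2f} kWh/m^2")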

Keywords: Fabric integrated thermal storage, FITS, demand side management, energy storage, load shifting, renewable energy integration.

71 Land Suitability Prediction Modelling for Agricultural Crops Using Machine Learning Approach: A Case Study of Khuzestan Province, Iran

Authors: Saba Gachpaz, Hamid Reza Heidari

Abstract:

The sharp increase in population growth puts more pressure on agricultural areas to satisfy the food supply. This necessitates increased resource consumption and underscores the importance of addressing sustainable agricultural development along with other environmental considerations. Land-use management is a crucial factor in obtaining optimum productivity. Machine learning is a widely used technique in the agricultural sector, from yield prediction to customer behavior; it focuses on learning patterns and correlations from a given data set. In this study, nine physical control factors, namely soil classification, electrical conductivity, normalized difference water index (NDWI), groundwater level, elevation, annual precipitation, pH of water, annual mean temperature, and slope in the alluvial plain of Khuzestan (an agricultural hotspot in Iran), are used to decide the best agricultural land use for both rainfed and irrigated agriculture for 10 different crops. For this purpose, each variable was imported into ArcGIS and a raster layer was obtained. Next, using training samples, all layers were imported into the Python environment, a random forest model was applied, and the weight of each variable was specified. In the final step, the results were visualized using a digital elevation model, and the importance of all factors for each of the crops was obtained. Our results show that although 62% of the study area is allocated to agricultural purposes, only 42.9% of these areas can be classified as suitable for cultivation.
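
A minimal sketch of the modelling step, assuming the nine raster layers have already been flattened into a tabular sample of pixels with known suitability labels; the data, shapes and label scheme are placeholders, not the study's inputs.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

FACTORS = ["soil_class", "EC", "NDWI", "groundwater", "elevation",
           "precipitation", "pH", "mean_temp", "slope"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, len(FACTORS)))   # placeholder pixel samples
y_train = rng.integers(0, 2, size=500)           # 1 = suitable, 0 = unsuitable

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# "Weight of each variable" = impurity-based feature importance in sklearn.
for name, w in sorted(zip(FACTORS, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:13s} {w:.3f}")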

Keywords: Land suitability, machine learning, random forest, sustainable agriculture.

70 LIDAR Obstacle Warning and Avoidance System for Unmanned Aircraft

Authors: Roberto Sabatini, Alessandro Gardi, Mark A. Richardson

Abstract:

The availability of powerful eye-safe laser sources and recent advancements in electro-optical and mechanical beam-steering components have allowed laser-based Light Detection and Ranging (LIDAR) to become a promising technology for obstacle warning and avoidance in a variety of manned and unmanned aircraft applications. LIDAR's outstanding angular resolution and accuracy are coupled with good detection performance over a wide range of incidence angles and weather conditions, providing an ideal obstacle avoidance solution that is especially attractive for low-level flying platforms such as helicopters and small-to-medium size Unmanned Aircraft (UA). The Laser Obstacle Avoidance Marconi (LOAM) system is one such system; it was jointly developed and tested by SELEX-ES and the Italian Air Force Research and Flight Test Centre. The system was originally conceived for military rotorcraft platforms and, in this paper, we briefly review the previous work and discuss in more detail some of the key development activities required for the integration of LOAM on UA platforms. The main hardware and software design features of this LOAM variant are presented, including a brief description of the system interfaces and sensor characteristics, together with the system performance models and data processing algorithms for obstacle detection, classification and avoidance. In particular, the paper focuses on the algorithm proposed for optimal avoidance trajectory generation in UA applications.

Keywords: LIDAR, Low-Level Flight, Nap-of-the-Earth Flight, Near Infra-Red, Obstacle Avoidance, Obstacle Detection, Obstacle Warning System, Sense and Avoid, Trajectory Optimisation, Unmanned Aircraft.

69 Advanced Stochastic Models for Partially Developed Speckle

Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije

Abstract:

Speckled images arise when coherent microwave, optical or acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems and medical ultrasound systems. Speckle noise is a form of object- or target-induced noise that results when the surface of the object is Rayleigh rough compared with the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise are complicated by the nature of the noise and are not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images; the motivation is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model, in which an underlying Poisson point process modulates a Gram-Charlier series of Laguerre-weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form. It is observed that as the mean number of scatterers in a resolution cell increases, the probability density function approaches an exponential distribution; this is consistent with fully developed speckle noise, as demonstrated by the Central Limit theorem.
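
The limiting behaviour noted at the end can be checked numerically: summing a Poisson-distributed number of random unit phasors per resolution cell yields an intensity whose contrast (std/mean) approaches 1, the value for an exponential distribution, as the mean scatterer count grows. The simulation below is a generic illustration of this Central Limit effect, not the authors' Gram-Charlier model.

import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(mean_scatterers, n_cells=20_000):
    """Intensity of the complex sum of N ~ Poisson(mean) unit phasors per cell."""
    n = rng.poisson(mean_scatterers, size=n_cells)
    field = np.array([np.exp(1j * rng.uniform(0, 2 * np.pi, k)).sum() for k in n])
    return np.abs(field) ** 2

for mu in (2, 10, 50):
    I = speckle_intensity(mu)
    # Contrast tends to 1 (exponential intensity) as mu grows.
    print(f"mean scatterers {mu:3d}: std/mean of intensity = {I.std()/I.mean():.2f}")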

Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound

68 Influence of Compactive Efforts on Cement-Bagasse Ash Treatment of Expansive Black Cotton Soil

Authors: Moses, G., Osinubi, K. J.

Abstract:

A laboratory study of the influence of compactive effort on expansive black cotton soil specimens treated with up to 8% ordinary Portland cement (OPC) admixed with up to 8% bagasse ash (BA) by dry weight of soil, compacted using the energies of the standard Proctor (SP), West African Standard (WAS) or "intermediate", and modified Proctor (MP), was undertaken. The expansive black cotton soil was classified as A-7-6 (16) or CL using the American Association of State Highway and Transportation Officials (AASHTO) classification and the Unified Soil Classification System (USCS), respectively. The 7-day unconfined compressive strength (UCS) values of the natural soil for the SP, WAS and MP compactive efforts were 286, 401 and 515 kN/m2, respectively, while the peak values of 1019, 1328 and 1420 kN/m2, recorded at the 8% OPC/6% BA, 8% OPC/2% BA and 6% OPC/4% BA treatments, respectively, were below the UCS value of 1710 kN/m2 conventionally used as the criterion for adequate cement stabilization. The soaked California bearing ratio (CBR) values of the OPC/BA-stabilized soil increased with higher energy levels, from 2, 4 and 10% for the natural soil to peak values of 55, 18 and 8%, recorded at the 8% OPC/4% BA, 8% OPC/2% BA and 8% OPC/4% BA treatments when the SP, WAS and MP compactive efforts were used, respectively. The durability of specimens was determined by immersion in water. Treatment with an 8% OPC/4% BA blend gave a resistance to loss in strength of 50%, which is acceptable given the harsh test condition of the 7-day soaking period to which the specimens were subjected, instead of the 4-day soaking period for which a minimum resistance to loss in strength of 80% is specified. Finally, an optimal blend of 8% OPC/4% BA is recommended for the treatment of expansive black cotton soil for use as a sub-base material.

Keywords: Bagasse ash, California bearing ratio, Compaction, Durability, Ordinary Portland cement, Unconfined compressive strength.

67 Obesity and Bone Mineral Density in Patients with Large Joint Osteoarthritis

Authors: Vladyslav Povoroznyuk, Anna Musiienko, Nataliia Zaverukha, Roksolana Povoroznyuk

Abstract:

Along with the global aging of the population, the number of people with somatic diseases is increasing, including such interrelated pathologies as obesity, osteoarthritis (OA) and osteoporosis (OP). The objective of the study is to examine the connection between body mass index (BMI), OA and the bone mineral density (BMD) of the lumbar spine and femoral neck, and the trabecular bone score (TBS), in postmenopausal women with OA. We observed 359 postmenopausal women (50-89 years old) and divided them into four groups by age: 50-59, 60-69, 70-79 and over 80 years old. In addition, according to the American College of Rheumatology (ACR) clinical classification criteria for knee and hip OA, we divided them into two groups: group I, 117 women with symptomatic OA (including 89 patients with knee OA and 28 patients with hip OA), and group II, 242 women with normal functional activity of the large joints. The data were analyzed taking into account BMI, classified according to the World Health Organization (WHO); a diagnosis of obesity was established when BMI was above 30 kg/m2. Among the women with obesity, symptomatic OA was detected in 44 (41.1%) and normal functional activity of the large joints in 63 (58.9%), while symptomatic OA was detected in 73 women with normal BMI (29.0% of cases). According to a chi-squared (χ2) test, a significantly higher BMI was detected in postmenopausal women with OA (χ2 = 5.05, p = 0.02). Women with symptomatic OA had a significantly higher lumbar spine BMD than women with normal functional activity of the large joints. No significant differences in femoral neck BMD or TBS were detected between the group with OA and the group with normal functional activity of the large joints.
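
The reported association can be illustrated with a chi-squared test on the 2x2 table implied by the abstract's counts (obese vs. normal-BMI women, with and without symptomatic OA); the 179 count is inferred from the stated group sizes, so treat this as a hedged reconstruction.

from scipy.stats import chi2_contingency

# Rows: obese (BMI >= 30), normal BMI; columns: symptomatic OA, no symptomatic OA.
# 44, 63 and 73 are given in the abstract; 179 = 242 - 63 is inferred.
table = [[44, 63],
         [73, 179]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # ~ chi2 = 5.05, p ~ 0.02, matching the abstract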

Keywords: Bone mineral density, BMD, body mass index, BMI, obesity, overweight, postmenopausal women, osteoarthritis.

66 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, making it more important to understand customers' needs in this strong market, especially the needs of customers who are considering switching providers. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry, and machine learning can be used to accomplish it; churn prediction has become a very important machine learning classification topic in the sector. Understanding the factors of customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers' churn based on their past service usage history. With this objective, the study makes use of feature selection, normalization and feature engineering, and then compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows that it produces better results: Gradient Boosting with the feature selection technique performed best, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
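
A minimal sketch of the four-model comparison, assuming a preprocessed feature matrix; the synthetic (imbalanced) data, the normalization choice and the default hyperparameters are placeholders rather than the study's setup.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.85],
                           random_state=0)          # churn data is usually imbalanced
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(Xtr)                   # normalization step
Xtr, Xte = scaler.transform(Xtr), scaler.transform(Xte)

models = {"LogReg": LogisticRegression(max_iter=1000),
          "RandomForest": RandomForestClassifier(random_state=0),
          "DecisionTree": DecisionTreeClassifier(random_state=0),
          "GradBoost": GradientBoostingClassifier(random_state=0)}
for name, m in models.items():
    m.fit(Xtr, ytr)
    print(f"{name:12s} F1={f1_score(yte, m.predict(Xte)):.3f} "
          f"AUC={roc_auc_score(yte, m.predict_proba(Xte)[:, 1]):.3f}")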

Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.

65 Development of an Ensemble Classification Model Based on Hybrid Filter-Wrapper Feature Selection for Email Phishing Detection

Authors: R. B. Ibrahim, M. S. Argungu, I. M. Mungadi

Abstract:

The Internet has become an indispensable part of human life since its inception, providing diverse opportunities to make life easier through the adoption of various channels, among them email, internet banking, video conferencing and the like. Email is one of the easiest means of communication and is hugely accepted among individuals and organizations globally, but over the decades the security integrity of this platform has been challenged by malicious activities such as phishing. Email phishing is designed by phishers to fool the recipient into handing over sensitive personal information such as passwords, credit card numbers, account credentials, social security numbers, etc. This activity has caused a great deal of financial damage to email users globally, resulting in bankruptcy, the sudden death of victims and other health-related conditions. Although many methods have been proposed to detect email phishing, in this research the results of multiple machine-learning methods for predicting email phishing are compared, with the use of filter-wrapper feature selection. It is worth noting that all three models performed substantially well, but one outperformed the others. The dataset used for these models was obtained from the Kaggle online data repository, and three classifiers (decision tree, Naïve Bayes and logistic regression) were each ensembled by bagging. Results from the study show that the Decision Tree (CART) bagging ensemble recorded the highest accuracy, 98.13%, using PEF (Phishing Essential Features). This result further demonstrates the dependability of the proposed model.
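
A minimal sketch of bagging the three base classifiers named above; the synthetic feature matrix stands in for the selected phishing features, and the estimator settings are illustrative.

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: 1 = phishing email, 0 = legitimate email.
X, y = make_classification(n_samples=1000, n_features=15, random_state=0)

for name, base in [("CART", DecisionTreeClassifier()),
                   ("NaiveBayes", GaussianNB()),
                   ("LogReg", LogisticRegression(max_iter=1000))]:
    bagged = BaggingClassifier(base, n_estimators=50, random_state=0)
    acc = cross_val_score(bagged, X, y, cv=5, scoring="accuracy").mean()
    print(f"bagged {name:10s} accuracy = {acc:.3f}")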

Keywords: Ensemble, hybrid, filter-wrapper, phishing.

64 Optimization of the Characteristic Straight Line Method by a “Best Estimate” of Observed, Normal Orthometric Elevation Differences

Authors: Mahmoud M. S. Albattah

Abstract:

In this paper, to optimize the "Characteristic Straight Line Method" used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of height systems; this concept, and consequently the concept of "height", is discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the "Characteristic Straight Line Method", whose characteristic components have been defined and constructed from a "best estimate" of the topometric observations. In the measurement of elevation differences, we have used the most modern leveling equipment available, and observational procedures were designed to provide the most effective method of acquiring data. In addition, systematic errors that cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale conforming to the international length standard; and the concept of height systems is introduced, under which all types of height (orthometric, dynamic, normal, gravity correction and equipotential surface) have been investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method": it permits the evaluation of displacements of very small, even infinitesimal, magnitude. The inclination of the landslide is given by the inverse of the distance from a reference point O to the "Characteristic Straight Line", and its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevation of carefully selected points before and after the deformation. Gross errors were eliminated by statistical analyses and by comparing heights within local neighborhoods. The results of a test in an area where very interesting land surface deformation occurs are reported, with monitoring under different options and a qualitative comparison of results based on a sufficient number of check points.

Keywords: Characteristic straight line method, dynamic height, landslides, orthometric height, systematic errors.

63 Real Time Classification of Political Tendency of Twitter Spanish Users based on Sentiment Analysis

Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic

Abstract:

What people say on social media has turned into a rich source of information for understanding social behavior. Specifically, the growing use of Twitter for political communication has created great opportunities to know the opinion of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country, which has led to an increasing body of research on this topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. Unlike them, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by analyzing Twitter users' political tendency. To this end, a new strategy, named the Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing users' tweets by discovering their sentiment (positive, negative or neutral) and classifying them according to the political party they support; from this individual political tendency, the global political prediction for each party is calculated. For the sentiment analysis, two different strategies are proposed: one based on Positive and Negative word Matching (PNM) and a second based on a Neural Network Strategy (NNS). The complete TAS strategy has been implemented in a Big Data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy at analyzing tweet sentiment. In addition, this research analyzes the viability of the TAS strategy for obtaining the global trend in a political context made up of multiple parties, with an error lower than 23%.
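
A minimal sketch of the PNM idea: score a tweet by matching its tokens against positive and negative lexicons and take the sign of the difference. The tiny Spanish word lists are illustrative placeholders, not the paper's resources.

POSITIVE = {"bueno", "excelente", "apoyo", "mejor", "gracias"}
NEGATIVE = {"malo", "corrupto", "peor", "fracaso", "mentira"}

def pnm_sentiment(tweet: str) -> str:
    """Positive and Negative word Matching: sign of (positive hits - negative hits)."""
    tokens = tweet.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(pnm_sentiment("excelente discurso, gracias"))   # positive
print(pnm_sentiment("otro fracaso del gobierno"))     # negative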

Keywords: Political tendency, prediction, sentiment analysis, Twitter.

62 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments

Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea

Abstract:

The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach that uses linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on the FutureLearn MOOC platform. Additionally, we tackle the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, treating sentences as plain sequences; however, human language also has structure. In this research, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models that take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA), and additionally explore different word-level encoding functions. We have implemented these methods on our MOOC dataset, as well as on a public sentiment analysis dataset that is further used to cross-examine the models' results.

Keywords: Deep learning, data mining, gender prediction, MOOCs.

61 Semantic Enhanced Social Media Sentiments for Stock Market Prediction

Authors: K. Nirmala Devi, V. Murali Bhaskaran

Abstract:

Traditional document representation for classification follows the Bag of Words (BoW) approach to represent term weights. The conventional method uses the Vector Space Model (VSM) to exploit the statistical information of terms in the documents, but fails to address the semantic information and the order of the terms present in the documents; the phrase-based approach follows the order of the terms, yet not the semantics behind the words. Therefore, a semantic-concept-based approach is used in this paper to enhance the semantics by incorporating ontology information. This paper proposes a novel method to forecast the intraday directional movement of stock market prices based on sentiments from Twitter and Money Control news articles. Stock market forecasting is a very difficult and highly complicated task, because the market is affected by many factors such as economic conditions, political events and investor sentiment, and stock market series are generally dynamic, nonparametric, noisy and chaotic by nature. Sentiment analysis combined with the wisdom of crowds can automatically compute the collective intelligence of future performance in many areas, such as stock markets, box office sales and election outcomes. The proposed method utilizes collective sentiments for the stock market to predict the directional movements of stock prices; as shown by a Granger causality test, the collective sentiments in the above social media have powerful predictive value for the up/down direction of stock prices.
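
As a hedged sketch of the final step, statsmodels can test whether a daily sentiment series Granger-causes a returns (direction) series; the two synthetic series below are placeholders for the paper's Twitter and news sentiment data.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
sentiment = rng.normal(size=250)                       # placeholder daily sentiment
# Construct returns that lag sentiment by one day, plus noise, for illustration.
returns = 0.5 * np.roll(sentiment, 1) + rng.normal(scale=0.5, size=250)

data = np.column_stack([returns, sentiment])           # column 0: effect, column 1: cause
res = grangercausalitytests(data, maxlag=2, verbose=False)
for lag, (tests, _) in res.items():
    print(f"lag {lag}: ssr F-test p-value = {tests['ssr_ftest'][1]:.4f}")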

Keywords: Bag of Words, Collective Sentiments, Ontology, Semantic relations, Sentiments, Social media, Stock Prediction, Twitter, Vector Space Model, wisdom of crowds.

60 Influence of Drought on Yield and Yield Components in White Bean

Authors: Gholamreza Habibi

Abstract:

In order to study seed yield and seed yield components in bean under reduced irrigation conditions and to assess the drought tolerance of genotypes, 15 lines of white bean were evaluated in two separate randomized complete block designs with three replications, under stress and non-stress conditions. Analysis of variance showed significant differences among varieties for the traits under study, indicating the existence of genetic variation among varieties. The results indicate that drought stress reduced seed yield, number of seeds per plant, biological yield and number of pods in white bean. Under non-stress conditions, yield was highly correlated with biological yield, whereas under stress conditions it was highly correlated with harvest index. Stepwise regression showed that selection can be based on biological yield, harvest index, number of seeds per pod, seed length and 100-seed weight. Path analysis showed that the highest positive direct effect was that of biological yield under non-stress conditions and of harvest index under stress conditions. Factor analysis was performed for the stress and non-stress conditions, and four factors explained more than 76 percent of the total variation. We used several selection indices, namely the Stress Susceptibility Index (SSI), Geometric Mean Productivity (GMP), Mean Productivity (MP), Stress Tolerance Index (STI) and Tolerance Index (TOL), to study the drought tolerance of genotypes; we found that the best indices for selecting tolerant genotypes were STI, GMP and MP, which showed the greatest correlations with seed yield under both stress and non-stress conditions. In the classification of genotypes based on phenotypic characteristics using cluster analysis (UPGMA), all lines were classified into five separate groups under stress and non-stress conditions.
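
For reference, the named selection indices have standard closed forms in the drought tolerance literature; the sketch below computes them for placeholder yields under non-stress (Yp) and stress (Ys) conditions, which are not the study's data.

import numpy as np

Yp = np.array([3.2, 2.8, 3.5, 2.4])   # placeholder yields, non-stress (t/ha)
Ys = np.array([2.1, 1.2, 2.6, 1.0])   # placeholder yields, stress (t/ha)

SI = 1 - Ys.mean() / Yp.mean()         # stress intensity over the trial
TOL = Yp - Ys                          # Tolerance Index
MP = (Yp + Ys) / 2                     # Mean Productivity
GMP = np.sqrt(Yp * Ys)                 # Geometric Mean Productivity
SSI = (1 - Ys / Yp) / SI               # Stress Susceptibility Index
STI = (Yp * Ys) / Yp.mean() ** 2       # Stress Tolerance Index

for name, idx in [("TOL", TOL), ("MP", MP), ("GMP", GMP), ("SSI", SSI), ("STI", STI)]:
    print(name, np.round(idx, 2))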

Keywords: Cluster analysis, factor analysis, path analysis, selection index, White bean

59 International Tourists’ Travel Motivation by Push-Pull Factors and the Decision Making for Selecting Thailand as Destination Choice

Authors: Siripen Yiamjanya, Kevin Wongleedee

Abstract:

This research paper aims to identify the travel motivations, by push and pull factors, that affected the decision making of international tourists in selecting Thailand as their destination choice. A total of 200 international tourists who traveled to Thailand during January and February 2014 were used as the sample in this study. A questionnaire, conducted in Bangkok, was employed as the data collection tool. The list consisted of 30 attributes representing both psychological factors, as push-based factors, and destination factors, as pull-based factors. Mean and standard deviation were used to find the top ten travel motives that were important determinants in the respondents' decision to select Thailand as their destination choice. The findings revealed that the top ten travel motivations influencing international tourists to select Thailand as their destination choice were: [i] getting experience in a foreign land; [ii] Thai food; [iii] learning a new culture; [iv] relaxing in a foreign land; [v] wanting to learn new things; [vi] interest in Thai culture and traditional markets; [vii] escaping from the same daily life; [viii] enjoying activities; [ix] adventure; and [x] good weather. Classification into push-based and pull-based motives suggested that getting experience in a foreign land was the most important push motive for international tourists to travel, while Thai food was the most significant pull motive. Discussion and suggestions for the tourism industry of Thailand are also provided.

Keywords: Decision Making, Destination Choice, International Tourist, Pull Factor, Push Factor, Thailand, Travel Motivation.

58 Experimental Investigation on Geosynthetic-Reinforced Soil Sections via California Bearing Ratio Test

Authors: S. Abdi Goudazri, R. Ziaie Moayed, A. Nazeri

Abstract:

Loose soils normally have weak bearing capacity due to their structural nature, and when exposed to heavy traffic loads they fail in most cases. To tackle this issue, geotechnical engineers have come up with different approaches, one of which is the use of geosynthetic-reinforced soil-aggregate systems. As these polymeric reinforcements offer notable economic and environmentally friendly features, they have become widespread in practice during the last decades. The present research investigates the efficiency of four different types of these reinforcements in increasing the bearing capacity of two-layered soil sections, using a series of California Bearing Ratio (CBR) tests. The studied sections comprise a 10 cm-thick layer of no. 161 Firouzkooh sand (weak subgrade) and a 10 cm-thick layer of compacted aggregate material (base course), classified as SP and GW according to the Unified Soil Classification System (USCS), respectively. The aggregate layer was compacted to a relative density (Dr) of 95% at the optimum water content (Wopt) of 6.5%. The applied reinforcements included two kinds of geocomposites (types A and B), a geotextile and a geogrid, embedded at the interface of the lower and upper layers of the soil-aggregate system. As the standard CBR mold was not of appropriate height for this study, the mold used for soaked CBR tests was utilized. To compare the stress-settlement behavior of the studied specimens, CBR values at penetrations of 2.5 mm and 5 mm were considered. The results demonstrated 21% and 24.5% increases in CBR value in the presence of geocomposite type A and the geogrid, respectively. On the other hand, the effect of both the geotextile and geocomposite type B on CBR values was generally insignificant in this research.

Keywords: Geosynthetics, geogrid, geotextile, CBR test, increasing bearing capacity.

57 Comparative Correlation Investigation of Polynuclear Aromatic Hydrocarbons (PAHs) in Soils of Different Land Use: Sources Evaluation Perspective

Authors: O. Onoriode Emoyan, E. Eyitemi Akporhonor, Charles Otobrise

Abstract:

Polycyclic Aromatic Hydrocarbons (PAHs) are formed mainly as a result of the incomplete combustion of organic materials during industrial or domestic activities, or through natural occurrence. Their toxicity and their contamination of terrestrial and aquatic ecosystems have been established; however, previous research, with limited validity indices, has focused on PAH isomer pair ratios of variable physicochemical properties for source identification. The objective of this investigation was to determine the empirical validity of the Pearson Correlation Coefficient (PCC) and Cluster Analysis (CA) in PAH source identification in soil samples of different land uses. To this end, 16 PAHs, grouped as Endocrine Disrupting Substances (EDSs), were determined seasonally in top and sub soils at 10 sampling stations. The PAHs were determined using a Varian 300 gas chromatograph interfaced with a flame ionization detector; the instruments and reagents used were of standard and chromatographic grades, respectively. The PCC and CA results showed that the classification of PAHs into the pyrolytic and petrogenic organics used as source signatures depends on the predominant PAHs in the environmental matrix. The distribution of PAHs in the studied stations revealed the presence of trace quantities of the vast majority of the sixteen PAHs, which may ultimately inhibit authentication of the actual source signature. Therefore, the type and extent of bacterial metabolism, transformation products/substrates, and environmental factors such as salinity, pH, oxygen concentration, nutrients, light intensity, temperature, co-substrates and the environmental medium are recommended as factors to be considered when evaluating possible sources of PAHs.
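
A minimal sketch of the two statistical tools named above, applied to a placeholder station-by-PAH concentration matrix: a Pearson correlation between two compounds across stations, and an average-linkage (UPGMA-style) cluster analysis grouping the stations by PAH profile.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
conc = rng.gamma(2.0, 1.0, size=(10, 16))   # 10 stations x 16 PAHs, placeholder data

# Pearson correlation between two example compounds across stations.
r, p = pearsonr(conc[:, 0], conc[:, 1])
print(f"PCC(PAH_1, PAH_2) = {r:.2f} (p = {p:.2f})")

# Average-linkage cluster analysis grouping stations by PAH profile.
Z = linkage(conc, method="average")
print("station clusters:", fcluster(Z, t=3, criterion="maxclust"))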

Keywords: Comparative correlation, kinetically-favored PAHs, polynuclear aromatic hydrocarbons, thermodynamically-favored PAHs, sources evaluation.

56 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation serves as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y that are unrelated, or only weakly related, to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), requires only a simple modification of the common normalizing flow framework, while significantly improving the interpretability of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject ID, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
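The following PyTorch sketch outlines the AP-CDE objective as described above: a flow maps y to z = [zP, zN], with the prior on zP centred on the posterior summary from the simple predictive model and a standard normal prior on zN. The toy affine flow, the dimensions, and all names are assumptions made for illustration; this is not the authors' released implementation, and normalizing constants are dropped.

```python
# Conceptual sketch of the AP-CDE loss: change-of-variables likelihood
# with an augmented-posterior prior on the split latent z = [zP, zN].
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """Toy invertible map y -> z (a single learned elementwise affine)."""
    def __init__(self, dim):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, y):
        z = (y - self.shift) * torch.exp(-self.log_scale)
        log_det = -self.log_scale.sum()  # log|det dz/dy|
        return z, log_det

def ap_cde_nll(flow, y, zP_target, dP):
    """Negative log-likelihood (constants dropped): zP is pulled toward
    the predictive model's posterior summary, zN toward N(0, I)."""
    z, log_det = flow(y)
    zP, zN = z[:, :dP], z[:, dP:]
    log_p_zP = -0.5 * ((zP - zP_target) ** 2).sum(dim=1)  # supervised part
    log_p_zN = -0.5 * (zN ** 2).sum(dim=1)                # x-independent part
    return -(log_p_zP + log_p_zN + log_det).mean()

# Hypothetical shapes: 64-dim flattened responses, 2-dim supervised latent
y = torch.randn(32, 64)
zP_target = torch.randn(32, 2)  # e.g. logits from a logistic regression on x
flow = AffineFlow(64)
loss = ap_cde_nll(flow, y, zP_target, dP=2)
loss.backward()
print(float(loss))
```

In a full model the affine map would be replaced by a deep invertible flow; the point of the sketch is that only the prior on zP changes relative to a standard NF, which is the "simple modification" the abstract refers to.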

Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.

55 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches to Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches: RNNs are well suited to sequence modeling, whilst CNNs are suited to the extraction of n-gram features through their filters and can learn a range of relations via the pooling mechanism. We combine these strengths of RNNs and CNNs in a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which a first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
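A minimal PyTorch sketch of the described architecture follows: a shared convolutional layer produces phrase representations, a Bi-LSTM encodes them, and relation features between premise and hypothesis are taken at both levels. The hyper-parameters are illustrative assumptions, and the relation vectors are combined here by simple concatenation rather than the paper's attention mechanism over the Bi-LSTM outputs.

```python
# Sketch: CNN phrase encoder + Bi-LSTM sentence encoder with
# premise-hypothesis relation features at both levels.
import torch
import torch.nn as nn

class SentenceRelationRTE(nn.Module):
    def __init__(self, vocab, emb=100, channels=128, hidden=128, classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        # phrase-level relation (2*channels) + sentence-level relation (8*hidden)
        self.out = nn.Linear(2 * channels + 8 * hidden, classes)

    def encode(self, tokens):
        x = self.embed(tokens).transpose(1, 2)              # (B, emb, T)
        phrases = torch.relu(self.conv(x)).transpose(1, 2)  # (B, T, channels)
        states, _ = self.lstm(phrases)                      # (B, T, 2*hidden)
        return phrases.max(dim=1).values, states.max(dim=1).values

    def forward(self, premise, hypothesis):
        p_phr, p_sent = self.encode(premise)
        h_phr, h_sent = self.encode(hypothesis)
        rel_phrase = torch.cat([p_phr - h_phr, p_phr * h_phr], dim=1)
        rel_sentence = torch.cat(
            [p_sent, h_sent, p_sent - h_sent, p_sent * h_sent], dim=1)
        return self.out(torch.cat([rel_phrase, rel_sentence], dim=1))

model = SentenceRelationRTE(vocab=20000)
premise = torch.randint(0, 20000, (8, 25))
hypothesis = torch.randint(0, 20000, (8, 25))
print(model(premise, hypothesis).shape)  # torch.Size([8, 3]): 3 SNLI classes
```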

Keywords: Deep neural models, natural language inference, recognizing textual entailment, sentence-to-sentence relation.

54 Crude Glycerol Affects Canine Sperm Motility: Computer Assisted Semen Analysis in vitro

Authors: P. Massanyi, L. Kichi, T. Slanina, E. Kolesar, J. Danko, N. Lukac, E. Tvrda, R. Stawarz, A. Kolesarova

Abstract:

The target of this study was to analyze the impact of crude glycerol on canine spermatozoa motility, morphology, viability, and membrane integrity. The experiments were performed in vitro on semen from 5 large dog breeds. The dogs were typical representatives of large breeds, came from healthy rearing, were regularly vaccinated, and were integrated into further breeding. Semen was collected at the owners' premises and at the veterinary clinic; the experiments were subsequently performed at the Department of Animal Physiology of the SUA in Nitra. Spermatozoa motility was evaluated using a CASA analyzer (SpermVision™, Minitub, Germany) at temperatures of 5°C and 37°C for 5 hours, with 13 motility parameters evaluated. Overall, crude glycerol had a negative effect on spermatozoa motility. Morphological analysis was performed using Hancock staining, and the preparations were evaluated at 1000x magnification using classification tables of morphologically changed spermatozoa. The data clearly showed the highest numbers of morphologically changed spermatozoa in the experimental groups (namely twisted tails, tail torso, and tail coiling). Among acrosome alterations, swollen acrosomes, removed acrosomes, and acrosomes with undulated membranes were detected. The effect of crude glycerol on spermatozoa membrane integrity was also analyzed: the highest crude glycerol concentration significantly affected spermatozoa integrity. The results of this study show that crude glycerol affects spermatozoa motility, viability, and membrane integrity, and that the detected changes are related to crude glycerol concentration, temperature, and incubation time.
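For context, the Python sketch below shows how a few of the standard CASA motility parameters (VCL, VSL, and LIN) are derived from a tracked sperm-head trajectory; the commercial SpermVision software computes these and the remaining parameters internally. The trajectory, frame rate, and scale used here are hypothetical.

```python
# Illustrative derivation of three common CASA parameters from a
# tracked head trajectory (positions in micrometres, hypothetical data).
import numpy as np

def casa_parameters(xy, fps=60.0):
    """xy: (n_frames, 2) head positions; returns VCL, VSL, LIN."""
    xy = np.asarray(xy, dtype=float)
    duration = (len(xy) - 1) / fps
    steps = np.diff(xy, axis=0)
    vcl = np.linalg.norm(steps, axis=1).sum() / duration  # curvilinear velocity
    vsl = np.linalg.norm(xy[-1] - xy[0]) / duration       # straight-line velocity
    lin = vsl / vcl if vcl > 0 else 0.0                   # linearity, LIN = VSL/VCL
    return vcl, vsl, lin

track = np.cumsum(np.random.default_rng(1).normal(0, 1.5, size=(60, 2)), axis=0)
vcl, vsl, lin = casa_parameters(track)
print(f"VCL={vcl:.1f} um/s  VSL={vsl:.1f} um/s  LIN={lin:.2f}")
```

A drop in VCL or LIN across glycerol concentrations is the kind of dose-related motility change the abstract reports.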

Keywords: Dog, semen, spermatozoa, acrosome, glycerol, CASA, viability.

53 The Loess Regression Relationship Between Age and BMI for both Sydney World Masters Games Athletes and the Australian National Population

Authors: Joe Walsh, Mike Climstein, Ian Timothy Heazlewood, Stephen Burke, Jyrki Kettunen, Kent Adams, Mark DeBeliso

Abstract:

Thousands of masters athletes participate quadrennially in the World Masters Games (WMG), yet this cohort of athletes remains proportionately under-investigated. Given the growing global obesity pandemic and the benefits of physical activity across the lifespan, the BMI trends of this unique population were of particular interest. The nexus between health, physical activity and aging is complex and has attracted much interest in recent times, due to the realization that a multifaceted approach is necessary to counteract the obesity pandemic. By investigating age-based trends within a population adhering to competitive sport at older ages, further insight might be gleaned to assist in understanding one of the many factors influencing this relationship. BMI was derived from data gathered on a total of 6,071 masters athletes (51.9% male, 48.1% female) aged 25 to 91 years (mean = 51.5, s = ±9.7) competing at the Sydney World Masters Games (2009). Using linear and loess regression, it was demonstrated that the usual tendency for the prevalence of higher BMI to increase with age was reversed in this sample. This reversal was repeated in both the male-only and female-only subsets of the sample, indicating the possibility of an improved BMI profile with increasing age both for the sample as a whole and for these subgroups. This evidence of improved classification in one index of health (reduced BMI) for masters athletes, when compared to the general population, implies either that this index of health improves with aging due to adherence to sport, or possibly that a reduced BMI is advantageous and contributes to this cohort adhering (or being attracted) to masters sport at older ages.
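As a sketch of the loess technique referred to above, the Python snippet below fits a locally weighted regression of BMI on age with statsmodels' lowess. The synthetic data are shaped only to echo the reported direction of the trend (BMI not rising with age in the athlete sample) and are not the study's measurements.

```python
# Minimal loess (locally weighted) regression of BMI on age.
# Data are synthetic, constructed only to show a mild decline with age.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(42)
age = rng.uniform(25, 91, size=500)
bmi = 26.0 - 0.02 * (age - 25) + rng.normal(0, 2.0, size=500)

smoothed = lowess(bmi, age, frac=0.5)  # returns (age, fitted BMI) pairs, sorted by age
print(smoothed[:5])                    # fitted curve at the youngest ages
```

Unlike a single linear fit, the loess curve can reveal whether any reversal of the age-BMI trend holds across the whole age range or only within part of it.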

Keywords: Aging, masters athlete, Quetelet Index, sport.
