Search results for: rough sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1409

1259 Ultrastructural Study of Surface Topography of Trematode Parasites of Domestic Buffalo (Bubalus bubalis) in Udaipur, India

Authors: Gayatri Swarnakar

Abstract:

Paramphistomiasis and fascioliasis are prevalent in Udaipur, India, owing to trematode parasites in the rumen and liver of the domestic buffalo (Bubalus bubalis). Trematode parasites, namely Paramphistomum cervi, Gastrothylax crumenifer, Cotylophoron cotylophorum, Orthocoelium scoliocoelium, Fasciola hepatica and Fasciola gigantica, were collected from the infected rumen and liver of freshly slaughtered buffaloes (Bubalus bubalis) at a local zoo abattoir in Udaipur. Live trematodes were washed in normal saline, fixed in 0.2 M cacodylate fixative, post-fixed in osmium tetroxide, dehydrated, dried, sputter-coated with gold and observed under a scanning electron microscope (SEM). The surface tegument of Paramphistomum cervi was spineless, with transverse folds made discontinuous by ridges and grooves. Two types of sensory papillae, knob-like and button-shaped, were also observed. The oral opening of Cotylophoron cotylophorum was surrounded by wrinkled and ridged tegument forming concentric elevated rings. The tegument of Cotylophoron cotylophorum in the acetabulum region was rough, with a honeycomb-like structure. The genital sucker of this worm was surrounded by a tyre-shaped elevation of the tegument. Orthocoelium scoliocoelium showed circular, concentric rings of tegumental folds around the oral sucker; the genital pore had knob-like papillae with radial tegumental folds. The surface topography of Fasciola gigantica and Fasciola hepatica was rough owing to different types of spines, three types of sensory papillae, transverse folds and grooves. The oral and ventral suckers were spineless and covered with thick rims of transverse folds. The genital pore showed small scattered spines.
The present work provides knowledge of the ultrastructural characteristics of these trematode parasites to inform chemotherapeutic measures and to help develop a suitable strategy for their eradication from the domestic buffalo (Bubalus bubalis).

Keywords: Domestic buffalo, tegument, trematode parasites, ultrastructure

Procedia PDF Downloads 276
1258 Water Detection in Aerial Images Using Fuzzy Sets

Authors: Caio Marcelo Nunes, Anderson da Silva Soares, Gustavo Teodoro Laureano, Clarimar Jose Coelho

Abstract:

This paper presents a methodology for pixel recognition in aerial images using the fuzzy $c$-means algorithm. This algorithm is an alternative for recognizing areas while accounting for uncertainty and imprecision. Traditional clustering techniques are used to recognize multispectral images of the earth's surface; they work well for sharply defined borders that can be easily discretized. In the real world, however, many areas have uncertain and imprecise boundaries, and these can be mapped by clustering algorithms based on fuzzy sets. The methodology presented in this work is applied to multispectral images obtained from the Landsat-5/TM satellite. The pixels are clustered using the fuzzy $c$-means algorithm; a subsequent classification step then identifies the type of surface according to the patterns obtained from the spectral response of the image surface. The classes considered are exposed soil, moist soil, vegetation, turbid water and clean water. The results show that fuzzy clustering identifies the actual type of the earth's surface.
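The clustering step described above can be sketched in a few lines. The following is a minimal fuzzy c-means implementation in Python with NumPy, not the authors' code; the fuzzifier m = 2 and the convergence tolerance are conventional defaults.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per pixel
    for _ in range(max_iter):
        um = u ** m
        # weighted means of the data give the cluster centers
        centers = um.T @ X / um.sum(axis=0)[:, None]
        # Euclidean distance from every point to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u
```

Applied to multispectral pixels, each row of the membership matrix gives the degree to which a pixel belongs to each class; this soft assignment is what lets fuzzy clustering represent uncertain boundaries such as moist soil versus turbid water.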

Keywords: aerial images, fuzzy clustering, image processing, pattern recognition

Procedia PDF Downloads 440
1257 Two-Photon-Exchange Effects in the Electromagnetic Production of Pions

Authors: Hui-Yun Cao, Hai-Qing Zhou

Abstract:

High-precision measurements and experiments play increasingly important roles in particle physics and atomic physics. To analyse precise experimental data sets, correspondingly precise and reliable theoretical calculations are necessary. The form factors of constituents such as the pion and the proton remain attractive issues in current Quantum Chromodynamics (QCD). In this work, the two-photon-exchange (TPE) effects in ep→enπ⁺ at small -t are discussed within a hadronic model. Under the pion-dominance approximation and in the limit mₑ→0, the TPE contribution to the amplitude can be described by a scalar function. We calculate the TPE contributions to the amplitude and to the unpolarized differential cross section, with only the elastic intermediate state considered. The results show that the TPE corrections to the unpolarized differential cross section range from about -4% to -20% at Q²=1-1.6 GeV². After applying the TPE corrections to the experimental data sets for the unpolarized differential cross section, we analyze the TPE corrections to the separated cross sections σ(L,T,LT,TT). We find that the TPE corrections (at Q²=1-1.6 GeV²) to σL range from about -10% to -30%, those to σT are about 20%, and those to σ(LT,TT) are much larger. From these analyses, we conclude that the TPE contributions in ep→enπ⁺ at small -t are important for extracting the separated cross sections σ(L,T,LT,TT) and the electromagnetic form factor of π⁺ in experimental analyses.

Keywords: differential cross section, form factor, hadronic, two-photon

Procedia PDF Downloads 101
1256 Design Elements: Examining Product Design Attributes That Make Sweets Appear More Delicious to Foreign Patrons

Authors: Kazuko Sakamoto, Keiichiro Kawarabayashi, Yoji Kitani

Abstract:

Japanese sweets are one of the important elements of the Cool Japan strategy. In this research, we investigated what kinds of sweets Chinese tourists like. What people generally eat is shaped by culture, values, and business practice, so products adapted to local tastes are what is usually sold. When travelling, however, people seek out what their own country does not have. How far, then, should Chinese tastes be incorporated into a design? Here, the design attributes (colour and form) that lead sweets to be perceived as "delicious" were clarified by rough set theory. As a result, the differences between Chinese and Japanese tastes became clear.
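Rough set theory classifies objects through lower and upper approximations of a target set under an indiscernibility relation induced by the chosen attributes. The following is a minimal, hypothetical sketch (not the study's actual data or attributes), where sweets described by colour and shape are approximated against a set of sweets rated "delicious":

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """Rough-set lower/upper approximation of `target` w.r.t. `attrs`.
    objects: dict name -> dict of attribute values; target: set of names."""
    # equivalence classes: objects indiscernible on the chosen attributes
    classes = defaultdict(set)
    for name, desc in objects.items():
        classes[tuple(desc[a] for a in attrs)].add(name)
    lower, upper = set(), set()
    for eq in classes.values():
        if eq <= target:
            lower |= eq          # class entirely inside the target set
        if eq & target:
            upper |= eq          # class overlaps the target set
    return lower, upper
```

Objects in the upper but not the lower approximation form the boundary region: their attributes cannot decide membership, which is exactly the kind of ambiguity rough sets expose in preference data.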

Keywords: design attribute, international comparison, taste by appearance

Procedia PDF Downloads 394
1255 A Comprehensive Methodology for Voice Segmentation of Large Sets of Speech Files Recorded in Naturalistic Environments

Authors: Ana Londral, Burcu Demiray, Marcus Cheetham

Abstract:

Speech recording is a methodology used in many studies in cognitive and behaviour research. Modern advances in digital equipment have made it possible to continuously record hours of speech in naturalistic environments and to build rich sets of sound files. Speech analysis can then extract multiple features from these files for different research scopes in language and communication. However, tools for analysing a large set of sound files and automatically extracting relevant features are often inaccessible to researchers who are not familiar with programming languages. Manual analysis is a common alternative, at a high cost in time and efficiency. In the analysis of long sound files, the first step is voice segmentation, i.e. detecting and labelling segments containing speech. We present a comprehensive methodology to support researchers in voice segmentation as the first step in the analysis of a large set of sound files. Praat, an open-source software package, is suggested as a tool to run a voice detection algorithm, label segments and files, and extract other quantitative features across a folder structure containing a large number of sound files. We validated our methodology with a set of 5000 sound files collected in the daily life of a group of voluntary participants aged over 65. A smartphone was used to collect sound via the Electronically Activated Recorder (EAR), an app programmed to record 30-second sound samples randomly distributed throughout the day. Results demonstrated that automatic segmentation and labelling of files containing speech segments was 74% faster than manual analysis performed by two independent coders. Furthermore, the methodology allows manual adjustment of voiced segments with visualisation of the sound signal, as well as automatic extraction of quantitative information on speech.
In conclusion, we propose a comprehensive methodology for voice segmentation, to be used by researchers who have to work with large sets of sound files and are not familiar with programming tools.
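The voice-detection step is performed in Praat in the methodology above. Purely as an illustration of the underlying idea, the following Python sketch labels a signal's frames as voiced whenever their RMS energy exceeds a threshold; the 30 ms frame length and -35 dB threshold are illustrative assumptions, not Praat's algorithm.

```python
import numpy as np

def detect_voiced_segments(signal, rate, frame_ms=30, threshold_db=-35.0):
    """Label frames as voiced when RMS energy exceeds a threshold (dB re full scale).
    Returns a list of (start_seconds, end_seconds) segments."""
    hop = int(rate * frame_ms / 1000)
    n_frames = len(signal) // hop
    segments, start = [], None
    for i in range(n_frames):
        frame = signal[i * hop:(i + 1) * hop]
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12   # epsilon avoids log(0)
        voiced = 20 * np.log10(rms) > threshold_db
        if voiced and start is None:
            start = i * hop / rate                   # segment opens
        elif not voiced and start is not None:
            segments.append((start, i * hop / rate)) # segment closes
            start = None
    if start is not None:
        segments.append((start, n_frames * hop / rate))
    return segments
```

In practice a detector like this is only a first pass; as in the methodology above, the resulting boundaries would still be adjusted manually against a visualisation of the signal.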

Keywords: automatic speech analysis, behavior analysis, naturalistic environments, voice segmentation

Procedia PDF Downloads 258
1254 MB-SLAM: A SLAM Framework for Construction Monitoring

Authors: Mojtaba Noghabaei, Khashayar Asadi, Kevin Han

Abstract:

Simultaneous Localization and Mapping (SLAM) technology has recently attracted the attention of construction companies for real-time performance monitoring. To use SLAM effectively for construction performance monitoring, SLAM results should be registered to a Building Information Model (BIM). Registering SLAM to BIM can provide essential insights for construction managers to identify construction deficiencies in real time and ultimately reduce rework. Registering SLAM to BIM in real time can also boost the accuracy of SLAM, since SLAM can then use features from both images and 3D models. However, registering SLAM with the BIM in real time is a challenge. In this study, a novel SLAM platform named Model-Based SLAM (MB-SLAM) is proposed, which not only provides automated registration of SLAM and BIM but also improves the localization accuracy of the SLAM system in real time. The framework improves the accuracy of SLAM by aligning perspective features such as depth, vanishing points, and vanishing lines from the BIM to the SLAM system. It extracts depth features from a monocular camera's image and improves the localization accuracy of the SLAM system through a real-time iterative process. Initially, SLAM is used to calculate a rough camera pose for each keyframe. Next, each keyframe of the SLAM video sequence is registered to the BIM in real time by aligning the keyframe's perspective with the equivalent BIM view. The alignment method is based on perspective detection, which estimates vanishing lines and points by detecting straight edges in images. This process generates the associated BIM views from the keyframes' views. The calculated poses are then refined by a real-time gradient-descent-based iteration method. Two case studies were presented to validate MB-SLAM. The validation demonstrated promising results: SLAM was accurately registered to BIM, and the SLAM system's localization accuracy was significantly improved.
In addition, MB-SLAM achieved real-time performance in both indoor and outdoor environments. The proposed method can fully automate past approaches and generate as-built models aligned with BIM. The main contribution of this study is a SLAM framework for both research and commercial use that monitors construction progress and performance in a unified way. Through this platform, users can improve the accuracy of SLAM by providing a rough 3D model of the environment. MB-SLAM thus brings SLAM closer to practical use.

Keywords: perspective alignment, progress monitoring, SLAM, stereo matching

Procedia PDF Downloads 181
1253 Optimal Rest Interval between Sets in Robot-Based Upper-Arm Rehabilitation

Authors: Virgil Miranda, Gissele Mosqueda, Pablo Delgado, Yimesker Yihun

Abstract:

Muscular fatigue affects the muscle activation needed to produce the desired clinical outcome. Integrating optimal muscle relaxation periods into health care rehabilitation protocols is important to maximize the efficiency of therapy. In this study, four muscle relaxation periods (30, 60, 90, and 120 seconds) and their effectiveness in producing consistent activation of the biceps brachii muscle between sets of an elbow flexion and extension task were investigated in a sample of 10 subjects with no disabilities. The same resting periods were then used in a controlled exoskeleton-based exercise with a sample of 5 subjects, with similar results. On average, the muscle activity of the biceps brachii decreased by 0.3% after resting for 30 seconds, and it increased by 1.25%, 0.76%, and 0.82% for muscle relaxation periods of 60, 90, and 120 seconds, respectively. These preliminary results suggest that a muscle relaxation period of about 60 seconds is needed for optimal continuous muscle activation within rehabilitation regimens. Robot-based rehabilitation is well suited to producing repetitive tasks at the right intensity, and knowing the optimal resting period will make the automation more effective.

Keywords: rest intervals, muscle biceps brachii, robot rehabilitation, muscle fatigue

Procedia PDF Downloads 155
1252 Cellular Components of the Hemal Node of Egyptian Cattle

Authors: Amira E. Derbalah, Doaa M. Zaghloul

Abstract:

Ten clinically healthy hemal nodes were collected from bulls aged 2-3 years. Light microscopy revealed a capsule of connective tissue, consisting mainly of collagen fibers, surrounding the hemal node; numerous erythrocytes were found in the wide subcapsular sinus under the capsule. The parenchyma of the hemal node was divided into cortex and medulla. Diffuse lymphocytes and lymphoid follicles with germinal centers were the main components of the cortex, while the medulla contained a wide medullary sinus, diffuse lymphocytes and a few lymphoid nodules. The area occupied by lymph nodules was larger than that occupied by the non-nodular structures of lymphoid cords and blood sinusoids. Electron microscopy revealed the cellular components of the hemal node, including circulating erythrocytes intermingled with lymphocytes, plasma cells, mast cells, reticular cells, macrophages, megakaryocytes and the endothelial cells lining the blood sinuses. The lymphocytes were somewhat triangular, with cytoplasmic processes extending between adjacent erythrocytes; their nuclei were triangular to oval, lightly stained, with clear nuclear membrane indentation and clear nucleoli. The reticular cells were elongated, with cytoplasmic processes extending between adjacent lymphocytes; rough endoplasmic reticulum, ribosomes and a few lysosomes were seen in their cytoplasm, and the nucleus was elongated with less condensed chromatin. Plasma cells were oval to irregular in shape, with numerous dilated rough endoplasmic reticulum cisternae containing electron-lucent material occupying the whole cytoplasm and a few mitochondria; their nuclei were centrally located and oval, with heterochromatin marginated and often clumped near the nuclear membrane. Occasionally, megakaryocytes and mast cells were seen among the lymphocytes.
Megakaryocytes had a multilobulated nucleus and free ribosomes, often appearing as small aggregates in the cytoplasm, while mast cells had their characteristic electron-dense granules in the cytoplasm, along with a few electron-lucent granules. We conclude that the main function of the hemal node of cattle is the proliferation of lymphocytes; no role for plasma cells in erythrophagocytosis could be suggested.

Keywords: cattle, electron microscopy, hemal node, histology, immune system

Procedia PDF Downloads 372
1251 Healthcare Big Data Analytics Using Hadoop

Authors: Chellammal Surianarayanan

Abstract:

The healthcare industry generates large amounts of data driven by needs such as record keeping, physicians' prescriptions, medical imaging, sensor data, Electronic Patient Records (EPR), laboratories, pharmacies, etc. Healthcare data are so big and complex that they cannot be managed by conventional hardware and software. The complexity of healthcare big data arises from the large volume of data, the velocity with which the data accumulate, and their different varieties, such as structured, semi-structured and unstructured data. Despite this complexity, if the trends and patterns within the big data are uncovered and analyzed, higher-quality healthcare can be provided at lower cost. Hadoop is an open-source software framework for distributed processing of large data sets across clusters of commodity hardware using a simple programming model. The core components of Hadoop include the Hadoop Distributed File System, which offers a way to store large amounts of data across multiple machines, and MapReduce, which offers a way to process large data sets with a parallel, distributed algorithm on a cluster. The Hadoop ecosystem also includes various other tools, such as Hive (a SQL-like query language), Pig (a higher-level query language for MapReduce), HBase (a columnar data store), etc. This paper analyzes how healthcare big data can be processed and analyzed using the Hadoop ecosystem.
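Hadoop MapReduce jobs are typically written in Java or run through Hadoop Streaming, but the map → shuffle → reduce flow can be illustrated with a small local Python simulation. Counting diagnosis codes across patient records (the record format and the codes below are hypothetical) might look like:

```python
from itertools import groupby
from operator import itemgetter

def mapper(record):
    """Emit (diagnosis_code, 1) for each input line 'patient_id,diagnosis_code'."""
    _, code = record.split(",")
    yield code.strip(), 1

def reducer(code, counts):
    """Sum all counts for one diagnosis code."""
    yield code, sum(counts)

def run_job(records):
    # map phase: every record produces key-value pairs
    mapped = [kv for r in records for kv in mapper(r)]
    # shuffle phase: group pairs by key, as Hadoop does between map and reduce
    mapped.sort(key=itemgetter(0))
    out = {}
    for code, group in groupby(mapped, key=itemgetter(0)):
        for k, v in reducer(code, (c for _, c in group)):
            out[k] = v
    return out
```

On a real cluster, the map and reduce functions run in parallel across machines while HDFS supplies the input splits; the local simulation only shows the data flow.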

Keywords: big data analytics, Hadoop, healthcare data, towards quality healthcare

Procedia PDF Downloads 379
1250 An Optimized Association Rule Mining Algorithm

Authors: Archana Singh, Jyoti Agarwal, Ajay Rana

Abstract:

Data mining is an efficient technology for discovering patterns in large databases. Association rule mining techniques are used to find correlations between the various item sets in a database, and these correlations are used in decision making and pattern analysis. In recent years, the problem of finding association rules in large datasets has been addressed by many researchers. Various research papers on association rule mining (ARM) were first studied and analyzed to understand the existing algorithms. The Apriori algorithm is the basic ARM algorithm, but it requires many database scans. The DIC algorithm needs fewer database scans but uses a complex lattice data structure. The main focus of this paper is to propose a new optimized algorithm (the Friendly algorithm) and compare its performance with the existing algorithms. A data set is used to find frequent itemsets and association rules with both the existing algorithms and the proposed Friendly algorithm, and it has been observed that the proposed algorithm finds all the frequent itemsets and essential association rules with fewer database scans than the existing algorithms. The proposed algorithm uses optimized data structures, namely a graph and an adjacency matrix.
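As a reference point for the algorithms being compared, a minimal Apriori sketch in Python (a single in-memory pass per level, not the proposed Friendly algorithm) looks like this:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (as frozensets) mapped to their support counts."""
    transactions = [set(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = [frozenset([i]) for i in items]    # candidate 1-itemsets
    frequent = {}
    k = 1
    while current:
        # count support of each candidate (one "scan" per level)
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # join frequent k-itemsets into candidate (k+1)-itemsets
        keys = list(level)
        current = list({a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1})
        k += 1
    return frequent
```

Each while-loop iteration corresponds to one pass over the database, which is exactly the cost that DIC and the proposed algorithm aim to reduce.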

Keywords: association rules, data mining, dynamic item set counting, FP-growth, friendly algorithm, graph

Procedia PDF Downloads 389
1249 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis

Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski

Abstract:

The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, such as climate change, intelligent transport, and immigration studies. These developments call for better methods of delivering raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets, which have strict requirements for the effective handling of time series. The same approach and methodologies can also be applied to managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of the most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes in the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, a cloud-optimized file structure, and a customized, flexible content-access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolution, spatial subdivision, and value encoding. All the resolution levels are created using an appropriate generalization method, selected according to the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches. Analysis processes based on interactive visual exploration can be carried out effectively, as the resolution level closest to the visual scale can always be used. In the same way, statistical analysis can be carried out at the resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application.
The cloud service-based approach applied in the GeoCubes Finland repository enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of online analysis. The use of cloud-optimized file structures in data storage enables fast extraction of subareas. The access API allows the use of vector-formatted administrative areas and user-defined polygons as definitions of subareas for data retrieval. Administrative areas of the country at four levels are readily available from the GeoCubes platform. In addition to direct delivery of raster data, the service also supports a so-called virtual file format, in which only a small text file is downloaded first. The text file contains links to the raster content on the service platform. The actual raster data are downloaded on demand, for the spatial area and resolution level required at each stage of the application. Through the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository helps expert users introduce new kinds of interactive online analysis operations.
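The scale-matched access described above can be illustrated with a small sketch: given the display scale in metres per screen pixel, choose the pre-computed resolution level closest to it from below, so that roughly one raster cell maps to one screen pixel. The level list here is hypothetical, not GeoCubes Finland's actual configuration.

```python
def pick_resolution(display_m_per_px, levels=(1, 2, 5, 10, 20, 50, 100)):
    """Pick the coarsest pre-computed cell size (metres) that is still at
    least as fine as the display scale; fall back to the finest level when
    the display is more detailed than any stored resolution."""
    suitable = [lv for lv in levels if lv <= display_m_per_px]
    return max(suitable) if suitable else min(levels)
```

Because every level is pre-computed at ingestion time, the choice is a constant-time lookup, which is what keeps access times roughly constant regardless of the scale used in the application.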

Keywords: cloud service, geodata cube, multiresolution, raster geodata

Procedia PDF Downloads 104
1248 Frequency- and Content-Based Tag Cloud Font Distribution Algorithm

Authors: Ágnes Bogárdi-Mészöly, Takeshi Hashimoto, Shohei Yokoyama, Hiroshi Ishikawa

Abstract:

The spread of Web 2.0 has caused an explosion of user-generated content. Users can tag resources to describe and organize them. Tag clouds provide a rough impression of the relative importance of each tag within the overall cloud, in order to facilitate browsing among numerous tags and resources. The goal of our paper is to enrich the visualization of tag clouds. A font distribution algorithm is proposed that calculates a novel metric based on frequency and content and classifies tags into classes from this metric, based on a power-law distribution and percentages. The suggested algorithm has been validated and verified on the tag cloud of a real-world thesis portal.
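A sketch of the general idea in Python: combine frequency with a content score into one metric, then bucket tags into font-size classes by rank percentile. The particular combination used here (log-frequency weighted by a content score) and the equal-percentile bucketing are assumptions for illustration, not the paper's exact metric or its power-law-based classification.

```python
import math

def font_classes(tag_scores, n_classes=5):
    """Assign each tag a font class 0..n_classes-1 from a combined metric.
    tag_scores: dict tag -> (frequency, content_score in [0, 1])."""
    # combined metric: damped frequency, boosted by content relevance
    metric = {t: math.log1p(f) * (0.5 + 0.5 * c) for t, (f, c) in tag_scores.items()}
    ranked = sorted(metric, key=metric.get)      # ascending by metric
    classes = {}
    for rank, tag in enumerate(ranked):
        classes[tag] = rank * n_classes // len(ranked)  # equal-sized buckets
    return classes
```

The class index would then be mapped to a font size in the rendered cloud, so higher-metric tags appear larger.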

Keywords: tag cloud, font distribution algorithm, frequency-based, content-based, power law

Procedia PDF Downloads 476
1247 Linguistic Features for Sentence Difficulty Prediction in Aspect-Based Sentiment Analysis

Authors: Adrian-Gabriel Chifu, Sebastien Fournier

Abstract:

One of the challenges of natural language understanding is dealing with the subjectivity of sentences, which may express opinions and emotions that add layers of complexity and nuance. Sentiment analysis is a field that aims to extract and analyze these subjective elements from text, and it can be applied at different levels of granularity, such as document, paragraph, sentence, or aspect. Aspect-based sentiment analysis is a well-studied topic with many available data sets and models. However, there is no clear definition of what makes a sentence difficult for aspect-based sentiment analysis. In this paper, we explore this question by conducting an experiment with three data sets: "Laptops", "Restaurants", and "MTSC" (Multi-Target-dependent Sentiment Classification), as well as a merged version of these three data sets. We study the impact of domain diversity and syntactic diversity on difficulty. We use a combination of classifiers to identify the most difficult sentences and analyze their characteristics. We employ two ways of defining sentence difficulty. The first one is binary and labels a sentence as difficult if the classifiers fail to correctly predict the sentiment polarity. The second one is a six-level scale based on how many of the top five best-performing classifiers can correctly predict the sentiment polarity. We also define 9 linguistic features that, combined, aim to estimate difficulty at the sentence level.
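The two difficulty definitions can be stated precisely in a few lines of Python. One detail is an assumption here: the binary rule is read as "difficult when every classifier fails", and the six-level scale as the count of top-five classifiers that fail (0 = easiest, 5 = hardest).

```python
def difficulty_level(true_label, predictions):
    """Six-level scale: how many of the top-5 classifiers got the polarity wrong."""
    assert len(predictions) == 5
    return sum(1 for p in predictions if p != true_label)

def is_difficult(true_label, predictions):
    """Binary definition (one reading): difficult if no classifier is correct."""
    return all(p != true_label for p in predictions)
```

With these definitions, the linguistic features can be evaluated as predictors of the level returned by the scale.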

Keywords: sentiment analysis, difficulty, classification, machine learning

Procedia PDF Downloads 44
1246 The Effect of Different Strength Training Methods on Muscle Strength, Body Composition and Factors Affecting Endurance Performance

Authors: Shaher A. I. Shalfawi, Fredrik Hviding, Bjornar Kjellstadli

Abstract:

The main purpose of this study was to measure the effect of two different strength training methods on muscle strength, muscle mass, fat mass and endurance factors. Fourteen physical education students agreed to participate in this study. The participants were randomly divided into three groups: a traditional training group (TTG), a cluster training group (CTG) and a control group (CG). TTG consisted of 4 participants, aged (mean ± SD) 22.3 ± 1.5 years, body mass 79.2 ± 15.4 kg and height 178.3 ± 11.9 cm. CTG consisted of 5 participants, aged 22.2 ± 3.5 years, body mass 81.0 ± 24.0 kg and height 180.2 ± 12.3 cm. CG consisted of 5 participants, aged 22 ± 2.8 years, body mass 77 ± 19 kg and height 174 ± 6.7 cm. The participants underwent a hypertrophy strength training program twice a week for 8 weeks, consisting of 4 sets of 10 reps at 70% of one-repetition maximum (1RM) in the barbell squat and barbell bench press. CTG performed 2 x 5 reps with 10 s recovery between repetitions and 50 s recovery between sets, while TTG performed 4 sets of 10 reps with 90 s recovery between sets. Pre- and post-tests were administered to assess body composition (weight, muscle mass, and fat mass), 1RM (bench press and barbell squat) and a laboratory endurance test (Bruce protocol). The instruments used to collect the data were a Tanita BC-601 scale (Tanita, Illinois, USA), a Woodway treadmill (Woodway, Wisconsin, USA) and a Vyntus CPX breath-by-breath system (Jaeger, Hoechberg, Germany). Analysis was conducted on all measured variables, including time to peak VO2, peak VO2, heart rate (HR) at peak VO2, respiratory exchange ratio (RER) at peak VO2, and number of breaths per minute. The results indicate an increase in 1RM performance after 8 weeks of training. The change in 1RM squat was 30 ± 3.8 kg for TTG, 28.6 ± 8.3 kg for CTG and 10.3 ± 13.8 kg for CG. Similarly, the change in 1RM bench press was 9.8 ± 2.8 kg for TTG, 7.4 ± 3.4 kg for CTG and 4.4 ± 3.4 kg for CG.
The within-group analysis of the oxygen consumption measured during the incremental exercise indicated that TTG had a statistically significant increase only in RER, from 1.16 ± 0.04 to 1.23 ± 0.05 (P < 0.05). CTG had a statistically significant improvement in HR at peak VO2, from 186 ± 24 to 191 ± 12 beats per minute (P < 0.05), and in RER at peak VO2, from 1.11 ± 0.06 to 1.18 ± 0.05 (P < 0.05). Finally, CG had a statistically significant increase only in RER at peak VO2, from 1.11 ± 0.07 to 1.21 ± 0.05 (P < 0.05). The between-group analysis showed no statistically significant differences between the groups in any of the variables measured during the incremental exercise, including changes in muscle mass, fat mass, and weight (kg). The results indicate a similar effect of hypertrophy strength training on untrained subjects irrespective of the training method used. Because there were no notable changes in body-composition measures, the improvements in performance observed in all groups are most probably due to neuromuscular adaptation to training.

Keywords: hypertrophy strength training, cluster set, Bruce protocol, peak VO2

Procedia PDF Downloads 222
1245 Spatio-Temporal Data Mining with Association Rules for Lake Van

Authors: Tolga Aydin, M. Fatih Alaeddinoğlu

Abstract:

Throughout history, people have made estimates and inferences about the future from their past experiences. Developing information technologies and improvements in database management systems make it possible to extract useful information from the knowledge at hand for strategic decisions, and different methods have been developed for this purpose. Data mining by association rule learning is one such method. The Apriori algorithm, one of the best-known association rule learning algorithms, is not commonly applied to spatio-temporal data sets. However, it is possible to embed time and space features into the data sets and make the Apriori algorithm a suitable data mining technique for learning spatio-temporal association rules. Lake Van, the largest lake in Turkey, is a closed basin. This feature causes the volume of the lake to increase or decrease with changes in the amount of water it holds. In this study, evaporation, humidity, lake altitude, rainfall and temperature parameters recorded in the Lake Van region over the years are used by the Apriori algorithm, and a spatio-temporal data mining application is developed to identify overflows and newly formed soil regions (underflows) occurring in the coastal parts of Lake Van. Identifying the possible causes of overflows and underflows may help alert experts to take precautions and make the necessary investments.
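Embedding time and space features as ordinary items is the key step that makes Apriori applicable here. A sketch of building one transaction from a monthly station record follows; the feature names and bin edges are illustrative assumptions, not the study's actual discretization.

```python
def to_transaction(record, bins):
    """Turn one monthly station record into a set of categorical items,
    embedding space (station) and time (month) as ordinary items.
    bins: dict feature -> (low_edge, high_edge) for three-way binning."""
    items = {f"station={record['station']}", f"month={record['month']}"}
    for feature, edges in bins.items():
        value = record[feature]
        label = "low" if value < edges[0] else "mid" if value < edges[1] else "high"
        items.add(f"{feature}={label}")
    return items
```

Feeding such transactions into a standard Apriori implementation yields rules like "month=4 and rainfall=high → overflow", i.e. spatio-temporal association rules expressed in ordinary itemset form.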

Keywords: apriori algorithm, association rules, data mining, spatio-temporal data

Procedia PDF Downloads 343
1244 Different Sampling Schemes for Semi-Parametric Frailty Model

Authors: Nursel Koyuncu, Nihal Ata Tutkun

Abstract:

The frailty model is a survival model that takes into account unobserved heterogeneity when exploring the relationship between the survival of an individual and several covariates. In recent years, proposed survival models have become more complex, and this causes convergence problems, especially in large data sets. Therefore, selecting samples from these big data sets is very important for parameter estimation. In the sampling literature, some authors have defined new sampling schemes to predict the parameters correctly. To this end, we examine the effect of sampling design on the semi-parametric frailty model. We conducted a simulation study in R to estimate the parameters of the semi-parametric frailty model for different sample sizes and censoring rates under classical simple random sampling and ranked set sampling schemes. In the simulation study, we used as the population a data set recording 17260 male civil servants aged 40-64 years with complete 10-year follow-up. Time to death from coronary heart disease is treated as the survival time, and age and systolic blood pressure are used as covariates. We selected 1000 samples from the population using the different sampling schemes and estimated the parameters. From the simulation study, we conclude that the ranked set sampling design performs better than simple random sampling in each scenario.
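Ranked set sampling works as follows: in each cycle, for every rank r = 1..k, draw k units at random, order them by an easily observed concomitant variable, and keep only the unit holding rank r. The Python sketch below illustrates the scheme (using age as the ranking variable, by analogy with the study's covariates); it is not the simulation code, which was written in R.

```python
import random

def ranked_set_sample(population, set_size, cycles, rank_key, seed=0):
    """Ranked set sampling: per cycle, for each rank r = 0..k-1, draw k units
    at random, order them by the cheap concomitant `rank_key`, keep the r-th.
    Returns set_size * cycles sampled units."""
    rng = random.Random(seed)
    sample = []
    for _ in range(cycles):
        for r in range(set_size):
            drawn = rng.sample(population, set_size)  # draw a fresh set of k units
            drawn.sort(key=rank_key)                  # rank by the concomitant
            sample.append(drawn[r])                   # keep only the r-th ranked unit
    return sample
```

The extra ranking effort is what gives ranked set sampling its efficiency advantage over simple random sampling: each retained unit carries information about its position in the distribution.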

Keywords: frailty model, ranked set sampling, efficiency, simple random sampling

Procedia PDF Downloads 183
1243 Input and Interaction as Training for Cognitive Learning: Variation Sets Influence the Sudden Acquisition of Periphrastic estar 'to be' + verb + -ndo

Authors: Mary Rosa Espinosa-Ochoa

Abstract:

Some constructions appear suddenly in children’s speech and are productive from the beginning. These constructions are supported by others, previously acquired, with which they share semantic and pragmatic features. Thus, for example, the acquisition of the passive voice in German is supported by other constructions with which it shares the lexical verb sein (“to be”). This also occurs in Spanish, in the acquisition of the progressive aspectual periphrasis estar (“to be”) + verb root + -ndo (present participle), supported by locative constructions acquired earlier with the same verb. The periphrasis shares with the locative constructions not only the lexical verb estar but also pragmatic relations. Both constructions can be used to answer the question ¿Dónde está? (“Where is he/she/it?”), whose answer could be either Está aquí (“He/she/it is here”) or Se está bañando (“He/she/it is taking a bath”). This study is a corpus-based analysis of two children (1;08-2;08) and the input directed to them: it proposes that the pragmatic and semantic support from previously acquired constructions comes from the input, during interaction with others. This hypothesis is based on analysis of constructions with estar, whose use to express temporal change (which differentiates it from its counterpart ser [“to be”]) occurs in variation sets, similar to those described by Küntay and Slobin (2002), that allow the child to perceive the change of place experienced by the nouns that function as its grammatical subject. For example, at different points during a bath, the mother says El jabón está aquí (“The soap is here”) at the beginning of the bath; five minutes later, the soap has moved, and she says El jabón está ahí (“The soap is there”); the soap moves again and she says El jabón está abajo de ti (“The soap is under you”). “The soap” is the grammatical subject of all of these utterances.
In Spanish, the verb + -ndo encodes progressive phase aspect, a dynamic state that generates a token reading; combined with the verb estar, it forms the periphrasis. It is proposed here that the phases experienced in interaction with the adult, in events involving the verb estar, allow the child to arrive at this dynamicity and token reading of the verb + -ndo. In this way, children begin to produce the periphrasis suddenly and productively, even though neither the periphrasis nor the verb + -ndo itself is frequent in adult speech.

Keywords: child language acquisition, input, variation sets, Spanish language

Procedia PDF Downloads 119
1242 Co-Integration and Error Correction Mechanism of Supply Response of Sugarcane in Pakistan (1980-2012)

Authors: Himayatullah Khan

Abstract:

This study estimates the supply response function of sugarcane in Pakistan from 1980-81 to 2012-13 using a co-integration approach and an error correction mechanism. The sugarcane production, area and price series were tested for unit roots using the Augmented Dickey-Fuller (ADF) test, and all were found to be stationary at first difference. Using the Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson (CRDW) test, the study found that “production and price” and “area and price” were co-integrated, suggesting that the two sets of time series have a long-run, or equilibrium, relationship. The results of the error correction models for the two sets of series showed that disequilibrium may exist in the short run. The Engle-Granger residual may be thought of as the equilibrium error, which can be used to tie the short-run behavior of the dependent variable to its long-run value. The Granger-causality test results showed that the log of price Granger-caused both the log of production and the log of area, whereas the log of production and the log of area Granger-caused each other.
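The two-step Engle-Granger procedure described above can be sketched as follows; the series are synthetic stand-ins for the log production and price data, not the study's actual values:

```python
import numpy as np

def engle_granger_ecm(y, x):
    """Two-step Engle-Granger sketch: (1) cointegrating regression y ~ x,
    (2) error-correction model dy_t = c + b*dx_t + g*e_{t-1}."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    # Step 1: long-run regression; the residual e is the equilibrium error
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ coef
    # Step 2: short-run dynamics tied to last period's equilibrium error
    dy, dx, e_lag = np.diff(y), np.diff(x), e[:-1]
    Z = np.column_stack([np.ones_like(dx), dx, e_lag])
    c, b, g = np.linalg.lstsq(Z, dy, rcond=None)[0]
    return coef, (c, b, g)  # g < 0 signals correction toward equilibrium

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=200))                      # I(1) stand-in
prod = 2.0 + 0.8 * price + rng.normal(scale=0.3, size=200)   # cointegrated
longrun, (c, b, g) = engle_granger_ecm(prod, price)
print(longrun[1], g < 0)
```

A negative, significant coefficient on the lagged residual is what ties the short-run behavior of the dependent variable back to its long-run value, as the abstract notes. (Unit-root and significance testing, done with ADF and CRDW in the study, are omitted here for brevity.)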

Keywords: co-integration, error correction mechanism, Granger-causality, sugarcane, supply response

Procedia PDF Downloads 410
1241 Stability Assessment of Chamshir Dam Based on DEM, South West Zagros

Authors: Rezvan Khavari

Abstract:

The Zagros fold-thrust belt in SW Iran is part of the Alpine-Himalayan system and consists of a variety of structures of different sizes and geometries. The study area is Chamshir Dam, located on the Zohreh River, 20 km southeast of Gachsaran City (southwest Iran). Satellite images are a valuable means available to geologists for locating geological or geomorphological features that express regional fault or fracture systems; therefore, satellite images were used for structural analysis of the Chamshir dam area. In addition, 3D models of the area were constructed using the DEM and geological maps. Based on these models, all the acquired fracture trace data were integrated in a Geographic Information System (GIS) environment using ArcGIS software. Based on field investigation and the DEM model, the main structures in the area are the Chamshir syncline and two fault sets: the main thrust faults striking NW-SE and small normal faults striking NE-SW. There are three joint sets in the study area; two of them (J1 and J3) form the main large fractures around the Chamshir dam and are consistent with the NE-SW normal faults. The third joint set, striking NW-SE, is normal to the others. In general, according to the topographic, geomorphological and structural evidence, Chamshir dam has a potential for sliding in some parts of the Gachsaran formation.

Keywords: DEM, chamshir dam, zohreh river, satellite images

Procedia PDF Downloads 460
1240 Processing of Input Material as a Way to Improve the Efficiency of the Glass Production Process

Authors: Joanna Rybicka-Łada, Magda Kosmal, Anna Kuśnierz

Abstract:

One of the main problems of the glass industry is the still-high consumption of energy needed to produce glass mass, as well as rising prices of fuels and raw materials. Therefore, comprehensive actions are being taken to improve the entire production process. The key element of these activities, from filling the set to receiving the finished product, is the melting process, whose tasks include dissolving the components of the set, removing bubbles from the resulting melt, and obtaining a chemically homogeneous glass melt. This process consumes over 90% of the total energy needed in the production process. The processes occurring in the set during its conversion have a significant impact on the further stages and speed of the melting process and, thus, on its overall effectiveness. The speed and course of the reactions depend on the chemical nature of the raw materials, the degree of their fragmentation, their thermal treatment, and the form of the introduced set. An opportunity to minimize segregation and accelerate the conversion of glass sets may lie in developing new technologies for preparing and dosing sets. The traditionally preferred method of melting the set, based on mixing all glass raw materials together in loose form, can be replaced with a set in a thickened form; such a solution avoids dust formation during filling and is already available on the market. The aim of the project was to develop a glass set in a selectively or completely densified form and to examine the influence of set processing on the melting process and the properties of the glass.

Keywords: glass, melting process, glass set, raw materials

Procedia PDF Downloads 33
1239 Undrained Bearing Capacity of Circular Foundations on two Layered Clays

Authors: S. Benmebarek, S. Benmoussa, N. Benmebarek

Abstract:

Natural soils are often deposited in layers. Estimating the bearing capacity of such soil using conventional bearing capacity theory based on the properties of the upper layer introduces significant inaccuracies if the thickness of the top layer is comparable to the width of the foundation placed on the soil surface. In this paper, numerical computations using the FLAC code are reported to evaluate the effect of two clay layers on the bearing capacity beneath a rigid, rough circular footing subjected to axial static load. The results of the parametric study illustrate the sensitivity of the bearing capacity, the shape factor and the failure mechanisms to the layer strengths and the layer thickness.

Keywords: numerical modeling, circular footings, layered clays, bearing capacity, failure

Procedia PDF Downloads 461
1238 Kinetic Façade Design Using 3D Scanning to Convert Physical Models into Digital Models

Authors: Do-Jin Jang, Sung-Ah Kim

Abstract:

In designing a kinetic façade, it is hard for the designer to create digital models because of the façade's complex geometry in motion. This paper presents a methodology for converting a point cloud of a physical model into a single digital model with a defined topology and motion. The method uses a Microsoft Kinect sensor, and color markers were defined and applied to three paper-folding-inspired designs. Although the resulting digital model cannot represent the full folding range of the physical model, the method supports the designer in conducting a performance-oriented design process with the rough physical model within the reduced folding range.

Keywords: design media, kinetic facades, tangible user interface, 3D scanning

Procedia PDF Downloads 385
1237 The Effect of Strength Training and Consumption of Glutamine Supplement on GH/IGF1 Axis

Authors: Alireza Barari

Abstract:

Physical activity and diet are factors that influence the body's structure. The purpose of this study was to compare the effects of four weeks of resistance training and glutamine supplement consumption on the growth hormone (GH) and insulin-like growth factor 1 (IGF-1) axis. Forty amateur male bodybuilders participated in this study. They were randomly divided into four equal groups: Resistance (R), Glutamine (G), Resistance with Glutamine (RG), and Control (C). The R group followed a four-week resistance training program (three times/week; three sets of 10 exercises with 6-10 repetitions at 80-95% of 1RM (one-repetition maximum), with 120 seconds of rest between sets), the G group consumed L-glutamine (0.1 g/kg/day), the RG group performed the resistance training while consuming L-glutamine, and the C group continued their normal lifestyle without exercise training. GH, IGF-1 and IGFBP-III plasma levels were measured before and after the protocol. One-way ANOVA indicated significant changes in GH, IGF-1 and IGFBP-III between the four groups, and the Tukey test demonstrated significant increases in GH, IGF-1 and IGFBP-III plasma levels in the R and RG groups. Based upon these findings, we concluded that resistance training at 80-95% 1RM intensity, with or without oral glutamine, significantly increases secretion of GH, IGF-1 and IGFBP-III in amateur males, but the addition of oral glutamine to the exercise program did not produce a significant difference in GH, IGF-1 or IGFBP-III.

Keywords: strength, glutamine, growth hormone, insulin-like growth factor 1

Procedia PDF Downloads 279
1236 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flows and the purposes of reporting data differ with business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the data set constructed for each time point, containing all the information required for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and validating the data; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study employs Grubbs' outlier analysis, particularly for data cleaning and for identifying the statistical significance of data reporting event cases. The Grubbs test is often used because it tests one extreme value at a time against the boundaries of the standard normal distribution. In the study area, the test has not been widely applied, except where the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select forms of genetic algorithm construction that offer more possibilities to extract the best solution. For freight delivery management, genetic algorithm structures are used as a more effective technique; accordingly, an adaptive genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor.
The authors suggest a methodology for multi-objective analysis that evaluates the collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer services in a multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value for the management of multi-modal transportation processes.
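The Grubbs screening step used for data validation can be sketched as follows; the readings and the critical value below are placeholders (in practice the critical value comes from the Grubbs table, or equivalently the t distribution, at the 99% confidence level used in the study):

```python
import math

def grubbs_statistic(data):
    """Grubbs statistic G = max|x_i - mean| / s for one suspected outlier."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    idx = max(range(n), key=lambda i: abs(data[i] - mean))
    return abs(data[idx] - mean) / s, idx

# Hypothetical fuel-consumption-style readings with one obvious outlier.
readings = [10.0, 11.0, 12.0, 11.0, 10.0, 50.0]
g, idx = grubbs_statistic(readings)
g_crit = 1.9  # placeholder critical value, NOT taken from a table
if g > g_crit:
    cleaned = readings[:idx] + readings[idx + 1:]  # drop the flagged value
```

Because Grubbs tests one extreme value at a time, the cleaning step is typically repeated on the reduced data until no further value exceeds the critical bound.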

Keywords: multi-objective, analysis, data flow, freight delivery, methodology

Procedia PDF Downloads 156
1235 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models

Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales

Abstract:

The primary approaches for estimating bridge deterioration use Markov-chain models and regression analysis. Traditional Markov models have problems estimating the required transition probabilities when a small sample size is used. Often, reliable bridge data have not been collected over long periods, so large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach provide similar estimates; however, the former provides results that are more conservative. That is, the Small Data Method provided slightly lower-than-expected bridge condition ratings compared with the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to model bridge deterioration: condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained using Markov chains; however, it is desirable to use more data for better results.
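Once transition probabilities are estimated (by either method), the Markov-chain side of the approach amounts to propagating a condition-rating distribution through the transition matrix; the matrix below is a hypothetical 3-state example, not one estimated in the study:

```python
import numpy as np

def project_condition(p, state0, years):
    """Propagate a bridge-condition distribution through a Markov transition matrix."""
    state = np.asarray(state0, float)
    for _ in range(years):
        state = state @ p  # one year of deterioration per step
    return state

# Hypothetical 3-state matrix (Good, Fair, Poor); rows sum to 1.
# Upper-triangular structure: condition never improves without repair.
P = np.array([
    [0.90, 0.10, 0.00],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
])
start = [1.0, 0.0, 0.0]  # a new bridge starts in "Good"
after10 = project_condition(P, start, 10)
print(after10)  # distribution over (Good, Fair, Poor) after 10 years
```

The estimation problem the abstract discusses is precisely how to fill in a matrix like P when only a few observed transitions are available.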

Keywords: concrete bridges, deterioration, Markov chains, probability matrix

Procedia PDF Downloads 316
1234 Determining Optimal Number of Trees in Random Forests

Authors: Songul Cinaroglu

Abstract:

Background: Random Forest is an efficient, multi-class machine learning method used for classification, regression and other tasks. The method operates by constructing each tree from a different bootstrap sample of the data. Determining the number of trees in a random forest is an open question in the literature on improving the classification performance of random forests. Aim: The aim of this study is to analyze whether there is an optimal number of trees in Random Forests and how the performance of Random Forests differs as the number of trees increases, using sample health data sets in R. Method: We analyzed the performance of Random Forests as the number of trees grows, doubling the number of trees at every iteration, using the “randomForest” package in R. To determine the minimum and optimal numbers of trees, we performed the McNemar test and computed the Area Under the ROC Curve, respectively. Results: The analysis found that as the number of trees grows, the forest does not always perform better than forests with fewer trees; in other words, a larger number of trees only increases computational cost without improving performance. Conclusion: Although the general practice with random forests is to generate a large number of trees to obtain high performance, this study shows that increasing the number of trees does not always improve performance. Future studies can compare different kinds of data sets and different performance measures to test whether Random Forest performance changes as the number of trees increases.
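The McNemar comparison between a forest with T trees and one with 2T trees can be sketched as follows; the per-case hit/miss vectors are invented for illustration:

```python
def mcnemar_statistic(correct_a, correct_b):
    """McNemar chi-square (with continuity correction) on paired predictions:
    b = cases only classifier A got right, c = cases only B got right."""
    b = sum(1 for a_ok, b_ok in zip(correct_a, correct_b) if a_ok and not b_ok)
    c = sum(1 for a_ok, b_ok in zip(correct_a, correct_b) if b_ok and not a_ok)
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical hit/miss vectors for forests with T and 2T trees on 10 cases.
small_forest = [True, True, False, True, False, True, True, False, True, True]
big_forest   = [True, True, True,  True, False, True, True, False, True, True]
chi2 = mcnemar_statistic(small_forest, big_forest)
print(chi2 < 3.84)  # True: below the 5% chi-square cutoff, doubling did not help
```

A statistic below the chi-square cutoff means the two forests disagree too rarely to conclude the larger one is better, which is the situation the study repeatedly observed as tree counts were doubled.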

Keywords: classification methods, decision trees, number of trees, random forest

Procedia PDF Downloads 372
1233 Learn through AR (Augmented Reality)

Authors: Prajakta Musale, Bhargav Parlikar, Sakshi Parkhi, Anshu Parihar, Aryan Parikh, Diksha Parasharam, Parth Jadhav

Abstract:

AR technology is essentially a development of VR technology that harnesses the power of computers to read the surroundings and project digital models into the real world for visualization, demonstration, and education. It has been applied in education, prototyping in product design, the development of medical models, battle strategy in the military, and many other fields. Our Engineering Design and Innovation (EDAI) project focuses on using augmented reality, visual mapping, and 3D visualization, along with animation and text boxes, to help students grasp concepts such as flow and mechanical movement that may be hard to visualize at first glance.

Keywords: spatial mapping, ARKit, depth sensing, real-time rendering

Procedia PDF Downloads 34
1232 Chronic and Sub-Acute Lumbosacral Radiculopathies Behave Differently to Repeated Back Extension Exercises

Authors: Sami Alabdulwahab

Abstract:

Background: Repeated back extension exercises (RBEEs) are among the management options for symptoms associated with lumbosacral radiculopathy (LSR). RBEEs have been reported to cause changes in the distribution and intensity of radicular symptoms through possible compression/decompression of the compromised nerve root. Purpose: The purpose of this study was to investigate the effects of RBEEs on the neurophysiology of the compromised nerve root and on standing mobility and pain intensity in patients with sub-acute and chronic LSR. Methods: A total of 40 patients with unilateral sub-acute or chronic lumbosacral radiculopathy voluntarily participated in the study; the patients performed 3 sets of 10 RBEEs in the prone position with 1 min of rest between sets. The soleus H-reflex, standing mobility and pain intensity were recorded before and after the RBEEs. Results: The RBEEs significantly improved the H-reflex, standing mobility and pain intensity in patients with sub-acute LSR (p<0.01); there was no significant improvement in the patients with chronic LSR (p<0.61). Conclusion: RBEEs in the prone position are recommended for improving the neurophysiological function of the compromised nerve root and standing mobility in patients with sub-acute LSR. Implication: Sub-acute and chronic LSR responded differently to RBEEs; sub-acute LSR appears to involve flexible and movable disc structures, which can be managed with RBEEs.

Keywords: h-reflex, back extension, lumbosacral radiculopathy, pain

Procedia PDF Downloads 451
1231 Group Consensus of Hesitant Fuzzy Linguistic Variables for Decision-Making Problem

Authors: Chen T. Chen, Hui L. Cheng

Abstract:

Owing to their different knowledge, experience and expertise, experts usually provide different opinions in the group decision-making process. Therefore, reaching a consensus among the opinions of experts is an important issue in the group multiple-criteria decision-making (GMCDM) process. Because the subjective opinions of experts are always fuzzy and uncertain, it is difficult to describe experts' or decision-makers' real opinions with crisp values; it is more reasonable for experts to express their opinions using linguistic variables. Hesitant fuzzy sets extend the concept of fuzzy sets, and with them experts can flexibly describe their subjective opinions. In order to aggregate the hesitant fuzzy linguistic variables of all experts effectively, an adjustment method based on a distance function is presented in this paper. Building on this adjustment method, the paper presents an effective approach for adjusting the hesitant fuzzy linguistic variables of all experts to reach group consensus. A new hesitant linguistic GMCDM method is then presented based on the group consensus of hesitant fuzzy linguistic variables. Finally, an example is implemented to illustrate the computational process and enhance the practical value of the proposed model.
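One common family of distance functions for hesitant fuzzy linguistic term sets works on the subscripts of the linguistic terms; the sketch below uses one conventional choice of padding rule and normalisation, not necessarily the paper's:

```python
def hfl_distance(h1, h2, granularity):
    """Distance between two hesitant fuzzy linguistic term sets given as lists
    of term subscripts on a scale s0..s_{granularity-1}: average absolute
    difference of sorted subscripts, normalised by the scale width.
    The shorter set is padded with its maximum element (an optimistic rule)."""
    a, b = sorted(h1), sorted(h2)
    m = max(len(a), len(b))
    a += [a[-1]] * (m - len(a))
    b += [b[-1]] * (m - len(b))
    return sum(abs(x - y) for x, y in zip(a, b)) / (m * (granularity - 1))

# Hypothetical 7-term scale s0..s6: one expert hesitates between s3 ("fair")
# and s4 ("good"), another is committed to s5 ("very good").
d = hfl_distance([3, 4], [5], granularity=7)
print(d)  # (|3-5| + |4-5|) / (2 * 6) = 0.25
```

In a consensus-reaching loop, pairwise distances like this one are compared against a threshold, and the most divergent expert opinions are adjusted toward the group until all distances fall below it.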

Keywords: group multi-criteria decision-making, linguistic variables, hesitant fuzzy linguistic variables, distance function, group consensus

Procedia PDF Downloads 124
1230 Pulsed Laser Single Event Transients in 0.18 μM Partially-Depleted Silicon-On-Insulator Device

Authors: MeiBo, ZhaoXing, LuoLei, YuQingkui, TangMin, HanZhengsheng

Abstract:

Single event transients (SETs) were investigated on 0.18 μm PDSOI transistors and a 100-stage CMOS inverter chain using a pulsed laser. The effects of laser energy and device bias on the SET waveform were characterized experimentally, as were the generation and propagation of SETs in the inverter chain. In this paper, the effects of the struck transistor type and the strike location on SETs were investigated. The results showed that when NMOSFETs from the 100th to the 2nd stage were irradiated, the SET pulse width measured at the output terminal increased from 287.4 ps to 472.9 ps; likewise, when PMOSFETs from the 99th to the 1st stage were irradiated, the SET pulse width increased from 287.4 ps to 472.9 ps. When the strike location was close to the output of the chain, the SET pulse was narrow; when the struck node was close to the input, the SET pulse was broadened. SET pulses were progressively broadened as they propagated along the inverter chain, and this broadening is independent of the type of struck transistor. Analysis indicates that the history effect, which induces threshold voltage hysteresis in PDSOI, is the cause of the pulse broadening. The positive pulse observed on the oscilloscope, contrary to the expected results, is attributed to the charging and discharging of the capacitor.

Keywords: single event transients, pulse laser, partially-depleted silicon-on-insulator, propagation-induced pulse broadening effect

Procedia PDF Downloads 384