Search results for: classification size
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7593

6723 The Influence of Step and Fillet Shape on Nozzle Endwall Heat Transfer

Authors: Jeong Ju Kim, Hee Yoon Chung, Dong Ho Rhee, Hyung Hee Cho

Abstract:

A gap at the combustor-turbine interface releases leakage flow that prevents hot gas ingestion onto the gas turbine nozzle platform; this leakage flow also protects the nozzle endwall surface from the hot gas leaving the combustor exit. To control the leakage stream, the gap geometry can be modified by changing the fillet radius. During operation, an unintended step can also form at the combustor-turbine platform interface because of thermal expansion or assembly mismatch. In this study, CFD simulations were performed to investigate the effect of the fillet and the step on heat transfer and film cooling effectiveness on the nozzle platform. The Reynolds-averaged Navier-Stokes equations were solved with the SST k-omega turbulence model. For the fillet configuration, the predicted film cooling effectiveness results indicated that the fillet radius enhances film cooling effectiveness; for the forward-facing step configuration, the predicted results indicated that the step height likewise enhances film cooling effectiveness. We therefore suggest that, when a step is present at the combustor-turbine interface, designers modify the interface configuration by varying the fillet radius near the endwall gap. In this work, the gap shape was modified by increasing the fillet radius near the nozzle endwall, and the interaction of fillet radius and step height with film cooling effectiveness and heat transfer on the endwall surface was characterized.

Keywords: gas turbine, film cooling effectiveness, endwall, fillet

Procedia PDF Downloads 350
6722 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm

Authors: Ameur Abdelkader, Abed Bouarfa Hafida

Abstract:

Since its inception, predictive analysis has transformed the IT industry through its robustness and decision-making support. It applies a set of data processing techniques and algorithms to build predictive models, based on finding relationships between explanatory variables and predicted variables: past occurrences are exploited to predict an unknown outcome. With the advent of big data, many studies have proposed predictive analytics for processing and analyzing big data, but they have been held back by the limits of classical predictive analysis methods on large amounts of data. Because of its volume, its nature (semi-structured or unstructured), and its variety, big data cannot be analyzed efficiently by classical predictive analysis methods; the authors attribute this weakness to the fact that such algorithms do not allow calculations to be parallelized and distributed. In this paper, we propose to extend the predictive analysis algorithm Classification and Regression Trees (CART) to adapt it for big data analysis. The major changes to the algorithm are presented, and a version of the extended algorithm is then defined to make it applicable to very large quantities of data.
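The core of classical CART that the paper extends can be sketched as follows. This is a generic illustration with an invented toy dataset, not the authors' extended algorithm; the exhaustive per-feature split search is the loop a big-data version would parallelize and distribute.

```python
# Minimal sketch of a CART split search (Gini impurity). Pure Python;
# the dataset below is invented for illustration.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(X, y):
    """Exhaustive CART split search: the feature index and threshold that
    minimize the weighted Gini impurity of the two children."""
    n, d = len(X), len(X[0])
    best = (None, None, gini(y))  # (feature, threshold, impurity)
    for j in range(d):            # this feature loop is what gets distributed
        for t in sorted(set(row[j] for row in X)):
            left = [y[i] for i in range(n) if X[i][j] <= t]
            right = [y[i] for i in range(n) if X[i][j] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best

# Toy data: feature 0 separates the classes perfectly at threshold 2.0.
X = [[1.0, 5.0], [2.0, 4.0], [8.0, 5.0], [9.0, 4.0]]
y = [0, 0, 1, 1]
feature, threshold, impurity = best_split(X, y)
print(feature, threshold, impurity)  # -> 0 2.0 0.0
```

A full CART builder recurses on the two children until a purity or depth criterion is met; the extension described in the paper changes how this split search is executed over partitioned data, not the impurity criterion itself.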

Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm

Procedia PDF Downloads 128
6721 Classification of Factors Influencing Buyer-Supplier Relationship: A Case Study from the Cement Industry

Authors: Alberto Piatto, Zaza Nadja Lee Hansen, Peter Jacobsen

Abstract:

This paper examines the quantitative and qualitative factors influencing the buyer-supplier relationship. Understanding and acting on the right factors in supplier relationship management is crucial when a company outsources an important part of its business, as is the case for an engineering-to-order (ETO) company that executes only the design work in-house. Acting on these factors improves the quality of the relationship, so that both parties obtain what they want and expect from it. Best practices in supplier relationship management are reviewed, and a case study of a large global company operating in the cement business, here called Cement A/S, is carried out. The study covers this large international company and hundreds of its suppliers; data from the company were collected using semi-structured interviews, and data from the suppliers were collected using a survey. Based on these inputs and an extensive literature review, a classification of the factors influencing the buyer-supplier relationship is presented and discussed. The results show that different managers in the company assess suppliers from different perspectives and that no standard approach for measuring supplier performance exists. The factors currently used by the company to measure supplier performance relate mostly to time and cost; quality is a key factor, but it has not been addressed properly since no quality data are available in the system. From a practical perspective, managers can learn from this paper which factors to consider when applying supplier relationship management best practices. From a theoretical perspective, the paper contributes new knowledge to the area, as little research conducted in collaboration with a company, its suppliers, and this type of industry has been carried out.
For further research, it is suggested to determine the correlation of these factors with company profitability and to quantify their impact. In conducting such an analysis, it is important to focus on factors that can be measured and that are accepted by the supplier.

Keywords: buyer-supplier relationship, cement industry, classification of factors, ETO

Procedia PDF Downloads 260
6720 Particle Size Distribution Estimation of a Mixture of Regular and Irregular Sized Particles Using Acoustic Emissions

Authors: Ejay Nsugbe, Andrew Starr, Ian Jennions, Cristobal Ruiz-Carcel

Abstract:

This work investigates the possibility of using Acoustic Emissions (AE) to estimate the Particle Size Distribution (PSD) of a mixture containing particles of different densities and geometries. The experiments involved a mixture of glass and polyethylene particles ranging from 150-212 microns and 150-250 microns, respectively, and an experimental rig that allowed a continuous stream of particles to fall freely onto a target plate on which the AE sensor was mounted. Using a time-domain multiple-threshold method, it was observed that the PSD of the particles in the mixture could be estimated.
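The multiple-threshold idea can be sketched as follows. The paper's exact thresholds, calibration, and signal model are not given in the abstract, so the synthetic "AE" stream and the threshold levels below are invented for illustration: larger impacts cross higher amplitude thresholds, so per-threshold crossing counts reflect the size distribution.

```python
# Hedged sketch of a time-domain multiple-threshold count on a synthetic
# acoustic-emission stream. All amplitudes and levels are illustrative.
import random

random.seed(0)

def impact(amplitude):
    """A crude decaying burst standing in for one particle impact."""
    return [amplitude * (0.5 ** k) for k in range(5)]

# Synthetic AE stream: many small impacts (small particles), few large ones.
signal = []
for _ in range(40):
    signal += impact(random.uniform(0.5, 1.0))   # "small" particles
for _ in range(10):
    signal += impact(random.uniform(3.0, 4.0))   # "large" particles

def threshold_counts(signal, thresholds):
    """Count rising crossings of each threshold level."""
    counts = {}
    for t in thresholds:
        above = [s > t for s in signal]
        counts[t] = sum(1 for a, b in zip(above, above[1:]) if not a and b)
    return counts

counts = threshold_counts(signal, [0.4, 2.5])
print(counts)  # the low threshold sees nearly all impacts, the high one only large impacts
```

In the real method, the counts at each level would be mapped back to particle sizes via a calibration between impact amplitude and particle mass/size.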

Keywords: acoustic emissions, particle sizing, process monitoring, signal processing

Procedia PDF Downloads 337
6719 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method

Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson

Abstract:

Today, many applications use computer vision models, such as face recognition, image classification, and object detection, and the accuracy of these models is critical to the performance of those applications. One challenge facing computer vision models is the adversarial example attack. In computer vision, an adversarial example is an image intentionally designed to cause a machine learning model to misclassify it. One well-known method used to attack Convolutional Neural Networks (CNNs) is the Fast Gradient Sign Method (FGSM), whose goal is to find a perturbation that fools the CNN using the gradient of the CNN's cost function. In this paper, we introduce a novel model that attacks a Region-based Convolutional Neural Network (R-CNN) using FGSM. We first extract the regions detected by the R-CNN and resize them to the size of regular images. We then find the perturbation of each region that best fools the CNN using FGSM, add the resulting perturbation to the attacked region to obtain a new region image that looks similar to the original to human eyes, place the regions back into the original image, and test the R-CNN on the attacked images. Our model was able to reduce the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset.
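The FGSM update itself is the same regardless of model: x_adv = x + eps * sign(dL/dx). A minimal sketch on a logistic-regression "model" (standing in for the CNN, since its gradient is computable by hand) shows the mechanism; the weights, input, and eps below are invented.

```python
# Minimal FGSM sketch. For a logistic model with cross-entropy loss,
# dL/dx = (p - y) * w, so the sign of the input gradient is available
# in closed form. All numbers are illustrative.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """P(class 1 | x) under the toy logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """One FGSM step: move eps in the sign of the input gradient of the loss."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]          # dL/dx for cross-entropy
    sign = [1.0 if g > 0 else -1.0 if g < 0 else 0.0 for g in grad]
    return [xi + eps * si for xi, si in zip(x, sign)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1            # correctly classified as class 1 (p > 0.5)
x_adv = fgsm(w, b, x, y, eps=0.9)
print(predict(w, b, x), predict(w, b, x_adv))  # confidence collapses after the attack
```

In the paper's setting, the same step is applied per detected region, with the gradient taken through the CNN by backpropagation rather than in closed form.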

Keywords: adversarial examples, attack, computer vision, image processing

Procedia PDF Downloads 174
6718 Managing of Work Risk in Small and Medium-Size Companies

Authors: Janusz K. Grabara, Bartłomiej Okwiet, Sebastian Kot

Abstract:

The purpose of the article is to present and analyze job safety aspects in small and medium-size enterprises in Poland with reference to other EU countries. We present the theoretical aspects of risk in relation to the management of small and medium enterprises, and then risk management in Polish small and medium enterprises, which was subjected to detailed analysis. We describe in detail the risks associated with the operation of such companies and analyze risk levels at various stages and for different kinds of activity.

Keywords: job safety, SME, work risk, risk management

Procedia PDF Downloads 480
6717 Impact of Stack Caches: Locality Awareness and Cost Effectiveness

Authors: Abdulrahman K. Alshegaifi, Chun-Hsi Huang

Abstract:

Treating data according to its location in memory has received much attention in recent years because different kinds of data have different properties that matter for cache utilization. Stack data and non-stack data may interfere with each other's locality in the data cache, and an important property of stack data is its high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into a stack cache and a non-stack cache in order to keep stack data and non-stack data in separate caches. We observe that the overall hit rate of the non-unified design is sensitive to the size of the non-stack cache. We then investigate the appropriate size and associativity for the stack cache to achieve a high hit ratio, especially since over 99% of accesses are directed to the stack cache. The results show that, on average, a stack cache hit rate of more than 99% is achieved with 2KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate obtained by adding a small, fixed-size stack cache at level 1 to a unified cache architecture. The results show that adding a 1KB stack cache improves the overall hit rate of the unified design by approximately 3.9% on average for the Rijndael benchmark. The caches are simulated using the SimpleScalar toolset.
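The kind of experiment described (hit rate of a small 1-way cache under stack-like locality) can be sketched in a few lines. This is not SimpleScalar: the cache geometry matches the paper's 2KB/1-way stack cache, but the line size and the two access traces are invented to show why stack accesses hit so often.

```python
# Hedged sketch of a direct-mapped (1-way) cache hit-rate simulation.
# A 2KB cache as in the paper; line size and traces are illustrative.

LINE_SIZE = 32                      # bytes per cache line (assumed)
NUM_LINES = 2048 // LINE_SIZE       # a 2KB direct-mapped cache

def hit_rate(addresses):
    """Simulate a direct-mapped cache over a sequence of byte addresses."""
    tags = [None] * NUM_LINES
    hits = 0
    for addr in addresses:
        line = addr // LINE_SIZE
        index = line % NUM_LINES
        if tags[index] == line:
            hits += 1
        else:
            tags[index] = line      # miss: fill the line
    return hits / len(addresses)

# Stack-like trace: pushes/pops confined to a 256-byte region near the
# stack top, so after the first fill nearly every access hits.
stack_trace = [4096 + (i % 64) * 4 for i in range(10000)]
# Scattered trace: large strides that never revisit a line, defeating
# a small cache entirely.
scattered_trace = [(i * 4093 * LINE_SIZE) % (1 << 20) for i in range(10000)]

print(hit_rate(stack_trace), hit_rate(scattered_trace))
```

The contrast between the two traces is the intuition behind the paper's result: a tiny 1-way cache is enough for stack data precisely because its working set is small and revisited constantly.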

Keywords: hit rate, locality of program, stack cache, stack data

Procedia PDF Downloads 289
6716 Comparison of Data Mining Models to Predict Future Bridge Conditions

Authors: Pablo Martinez, Emad Mohamed, Osama Mohsen, Yasser Mohamed

Abstract:

Highway and bridge agencies, such as the Ministry of Transportation in Ontario, use the Bridge Condition Index (BCI), defined as the weighted condition of all bridge elements, to determine rehabilitation priorities for their bridges. Accurate forecasting of BCI is therefore essential for bridge rehabilitation budget planning. The large amount of bridge condition data available over many years makes traditional mathematical models infeasible as analysis methods. This research study investigates different classification models developed to predict the Bridge Condition Index in the province of Ontario, Canada, based on publicly available data for 2,800 bridges over a period of more than 10 years. Data preparation is a key factor in developing acceptable classification models, even for the simplest one, the k-NN model. All models were tested, compared, and statistically validated via cross-validation and t-tests. A simple k-NN model showed reasonable results (within 0.5% relative error) when predicting the bridge condition for an upcoming year.
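The simplest model the study compares, k-NN, can be sketched directly. The toy "bridge" feature vectors (age in years, traffic volume) and condition labels below are invented for illustration and are not the Ontario dataset.

```python
# Minimal k-NN classifier sketch: majority vote among the k nearest
# training points by (squared) Euclidean distance.

def knn_predict(train, query, k=3):
    """`train` is a list of (features, label) pairs."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Invented (age, traffic) vectors with a coarse condition label.
train = [
    ((5, 1000), "good"), ((8, 1500), "good"), ((10, 900), "good"),
    ((40, 5000), "poor"), ((45, 6000), "poor"), ((50, 4800), "poor"),
]
print(knn_predict(train, (7, 1200)))   # near the "good" cluster
print(knn_predict(train, (48, 5500)))  # near the "poor" cluster
```

In practice features would be scaled first (as part of the data preparation the study emphasizes), since raw distance is dominated by the largest-magnitude feature.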

Keywords: asset management, bridge condition index, data mining, forecasting, infrastructure, knowledge discovery in databases, maintenance, predictive models

Procedia PDF Downloads 175
6715 Classifications of Images for the Recognition of People’s Behaviors by SIFT and SVM

Authors: Henni Sid Ahmed, Belbachir Mohamed Faouzi, Jean Caelen

Abstract:

Behavior recognition has been studied for driver-assistance systems and automated navigation and is an important field of study in intelligent buildings. In this paper, a method for recognizing behavior from real images is studied. Images were divided into several categories according to the actual weather, distance, angle of view, and so on. SIFT (Scale Invariant Feature Transform) was first used to detect and describe key points, because SIFT features are invariant to image scale and rotation and robust to changes in viewpoint and illumination. Our goal is to develop a robust and reliable system composed of two fixed cameras in every room of an intelligent building, connected to a computer that acquires video sequences. A program takes these video sequences as input, uses SIFT to represent the images of the sequences, and uses SVM-Light (a support vector machine implementation) to classify the images, in order to classify people's behaviors in the intelligent building and provide maximum comfort with optimized energy consumption.
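The classification stage can be sketched without the vision pipeline. The paper uses SVM-Light on SIFT representations; here a generic linear SVM trained by sub-gradient descent on the hinge loss stands in, and the SIFT descriptors are replaced by invented 2-D feature vectors, so this is only an illustration of the SVM step.

```python
# Sketch of the SVM classification stage only. Labels are +1/-1; learning
# rate, regularization, epochs, and data are invented for illustration.

def train_linear_svm(data, epochs=200, lr=0.05, lam=0.01):
    """Pegasos-style sub-gradient descent on hinge loss + L2 penalty."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # point inside the margin: hinge sub-gradient
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # outside the margin: only the regularizer acts
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Two linearly separable "behavior" clusters (invented features).
data = [([1.0, 1.0], 1), ([1.5, 1.2], 1), ([0.8, 1.4], 1),
        ([4.0, 4.2], -1), ([4.5, 3.8], -1), ([3.9, 4.4], -1)]
w, b = train_linear_svm(data)
print(classify(w, b, [1.1, 1.1]), classify(w, b, [4.2, 4.0]))
```

Real SIFT-based classification would first quantize the key-point descriptors (for example into a bag-of-visual-words histogram) before feeding them to the SVM.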

Keywords: video analysis, people behavior, intelligent building, classification

Procedia PDF Downloads 364
6714 Evaluation of Role of Surgery in Management of Pediatric Germ Cell Tumors According to Risk Adapted Therapy Protocols

Authors: Ahmed Abdallatif

Abstract:

Background: Patients with malignant germ cell tumors show an age distribution with two peaks, the first during infancy and the second after the onset of puberty. Gonadal germ cell tumors are the most common malignant ovarian tumor in females under twenty years of age. Sacrococcygeal and retroperitoneal abdominal tumors usually reach a large size before the onset of symptoms. Methods: Patients with pediatric germ cell tumors presenting to the Children's Cancer Hospital Egypt and the National Cancer Institute Egypt from January 2008 to June 2011 were stratified into low-, intermediate-, and high-risk groups according to the Children's Oncology Group classification. Objectives: To assess the clinicopathologic features of all cases of pediatric germ cell tumors; to classify malignant cases by stage and primary site into low-, intermediate-, and high-risk groups; to evaluate the surgical management of each group, focusing on the surgical approach, the extent of resection at each site, the ability to achieve complete resection, and perioperative complications; and, finally, to determine three-year overall and disease-free survival in the different groups and their relation to prognostic factors, including the extent of surgical resection. Results: Of 131 surgically explored cases, only 26 underwent re-exploration: 8 for residual disease, 9 for recurrence or metastatic disease, and 9 for other complications. Low-risk patients were kept under follow-up after surgery; of this group (48 patients), only 8 (16.5%) were shifted to intermediate risk. Twenty patients (14.6%) diagnosed as intermediate risk received 3 cycles of compressed chemotherapy (cisplatin, etoposide, and bleomycin), and all 69 high-risk patients (50.4%) received chemotherapy.
Stage of disease was strongly and significantly related to overall survival, with poorer survival in late stages (stage IV) compared with earlier stages. Conclusion: The three-year overall survival rate was 76.7% ± 5.4 and three-year event-free survival was 77.8% ± 4.0, while three-year disease-free survival was much better (89.8% ± 3.4) in the whole study group; patients with ovarian tumors had significantly higher overall survival (90% ± 5.1). Event-free survival analysis showed that males were 3 times more likely than females to have adverse events, and that patients who underwent incomplete resection were 4 times more likely than patients with complete resection to have adverse events. Disease-free survival analysis showed that patients who underwent incomplete surgery were 18.8 times more liable to recurrence than those who underwent complete surgery, and that patients who underwent re-excision were 21 times more prone to recurrence than other patients.

Keywords: extragonadal, germ cell tumors, gonadal, pediatric

Procedia PDF Downloads 202
6713 Brain Computer Interface Implementation for Affective Computing Sensing: Classifiers Comparison

Authors: Ramón Aparicio-García, Gustavo Juárez Gracia, Jesús Álvarez Cedillo

Abstract:

One research line in computer science involves the study of Human-Computer Interaction (HCI) that seeks to recognize and interpret user intent through the storage and subsequent analysis of the electrical signals of the brain, in order to use them to control electronic devices. Affective computing research, in turn, applies human emotions to the HCI process, helping to reduce user frustration. This paper shows the results obtained during the hardware and software development of a Brain-Computer Interface (BCI) capable of recognizing human emotions through the association of brain electrical activity patterns. The hardware comprises the sensing stage and analog-to-digital conversion. The interface software comprises algorithms for pre-processing of the signal in time- and frequency-domain analysis and for the classification of patterns associated with electrical brain activity. The methods used for the analysis and classification of the signal were tested separately using a publicly accessible database, and the classifiers were compared in order to identify the best-performing one.
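The frequency-analysis step mentioned above typically reduces to band-power features. A hedged sketch with a naive DFT (a real BCI would use an FFT library): the synthetic "EEG", sampling rate, and band edges below are invented, but the idea of summing squared spectral magnitudes over a band is the standard one.

```python
# Band power of a sampled signal via a naive DFT. Illustrative only.
import math

FS = 128  # sampling rate in Hz (assumed)
N = 128   # one second of samples

def band_power(x, lo_hz, hi_hz):
    """Sum of squared DFT magnitudes for frequency bins in [lo_hz, hi_hz]."""
    n = len(x)
    total = 0.0
    for k in range(int(lo_hz * n / FS), int(hi_hz * n / FS) + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        total += re * re + im * im
    return total

# Synthetic signal: a strong 10 Hz (alpha-band) oscillation plus weak 25 Hz.
x = [math.sin(2 * math.pi * 10 * t / FS)
     + 0.2 * math.sin(2 * math.pi * 25 * t / FS) for t in range(N)]

alpha = band_power(x, 8, 13)   # alpha band captures the dominant rhythm
beta = band_power(x, 14, 30)
print(alpha > beta)
```

Feature vectors built from such band powers (per channel, per band) are what the compared classifiers would consume.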

Keywords: affective computing, interface, brain, intelligent interaction

Procedia PDF Downloads 369
6712 The Reasons for Vegetarianism in Estonia and its Effects to Body Composition

Authors: Ülle Parm, Kata Pedamäe, Jaak Jürimäe, Evelin Lätt, Aivar Orav, Anna-Liisa Tamm

Abstract:

Vegetarianism has gained popularity across the world and is chosen for multiple reasons, but among Estonians these reasons have remained unknown. Previous work has paid attention to the bone health and probable nutrient deficiencies of vegetarians, and lower body mass index (BMI) and blood cholesterol levels have been found in vegetarians, but the results are inconclusive. The goal was to identify the reasons for choosing a vegetarian diet in Estonia and the impact of vegetarianism on body composition: BMI, fat percentage (fat%), fat mass (FM), and fat-free mass (FFM). The study group comprised 68 vegetarians and 103 omnivores. Body composition was determined with DXA (Hologic) in 2013. Body mass (medical electronic scale, A&D Instruments, Abingdon, UK) and height (Martin metal anthropometer, to the nearest 0.1 cm) were measured and BMI calculated (kg/m²). General data, including physical activity level, were collected with questionnaires. The main reasons for choosing vegetarianism were the healthiness of the vegetarian diet (59%) and the wish to fight for animal rights (72%). Food additives were consumed by less than half of the vegetarians, more often by men. Vegetarians had lower BMI than omnivores, especially among men, and based on the BMI classification, vegetarians were less often obese than omnivores. However, there were no differences in FM, FFM, or fat% between the two groups. The higher physical activity level of omnivores compared with vegetarians might explain their higher BMI. For classifying people as underweight, normal weight, overweight, or obese, both BMI and fat% criteria were used: the BMI classification placed more people in the normal-weight group, whereas the fat% criterion categorized more people as overweight.
It can be concluded that the main reasons for vegetarianism in Estonia are the healthiness of the vegetarian diet and the wish to fight for animal rights, and that a vegetarian diet has no effect on body fat percentage, FM, or FFM.
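The BMI-based categorization used in the comparison above rests on the kg/m² formula and standard cut-offs. A minimal sketch, assuming the common WHO-style thresholds (the study's exact BMI and fat% criteria are not stated in the abstract):

```python
# BMI computation and a WHO-style four-way classification. The cut-offs
# 18.5 / 25 / 30 are the standard ones and assumed here, not quoted
# from the study.

def bmi(mass_kg, height_m):
    """Body mass index in kg/m^2."""
    return mass_kg / height_m ** 2

def bmi_class(b):
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal weight"
    if b < 30:
        return "overweight"
    return "obese"

b = bmi(70.0, 1.75)               # 70 kg at 1.75 m
print(round(b, 1), bmi_class(b))  # -> 22.9 normal weight
```

The study's point is that a parallel classification by fat% thresholds can move the same person into a different category, which is why the two criteria disagree on the overweight group.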

Keywords: body composition, body fat percentage, body mass index, vegetarianism

Procedia PDF Downloads 401
6711 An Artificial Neural Network Model Based Study of Seismic Wave

Authors: Hemant Kumar, Nilendu Das

Abstract:

A study based on an ANN structure gives us information to predict the magnitude of a future event from past events. An ANN, IMD (India Meteorological Department) data, and remote sensing were used to derive a number of parameters for calculating the magnitude that may occur in the future. A threshold was selected above the high-frequency content recorded over the area during the selected seismic activity. For human populations and local biodiversity, obtaining the right parameters relative to the frequency of impact remains an open problem. Throughout the study, the assumption is that predicting seismic activity is a difficult process, and not only because of the parameters involved, which can be analyzed and supported through research activity.

Keywords: ANN, Bayesian class, earthquakes, IMD

Procedia PDF Downloads 111
6710 Language Shapes Thought: An Experimental Study on English and Mandarin Native Speakers' Sequencing of Size

Authors: Hsi Wei

Abstract:

Does the language we speak affect the way we think? This question has long been discussed from different perspectives. In this article, the issue is examined with an experiment on how speakers of different languages tend to sequence the sizes of general objects. An essential difference between English and Mandarin usage is the order in which the sizes of places or objects are sequenced. In English, when describing the location of something, we may say, for example, 'The pen is inside the trashcan next to the tree at the park.' In Mandarin, however, we would say, 'The pen is at the park next to the tree inside the trashcan.' Clearly, English generally uses a small-to-big sequence while Mandarin uses the opposite. An experiment was therefore conducted to test whether this difference between the languages affects speakers' ability to perform the two kinds of sequencing. There were two groups of subjects, one of English native speakers and one of Mandarin native speakers. In the experiment, groups of three nouns were shown to the subjects in their native languages. Before seeing the nouns, subjects first received an instruction: 'big to small', 'small to big', or 'repeat'. They then had to sequence the following group of nouns as instructed, or simply repeat them; after completing each sequencing or repetition in their minds, they pushed a button as a reaction. The repetition condition was designed to measure each person's mere reading time. The results showed that English native speakers reacted more quickly to 'small to big' sequencing, while Mandarin native speakers reacted more quickly to 'big to small'. In conclusion, this study may be of importance as support for linguistic relativism, namely that the language we speak does shape the way we think.

Keywords: language, linguistic relativism, size, sequencing

Procedia PDF Downloads 267
6709 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is well established, yet proper classification of this textual information in a given context remains difficult. We therefore conducted a systematic review of previous literature on sentiment classification and the AI-based techniques used, in order to better understand how to design and develop a robust and more accurate sentiment classifier that can correctly classify social media text in a given context as hate speech or inverted compliments with a high level of accuracy. We evaluated over 250 articles from digital sources such as ScienceDirect, ACM, Google Scholar, and IEEE Xplore, and whittled the number down to 31 studies. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources such as Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques such as CNN+LSTM, CNN+GRU, and CNN+BERT outperformed both single deep learning techniques and machine learning techniques, and the Python programming language outperformed Java for sentiment analyzer development owing to its simplicity and AI-oriented libraries. Based on these findings, we make recommendations for future research.
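The deep models the review compares are too heavy to reproduce here, but the classical machine learning baseline they are measured against can be sketched. A tiny Naive Bayes sentiment classifier with an invented toy corpus (not a dataset from the review):

```python
# Log-space Naive Bayes text classifier with add-one (Laplace) smoothing.
# The corpus and labels are illustrative only.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns priors, word counts, vocab."""
    priors, counts, vocab = Counter(), defaultdict(Counter), set()
    for tokens, label in docs:
        priors[label] += 1
        counts[label].update(tokens)
        vocab.update(tokens)
    return priors, counts, vocab

def classify_nb(priors, counts, vocab, tokens):
    best, best_lp = None, float("-inf")
    total = sum(priors.values())
    for label in priors:
        lp = math.log(priors[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for w in tokens:
            lp += math.log((counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("great love this".split(), "pos"),
        ("awesome great fun".split(), "pos"),
        ("terrible hate this".split(), "neg"),
        ("awful terrible boring".split(), "neg")]
model = train_nb(docs)
print(classify_nb(*model, "great fun".split()))
print(classify_nb(*model, "terrible boring".split()))
```

The review's finding is that CNN/LSTM/BERT-style models outperform baselines of this kind once enough labeled data is available, which is why dataset size appears as a recurring requirement.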

Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text

Procedia PDF Downloads 102
6708 Green Space and Their Possibilities of Enhancing Urban Life in Dhaka City, Bangladesh

Authors: Ummeh Saika, Toshio Kikuchi

Abstract:

Population growth and urbanization are global phenomena. With the rapid progress of technology, many cities in the international community are facing serious problems of urbanization, and there is no doubt that urbanization will continue to have significant impacts on ecology, economy, and society at local, regional, and global levels. The inhabitants of Dhaka city suffer from a lack of proper urban facilities, and green spaces are needed for the different functional and leisure activities of urban dwellers. With growing densification, a number of green spaces in Dhaka city are being converted to other open-space uses, so the greenery of the city is gradually decreasing; moreover, the remaining green space is frequently threatened by encroachment. The role of green space, at both community and city level, is important for improving the natural environment and social ties for future generations, so green space needs to become more effective for public interaction. The main objective of this study is to assess the effectiveness of the urban green space (urban parks) of Dhaka city. Two approaches were used: first, analyzing the long-term spatial changes of urban green space using GIS, and second, investigating the relationship of the urban park network with the physical and social environment. The case study site covers eight urban parks of the Dhaka metropolitan area of Bangladesh, and two aspects, physical and social, are examined. For the physical aspect, satellite images and aerial photos of different years are used to identify changes in the urban parks; for the social aspect, questionnaire surveys, interviews, observation, photographs, sketches, and previous information about the parks are used to analyze their social environment. After calculating all data with descriptive statistics, the results are mapped using GIS.
According to physical size, the parks of Dhaka city are classified into four types: small, medium, large, and extra-large. The results show that the physical and social environment of urban parks varies with their size. In small parks, the physical environment is moderate, owing to new tree plantations and area expansion; in medium parks, the physical environment is poor, with, for example, decreasing tree cover and increasing exposed soil. The physical environment of large and extra-large parks, by contrast, is in good condition because of plentiful vegetation and good management. In terms of the social environment, people come to small parks mainly from the surrounding area and use them mainly as waiting places; people come to medium parks from different places to attend various occasions; and people come to large and extra-large parks from every part of the city for tourism purposes. Urban parks are an important source of green space and influence both the physical and the social environment of an urban area, yet green space is gradually decreasing and being converted to other open-space uses. This research reveals that changes in urban parks influence both the physical and the social environment and also have an impact on urban life.

Keywords: physical environment, social environment, urban life, urban parks

Procedia PDF Downloads 412
6707 A Method for Compression of Short Unicode Strings

Authors: Masoud Abedi, Abbas Malekpour, Peter Luksch, Mohammad Reza Mojtabaei

Abstract:

The use of short texts in communication has greatly increased in recent years. The use of different languages in short texts has made Unicode strings compulsory, and these strings need twice the space of common strings; hence, applying compression algorithms to accelerate transmission and reduce cost is worthwhile. Nevertheless, compression methods like gzip, bzip2, or PAQ are not appropriate because of their high overhead for short inputs. The Huffman algorithm is one of the rare algorithms effective at reducing the size of short Unicode strings. In this paper, an algorithm is proposed for the compression of very short Unicode strings. First, every new character to be sent to a destination is inserted into the proposed mapping table; at the beginning, every character is new, and when a character is repeated for the same destination, it is no longer considered new. Next, the new characters, together with the mapping values of the repeated characters, are arranged by a specific technique and specially formatted for transmission. The results of an assessment on a set of short Persian and Arabic strings indicate that the proposed algorithm outperforms the Huffman algorithm in size reduction.
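The mapping-table idea can be sketched as follows. The paper's exact wire format is not specified in the abstract, so this illustrative encoder simply sends each new character's full code point once and replaces repeats with a small index into the per-destination table; the arrangement and formatting step of the real algorithm is omitted.

```python
# Hedged sketch of a per-destination mapping-table encoder/decoder for
# short Unicode strings. The token format is invented for illustration.

def encode(text, table):
    """Encode `text` against `table` (the per-destination character list).
    Output: list of ('new', codepoint) or ('ref', index) tokens."""
    out = []
    for ch in text:
        if ch in table:
            out.append(("ref", table.index(ch)))   # repeat: small index
        else:
            table.append(ch)
            out.append(("new", ord(ch)))           # first occurrence: full code point
    return out

def decode(tokens, table):
    """Rebuild the string, replaying the same table updates."""
    chars = []
    for kind, value in tokens:
        if kind == "new":
            table.append(chr(value))
            chars.append(chr(value))
        else:
            chars.append(table[value])
    return "".join(chars)

sender_table = []
tokens = encode("سلام سلام", sender_table)     # a short Persian greeting, repeated
print(decode(tokens, []))                       # round-trips the original string
new_count = sum(1 for k, _ in tokens if k == "new")
print(new_count)  # only the distinct characters travel as full code points
```

Because 'ref' tokens can be packed into a byte or less while 'new' tokens carry a multi-byte code point, savings grow with character repetition, which is exactly the regime of short same-script messages.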

Keywords: algorithms, data compression, decoding, encoding, Huffman codes, text communication

Procedia PDF Downloads 334
6706 Semi-Supervised Learning Using Pseudo F Measure

Authors: Mahesh Balan U, Rohith Srinivaas Mohanakrishnan, Venkat Subramanian

Abstract:

Positive and unlabeled (PU) learning has recently gained attention in both academic and industry research literature because of its relevance to existing business problems. Yet challenges remain in validating the performance of PU learning, since the actual truth of the unlabeled data points is unknown, in contrast to binary classification where the truth is known. In this study, we propose a novel PU learning technique based on the pseudo-F measure that addresses this research gap. In this approach, we train the PU model to discriminate between the probability distributions of the positive and unlabeled points in the validation and spy data. The predicted probabilities of the PU model undergo a two-fold validation: (a) the predicted probabilities of reliable positives and predicted positives should come from the same distribution; (b) the predicted probabilities of predicted positives and predicted unlabeled points should come from different distributions. We experimented with this approach in a credit marketing case study on one of the world's biggest fintech platforms, benchmarked its performance, and backtested it using historical data. This study contributes to the existing literature on semi-supervised learning.

Keywords: PU learning, semi-supervised learning, pseudo f measure, classification

Procedia PDF Downloads 220
6705 Effects of Type and Concentration Stabilizers on the Characteristics of Nutmeg Oil Nanoemulsions Prepared by High-Pressure Homogenization

Authors: Yuliani Aisyah, Sri Haryani, Novi Safriani

Abstract:

Nutmeg oil is one of the essential oils that have the ability as an antibacterial so it potentially uses to inhibit the growth of undesirable microbes in food. However, the essential oil that has low solubility in water, high volatile content, and strong aroma properties is difficult to apply in to foodstuffs. Therefore, the oil-in-water nanoemulsion system was used in this research. Gelatin, lecithin and tween 80 with 10%, 20%, 30% concentrations have been examined for the preparation of nutmeg oil nanoemulsions. The physicochemical properties and stability of nutmeg oil nanoemulsion were analyzed on viscosity, creaming index, emulsifying activity, droplet size, and polydispersity index. The results showed that the type and concentration stabilizer had a significant effect on viscosity, creaming index, droplet size and polydispersity index (P ≤ 0,01). The nanoemulsions stabilized with tween 80 had the best stability because the creaming index value was 0%, the emulsifying activity value was 100%, the droplet size was small (79 nm) and the polydispersity index was low (0.10) compared to the nanoemulsions stabilized with gelatin and lecithin. In brief, Tween 80 is strongly recommended to be used for stabilizing nutmeg oil nanoemulsions.

Keywords: nanoemulsion, nutmeg oil, stabilizer, stability

Procedia PDF Downloads 144
6704 Spontaneous Message Detection of Annoying Situation in Community Networks Using Mining Algorithm

Authors: P. Senthil Kumari

Abstract:

A main concern in data mining research is handling ambiguity, noise, and incompleteness in text data. We describe an innovative approach for detecting spontaneous (unplanned) text messages in community networks using a classification mechanism. In a concrete application, where the community network provides only modest privacy safeguards, annoying content is filtered at the level of individual user messages. The mining methodology provides the capability to filter messages directly and likewise recover classification quality. We adopt learning-based mining approaches with a pre-processing technique to accomplish this. Our work deals with rule-based personalization for automatic text categorization, which is applicable in many different frameworks; a tolerance value permits comments to be filtered according to a variety of conditions associated with the policy or rule arrangements processed by the learning algorithm. Remarkably, we find that the choice of classifier strongly affects how well class labels are predicted for controlling inadequate documents on the community network.
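The rule-with-tolerance idea can be made concrete with a toy filter. This is a loose illustration of our own, not the authors' system: the rule patterns, weights, and tolerance value are all invented for demonstration.

```python
# Toy rule-based message filter with a tolerance value: each matched rule
# contributes a weight, and a message is flagged only when the accumulated
# score exceeds the tolerance configured for the community. All patterns and
# weights below are illustrative placeholders.

RULES = {          # pattern -> weight (hypothetical)
    "spam": 2.0,
    "click here": 3.0,
    "free money": 3.0,
}

def classify(message, tolerance=2.5):
    """Return 'annoying' if the weighted rule matches exceed the tolerance."""
    text = message.lower()
    score = sum(weight for pattern, weight in RULES.items() if pattern in text)
    return "annoying" if score > tolerance else "acceptable"
```

In a learning-based system the weights would be fitted by the learning algorithm rather than set by hand, with the tolerance tuned per community policy.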

Keywords: text mining, data classification, community network, learning algorithm

Procedia PDF Downloads 487
6703 Monoallelic and Biallelic Deletions of 13q14 in a Group of 36 CLL Patients Investigated by CGH Haematological Cancer and SNP Array (8x60K)

Authors: B. Grygalewicz, R. Woroniecka, J. Rygier, K. Borkowska, A. Labak, B. Nowakowska, B. Pienkowska-Grela

Abstract:

Introduction: Chronic lymphocytic leukemia (CLL) is the most common form of adult leukemia in the Western world. Hemizygous and/or homozygous loss at 13q14 occurs in more than half of cases and constitutes the most frequent chromosomal abnormality in CLL. It is believed that 13q14 deletions play a role in CLL pathogenesis. Two microRNA genes, miR-15a and miR-16-1, are targets of 13q14 deletions and play a tumor suppressor role by targeting the antiapoptotic BCL2 gene. Deletion size, as a single change detected by FISH analysis, has prognostic significance. Patients with small deletions, without involvement of the RB1 gene, have the best prognosis and the longest overall survival time (OS 133 months). In patients with a bigger deletion region containing the RB1 gene, prognosis drops to intermediate, as in patients with a normal karyotype and no changes in FISH, with an overall survival of 111 months. Aim: Precise delineation of 13q14 deletion regions in two groups of CLL patients, with mono- and biallelic deletions, and assessment of their prognostic significance. Methods: 13q14 deletions were detected by FISH analysis with a CLL probe panel (D13S319, LAMP1, TP53, ATM, CEP-12). Accurate deletion sizes were determined with a CGH Haematological Cancer and SNP array (8x60K). Results: Our investigated group of CLL patients with the 13q14 deletion detected by FISH comprised two groups: 18 patients with monoallelic deletions and 18 patients with biallelic deletions. In FISH analysis, the fraction of cells carrying the deletion ranged from 43% to 97% in the monoallelic group and from 11% to 94% in the biallelic group. Microarray analysis revealed the precise deletion regions. In the monoallelic group, deletion sizes ranged from 348.12 Kb to 34.82 Mb, with a median of 7.93 Mb. In the biallelic group, total deletion sizes ranged from 135.27 Kb to 33.33 Mb, with a median of 2.52 Mb. The median size of the smaller deletion region on one copy of chromosome 13 was 1.08 Mb, while the average size of the bigger deletion on the second chromosome 13 was 4.04 Mb. In the monoallelic group, the deletion region covered the RB1 gene in 8/18 cases. In the biallelic group, 4/18 cases showed deletion of the RB1 gene on one copy of the biallelic deletion and 2/18 on both deleted 13q14 regions. All minimal deleted regions included the miR-15a and miR-16-1 genes. Genetic results will be correlated with clinical data. Conclusions: Application of the CGH microarray technique in CLL allows accurate delineation of the size of 13q14 deletion regions, which has prognostic value. All deleted regions included miR-15a and miR-16-1, confirming the essential role of these genes in CLL pathogenesis. In our investigated groups of CLL patients with mono- and biallelic 13q14 deletions, patients with biallelic deletions presented smaller deletion sizes (2.52 Mb vs 7.93 Mb), which is connected with better prognosis.

Keywords: CLL, deletion 13q14, CGH microarrays, SNP array

Procedia PDF Downloads 244
6702 Classification of Random Doppler-Radar Targets during the Surveillance Operations

Authors: G. C. Tikkiwal, Mukesh Upadhyay

Abstract:

During surveillance operations in war or peacetime, the radar operator sees a scatter of targets on the screen. A target may be a tracked vehicle such as a tank (T72, BMP, etc.), a wheeled vehicle (ALS, TATRA, 2.5-tonne, Shaktiman), marching troops, moving convoys, and so on. The radar operator selects one of the promising targets for single target tracking (STT) mode. Once the target is locked, the operator hears a typical audible signal in his headphones. Drawing on experience and training gained over time, the operator then identifies the random target. But this process is cumbersome and depends solely on the skills of the operator, and thus may lead to misclassification of the object. In this paper, we present a technique using mathematical and statistical methods, namely the fast Fourier transform (FFT) and principal component analysis (PCA), to identify random objects. The classification process is based on transforming the audible signature of the target into musical octave notes. The whole methodology is then automated in software. This automation increases the efficiency of identifying the random target by reducing the chance of misclassification. The whole study is based on live data.
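The signature-to-octave-note step can be sketched in a few lines. This is our own simplified illustration, not the authors' pipeline: it assumes a mono stream of PCM samples, uses a naive DFT in place of an FFT library, and maps only the single dominant frequency to a note, whereas the paper works with full audible signatures.

```python
# Rough sketch of the transform described above: move the audible target
# signature into the frequency domain, find the dominant frequency, and map
# it onto a musical octave note (12-tone equal temperament, A0 = 27.5 Hz).
# The resulting notes would then serve as features for PCA/classification.
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT magnitudes up to the Nyquist bin; a real system would use an FFT."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def dominant_freq(samples, sample_rate):
    """Frequency (Hz) of the strongest non-DC spectral bin."""
    mags = dft_magnitudes(samples)
    k = max(range(1, len(mags)), key=lambda i: mags[i])   # skip the DC bin
    return k * sample_rate / len(samples)

def to_octave_note(freq, base=27.5):
    """Map a frequency to its nearest octave note, e.g. 220 Hz -> 'A3'."""
    names = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
    semitones = round(12 * math.log2(freq / base))
    return names[semitones % 12] + str(semitones // 12)
```

Representing Doppler signatures as discrete notes gives a compact, operator-interpretable feature space before PCA is applied.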

Keywords: radar target, FFT, principal component analysis, eigenvector, octave-notes, DSP

Procedia PDF Downloads 380
6701 Development and Evaluation of Naringenin Nanosuspension to Improve Antioxidant Potential

Authors: Md. Shadab, Mariyam N. Nashid, Venkata Srikanth Meka, Thiagarajan Madheswaran

Abstract:

Naringenin (NAR) is a naturally occurring plant flavonoid, found predominantly in citrus fruits, that possesses a wide range of pharmacological properties, including antioxidant, anti-inflammatory, cholesterol-lowering, and anticarcinogenic activities. However, despite the therapeutic potential of naringenin shown in a number of animal models, its clinical development has been hindered by its low aqueous solubility, slow dissolution rate, and inefficient transport across biological membranes, resulting in low bioavailability. Naringenin nanosuspensions were produced with the stabilizer Tween® 80 by high-pressure homogenization. The nanosuspensions were characterized with regard to size (photon correlation spectroscopy, PCS), size distribution, charge (zeta potential measurements), morphology, short-term physical stability, dissolution profile, and antioxidant potential. A nanocrystal PCS size of about 500 nm was obtained after 20 homogenization cycles at 1500 bar. Short-term stability was assessed by storing the nanosuspensions at 4 °C, room temperature, and 40 °C. Results showed that the naringenin nanosuspension was physically unstable, with large fluctuations in particle size and zeta potential after 30 days. The naringenin nanosuspension demonstrated higher drug dissolution (97.90%) than naringenin powder (62.76%) after 120 minutes of testing. The nanosuspension also showed increased antioxidant activity compared to the powder, with DPPH radical scavenging activities of 49.17% and 31.45%, respectively, at the lowest DPPH concentration.

Keywords: bioavailability, naringenin, nanosuspension, oral delivery

Procedia PDF Downloads 315
6700 Prevalence of Lower Third Molar Impactions and Angulations Among Yemeni Population

Authors: Khawlah Al-Khalidi

Abstract:

The purpose of this study was to investigate the prevalence of lower third molars in a sample of patients from Ibb University Affiliated Hospital, to study and categorise their position using the Pell and Gregory classification, and to look into a possible correlation between their position and the indication for extraction. Materials and methods: This is a retrospective, observational study of a sample of 200 patients from Ibb University Affiliated Hospital, aged 16 to 21, based on validated patient records and orthopantomography performed at screening appointments. Results and discussion: Males make up 63% of the sample, and patients aged 19 to 20 make up 41.2%. Lower third molars were found in 365 of the 400 sites examined, accounting for 91% of the sample under study. According to the Pell and Gregory classification, the most common position is IIB (37%), followed by IIA (21%); less common classes are IIIA, IC, and IIIC, with 1%, 3%, and 3%, respectively. Extraction was recommended at the screening consultation for 56% of the lower third molars in the sample. In conclusion, there are differences in third molar location and angulation, and a link was found between the space available for third molar eruption and the need for tooth extraction.

Keywords: lower third molar, extraction, Pell and Gregory classification, lower third molar impaction

Procedia PDF Downloads 33
6699 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network

Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza

Abstract:

The aim of the present work is to build a model based on tissue characterization that is able to discriminate pathological from non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with an optimized number of neurons in its hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each ROI, six distinct sets of texture features are extracted: first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet features, for a total of 270 texture features. We show that with the injection of contrast liquid and the analysis of more phases, the most relevant features in each region change. Our results show that phase 3 is the best for detecting HCC tumors for most of the features fed to the classification algorithm. Using first-order histogram parameters, the detection accuracy between the two classes was 85% in phase 1, 95% in phase 2, and 95% in phase 3.
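The search for an optimal hidden-layer size can be sketched schematically. This is our own illustration, not the study's implementation: `train_and_score` is a hypothetical placeholder for the actual routine that trains a network on the selected texture features and returns its validation accuracy.

```python
# Schematic selection of the hidden-layer neuron count by validation accuracy:
# train one network per candidate size and keep the best. `train_and_score`
# is a placeholder for the real training/evaluation over the 270 features.

def select_hidden_size(candidates, train_and_score):
    """Return (best_size, best_accuracy) over the candidate neuron counts."""
    scored = [(train_and_score(n), n) for n in candidates]
    best_acc, best_n = max(scored)
    return best_n, best_acc
```

The same loop can be repeated per phase, which is how a phase-by-phase comparison like the 85%/95%/95% figures above would be produced.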

Keywords: multi-phasic liver images, texture analysis, neural network, hidden layer

Procedia PDF Downloads 248
6698 Comparative Analysis of Patent Protection between Health System and Enterprises in Shanghai, China

Authors: Na Li, Yunwei Zhang, Yuhong Niu

Abstract:

The study discusses the patent protections of the health system and enterprises in Shanghai. The technical distributions and scopes of patent protection of the Shanghai health system and enterprises were compared using IPC classification, co-word analysis, and visual social network methods. Results showed a decreasing order of patents within IPC area A61, namely A61B, A61K, A61M, and A61F; A61B was investigated further. Within A61B, subclass A61B17 had the highest number of granted patents. Within A61B17, fracture fixation, ligament reconstruction, cardiac surgery, and biopsy detection were fields of common concern for the Shanghai health system and enterprises. However, whereas Shanghai enterprises paid attention to cardiac closure, the Shanghai health system was more inclined toward blockages and hemostatic tools. The results also revealed that the scopes of patent protection of Shanghai enterprises were relatively centralized: Shanghai enterprises had a series of comprehensive strategies for protecting core patents. In contrast, the Shanghai health system lacked strategic protection for its core patents.

Keywords: co-words analysis, IPC classification, patent protection, technical distribution

Procedia PDF Downloads 118
6697 Small Farm Diversification Opportunities in Viticulture-Winemaking Sector of Georgia

Authors: E. Kharaishvili

Abstract:

The paper analyses the role of small farms in the socio-economic development of agriculture in Georgia and evaluates modern concepts regarding the development of farms of this size. The scale of farms in Georgia is studied and the major problems are revealed. Opportunities and directions for diversification are discussed from the perspective of increasing the share of Georgian grapes and wine on both domestic and international markets. It is shown that the size of vineyard areas is directly reflected in the grape and wine production potential; accordingly, the dynamics of vineyard area and grape production are discussed. A comparative analysis of small farms in Georgia and Italy is made and the major differences are identified. Diversification is evaluated based on cost-benefit analysis on the one hand and, on the other, on promoting economic activity, protecting nature, and developing rural areas. The paper provides evidence for the outcomes of diversification. The key factors hindering the development of small farms are identified and corresponding conclusions are drawn, based on which recommendations for diversification of farms of this size are developed.

Keywords: small farms, scale of farms, diversification, Georgia

Procedia PDF Downloads 374
6696 Effect of Cement Amount on California Bearing Ratio Values of Different Soil

Authors: Ayse Pekrioglu Balkis, Sawash Mecid

Abstract:

Due to the continued growth and rapid development of road construction worldwide, road sub-layers consist of soil layers. Identifying the type of soil and its behavior under different conditions helps engineers select soil according to specifications and engineering characteristics and, where necessary, stabilize the soil and treat undesirable properties by adding materials such as bitumen, lime, or cement. If the soil beneath the road does not meet the standards, construction will take more time: a large part of the soil must be removed, transported, and sometimes deposited, after which purchased sand and gravel are transported to the site, placed at full depth, and compacted. Stabilization with cement or other treatments makes it possible to use the existing soil as a base material instead of removing it and purchasing and transporting better fill materials. Classifying the soil according to the AASHTO and USCS systems helps engineers anticipate soil behavior and select the best treatment method. In this study, soil classification and the relation between soil classification and stabilization method are discussed; cement stabilization at different percentages was selected for soil treatment based on NCHRP. There are different parameters for defining the strength of soil; in this study, the California Bearing Ratio (CBR) is used. Cement was added to the soil at 0%, 3%, 7%, and 10% to evaluate the effect of cement content on the CBR of the treated soil. Carrying out the stabilization process at different cement contents helps engineers select an economical cement amount according to project specifications and characteristics. Stabilization at optimum moisture content (OMC) and the effect of mixing rate on soil strength were examined in laboratory and field construction operations to observe the improvement in strength and plasticity.
Cement stabilization is quicker than a universal method such as removing and replacing field soils. Cement addition increases the CBR values of different soil types by 22-69%.

Keywords: California Bearing Ratio, cement stabilization, clayey soil, mechanical properties

Procedia PDF Downloads 380
6695 Engagement Analysis Using DAiSEE Dataset

Authors: Naman Solanki, Souraj Mondal

Abstract:

With the world moving towards online communication, the store of video data has exploded in the past few years. Consequently, it has become crucial to analyse participants' engagement levels in online communication videos. Engagement prediction of people in videos can be useful in many domains, such as education, client meetings, and dating. Video-level or frame-level engagement prediction for a user requires robust models that can capture facial micro-emotions efficiently. Developing an engagement prediction model requires a widely accepted standard dataset for engagement analysis. DAiSEE is one such dataset: it consists of in-the-wild data and has gold-standard annotations for engagement prediction. Earlier research using the DAiSEE dataset trained and tested standard models, such as CNN-based models, but the results were not satisfactory by industry standards. In this paper, a multi-level classification approach is introduced to create a more robust model for engagement analysis on the DAiSEE dataset. This approach achieved test accuracies of 0.638, 0.7728, 0.8195, and 0.866 for predicting boredom, engagement, confusion, and frustration levels, respectively.
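The multi-level setup can be sketched as one classifier head per affective state. This is our own schematic, not the paper's architecture: the `heads` callables are placeholders for trained models over shared video features, and the 4-point levels follow DAiSEE's annotation scheme.

```python
# Schematic multi-level classification: one head per affective state, each
# predicting its own intensity level (0-3) from shared video features, with
# accuracy reported per state, matching how DAiSEE results are tabulated.

STATES = ["boredom", "engagement", "confusion", "frustration"]

def predict_levels(features, heads):
    """`heads` maps state name -> callable(features) -> level in {0, 1, 2, 3}."""
    return {state: heads[state](features) for state in STATES}

def accuracy(preds, labels):
    """Per-state accuracy over parallel lists of prediction/label dicts."""
    hits = {s: 0 for s in STATES}
    for p, y in zip(preds, labels):
        for s in STATES:
            hits[s] += p[s] == y[s]
    return {s: hits[s] / len(preds) for s in STATES}
```

Scoring each state separately is what yields four distinct accuracies (e.g. 0.638 for boredom vs 0.866 for frustration) instead of one aggregate figure.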

Keywords: computer vision, engagement prediction, deep learning, multi-level classification

Procedia PDF Downloads 100
6694 Data Mining Model for Predicting the Status of HIV Patients during Drug Regimen Change

Authors: Ermias A. Tegegn, Million Meshesha

Abstract:

Human Immunodeficiency Virus / Acquired Immunodeficiency Syndrome (HIV/AIDS) is a major cause of death in most African countries. Ethiopia is one of the most seriously affected countries in sub-Saharan Africa. Previously in Ethiopia, having HIV/AIDS was almost equivalent to a death sentence; with the introduction of Antiretroviral Therapy (ART), HIV/AIDS has become a chronic but manageable disease. The study applies a data mining technique to predict the future living status of HIV/AIDS patients at the time of drug regimen change, i.e., when patients develop toxicity to the ART drug combination they are currently taking. The data are taken from the University of Gondar Hospital ART program database. A hybrid methodology is followed to explore the application of data mining to the ART program dataset. Data cleaning, handling of missing values, and data transformation were used for pre-processing. The WEKA 3.7.9 data mining tools, classification algorithms, and domain expertise were used to address the research problem. Using four different classification algorithms (J48 decision tree, PART rule induction, Naïve Bayes, and neural network) and adjusting their parameters, thirty-two models were built on the pre-processed University of Gondar ART program dataset. The performance of the models was evaluated using the standard metrics of accuracy, precision, recall, and F-measure. The most effective model for predicting the status of HIV patients at drug regimen substitution is a pruned J48 decision tree, with a classification accuracy of 98.01%. The study identifies informative attributes such as Ever taking Cotrim, Ever taking TbRx, CD4 count, Age, Weight, and Gender for predicting the status at drug regimen substitution. The outcome of this study can be used as an assistant tool for clinicians to help them make more appropriate drug regimen substitutions. Future research directions are suggested towards an applicable system in the area of the study.
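The core of a J48-style decision tree — choosing the split attribute with the highest information gain — can be sketched briefly. This is our own illustration, not the study's WEKA pipeline; the attribute and label values in the usage are invented, with only the attribute names echoing those reported above.

```python
# Hedged sketch of J48-style attribute selection: compute the entropy
# reduction (information gain) from splitting on each candidate attribute
# and pick the best. Rows are dicts of attribute values; labels are classes.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting `rows` on `attr`."""
    total = entropy(labels)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return total - remainder

def best_split(rows, labels, attrs):
    """Attribute with the highest information gain (one level of a J48 tree)."""
    return max(attrs, key=lambda a: information_gain(rows, labels, a))
```

J48 applies this choice recursively and then prunes the tree; the pruned variant is what achieved the 98.01% accuracy reported above.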

Keywords: HIV drug regimen, data mining, hybrid methodology, predictive model

Procedia PDF Downloads 132