Search results for: automatic classification of tremor types
The Status of Precision Agricultural Technology Adoption on Row Crop Farms vs. Specialty Crop Farms
Authors: Shirin Ghatrehsamani
Abstract:
Higher efficiency and lower environmental impact are the consequence of using advanced technology in farming. They also help to decrease yield variability by diminishing the impact of weather variability, optimizing nutrient and pest management, and reducing competition from weeds. A better understanding of the pros and cons of applying technology, and identifying the main reasons preventing its utilization, has a significant impact on developing technology adoption among farmers and producers in the digital agriculture era. The results from two surveys carried out in 2019 and 2021 were used to investigate whether crop type had an impact on the willingness to utilize technology on farms. The main focus of the questionnaire was on the utilization of precision agriculture (PA) technologies among farmers in some parts of the United States. Collected data were analyzed to determine the practical application of various technologies. The survey results showed similarities between the two crop types in the main reasons not to use PA, but the current application of technology in specialty crops is generally five times larger than in row crops. GPS receiver use was reported to be similar for both types of crops. Lack of knowledge and the high cost of data handling were cited as the main problems. The most significant difference was in the use of variable rate technology, which was 43% for specialty crops but 0% for row crops. Pest scouting and mapping were commonly used for specialty crops, while they were rarely applied for row crops. Survey respondents found yield mapping, soil sampling maps, and irrigation scheduling more valuable for specialty crops than for row crops in management decisions. About 50% of the respondents would like to share their PA data for both types of crops. Almost 50% of respondents got their PA information from retailers in both categories, and as a second source, extension agents were more common for specialty crops than for row crops.
Keywords: precision agriculture, smart farming, digital agriculture, technology adoption
Mechanical and Physical Properties of Various Types of Dental Floss
Authors: Supanitayanon Lalita, Dechkunakorn Surachai, Anuwongnukroh Niwat, Srikhirin Toemsak, Roongrujimek Pitchaya, Tua-Ngam Peerapong
Abstract:
Objective: To compare the maximum load, percentage of elongation, and physical characteristics of four types of dental floss: (1) Thai Silk Floss (silk, waxed), (2) Oral B® Essential Floss (nylon, waxed), (3) Experimental Floss Xu (nylon, unwaxed), and (4) Experimental Floss Xw (nylon, waxed). Materials & method: The four types of floss were tested (n=30) with a Universal Testing Machine (Instron®). Each sample (30 cm long, 5 cm segment) was fixed and pulled apart with a load cell of 100 N at a test speed of 100 mm/min. Physical characteristics were investigated by digital microscope under 2.5×10 magnification and by scanning electron microscope under 1×100 and 5×100 magnification. The size of the filaments was measured in microns (μm) and their fineness in denier. Statistical analysis: For the mechanical properties, the maximum load and the percentage of elongation are presented as mean ± SD. The distribution of the data was assessed with the Kolmogorov-Smirnov test. One-way ANOVA and multiple comparison (Tukey HSD) were used to analyze the differences among the groups, with statistical significance set at p < 0.05. Results: The maximum loads of Floss Xu, Floss Xw, Oral B and Thai Silk were 47.39, 46.46, 25.38, and 23.70 N, respectively. The percentages of elongation of Oral B, Floss Xw, Floss Xu and Thai Silk were 72.43, 44.62, 31.25, and 16.44%, respectively. All four types of dental floss showed statistically significant differences in both maximum load and percentage of elongation at p < 0.05, except for the maximum load between Floss Xw and Floss Xu, which showed no statistically significant difference. The physical characteristics of Thai Silk revealed the most disintegrated, smallest, and least fine filaments. Conclusion: Floss Xu had the highest maximum load. Oral B had the highest percentage of elongation. Wax coating on Floss X increased the elongation but had no significant effect on the maximum load. The physical characteristics of Thai Silk resulted in the lowest mechanical property values.
Keywords: dental floss, maximum load, mechanical property, percentage of elongation, physical property
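As a rough illustration of the statistical pipeline described above, the sketch below runs a one-way ANOVA and a Tukey HSD comparison in Python on synthetic maximum-load samples whose group means follow the reported values; the spreads and everything else about the data are assumed for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Illustrative maximum-load samples (N) for the four floss types, n = 30 each;
# group means follow the reported values, standard deviations are assumed.
groups = {
    "Floss Xu":  rng.normal(47.39, 3.0, 30),
    "Floss Xw":  rng.normal(46.46, 3.0, 30),
    "Oral B":    rng.normal(25.38, 3.0, 30),
    "Thai Silk": rng.normal(23.70, 3.0, 30),
}
f_stat, p = f_oneway(*groups.values())               # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 30)
print(pairwise_tukeyhsd(values, labels, alpha=0.05)) # Tukey HSD multiple comparison
```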
Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features
Authors: Kyi Pyar Zaw, Zin Mar Kyu
Abstract:
Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the recognition rate may be low or high depending on the extracted features. In this paper, 25 features per character are used for recognition. There are basically three steps in character recognition: character segmentation, feature extraction, and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method for character segmentation. In the feature extraction step, features are extracted in two ways. First, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. Second, the input character is divided into 16 blocks; for each block, 8 feature values are obtained through the same eight-direction chain code frequency extraction method, and the sum of these 8 values is defined as a single feature for that block, yielding 16 further features. The number-of-holes feature is used to cluster similar characters. With these features, almost all common Myanmar characters in various font sizes can be recognized. All 25 features are used in both the training and the testing phases. In the classification step, characters are classified by matching all features of the input character against the already trained character features.
Keywords: chain code frequency, character recognition, feature extraction, features matching, segmentation
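The core of the feature extractor described above can be sketched compactly. The following is a minimal, illustrative Python implementation of eight-direction chain code frequency extraction over a pixel contour; the direction numbering and the toy contour are assumptions, not the authors' code.

```python
import numpy as np

# 8-direction chain codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
# (row index grows downward, so "N" is a step of -1 in the row direction)
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code_frequency(boundary):
    """Normalized 8-bin histogram of chain codes along a pixel boundary.

    boundary: list of (row, col) coordinates of consecutive contour pixels.
    """
    hist = np.zeros(8)
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:]):
        hist[DIRECTIONS[(r1 - r0, c1 - c0)]] += 1
    return hist / max(hist.sum(), 1)

# A small square contour traced clockwise: 2 steps E, 2 S, 2 W, 2 N.
contour = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2),
           (2, 1), (2, 0), (1, 0), (0, 0)]
print(chain_code_frequency(contour))  # equal mass in bins 0 (E), 6 (S), 4 (W), 2 (N)
```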
Partial Least Square Regression for High-Dimensional and High-Correlated Data
Authors: Mohammed Abdullah Alshahrani
Abstract:
The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from the original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data
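A minimal sketch of the basic PLS workflow the abstract builds on, using scikit-learn's PLSRegression on synthetic data with far more (highly correlated) predictors than observations; the data, component count, and scoring are illustrative assumptions, and the paper's three-algorithm comparison and sparse Cauchy-lasso penalty are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 40, 500                       # far more predictors than observations
latent = rng.normal(size=(n, 3))     # three underlying latent factors
X = latent @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))  # highly correlated columns
y = latent @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

pls = PLSRegression(n_components=3)  # latent components summarize the predictors
print(cross_val_score(pls, X, y, cv=5).mean())  # cross-validated R^2
```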
The Effects of Land Use Types to Determine the Status of Sustainable River
Authors: Michael Louis Sunaris, Robby Yussac Tallar
Abstract:
The concept of the sustainable river is evolving in Indonesia today. The quality and quantity of many rivers in Indonesia have declined. This degradation is caused by rapid land use change resulting from increased population growth and human activity, and it extends to the existing watersheds, where land use type is an important factor in determining the status of river sustainability. Therefore, an evaluation method is required to determine the sustainability status of a waterbody within a watershed. The purpose of this study is to analyze various types of land use in determining the status of river sustainability, taking the upstream Citarum watershed as the study area. The analysis shows that the sustainability index of the rivers in the study area has changed from good to average or bad. Rapid and uncontrolled land use change, especially in the upper watershed, is the main cause of this change over time, and the cumulative runoff coefficients have increased significantly as a result. These findings indicate that watershed damage affects the yearly water surplus or deficit, so the rivers in Indonesia should be protected and conserved. The river sustainability index indicates the condition of a watershed by defining the status of its rivers, in order to achieve sustainable water resource management.
Keywords: land use change, runoff coefficient, a simple index, sustainable river
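The runoff-coefficient reasoning can be made concrete with a small worked example. The sketch below computes a standard area-weighted composite runoff coefficient; the land use classes and coefficient values are assumed for illustration and do not come from the study.

```python
# Area-weighted (composite) runoff coefficient for a watershed:
# C_w = sum(C_i * A_i) / sum(A_i). Land use change toward impervious
# cover raises C_w and, with it, the runoff volume.
land_use = {  # (runoff coefficient C_i, area A_i in ha) -- illustrative values
    "forest":      (0.15, 420.0),
    "agriculture": (0.35, 310.0),
    "residential": (0.60, 180.0),
    "paved/urban": (0.85, 90.0),
}
c_w = sum(c * a for c, a in land_use.values()) / sum(a for _, a in land_use.values())
print(f"composite runoff coefficient: {c_w:.3f}")
```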
Semantic Differences between Bug Labeling of Different Repositories via Machine Learning
Authors: Pooja Khanal, Huaming Zhang
Abstract:
Labeling of issues/bugs, also known as bug classification, plays a vital role in software engineering. Some known labels/classes of bugs are 'User Interface', 'Security', and 'API'. Most of the time, when a reporter reports a bug, they try to assign some predefined label to it. Those issues are reported for a project, and each project is a repository in GitHub/GitLab, which contains multiple issues. There are many software project repositories, ranging from individual projects to commercial projects. The labels assigned in different repositories may depend on various factors like human instinct, generalization of labels, the label assignment policy followed by the reporter, etc. While the reporter of an issue may instinctively give that issue a label, another person reporting the same issue may label it differently. This way, it is not known mathematically whether a label in one repository is similar to or different from the label in another repository. Hence, the primary goal of this research is to find the semantic differences between bug labeling of different repositories via machine learning. Independent optimal classifiers for individual repositories are built first using text features from the reported issues; an optimal classifier may be a combination of multiple classifiers stacked together. Those classifiers are then used to cross-test other repositories, which allows the similarity of labels to be deduced mathematically. The product of this ongoing research includes a formalized open-source GitHub issues database that is used to deduce the similarity of the labels pertaining to different repositories.
Keywords: bug classification, bug labels, GitHub issues, semantic differences
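A minimal sketch of the cross-testing idea, assuming a TF-IDF text representation and a logistic-regression classifier (the paper's actual optimal classifiers may be stacked combinations): train on one repository's labeled issues, then measure label agreement on another repository. The toy issue texts are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative issue corpora; real data would come from the GitHub issues database.
repo_a = (["XSS vulnerability leaks the session token",
           "Submit button misaligned on mobile layout"],
          ["Security", "User Interface"])
repo_b = (["Stored XSS in the comment field",
           "Dark mode button colors wrong on mobile"],
          ["Security", "User Interface"])

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(*repo_a)                 # "optimal" classifier for repository A
agreement = clf.score(*repo_b)   # cross-test against repository B's labels
print(f"cross-repository label agreement: {agreement:.2f}")
```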
Body Types of Softball Players in the 39th National Games of Thailand
Authors: Nopadol Nimsuwan, Sumet Prom-in
Abstract:
The purpose of this study was to investigate the body types, size, and body compositions of softball players in the 39th National Games of Thailand. The population of this study was the 352 softball players who participated in the 39th National Games of Thailand, from which a sample size of 291 was determined using the Taro Yamane formula, with selection made by a stratified sampling method. The data collected were weight, height, arm length, leg length, chest circumference, mid-upper arm circumference, calf circumference, and subcutaneous fat in the upper arm area, the scapula bone area, the area above the pelvis, and the mid-calf area. The Keys and Brozek formula was used to calculate fat quantity, the Kitagawa formula to calculate muscle quantity, and the Heath and Carter method to determine the somatotype values. The results of the study can be concluded as follows. The average somatotype of the male softball players was the endo-mesomorph body type, while that of the female softball players was the meso-endomorph body type. When considered according to playing position, it was found that the male softball players in every position had the endo-mesomorph body type, while the female softball players in every position had the meso-endomorph body type, except for the center fielder, who had the endo-ectomorph body type. The endo-mesomorph body type is suitable for male softball players, and the meso-endomorph body type is suitable for female softball players, because these body types suit the five basic softball skills: gripping, throwing, catching, hitting, and base running. Thus, those involved in selecting softball players for sports competitions at different levels should consider the body type, size, and body components of the players.
Keywords: body types, softball players, national games of Thailand, social sustainability
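The Taro Yamane sample-size formula mentioned above is simple enough to state in code. A small sketch follows; the error levels e are assumptions, since the abstract does not report the one used.

```python
def yamane(N, e):
    """Taro Yamane sample size: n = N / (1 + N * e**2)."""
    return round(N / (1 + N * e ** 2))

# Population of 352 players, as in the study; the tolerated error e is an assumption.
for e in (0.05, 0.03, 0.025):
    print(f"e = {e}: n = {yamane(352, e)}")
```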
DOG1 Expression Is Common in Human Tumors: A Tissue Microarray Study on More than 15,000 Tissue Samples
Authors: Kristina Jansen, Maximilian Lennartz, Patrick Lebok, Guido Sauter, Ronald Simon, David Dum, Stefan Steurer
Abstract:
DOG1 (Discovered on GIST1) is a voltage-gated calcium-activated chloride and bicarbonate channel that is highly expressed in interstitial cells of Cajal and in gastrointestinal stromal tumors (GIST) derived from Cajal cells. To systematically determine in which tumor entities and normal tissue types DOG1 may be further expressed, a tissue microarray (TMA) containing 15,965 samples from 121 different tumor types and subtypes, as well as 608 samples of 76 different normal tissue types, was analyzed by immunohistochemistry. DOG1 immunostaining was found in 67 tumor types, including GIST (95.7%), esophageal squamous cell carcinoma (31.9%), pancreatic ductal adenocarcinoma (33.6%), adenocarcinoma of the Papilla Vateri (20%), squamous cell carcinoma of the vulva (15.8%) and the oral cavity (15.3%), mucinous ovarian cancer (15.3%), esophageal adenocarcinoma (12.5%), endometrioid endometrial cancer (12.1%), neuroendocrine carcinoma of the colon (11.1%) and diffuse gastric adenocarcinoma (11%). Low-level DOG1 immunostaining was seen in 17 additional tumor entities. DOG1 expression was unrelated to histopathological parameters of tumor aggressiveness and/or patient prognosis in cancers of the breast (n=1,002), urinary bladder (975), ovary (469), endometrium (173), stomach (233), and thyroid gland (512). High DOG1 expression was linked to estrogen receptor expression in breast cancer (p<0.0001) and the absence of HPV infection in squamous cell carcinomas (p=0.0008). In conclusion, our data identify several tumor entities that can show DOG1 expression at levels similar to those in GIST. Although DOG1 is tightly linked to a diagnosis of GIST in spindle cell tumors, the differential diagnosis is much broader for DOG1-positive epithelioid neoplasms.
Keywords: biomarker, DOG1, immunohistochemistry, tissue microarray
Automated Detection of Related Software Changes by Probabilistic Neural Networks Model
Authors: Yuan Huang, Xiangping Chen, Xiaonan Luo
Abstract:
Modern software is continuously updated. The change between two versions usually involves multiple program entities (e.g., packages, classes, methods, attributes) with multiple purposes (e.g., changed requirements, bug fixing). It is hard for developers to understand which changes were made for the same purpose, and whether two changes are related cannot be decided solely by the relationship between the two entities in the program. In this paper, we summarize 4 coupling rules (16 instances) and 4 state-combination types at the class, method, and attribute levels for software changes. A Related Change Vector (RCV) is defined based on the coupling rules and state-combination types, and is applied to classify related software changes using a probabilistic neural network during software updating.
Keywords: PNN, related change, state-combination, logical coupling, software entity
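A probabilistic neural network classifier of the kind named above can be sketched in a few lines: each class is scored by a Parzen-window (Gaussian kernel) density over its training patterns, and the highest-scoring class wins. The 16-dimensional stand-in vectors and the smoothing parameter are assumptions, not the paper's RCV encoding.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Probabilistic neural network: class score = mean Gaussian kernel
    density of a test point over that class's training patterns."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = np.sum((Xc - x) ** 2, axis=1)
            scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

rng = np.random.default_rng(2)
# Illustrative stand-ins for Related Change Vectors: 1 = related, 0 = unrelated.
X = np.vstack([rng.normal(0, 0.3, (20, 16)), rng.normal(1, 0.3, (20, 16))])
y = np.array([0] * 20 + [1] * 20)
print(pnn_predict(X, y, X[:5] + 0.05))  # perturbed class-0 vectors -> predicted 0
```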
Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on encoding schemes (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) with low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNN) carry richer information but lack geometric invariance. For scene classification, scenes contain scattered objects differing in size, category, layout, number and so on, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained on the ImageNet and Places datasets, respectively, are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the different CNNs at multiple scales, we found that each CNN works better in a different scale range; a scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the representations are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
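The Fisher Vector aggregation step can be sketched as follows, using the standard improved-FV first- and second-order statistics under a diagonal-covariance GMM; the random "descriptors" stand in for mid-layer CNN activations at one scale, and the component count is an assumption, not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    """Improved Fisher Vector (first- and second-order statistics) of
    local descriptors X (N x D) under a diagonal-covariance GMM."""
    N = X.shape[0]
    q = gmm.predict_proba(X)                         # N x K posteriors
    mu, sig = gmm.means_, np.sqrt(gmm.covariances_)  # K x D
    w = gmm.weights_
    parts = []
    for k in range(len(w)):
        z = (X - mu[k]) / sig[k]                     # N x D, standardized residuals
        parts.append((q[:, k, None] * z).sum(0) / (N * np.sqrt(w[k])))
        parts.append((q[:, k, None] * (z ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w[k])))
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))           # power normalization
    return fv / max(np.linalg.norm(fv), 1e-12)       # l2 normalization per scale

rng = np.random.default_rng(3)
descriptors = rng.normal(size=(200, 64))  # stand-in for CNN activations at one scale
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(descriptors)
print(fisher_vector(descriptors, gmm).shape)  # (2 * K * D,) = (1024,)
```

Per-scale vectors built this way would then be average-pooled across scales and fed to a linear SVM, as the abstract describes.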
Low-Impact Development Strategies Assessment for Urban Design
Abstract:
Climate change and land-use change caused by urban expansion increase the frequency of urban flooding. To mitigate the increase in runoff volume, low-impact development (LID) is a green approach that reduces the area of impervious surface and manages stormwater at the source with decentralized, micro-scale control measures. However, the current benefit assessment and practical application of LID in Taiwan still tend toward development plans at the community and building-site scales. In urban design, site-based moisture-holding capacity has been the common index for evaluating the effectiveness of LID, which ignores the diversity and complexity of urban built environments, such as their different densities, positive and negative spaces, and building volumes. Such inflexible regulations are not only difficult for most developed areas to implement, but are also unsuitable for the different types of built environments, bringing little benefit to some of them. To strengthen the link between LID and urban design for reducing runoff and coping with urban flooding, this research considers the characteristics of different types of built environments when developing LID strategies. The built environments are classified by cluster analysis based on density measures such as Ground Space Index (GSI), Floor Space Index (FSI), number of floors (L), and Open Space Ratio (OSR), and their impervious surface rates and runoff volumes are analyzed. Flood situations are simulated using a quasi-two-dimensional flood plain flow model, and the flood mitigation effectiveness of different types of built environments under different low-impact development strategies is evaluated. The information from the results of the assessment can be implemented more precisely in urban design. In addition, it helps to enact regulations for low-impact development strategies in urban design that are more suitable for each type of built environment.
Keywords: low-impact development, urban design, flooding, density measures
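A minimal sketch of the classification step, assuming k-means as the clustering method (the abstract does not name one), applied to standardized GSI/FSI/L/OSR values; the block data below are synthetic stand-ins for GIS measurements.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Illustrative density measures per urban block: GSI (coverage), FSI (intensity),
# L (number of floors), OSR (open space ratio); real values would come from GIS data.
blocks = np.column_stack([
    rng.uniform(0.1, 0.8, 100),   # GSI
    rng.uniform(0.3, 6.0, 100),   # FSI
    rng.integers(1, 30, 100),     # L
    rng.uniform(0.05, 2.0, 100),  # OSR
])
X = StandardScaler().fit_transform(blocks)          # put measures on a common scale
types = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(types))                           # size of each built-environment type
```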
Self-Organizing Maps for Credit Card Fraud Detection
Authors: Chun-Yi Peng, Wei-Hsuan Chen, Shyh-Kuang Ueng
Abstract:
This study focuses on the application of self-organizing map (SOM) technology to the analysis of credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, as an artificial neural network, is particularly suited for pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.
Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies
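A minimal SOM-based anomaly detector in the spirit of the approach above: train a small map on normal transactions and flag inputs with unusually large quantization error. Everything here (grid size, training schedules, synthetic features, threshold) is an illustrative assumption, not the study's system.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: pull the best-matching unit (BMU) and its
    grid neighbours toward each sample, shrinking radius and learning rate."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(grid[0], grid[1], data.shape[1]))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(W - x, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)   # BMU coordinates
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, :, None] * (x - W)
    return W

rng = np.random.default_rng(5)
normal = rng.normal(0, 1, (500, 6))                   # stand-in scaled transaction features
tx = np.vstack([normal, rng.normal(4, 1, (5, 6))])    # plus a few anomalous transactions
W = train_som(normal)
qe = np.array([np.min(np.linalg.norm(W - t, axis=2)) for t in tx])  # quantization error
thr = qe[:500].max()                                  # threshold from normal data (assumed rule)
print(np.nonzero(qe > thr)[0])                        # flags the injected anomalies (500-504)
```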
Using Textual Pre-Processing and Text Mining to Create Semantic Links
Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo
Abstract:
This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and, thus, generate new concepts and new semantic links. Even using highly specific vocabularies within the oil domain, our approach achieved satisfactory results, suggesting that the proposal can be applied to other domains and languages, requiring only minor adjustments.
Keywords: semantic links, data mining, linked data, SKOS
Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight into and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs have been transforming into virtual learning environments. Aim: To better address the limitations of the physical laboratory, this research study aims to use a machine learning (ML) algorithm that interfaces with the augmented reality HoloLens and predicts image behavior to classify and detect electronic components. The automated electronic component error classification and detection approach automatically detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practice without supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practice virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used for training a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard, and the images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be used to train the dataset for object recognition and classification: the convolution layers extract image features, which are then classified by the SVM. By adequately labeling and classifying the training data, the model will predict, categorize, and assess whether students place components correctly. The data acquired through the HoloLens thus includes images of students assembling electronic components; the system constantly checks whether students position components appropriately on the breadboard and connect them so that the circuit functions. When students misplace any component, the HoloLens predicts the error before the user places the component in the incorrect position and prompts students to correct their mistakes. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time, and they determine the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
Keywords: augmented reality, machine learning, object recognition, virtual laboratories
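A compact sketch of the hybrid CNN + SVM idea, assuming an untrained toy CNN and random images purely to show the data flow (convolutional feature extraction followed by SVM classification); the study's network, breadboard training data, and HoloLens interface are not reproduced here.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Tiny stand-in CNN; in the study, a network trained on breadboard images would be used.
cnn = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
    nn.Flatten(),                        # 16 * 4 * 4 = 256-dim feature vector
)

rng = np.random.default_rng(6)
images = torch.tensor(rng.random((40, 3, 64, 64)), dtype=torch.float32)
labels = np.array([0, 1] * 20)           # e.g. 0 = resistor, 1 = capacitor (assumed classes)

with torch.no_grad():                    # convolution layers extract the features...
    feats = cnn(images).numpy()
svm = SVC(kernel="rbf").fit(feats, labels)  # ...which the SVM then classifies
print(svm.score(feats, labels))          # in-sample score only confirms the pipeline runs
```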
Study of Metakaolin-Based Geopolymer with Addition of Polymer Admixtures
Authors: Olesia Mikhailova, Pavel Rovnaník
Abstract:
In the present work, a metakaolin-based geopolymer including different polymer admixtures was studied. Different types of commercial polymer admixtures (VINNAPAS®) and polyethylene glycols of different relative molecular weights were used as polymer admixtures. The main objective of this work is to investigate the influence of the different types of admixtures, at different dosages, on the properties of metakaolin-based geopolymer mortars. Mechanical properties, such as flexural and compressive strength, were experimentally determined, and the microstructure of selected specimens was studied using a scanning electron microscope. The results showed that the specimen with the addition of 1.5% of VINNAPAS® 7016 F and 10% of polyethylene glycol 400 achieved the maximum mechanical properties.
Keywords: geopolymer, mechanical properties, metakaolin, microstructure, polymer admixtures, porosity
A Psychophysiological Evaluation of an Affective Recognition Technique Using Interactive Dynamic Virtual Environments
Authors: Mohammadhossein Moghimi, Robert Stone, Pia Rotshtein
Abstract:
Recording psychological and physiological correlates of human performance within virtual environments and interpreting their impacts on human engagement, ‘immersion’ and related emotional or ‘affective’ states is both academically and technologically challenging. By exposing participants to an affective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR and heart rate of 30 male and female gamers, exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows with 28 different timing lengths (e.g. 2, 3, 5, etc. seconds). After reducing the number of features to 30 using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were subsequently employed for the classification process. The classifiers categorised the psychophysiological database into four affective clusters (defined in a 3-dimensional space of valence, arousal and dominance) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found in the classification process based on affective clusters or emotion labels.
Keywords: virtual reality, affective computing, affective VR, emotion-based affective physiological database
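The classification stage can be sketched as below, comparing KNN and an RBF-kernel SVM under 10-fold cross-validation on stand-in feature vectors; with random labels the accuracies are near chance, so the point is the pipeline, not the reported 97%/93% figures.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 30))   # stand-in for the 30 selected EEG/GSR/HR features
y = rng.integers(0, 4, 300)      # stand-in labels for the four affective clusters

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                    ("SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=10)
    print(f"{name}: {acc.mean():.3f} (+/- {acc.std():.3f})")
```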
Hacking's 'Between Goffman and Foucault': A Theoretical Frame for Criminology
Authors: Tomás Speziale
Abstract:
This paper aims to analyse how Ian Hacking sets out the theoretical basis of his research on the classification of people. Although his early philosophical education was based on Foucault, it is also true that Erving Goffman’s perspective provided him with epistemological and methodological tools for understanding face-to-face relationships. Hence, all his works must be thought of as social science texts that combine research on how individuals are constituted ‘top-down’ (as in Foucault) with inquiry into how people renegotiate ‘bottom-up’ the classifications applied to them. Hacking's proposal thus constitutes a middle ground between the French philosopher and the American sociologist. Placing himself between both authors allows Hacking to build a frame that is expected to adjust to the main particularity of the social sciences: the fact that they study interactive kinds. These are kinds of people, which implies that those who are classified can change in certain ways that prompt the need to change the previous classifications themselves. It is all about the interaction between the labelling of people and the people who are classified. Consequently, understanding the way in which Hacking uses Foucault’s and Goffman’s theories is essential to fully comprehend the social dynamic between individuals and concepts, what Bert Hansen called dialectical realism. His theoretical proposal, therefore, is valuable not only because it combines diverse perspectives, but also because it constitutes an utterly original and relevant framework for sociological theory and particularly for criminology.
Keywords: classification of people, Foucault's archaeology, Goffman's interpersonal sociology, interactive kinds
Technologic Information about Photovoltaic Applied in Urban Residences
Authors: Stephanie Fabris Russo, Daiane Costa Guimarães, Jonas Pedro Fabris, Maria Emilia Camargo, Suzana Leitão Russo, José Augusto Andrade Filho
Abstract:
Among renewable energy sources, solar energy is the one that has stood out. Solar radiation can be used as a thermal energy source and can also be converted into electricity by means of effects on certain materials, such as thermoelectric and photovoltaic panels. These panels are often used to generate energy in homes, buildings, arenas, etc., and have low pollution emissions. Thus, a technological prospecting study was performed to find patents related to the use of photovoltaic panels in urban residences. The patent search was based on Espacenet, associating the keywords photovoltaic and home, where we found 136 patent documents in the period 1994-2015 in the title and abstract fields. The years 2009, 2010, 2011, 2012, 2013 and 2014 had the highest numbers of applications, with 11, 13, 23, 29, 15 and 21, respectively. Regarding the countries in which this technology was filed, China clearly leads with 67 patent filings, followed by Japan with 38 patent applications. It is important to note that most applicants (50%) are companies, 44% are individual inventors and only 6% are universities. Regarding the International Patent Classification (IPC) codes, the most frequent classification in the results was H02J3/38, which covers arrangements for the parallel feeding of a single network by two or more generators, converters or transformers. Among all categories, section H (Electricity) stands out, accounting for 70% of the patents.
Keywords: photovoltaic, urban residences, technology forecasting, prospecting
An Overbooking Model for Car Rental Service with Different Types of Cars
Authors: Naragain Phumchusri, Kittitach Pongpairoj
Abstract:
Overbooking is a very useful revenue management technique that can help reduce costs caused by either undersales or oversales. In this paper, we propose an overbooking model for two types of cars that minimizes the total cost for a car rental service. With two types of cars, there is a possibility of upgrading from the lower type to the upper type, which makes the model more complex than the single-car-type scenario. We have found that convexity can be proved in this case. Sensitivity analysis of the parameters is conducted to observe the effects of relevant parameters on the optimal solution. A model simplification is proposed using multiple linear regression analysis, which can help estimate the optimal overbooking level from appropriate independent variables. The results show that the overbooking level from the multiple linear regression model is relatively close to the optimal solution (with an adjusted R-squared value of at least 72.8%). To evaluate the performance of the proposed model, the total cost was compared with the case where the decision maker uses a naïve method to set the overbooking level. It was found that the total cost from the optimal solution is only 0.5 to 1 percent (on average) lower than the cost from the regression model, while it is approximately 67% lower than the cost obtained by the naïve method. This indicates that the proposed simplification method using regression analysis performs effectively in estimating the overbooking level.
Keywords: overbooking, car rental industry, revenue management, stochastic model
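A stripped-down sketch of the overbooking trade-off for a single car type, assuming binomial show-ups and linear undersale/oversale costs; the paper's two-type model with upgrades, its convexity proof, and its regression simplification are not reproduced here.

```python
import numpy as np

def expected_cost(b, capacity=50, p_show=0.9, c_under=60.0, c_over=150.0,
                  n_mc=20000, seed=0):
    """Monte Carlo expected cost of accepting b bookings for one car type:
    each empty car costs c_under, each oversold customer costs c_over."""
    rng = np.random.default_rng(seed)
    shows = rng.binomial(b, p_show, n_mc)
    return np.mean(c_under * np.maximum(capacity - shows, 0)
                   + c_over * np.maximum(shows - capacity, 0))

levels = np.arange(50, 71)                  # candidate booking limits
costs = [expected_cost(b) for b in levels]
best = levels[int(np.argmin(costs))]
print(f"optimal booking limit: {best} (overbooking level {best - 50})")
```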
Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information
Authors: Haifeng Wang, Haili Zhang
Abstract:
Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based and analytical approach to stock proper movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers’ demographic, behavioral and social information to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik–Chervonenkis (VC) dimensions in the machine learning algorithm to find and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases, and the accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use machine learning approaches to predict customers’ preferences with a small data set and to design prediction tools for these enterprises.
Keywords: computational social science, movie preference, machine learning, SVM
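A minimal sketch of the comparison described above, assuming synthetic customer features: fit a Gaussian-kernel SVM and a logistic regression, then compare in-sample and out-of-sample accuracy to surface overfitting. The data set and split are illustrative assumptions, not the study's customer data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for customers' demographic/behavioral/social features and a
# binary "prefers this genre" label.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("Gaussian-kernel SVM", SVC(kernel="rbf")),
                    ("logistic regression", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: in-sample {model.score(X_tr, y_tr):.3f}, "
          f"out-of-sample {model.score(X_te, y_te):.3f}")
```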
An Improved Parallel Algorithm of Decision Tree
Authors: Jiameng Wang, Yunfei Yin, Xiyu Deng
Abstract:
Parallel optimization is one of the important current research topics in data mining. Taking Classification and Regression Tree (CART) parallelization as an example, this paper proposes a parallel data mining algorithm based on SSP-OGini-PCCP. Aiming at the problem of choosing the best CART segmentation point, this paper designs an S-SP model without data association; and in order to calculate the Gini index efficiently, a parallel OGini calculation method is designed. In addition, in order to improve the efficiency of the pruning algorithm, a synchronous PCCP pruning strategy is proposed. The optimal segmentation calculation, the Gini index calculation, and the pruning algorithm, which are important components of parallel data mining, are studied in depth. By constructing a distributed cluster simulation system based on SPARK, data mining methods based on SSP-OGini-PCCP are tested. Experimental results show that this method can increase the search efficiency of the best segmentation point by an average of 89%, increase the computation efficiency of the Gini segmentation index by 3853%, and increase the pruning efficiency by 146% on average; and as the size of the data set increases, the performance of the algorithm remains stable, which meets the requirements of contemporary massive data processing.
Keywords: classification, Gini index, parallel data mining, pruning ahead
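The parallel Gini computation at the heart of split selection can be illustrated with process-level parallelism over candidate split points; this is a generic sketch, not the paper's SSP-OGini-PCCP design or its SPARK cluster.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def gini_of_split(args):
    """Weighted Gini impurity of splitting feature x at threshold t."""
    x, y, t = args
    left, right = y[x <= t], y[x > t]
    def gini(part):
        if len(part) == 0:
            return 0.0
        p = np.bincount(part) / len(part)
        return 1.0 - np.sum(p ** 2)
    n = len(y)
    return t, (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    x = rng.random(10000)
    y = np.logical_xor(x > 0.6, rng.random(10000) < 0.05).astype(int)  # noisy threshold concept
    thresholds = np.quantile(x, np.linspace(0.01, 0.99, 99))
    with ProcessPoolExecutor() as pool:          # candidate splits scored in parallel
        results = list(pool.map(gini_of_split, [(x, y, t) for t in thresholds]))
    print(min(results, key=lambda r: r[1]))      # (best threshold, its weighted Gini)
```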
Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment
Authors: Neda Orak, Mostafa Zarei
Abstract:
Environmental assessment is an important step in environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning, and the environment. It can show cyclical changes of earth-surface objects and delineate the limits of earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assesses mangrove forests by RS techniques, using Landsat satellite images from 1975 to 2013 matched to ground control points. The quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries was the aim of this research. This part of the mangroves is the last distribution in the northern hemisphere, so the work can provide a good background for improving the management of this important ecosystem. Landsat has provided researchers with valuable images for detecting earth changes; this research used the MSS, TM, ETM+ and OLI sensors from 1975, 1990, 2000, and 2003-2013. Changes were studied after essential corrections, such as error fixing, band combination, and georeferencing to the 2012 image as the base image, using maximum likelihood supervised classification and the IPVI index. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used to verify the changes obtained from the satellite images. The results showed that in 2012 the mangrove area in Bidkhoon was 1,119,072 m2 by GPS, 1,231,200 m2 by maximum likelihood supervised classification, and 1,317,600 m2 by IPVI. The corresponding areas for Basatin were 466,644 m2, 88,200 m2, and 63,000 m2. The final results show that the forests have declined; in Basatin this is due to human activities. The loss was offset by planting over many years, although the trend has turned downward again in recent years. The results indicate that satellite images have a high ability to estimate environmental processes; this research showed a high correlation between the images and indices such as IPVI and NDVI and the ground control points.
Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park
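The IPVI and NDVI indices used above have simple closed forms. A sketch follows on random stand-in reflectance bands; the vegetation threshold is an assumption that would be tuned per scene, not a value from the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-12)

def ipvi(nir, red):
    """Infrared Percentage Vegetation Index: NIR / (NIR + Red) = (NDVI + 1) / 2."""
    return nir / (nir + red + 1e-12)

rng = np.random.default_rng(9)
red = rng.uniform(0.02, 0.30, (4, 4))   # stand-ins for Landsat red / NIR reflectance bands
nir = rng.uniform(0.20, 0.60, (4, 4))
veg_mask = ipvi(nir, red) > 0.7         # threshold is an assumption, tuned per scene
pixel_area_m2 = 30 * 30                 # Landsat TM/ETM+/OLI pixel size
print(f"vegetated area: {veg_mask.sum() * pixel_area_m2} m2")
```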
The Selection of the Nearest Anchor Using Received Signal Strength Indication (RSSI)
Authors: Hichem Sassi, Tawfik Najeh, Noureddine Liouane
Abstract:
Localization information is crucial for the operation of WSNs. There are principally two types of localization algorithms. Range-based localization algorithms have strict hardware requirements and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost but can only achieve high accuracy in ideal scenarios. In this paper, we locate unknown nodes by incorporating the advantages of these two types of methods. The proposed algorithm makes each unknown node select the nearest anchor using the Received Signal Strength Indicator (RSSI) and choose the two other anchors that are most accurate for estimating its location. Our algorithm improves localization accuracy compared with previous algorithms, as demonstrated by the simulation results.
Keywords: WSN, localization, DV-Hop, RSSI
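A sketch of the anchor-selection step, assuming the common log-distance path-loss model to turn RSSI readings into distances (the paper does not specify its propagation model); the nearest anchor is simply the one with the strongest signal. All parameter values below are illustrative assumptions.

```python
import numpy as np

def rssi_to_distance(rssi, rssi_d0=-40.0, d0=1.0, n=2.7):
    """Log-distance path-loss model: RSSI = RSSI(d0) - 10 * n * log10(d / d0)."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rssi = np.array([-62.0, -55.0, -70.0, -66.0])  # readings at the unknown node (illustrative)

dist = rssi_to_distance(rssi)
order = np.argsort(dist)                       # nearest anchor = strongest signal
print("nearest anchor:", anchors[order[0]], "next two:", anchors[order[1:3]])
```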
The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis
Authors: Jui-Teng Liao
Abstract:
The selected-response (SR) format has been commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, and scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test (e.g., the more test items), the higher the reliability and validity the test is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processing. It is therefore important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which this question format can reliably measure learners' L2 reading comprehension. The present study therefore adopted generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study investigated the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework for estimating the score reliability of tests and validating their results. Data were collected from 108 English-as-a-second-language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format. The computer program mGENOVA was utilized to analyze the data using multivariate designs (i.e., scenarios). Based on the results of the G theory analyses, the findings indicated that the number of test items has a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions require different numbers of test items to reliably assess learners' L2 reading proficiency. Further implications for teaching practice and classroom-based assessments are discussed.
Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format
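The G-coefficient computation at the heart of a generalizability analysis can be sketched for a simple crossed persons x items design; the item count and score model below are assumptions, and the study's multivariate mGENOVA designs are considerably richer than this univariate sketch.

```python
import numpy as np

def g_coefficient(scores):
    """Relative G coefficient for a crossed persons x items (p x i) design,
    estimated from the usual ANOVA mean squares."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    ms_p = n_i * np.sum((scores.mean(1) - grand) ** 2) / (n_p - 1)
    resid = scores - scores.mean(1, keepdims=True) - scores.mean(0, keepdims=True) + grand
    ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))
    var_p = max((ms_p - ms_res) / n_i, 0.0)   # person variance component
    # E(rho^2) = s2_p / (s2_p + s2_res / n_i): more items shrink error variance
    return var_p / (var_p + ms_res / n_i)

rng = np.random.default_rng(10)
ability = rng.normal(0, 1, (108, 1))          # 108 examinees, as in the study
items = rng.normal(0, 0.5, (1, 24))           # 24 SR items (an assumed test length)
scores = ability + items + rng.normal(0, 1, (108, 24))
print(f"G coefficient: {g_coefficient(scores):.3f}")
```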
Energy Planning Analysis of an Agritourism Complex Based on Energy Demand Simulation: A Case Study of Wuxi Yangshan Agritourism Complex
Authors: Li Zhu, Binghua Wang, Yong Sun
Abstract:
China is undergoing rural development, with the agritourism complex becoming one of its significant modes. It is therefore imperative to understand the energy performance of agritourism complexes. This study focuses on a typical agritourism complex and simulates its energy consumption performance under a regular energy system. It was found that HVAC accounted for 90% of the whole energy demand. In order to optimize the energy supply structure, a hierarchical analysis was carried out at the building level with three main factors: construction situation, building type, and energy demand type. Finally, energy planning suggestions for the agritourism complex were put forward and the relevant results were obtained.
Keywords: agritourism complex, energy planning, energy demand simulation, hierarchical structure model
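Assuming the hierarchical analysis refers to an AHP-style weighting of the three factors named above (an assumption on our part), the priority vector and consistency check look like this; the pairwise judgments are invented for illustration.

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP priority vector (principal eigenvector) plus consistency ratio."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)    # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return w, ci / ri                    # CR below ~0.1 is conventionally acceptable

# Assumed pairwise judgments among the three factors named in the abstract:
# construction situation, building type, energy demand type.
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])
w, cr = ahp_weights(A)
print(f"weights = {np.round(w, 3)}, consistency ratio = {cr:.3f}")
```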
Comparison of Volume of Fluid Model: Experimental and Empirical Results for Flows over Stacked Drop Manholes
Authors: Ramin Mansouri
Abstract:
The manhole is a type of structure installed where sewer pipes change direction or diameter, as well as in steep-slope areas, in order to reduce the flow velocity. In this study, the flow characteristics of the hydraulic structures in a manhole have been investigated with a numerical model using coarse, medium, and fine computational grids. To simulate the flow, k-ε models (standard, RNG, realizable) and k-ω models (standard, SST) are used. Also, in order to find the best wall conditions, two types of wall functions, standard and non-equilibrium, were investigated. Of all the models, the k-ε turbulence model has the highest correlation with the experimental results. In terms of boundary conditions, a constant velocity is set at the flow inlet boundary, the outlet pressure is set at the boundaries in contact with the air, and the standard wall function is used to capture the wall effect. In the numerical model, the depth at the outlet of the second manhole is estimated to be less than that measured in the laboratory, as is the outlet jet from the span. In the second regime, the jet flow collides with the manhole wall and divides into two parts, so the hydraulic characteristics are the same as those of a large vertical shaft. In this situation, the turbulence is in a high range, and more energy loss can be seen. According to the results, the numerically estimated energy loss is 9.359%, which is higher than the experimental data.
Keywords: manhole, energy dissipation, turbulence model, wall function, flow
Variables for Measuring the Impact of the Social Enterprises in the Field of Community Development
Authors: A. Irudaya Veni Mary, M. Victor Louis Anthuvan, P. Christie, A. Indira
Abstract:
In India, social enterprises are working to create social value in various fields, including education, health, women and child development, environmental protection, and community development. Although social enterprises have brought about tremendous changes in the lives of beneficiaries, the importance of their work is not thoroughly understood, and one of the ways to prove it is to measure their impact, which in recent times has received much attention. This paper focuses on the social value created by social enterprises in the field of community development. It also aims to put forth a research tool for measuring that social value. A close-ended interview schedule was prepared to measure social value creation, and it was administered among 60 beneficiaries of two social enterprises working in the field of community development. The study results show that the social enterprises have brought four types of impact to the lives of their beneficiaries: economic, social, political, and cultural. This study is limited to social enterprises that work towards community development. Its empirical findings will enable the reader to understand the various types of social value created by such social enterprises, and will also serve as a guide for social enterprises in community development activities to measure their impact and thereby improve their operations towards the betterment of society. This paper is derived from empirical research carried out to describe the different types of social value created by social enterprises in India.
Keywords: social enterprise, social entrepreneurs, social impact, social value, tool for social impact measurement
Comparative Life Cycle Analysis of Selected Modular Timber Construction and Assembly Typologies
Authors: Benjamin Goldsmith, Felix Heisel
Abstract:
The building industry must reduce its emissions in order to meet 2030 neutrality targets, and modular and/or offsite construction is seen as an alternative to conventional construction methods that could help achieve this goal. Modular construction has previously been shown to be less wasteful and to have a lower global warming potential (GWP). While many studies have compared the life cycle impacts of modular and conventional construction, few have compared different types of modular assembly and construction to determine which offer the greatest environmental benefits over their whole life cycle. This study investigates three modular construction types (infill frame, core, and podium) in order to determine environmental impacts such as GWP, as well as circularity indicators. The study focuses on the emissions of the production, construction, and end-of-life phases. The circularity of the various approaches is taken into consideration in order to acknowledge the potential benefits of the ability to reuse and/or reclaim materials, products, and assemblies. The study conducts hypothetical case studies for the three modular construction types and, in doing so, controls the parameters of location, climate, program, and client. By looking in depth at the GWP of the beginning and end phases of the various simulated modular buildings, it will be possible to make suggestions as to which type of construction has the lowest global warming potential.
Keywords: modular construction, offsite construction, life cycle analysis, global warming potential, environmental impact, circular economy
Some Theoretical Approaches on the Style of Lyrical Subject of the Confessional Poetry
Authors: Lemac Tin
Abstract:
This paper deals with the lyrical subject of confessional poetry, which is the main part of its stylistic structure. We identify two types of this subject in classical confessional poetic discourse, the reflexive and the authentic subject, and we offer a model of their genesis, textual features, and appearance realisations. Genesis is related to theories deriving poetry from emotion and magic, and to their similar position in primitive lyrics and the lyrics of the ancient civilizations. Textual features are related to the emotive and semiotic analysis of each type. The appearance realisations of these two types are the I-subject, the We-subject, and the transvocal and objectified subject. We test these approaches on selected poems from world literature.
Keywords: confessional poetry, confessional lyrical subject, magic, emotion, emotive analysis, semiotic analysis