Search results for: representation selection
2931 An Efficient Stud Krill Herd Framework for Solving Non-Convex Economic Dispatch Problem
Authors: Bachir Bentouati, Lakhdar Chaib, Saliha Chettih, Gai-Ge Wang
Abstract:
The problem of economic dispatch (ED) is a fundamental problem in power systems; its main goal is to find the most favorable generation dispatch for each unit, reduce the overall power generation cost, and meet all system constraints. A recently developed heuristic algorithm called Stud Krill Herd (SKH) is employed in this paper to treat non-convex ED problems. The basic krill herd (KH) algorithm is modified with a stud selection and crossover (SSC) operator to enhance solution quality and avoid local optima. SKH is demonstrated on two case studies, 13-unit and 40-unit test systems, to verify its performance and applicability in solving ED problems. In both systems, SKH successfully obtains the best fuel cost and distributes the load requirements among the online generators. The results show that the proposed SKH method can reduce the total generation cost while optimizing the fulfillment of the load requirements.
Keywords: stud krill herd, economic dispatch, crossover, stud selection, valve-point effect
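For readers unfamiliar with the source of the non-convexity, the valve-point effect adds a rectified-sine ripple to each unit's quadratic fuel-cost curve. The sketch below evaluates such an objective for a candidate dispatch; all coefficients and the demand are illustrative placeholders, not the parameters of the paper's 13- or 40-unit test systems.

```python
# Hedged sketch of the non-convex ED objective with valve-point effect.
# Coefficients and the 800 MW demand below are hypothetical, for illustration only.
import numpy as np

def fuel_cost(P, a, b, c, e, f, P_min):
    """Total cost ($/h): quadratic terms plus rectified-sine valve-point terms."""
    return float(np.sum(a + b * P + c * P**2 + np.abs(e * np.sin(f * (P_min - P)))))

a = np.array([550.0, 309.0]); b = np.array([8.10, 8.10]); c = np.array([0.00028, 0.00056])
e = np.array([300.0, 200.0]); f = np.array([0.035, 0.042]); P_min = np.array([0.0, 0.0])

P = np.array([450.0, 350.0])   # candidate dispatch (MW); SKH searches over such vectors
assert P.sum() == 800.0        # power-balance constraint for the hypothetical demand
print(fuel_cost(P, a, b, c, e, f, P_min))
```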
Procedia PDF Downloads 198
2930 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network
Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza
Abstract:
The aim of the present work is to build a model based on tissue characterization that is able to discriminate pathological and non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each ROI, six distinct sets of texture features are extracted: first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet, for a total of 270 texture features. We show that with the injection of contrast liquid and the analysis of more phases, the most relevant features in each region change. Our results show that for detecting HCC tumors, phase 3 is the most informative for most of the features fed to the classification algorithm. Using first-order histogram parameters, our method discriminates the two classes with an accuracy of 85% in phase 1, 95% in phase 2, and 95% in phase 3.
Keywords: multi-phasic liver images, texture analysis, neural network, hidden layer
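The three-step pipeline described above maps naturally onto a standard machine-learning workflow. The sketch below is a hedged illustration, not the authors' implementation: the selector, reducer, and dimensionalities are assumptions, and `X` (the 270-feature ROI matrix) and `y` (pathological/non-pathological labels) are hypothetical inputs.

```python
# Illustrative three-step pipeline: feature selection -> reduction -> classification,
# with a grid search over hidden-layer sizes to find the "optimal neuron number".
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=60)),   # step 1: keep the most relevant features
    ("reduce", PCA(n_components=10)),           # step 2: feature reduction
    ("clf", MLPClassifier(max_iter=2000)),      # step 3: single-hidden-layer classifier
])
grid = GridSearchCV(pipe, {"clf__hidden_layer_sizes": [(5,), (10,), (20,)]}, cv=5)
# grid.fit(X, y); print(grid.best_params_, grid.best_score_)   # with real ROI data
```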
Procedia PDF Downloads 262
2929 Semantic Indexing Improvement for Textual Documents: Contribution of Classification by Fuzzy Association Rules
Authors: Mohsen Maraoui
Abstract:
With the aim of improving natural language processing applications such as information retrieval, machine translation, and lexical disambiguation, we focus on a statistical approach to semantic indexing of multilingual text documents based on a conceptual network formalism. We propose to use this formalism as an indexing language to represent the descriptive concepts and their weights; these concepts represent the content of the document. Our contribution is based on two steps. In the first step, we propose the extraction of index terms using the multilingual lexical resource EuroWordNet (EWN). In the second step, we pass from the representation of index terms to the representation of index concepts through the conceptual network formalism. This network is generated using the EWN resource and passes through a classification step based on an association rules model, in an attempt to discover the non-taxonomic (contextual) relations between the concepts of a document. These relations are latent relations buried in the text and carried by the semantic context of the co-occurrence of concepts in the document. Our proposed indexing approach can be applied to text documents in various languages because it is based on a linguistic method adapted to the language through a multilingual thesaurus. Next, we apply the same statistical process regardless of the language in order to extract the significant concepts and their associated weights. Experiments show that the proposed indexing approach provides encouraging results.
Keywords: concept extraction, conceptual network formalism, fuzzy association rules, multilingual thesaurus, semantic indexing
Procedia PDF Downloads 141
2928 An Index to Measure Transportation Sustainable Performance in Construction Projects
Authors: Sareh Rajabi, Taha Anjamrooz, Salwa Bheiry
Abstract:
The continuous increase in the world population, resource shortages, and the warnings of climate change cause various environmental and social issues worldwide; thus, the concept of sustainability is much needed nowadays. Organizations are coming under strong worldwide pressure to integrate sustainability practices into their project decision-making. Construction is among the most significant industries in this respect, since it is one of the biggest sectors, is of main significance for the national economy, and hence has a massive effect on the environment and society; it is therefore important to discover approaches to incorporate sustainability into the management of construction projects. This study presents a combined sustainability index for projects with sustainable transportation, formed on the basis of a comprehensive literature review and a survey study. Transportation systems enable the movement of goods and services worldwide, leading to economic growth and job creation while also creating negative impacts on the environment and society. This research aims to quantify the sustainability indicators by 1) identifying the importance of sustainable transportation indicators based on the sustainable practices used in construction projects and 2) measuring the effectiveness of practices through these indicators on the three sustainability pillars. A total of 26 sustainability indicators were selected and grouped under the related sustainability pillars. A survey with a scoring system was used to collect opinions about the sustainability indicators. A combined sustainability index considering the three pillars can be helpful in evaluating the transportation sustainability practices of a project and making decisions regarding project selection: in addition to the issue of financial resource allocation in project selection, the decision-maker can take sustainability into account as an important key alongside the project's return and risk. The purpose of this study is to measure transportation sustainability performance, which allows companies to assess and select among multiple projects; this is useful for decision makers to rank and focus more on future sustainable projects.
Keywords: sustainable transportation, transportation performances, sustainable indicators, sustainable construction practice, sustainability
Procedia PDF Downloads 142
2927 Building Information Management in Context of Urban Spaces, Analysis of Current Use and Possibilities
Authors: Lucie Jirotková, Daniel Macek, Andrea Palazzo, Veronika Malinová
Abstract:
Currently, the implementation of 3D models in the construction industry is gaining popularity. Countries around the world are developing their own modelling standards and implementing the use of 3D models in their individual permitting processes. Another theme that needs to be addressed is public building space and its subsequent maintenance, to which the BIM methodology naturally lends itself. A significant benefit of implementing Building Information Management is the information transfer: the 3D model contains not only the spatial representation of the item shapes but also various parameters assigned to the individual elements, which are easily traceable, mainly because they are all stored in one place, the BIM model. However, it is important to keep the data in the models up to date to ensure the usability of the model throughout the life cycle of the building. It is now becoming standard practice to use BIM models in the construction of buildings; however, the building environment is very often neglected. Especially in large-scale development projects, the public space around buildings is often handed over to municipalities, which obtain ownership and are in charge of its maintenance. A 3D model of the building surroundings would include both the above-ground visible elements of the development and the underground parts, such as the technological facilities of water features, electricity lines for public lighting, etc. The paper shows the possibilities of such a model as a source of information for the handover of premises, the subsequent maintenance, and decision making. The attributes and spatial representation of the individual elements make the model a reliable foundation for the creation of "Smart Cities". The paper analyses the current use of the BIM methodology and presents the state-of-the-art possibilities of development.
Keywords: BIM model, urban space, BIM methodology, facility management
Procedia PDF Downloads 124
2926 Ionic Liquid Membranes for CO2 Separation
Authors: Zuzana Sedláková, Magda Kárászová, Jiří Vejražka, Lenka Morávková, Pavel Izák
Abstract:
Membrane separations are frequently mentioned as a possibility for CO2 capture. The selectivity of ionic liquid membranes is strongly determined by the different solubilities of the separated gases in the ionic liquid: the solubilities of the separated gases usually differ by an order of magnitude or more, unlike their diffusivities, which are usually of the same order of magnitude. The present work evaluates the selection of an appropriate ionic liquid for selective membrane preparation based on gas solubility in the ionic liquid. The current state of the art of CO2 capture patents and technologies based on membrane separations is reviewed, and an overview of the relevant transport mechanisms is given. Ionic liquids seem to be promising candidates thanks to their tunable properties, wide liquid range, reasonable thermal stability, and negligible vapor pressure. From the industrial point of view, the use of supported liquid membranes is limited by their relatively short lifetime; ionic liquids, however, could overcome this problem thanks to their negligible vapor pressure and their properties, tunable by adequate selection of the cation and anion.
Keywords: biogas upgrading, carbon dioxide separation, ionic liquid membrane, transport properties
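The solubility argument above can be made explicit under the standard solution-diffusion model (our framing; the abstract does not spell out the model), in which permeability is the product of diffusivity and solubility:

```latex
% Ideal selectivity under the solution-diffusion model, P = D * S.
\alpha_{ij} \;=\; \frac{P_i}{P_j} \;=\; \frac{D_i\,S_i}{D_j\,S_j}
\;\approx\; \frac{S_i}{S_j} \qquad \text{when } D_i \approx D_j .
```

Since the diffusivities of different gases in an ionic liquid are of the same order, a gas pair whose solubilities differ by an order of magnitude yields roughly that order of selectivity, which is why solubility screening drives the ionic liquid selection.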
Procedia PDF Downloads 431
2925 An Ensemble Deep Learning Architecture for Imbalanced Classification of Thoracic Surgery Patients
Authors: Saba Ebrahimi, Saeed Ahmadian, Hedie Ashrafi
Abstract:
Selecting appropriate patients for surgery is one of the main issues in thoracic surgery (TS): both the short-term and long-term risks and benefits of surgery must be considered in the patient selection criteria. Existing datasets of TS patients have some limitations, namely missing attribute values and an imbalanced distribution of survival classes. In this study, a novel ensemble architecture of deep learning networks is proposed, based on stacking different linear and non-linear layers, to deal with imbalanced datasets. The categorical and numerical features are processed by separate layers with the ability to discard unnecessary features. Then, after extracting insight from the raw features, a novel biased-kernel layer is applied to reinforce the gradient of the minority class and train the network better than current methods. Finally, the performance and advantages of the proposed model over existing models are examined for predicting patient survival after thoracic surgery, using real-life clinical data for lung cancer patients.
Keywords: deep learning, ensemble models, imbalanced classification, lung cancer, TS patient selection
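The paper's biased-kernel layer is not specified in the abstract; the sketch below shows the standard baseline technique it improves upon in spirit, class-weighted cross-entropy, which likewise scales up the minority class's gradient contribution. Class counts are hypothetical.

```python
# Class-weighted cross-entropy for an imbalanced survival task (a generic baseline,
# NOT the authors' biased-kernel layer). Counts below are hypothetical.
import torch
import torch.nn as nn

n_survived, n_died = 400, 70                        # assumed imbalanced class counts
weights = torch.tensor([1.0, n_survived / n_died])  # up-weight the minority class
loss_fn = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2, requires_grad=True)      # batch of 8, 2 survival classes
labels = torch.randint(0, 2, (8,))
loss = loss_fn(logits, labels)
loss.backward()                                     # minority errors backpropagate harder
```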
Procedia PDF Downloads 145
2924 Biography and Psychotherapy: Oral History Interviews with Psychotherapists
Authors: Barbara Papp
Abstract:
Purpose: This article aims to rethink the relationship between trauma and the choice of profession. By studying a homogeneous sample of respondents, it seeks answers to the following questions: How did personal losses caused by historical upheavals motivate people to enter the helping professions? How did the respondents, by becoming helping professionals, seek to handle both historical representation and self-representation? How did psychotherapists working in the second half of the 20th century (the Kádár era in Hungary) shape their course of life? How did their family members respond to their choice of career? What forces supported or hindered them? How did they become professional helpers? Methodology: Interviews with 40 psychotherapists were conducted using the oral history technique: in-depth interviews with a focus on motivation. First, the collected material was examined using traditional content analysis tools: searching for content patterns, applying a word frequency analysis, and identifying the connections between key events and key persons. Second, a narrative psychological content analysis (NarrCat) was made. Findings: Interconnections were established between attachment, family and historical traumas, and career choices. The history of the mid-20th-century period was traumatic and full of losses for the families of most of the psychotherapists concerned; those experiences may have considerably influenced their choice of career. Working as helping therapists, they had the opportunity to revisit their losses. Conclusion: The results revealed core components that play a role in psychotherapists' choice of career and emphasized the importance of post-traumatic growth.
Keywords: biography, identity, narrative psychological content analysis, psychotherapists, trauma
Procedia PDF Downloads 137
2923 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth; image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4x4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4x4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR), and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are then encoded and used for generating chirps, at a target rate of about two chirps per 4x4 pixel block, before a transmission multiplexing operation in the time-frequency domain.
Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation
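To make the block-approximation step concrete, the sketch below fits one 4x4 block with a least-squares bivariate polynomial, using 6 coefficients instead of 16, and measures the PSNR penalty. The degree-2 monomial basis is an assumption for illustration; the paper compares several bases (Vandermonde, Chebyshev, SVD).

```python
# Minimal sketch: approximate a 4x4 pixel block with 6 polynomial coefficients
# via least squares, then compute the resulting PSNR. Basis choice is illustrative.
import numpy as np

block = np.random.randint(0, 256, (4, 4)).astype(float)  # stand-in pixel block
x, y = np.meshgrid(np.arange(4), np.arange(4))
# Degree-2 bivariate basis: 1, x, y, x^2, x*y, y^2 -> 6 coefficients per block
A = np.stack([np.ones(16), x.ravel(), y.ravel(),
              x.ravel()**2, x.ravel() * y.ravel(), y.ravel()**2], axis=1)
coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
approx = (A @ coeffs).reshape(4, 4)

mse = np.mean((block - approx) ** 2)
psnr = 10 * np.log10(255**2 / mse) if mse > 0 else float("inf")
print(f"6/16 coefficients, PSNR = {psnr:.1f} dB")
```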
Procedia PDF Downloads 412
2922 Deployment of Electronic Healthcare Records and Development of Big Data Analytics Capabilities in the Healthcare Industry: A Systematic Literature Review
Authors: Tigabu Dagne Akal
Abstract:
Electronic health records (EHRs) help to store and maintain patient histories and support their appropriate handling for proper treatment and decision-making. Merging EHRs with big data analytics (BDA) capabilities enables healthcare stakeholders to provide effective and efficient treatments for chronic diseases. Though huge opportunities and efforts exist in the deployment of EHRs and the development of BDA, there are challenges in marshalling the resources and organizational capabilities required to achieve the competitive advantage and sustainability of EHRs and BDA. The resource-based view (RBV) and information system (IS) and non-IS theories should be extended to examine the organizational capabilities and resources required for successful data analytics in the healthcare industries. The main purpose of this study is to develop a conceptual framework for the development of healthcare BDA capabilities based on past works, so that researchers can extend it. A research question was formulated to guide the search strategy of the review methodology, followed by study selection. Based on the selected studies, the conceptual framework for the development of BDA capabilities in healthcare settings was formulated.
Keywords: EHR, EMR, big data, big data analytics, resource-based view
Procedia PDF Downloads 131
2921 Methods of Improving Production Processes Based on Deming Cycle
Authors: Daniel Tochwin
Abstract:
Continuous improvement is an essential part of effective process performance management. In order to achieve continuous quality improvement, each organization must use an appropriate selection of tools and techniques. The basic condition for success is a proper understanding of the business need faced by the company and the selection of appropriate methods to improve a given production process. The main aim of this article is to analyze the methods of conduct that are popular in practice when implementing process improvements, and then to determine whether the examined methods share a repeatable systematic approach, i.e., a similar sequence of the same or similar actions. Based on an extensive literature review, four methods of continuous improvement of production processes were selected: the A3 report, Gemba Kaizen, the PDCA cycle, and the Deming cycle. The research shows that all the frequently used improvement methods are generally based on the PDCA cycle, and the differences are due to "(re)interpretation" and the need to adapt the continuous improvement approach to the specific business process.
Keywords: continuous improvement, lean methods, process improvement, PDCA
Procedia PDF Downloads 80
2920 Use of Analytic Hierarchy Process for Plant Site Selection
Authors: Muzaffar Shaikh, Shoaib Shaikh, Mark Moyou, Gaby Hawat
Abstract:
This paper presents the use of the Analytic Hierarchy Process (AHP) in evaluating the site selection of a new plant by a corporation. Due to intense competition at a global level, multinational corporations are continuously striving to minimize the production and shipping costs of their products. One key factor that plays a significant role in cost minimization is where the production plant is located. In the U.S., for example, labor and land costs continue to be very high, while they are much cheaper in countries such as India, China, Indonesia, etc. This is why many multinational U.S. corporations (e.g., General Electric, Caterpillar Inc., Ford, General Motors) have shifted their manufacturing plants abroad. The continued expansion and availability of the Internet, along with technological advances in computer hardware and software all around the globe, have enabled U.S. corporations to expand abroad as they seek to reduce production cost. In particular, the management of multinational corporations is constantly engaged in identifying countries at a broad level, or cities within specific countries, where certain or all parts of their end products, or the end products themselves, can be manufactured more cheaply than in the U.S. AHP is based on the preference ratings of a specific decision maker, who can be the Chief Operating Officer of a company or his/her designated data analytics engineer. It serves as a tool to evaluate, first, the plant site selection criteria and, second, the alternate plant sites themselves against these criteria in a systematic manner. Examples of site selection criteria are: transportation modes, taxes, energy modes, labor force availability, labor rates, raw material availability, political stability, land costs, etc. As a necessary first step under AHP, evaluation criteria and alternate plant site countries are identified. Depending upon the fidelity of analysis, specific cities within a country can also be chosen as alternative facility locations. AHP experience in this type of analysis indicates that the initial analysis can be performed at the country level; once a specific country is chosen via AHP, secondary analyses can be performed by selecting specific cities or counties within that country. AHP analysis is usually based on the preference ratings of a decision-maker (e.g., 1 to 5, 1 to 7, or 1 to 9, where 1 means least preferred and the top value means most preferred). The decision-maker first assigns preference ratings criterion vs. criterion, creating a Criteria Matrix. Next, he/she assigns preference ratings alternative vs. alternative against each criterion. Once this data is collected, AHP is applied to first obtain the rank-ordering of criteria. Next, rank-ordering of alternatives is done against each criterion, resulting in an Alternative Matrix. Finally, the overall rank ordering of alternative facility locations is obtained by matrix multiplication of the Alternative Matrix and the Criteria Matrix. The most practical aspect of AHP is the 'what if' analysis that the decision-maker can conduct after the initial results, providing valuable sensitivity information about specific criteria relative to other criteria and alternatives.
Keywords: analytic hierarchy process, multinational corporations, plant site selection, preference ratings
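The matrix pipeline described above can be condensed into a few lines. In this hedged sketch, priority vectors are taken as the row means of the column-normalized pairwise matrices (the principal-eigenvector method is the common alternative), and all ratings are illustrative, not those of any real site study.

```python
# AHP aggregation sketch: criteria weights from a pairwise Criteria Matrix,
# per-criterion alternative priorities, then scores = Alternative Matrix @ weights.
import numpy as np

def priorities(pairwise):
    """Approximate AHP priority vector: mean of the column-normalized matrix rows."""
    return (pairwise / pairwise.sum(axis=0)).mean(axis=1)

# 3 hypothetical criteria (e.g., labor rates, taxes, political stability), rated 1-9.
C = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = priorities(C)                                   # criteria weights

# 2 hypothetical candidate countries, compared pairwise under each criterion.
A_per_criterion = [np.array([[1.0, 2.0], [1/2, 1.0]]),
                   np.array([[1.0, 1/3], [3.0, 1.0]]),
                   np.array([[1.0, 4.0], [1/4, 1.0]])]
alt = np.column_stack([priorities(A) for A in A_per_criterion])  # Alternative Matrix
scores = alt @ w                                    # overall rank ordering of sites
print(np.round(scores, 3))
```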
Procedia PDF Downloads 288
2919 Optimizing Fire Tube Boiler Design for Efficient Saturated Steam Production: A Cost-Minimization Approach
Authors: Yoftahe Nigussie Worku
Abstract:
This report unveils a meticulous project focused on the design intricacies of a fire tube boiler tailored for the efficient generation of saturated steam. The overarching objective is to produce 2000 kg/h of saturated steam at a 12-bar design pressure, achieved through the development of an advanced fire tube boiler. This design is meticulously crafted to harmonize cost-effectiveness and parameter refinement, with a keen emphasis on material selection for component parts, construction materials, and production methods throughout the analytical phases. The analytical process involves iterative calculations, utilizing pertinent formulas to optimize design parameters, including the selection of tube diameters and overall heat transfer coefficients. The boiler configuration incorporates two passes, a strategic choice influenced by tube and shell size considerations. The utilization of heavy oil fuel no. 6, with a higher heating value of 44000 kJ/kg and a lower heating value of 41300 kJ/kg, results in a fuel consumption of 140.37 kg/hr. The boiler achieves an impressive heat output of 1610 kW with an efficiency rating of 85.25%. The fluid flow pattern within the boiler adopts a cross-flow arrangement strategically chosen for inherent advantages. Internally, the welding of the tube sheet to the shell, secured by gaskets and welds, ensures structural integrity. The shell design adheres to European Standard code sections for pressure vessels, encompassing considerations for weight, supplementary accessories (lifting lugs, openings, ends, manhole), and detailed assembly drawings. This research represents a significant stride in optimizing fire tube boiler technology, balancing efficiency and safety considerations in the pursuit of enhanced saturated steam production.
Keywords: fire tube, saturated steam, material selection, efficiency
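A quick consistency check on the quoted figures is possible. The interpretation below, that the 1610 kW "heat output" is the fuel energy rate on the lower-heating-value basis, is our assumption, but under it the numbers line up almost exactly:

```python
# Back-of-envelope check of the quoted boiler figures (interpretation assumed:
# 1610 kW taken as fuel energy input on the LHV basis).
m_fuel = 140.37                      # fuel consumption, kg/h (quoted)
LHV = 41300.0                        # lower heating value, kJ/kg (quoted)
Q_fuel = m_fuel * LHV / 3600.0       # fuel energy rate in kW
print(f"fuel energy rate: {Q_fuel:.0f} kW")               # -> 1610 kW, as stated
print(f"useful duty at 85.25% efficiency: {0.8525 * Q_fuel:.0f} kW")
```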
Procedia PDF Downloads 80
2918 From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images
Authors: Stepan Papacek, Jiri Jablonsky, Radek Kana, Ctirad Matonoha, Stefan Kindermann
Abstract:
FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique to determine the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data processing part is still under development. In this paper, we formulate and solve the problem of data selection, which enhances the processing of FRAP images. We introduce the concept of the irrelevant data set, i.e., data that barely reduce the confidence intervals of the estimated parameters and can thus be neglected. Based on sensitivity analysis, we both solve the optimal data space selection problem and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, a theorem establishing the lower precision of the integrated data approach compared to the full data case is proven; i.e., we show that the data set represented by the FRAP recovery curve leads to a larger confidence interval than the spatio-temporal (full) data.
Keywords: FRAP, inverse problem, parameter identification, sensitivity analysis, optimal experimental design
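The confidence-interval machinery behind these statements is the standard linearized estimate from nonlinear regression (a textbook formulation, not quoted from the paper):

```latex
% Asymptotic parameter covariance from the sensitivity matrix S.
\operatorname{Cov}(\hat{\theta}) \;\approx\; \sigma^{2}\left(S^{\top}S\right)^{-1},
\qquad
S_{kj} \;=\; \left.\frac{\partial y(x_k, t_k; \theta)}{\partial \theta_j}\right|_{\theta=\hat{\theta}} .
```

Data points whose sensitivity rows contribute negligibly to the matrix product barely shrink the confidence intervals; these constitute the "irrelevant data set" that the paper proposes to neglect.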
Procedia PDF Downloads 278
2917 Genetic Evaluation of Locally Flock Sheep in Gabaraka Village
Authors: Salim Omar Raoof
Abstract:
This study was conducted on a private local sheep herd in Gabaraka village, Kirkuk, Iraq. Records of 77 ewes and 7 rams of local sheep kept on the Gabaraka village farm plain were analyzed; the ages of the ewes ranged between 2 and 4 years. The aim of this study is to investigate the genetic and non-genetic factors (type of birth, sex, and age of dam) affecting the daily milk yield (DMY), birth weight (BW), weaning weight (WW), and gain characteristics of local sheep raised under Iraqi conditions, and also to estimate heritabilities and BLUP values. The overall means of DMY, BW, WW, and gain were 444.15 g, 4.92 kg, 43.08 kg, and 38.16 kg, respectively. The results showed a significant effect of type of birth and sex on BW and WW; the age of the dam also had a significant effect on DMY, BW, WW, and gain. The estimates of heritability of DMY, BW, WW, and gain were 0.22, 0.17, 0.27, and 0.22, respectively. The breeding values (BLUP) for rams ranged between -0.1684 and 0.188, -0.205 and 0.310, and -0.0171 and 0.029 for the lamb growth traits BW, WW, and gain, respectively. It is concluded that the selection of ewes and rams at the population level in planned selection schemes should be based on BLUP values and heritability.
Keywords: local sheep, milk yield, genetic parameters, BLUP value
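As a reminder of the quantities estimated here (standard animal-breeding definitions, not taken from the paper), narrow-sense heritability and the mixed model underlying BLUP are:

```latex
% Narrow-sense heritability: additive genetic variance over phenotypic variance.
h^{2} \;=\; \frac{\sigma_{A}^{2}}{\sigma_{P}^{2}},
\qquad
% Mixed model for BLUP: fixed effects b (e.g., birth type, sex, dam age) and
% random breeding values u; \hat{u} gives each ram's predicted genetic merit.
\mathbf{y} \;=\; \mathbf{X}\mathbf{b} + \mathbf{Z}\mathbf{u} + \mathbf{e} .
```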
Procedia PDF Downloads 77
2916 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets; additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets, with the best configuration yielding an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves the disease classification accuracy by a clear margin.
Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision
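The concatenation step is the easiest part to make concrete. The sketch below shows per-token word2vec embeddings joined with lexical semantic vectors ahead of a 1-D convolution; the dimensions are illustrative assumptions, and the SDG optimization component is omitted.

```python
# Hedged sketch of the LSV-CNN input path: concatenate word2vec and LSV streams,
# then convolve over the token sequence. Dimensions and class count are assumed.
import torch
import torch.nn as nn

class LSVCNN(nn.Module):
    def __init__(self, w2v_dim=100, lsv_dim=50, n_classes=4):
        super().__init__()
        self.conv = nn.Conv1d(w2v_dim + lsv_dim, 128, kernel_size=3, padding=1)
        self.head = nn.Linear(128, n_classes)

    def forward(self, w2v, lsv):                  # each: (batch, seq_len, dim)
        x = torch.cat([w2v, lsv], dim=-1)         # enriched token representation
        x = torch.relu(self.conv(x.transpose(1, 2)))
        return self.head(x.max(dim=2).values)     # max-pool over the sequence

model = LSVCNN()
out = model(torch.randn(2, 20, 100), torch.randn(2, 20, 50))
print(out.shape)                                  # torch.Size([2, 4])
```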
Procedia PDF Downloads 125
2915 Evaluation of Reliability Flood Control System Based on Uncertainty of Flood Discharge, Case Study Wulan River, Central Java, Indonesia
Authors: Anik Sarminingsih, Krishna V. Pradana
Abstract:
The failure of a flood control system can be caused by various factors, for example, not considering the uncertainty of the design flood, so that the capacity of the flood control system is exceeded. The presence of uncertainty is recognized as a serious issue in hydrological studies. Uncertainty in hydrological analysis is influenced by many factors, from the reading of water elevation data and rainfall data to the selection of the method of analysis. In hydrological modeling, the selection of models and parameters corresponding to the watershed conditions should be evaluated with a hydraulic model of the river as a drainage channel. River cross-section capacity is the first line of defense in assessing the reliability of a flood control system, and the reliability of the river capacity describes the potential magnitude of flood risk. The case study in this research is the Wulan River in Central Java, which floods almost every year despite control efforts such as levees, a floodway, and a diversion. The flood-affected areas include several sub-districts, mainly in Kabupaten Kudus and Kabupaten Demak. The first step is a frequency analysis of the discharge observations from the Klambu weir, for which time series data from 1951-2013 are available. The frequency analysis is performed using several distribution models: the Gumbel, normal, log-normal, Pearson Type III, and log-Pearson distributions. Because the standard deviations of the fitted models overlap, the maximum flood discharge for lower return periods may exceed the average discharge for larger return periods. The next step is a hydraulic analysis to evaluate the reliability of the river capacity for the flood discharges resulting from the several methods. The design flood discharge of the flood control system is selected as the result of the method closest to the bankfull capacity of the river.
Keywords: design flood, hydrological model, reliability, uncertainty, Wulan river
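The frequency-analysis step, for one of the candidate distributions, looks like the sketch below: fit a Gumbel distribution to the annual maximum series, then read off design floods at chosen return periods. The series here is synthetic; the study uses the Klambu weir records (1951-2013).

```python
# Gumbel frequency analysis sketch on a synthetic annual-maximum discharge series.
import numpy as np
from scipy import stats

annual_max = stats.gumbel_r.rvs(loc=800, scale=200, size=63, random_state=0)

loc, scale = stats.gumbel_r.fit(annual_max)       # method-of-ML fit of the Gumbel
for T in (10, 50, 100):                           # return periods in years
    q = stats.gumbel_r.ppf(1 - 1 / T, loc, scale) # non-exceedance probability 1 - 1/T
    print(f"T = {T:>3} yr: design flood ~ {q:.0f} m^3/s")
```

In practice the same procedure is repeated for the normal, log-normal, Pearson III, and log-Pearson models, and the resulting quantiles (with their overlapping confidence bands) are compared against the bankfull capacity from the hydraulic model.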
Procedia PDF Downloads 294
2914 Studies on Mechanical Behavior of Kevlar/Kenaf/Graphene Reinforced Polymer Based Hybrid Composites
Authors: H. K. Shivanand, Ranjith R. Hombal, Paraveej Shirahatti, Gujjalla Anil Babu, S. ShivaPrakash
Abstract:
When it comes to the selection of materials, knowledge of materials science plays a vital role in the selection and enhancement of material properties. Composite materials play a significant role in materials science, each according to its application: composites are those materials in which two or more components having different physical and chemical properties are combined to create a new substance with enhanced properties. In this study, three different materials (kenaf, Kevlar, and graphene) were chosen based on their properties, and a composite material was developed with the help of the vacuum bagging process. The fiber (kenaf and Kevlar) to resin (vinyl ester) ratio was maintained at 70:30 during the process, and 0.5%, 1%, and 1.5% of graphene was added during fabrication. The material was machined to ASTM-standard dimensions (300×300 mm, 3 mm thickness) with the help of a water jet cutting machine. The composite materials were tested for mechanical properties, namely interlaminar shear strength (ILSS) and flexural strength. A significant increase in material properties is found in the developed composite material.
Keywords: Kevlar, kenaf, graphene, vacuum bagging process, interlaminar shear strength test, flexural test
Procedia PDF Downloads 93
2913 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context
Authors: Nicole Merkle, Stefan Zander
Abstract:
Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) for capturing different aspects of the behavior, activities, and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g., physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the user's context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations represent the current state of a universe of discourse at a given point in time, neglecting transitions between states; the AAL domain currently lacks approaches that contemplate the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. However, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations: a given situation can require different (re-)actions in order to achieve the best result with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent is required to achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user's context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions, using probabilistic reasoning based on given expert knowledge. The semantic domain model consists basically of device, service, and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
Keywords: ambient intelligence, machine learning, semantic web, software agents
Procedia PDF Downloads 281
2912 Unsupervised Echocardiogram View Detection via Autoencoder-Based Representation Learning
Authors: Andrea Treviño Gavito, Diego Klabjan, Sanjiv J. Shah
Abstract:
Echocardiograms serve as pivotal resources for clinicians in diagnosing cardiac conditions, offering non-invasive insights into a heart’s structure and function. When echocardiographic studies are conducted, no standardized labeling of the acquired views is performed. Employing machine learning algorithms for automated echocardiogram view detection has emerged as a promising solution to enhance efficiency in echocardiogram use for diagnosis. However, existing approaches predominantly rely on supervised learning, necessitating labor-intensive expert labeling. In this paper, we introduce a fully unsupervised echocardiographic view detection framework that leverages convolutional autoencoders to obtain lower dimensional representations and the K-means algorithm for clustering them into view-related groups. Our approach focuses on discriminative patches from echocardiographic frames. Additionally, we propose a trainable inverse average layer to optimize decoding of average operations. By integrating both public and proprietary datasets, we obtain a marked improvement in model performance when compared to utilizing a proprietary dataset alone. Our experiments show boosts of 15.5% in accuracy and 9.0% in the F-1 score for frame-based clustering, and 25.9% in accuracy and 19.8% in the F-1 score for view-based clustering. Our research highlights the potential of unsupervised learning methodologies and the utilization of open-sourced data in addressing the complexities of echocardiogram interpretation, paving the way for more accurate and efficient cardiac diagnoses.
Keywords: artificial intelligence, echocardiographic view detection, echocardiography, machine learning, self-supervised representation learning, unsupervised learning
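The two-stage pipeline, autoencoder codes followed by K-means, can be sketched as below. The architecture sizes, code dimension, and number of views are illustrative assumptions, and the paper's discriminative-patch extraction and inverse average layer are omitted.

```python
# Hedged sketch: a convolutional autoencoder yields low-dimensional codes for
# echo frames; K-means then groups the codes into view-related clusters.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64))              # 64-D code
        self.dec = nn.Sequential(
            nn.Linear(64, 32 * 16 * 16), nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))      # back to 64x64

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae = ConvAE()
frames = torch.randn(100, 1, 64, 64)       # stand-in echo frames/patches
recon, codes = ae(frames)                  # train first by minimizing MSE(recon, frames)
views = KMeans(n_clusters=5, n_init=10).fit_predict(codes.detach().numpy())
```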
Procedia PDF Downloads 32
2911 Representation of Dalits and Tribal Communities in Psychological Autopsy in India: A Systematic Scoping Review
Authors: Anagha Pavithran Vattamparambil, Niranjana Regimon
Abstract:
Dalit and tribal communities in India have the highest suicide rates; however, the current literature does not reflect this reality. While existing research acknowledges socio-cultural risk factors, it fails to discuss structural issues pertaining to marginalized communities in India. Furthermore, the language is framed in an individualistic manner that leaves no room for recognizing systemic violence and injustice among the causative agents of suicide. We aim to examine the representation of Dalit and tribal identities and their experiences of marginalisation as a contributing factor to suicide, and to discuss the epistemic injustice involved in their exclusion. Electronic searches of the PubMed, PsychInfo, and Web of Science databases will be carried out from inception until January 2023 to conduct a systematic scoping review of peer-reviewed articles; it will include all studies involving psychological autopsy in India. A narrative synthesis will be performed to gain insight into the inclusion of the experiences of Dalits and tribals, the absence of which indicates a lacking understanding of suicide in India. It is also expected to highlight the alienation of lived experiences and narratives of marginalisation from mainstream discourse on suicide, which constitutes epistemic injustice. There is a complex interplay of psychological, socio-cultural, economic, and political factors behind suicide in the Indian setting, but political and systemic issues, including casteist assault, rape, violence, public humiliation, and discrimination, are often downplayed in suicide etiology and deserve more research attention.
Keywords: dalits, marginalisation, psychological autopsy, suicide, tribals
Procedia PDF Downloads 88
2910 Primary Level Teachers’ Response to Gender Representation in Textbook Contents
Authors: Pragya Paneru
Abstract:
This paper explores ten primary teachers' views on gender representation in primary-level textbooks. Data were collected from teachers who taught in private schools in Kailali and Kathmandu districts. This research uses a semi-structured interview method to obtain information regarding teachers' attitudes toward gender representation in textbook content. The interview data were analysed using the qualitative analysis methods suggested by Saldana and Omasta (2018). The findings revealed that most of the teachers were unaware of gender issues and regarded them as insignificant to discuss in primary-level classes. Most of them responded to the questions personally and claimed that there were no gender issues in their classrooms. Some of the teachers connected gender issues with contexts other than textbook representation, such as discrimination in the distribution of salary among male and female teachers, the school practice of awarding girls rather than boys as the most disciplined students, the 'girls first' rule in assembly marching, encouraging only girls in stage shows, and involving students in gender-specific activities such as decorating work for girls and physical tasks for boys. The interviews also revealed the teachers' covert gendered attitudes in their remarks. Nevertheless, most of the teachers accepted that gender-biased content has an impact on learners and that this problem can be addressed with more gender-centred research in the education field, along with discussions and training to increase awareness of gender issues. In agreement with the teachers' suggestions, this paper recommends proper training and awareness-raising regarding how to confront gender issues in textbooks.
Keywords: content analysis, gender equality, school education, critical awareness
Procedia PDF Downloads 95
2909 Naturally Occurring Chemicals in Biopesticides' Resistance Control through Molecular Topology
Authors: Riccardo Zanni, Maria Galvez-Llompart, Ramon Garcia-Domenech, Jorge Galvez
Abstract:
Biopesticides, such as naturally occurring chemicals, pheromones, fungi, bacteria, and insect predators, are often a winning choice in crop protection because of their environmentally friendly profile, and they are considered to have lower toxicity than traditional pesticides. After almost a century of pesticide use, resistances to traditional insecticides are widespread, while those to bioinsecticides have attracted less attention, and resistance management is frequently neglected. This seems to be a crucial mistake, since resistances have already emerged for many marketed biopesticides. With an eye to the future, we present here a selection of new naturally occurring chemicals as potential bioinsecticides. The molecules were selected using a consolidated mathematical paradigm called molecular topology: several QSAR equations were derived and subsequently applied in a virtual screening of hundreds of thousands of molecules of natural origin, which resulted in the selection of new potential bioinsecticides. The most innovative aspect of this work lies not only in the importance of identifying new molecules that overcome biopesticide resistances, but also in the possibility of promoting shared knowledge in the field of green chemistry through this unique in silico discipline, molecular topology.
Keywords: green chemistry, QSAR, molecular topology, biopesticide
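To illustrate the kind of descriptor molecular topology works with (our example, not one of the paper's models), the sketch below computes the Wiener index of a small hydrogen-suppressed molecular graph and plugs it into a hypothetical one-descriptor linear QSAR; the coefficients are invented placeholders.

```python
# Topological descriptor sketch: the Wiener index (sum of all shortest-path
# distances) of the carbon skeleton of n-butane, used in a toy linear QSAR.
import networkx as nx

butane = nx.path_graph(4)               # C-C-C-C skeleton as a path graph
W = nx.wiener_index(butane)             # = 10 for the path on 4 vertices
print(W)

b0, b1 = 0.4, 0.12                      # hypothetical regression coefficients
print(f"predicted activity: {b0 + b1 * W:.2f}")
```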
Procedia PDF Downloads 314
2908 Structural Optimization of Shell and Arched Structures
Authors: Mitchell Gohnert, Ryan Bradley
Abstract:
This paper reviews some fundamental concepts of structural optimization, which are based on the type of materials used in construction and the shape of the structure. The first step in structural optimization is to break down all internal forces in a structure into the fundamental stresses, which are tensions and compressions. Knowing the stress patterns directs the selection of structural shapes and the most appropriate construction material. In the selection of materials, it is essential to understand that all construction materials have flaws, or micro-cracks, which reduce the capacity of the material, especially when subjected to tension; because of these defects, many construction materials perform significantly better under compressive forces. Structures are also more efficient if bending moments are eliminated: bending stresses produce high peak stresses at each face of the member, and therefore substantially more material is required to resist bending. The shape of the structure also has a profound effect on stress levels; stress may be reduced dramatically by simply changing the shape. Catenary, triangular, and linear shapes are the fundamental structural forms for achieving optimal stress flow: if the natural flow of stress matches the shape of the structure, the optimal shape is obtained.
Keywords: arches, economy of stresses, material strength, optimization, shells
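The catenary mentioned above is the classical funicular (moment-free) shape for a member carrying only its own weight: hung as a cable it acts in pure tension, and inverted as an arch in pure compression, eliminating bending. Its standard equation (a textbook result, added here for reference) is:

```latex
% Catenary: the moment-free form under uniform self-weight, where a = H / w is
% the ratio of the horizontal force to the weight per unit length of the member.
y(x) \;=\; a \cosh\!\left(\frac{x}{a}\right) .
```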
Procedia PDF Downloads 116
2907 Voting Representation in Social Networks Using Rough Set Techniques
Authors: Yasser F. Hassan
Abstract:
Social networking involves the use of an online platform or website that enables people to communicate, usually for a social purpose, through a variety of services, most of which are web-based and offer opportunities for people to interact over the internet, e.g., via e-mail and instant messaging; here we analyze the voting behavior and ratings of judges on popular comments in social networks. While most of the party literature omits the electorate, this paper presents a model where elites and parties are emergent consequences of the behavior and preferences of voters. Research in artificial intelligence and psychology has provided powerful illustrations of the way in which the emergence of intelligent behavior depends on the development of representational structure. As opposed to the classical voting system (one person, one decision, one vote), a new voting system is designed in which agents with opposed preferences are endowed with a given number of votes to distribute freely among some issues. The paper uses ideas from machine learning, artificial intelligence, and soft computing to provide a model of the development of voting system responses in a simulated agent. The modeled development process involves (simulated) processes of evolution, learning, and representation development. The main value of the model is that it provides an illustration of how simple learning processes may lead to the formation of structure. We employ agent-based computer simulation to demonstrate the formation and interaction of coalitions that arise from individual voter preferences; we are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior.
Keywords: voting system, rough sets, multi-agent, social networks, emergence, power indices
Procedia PDF Downloads 393
2906 A Review of Research on Pre-training Technology for Natural Language Processing
Authors: Moquan Gong
Abstract:
In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. The field of natural language processing long used word vector methods such as Word2Vec to encode text; these can be regarded as static pre-training techniques. However, such context-free text representations bring only limited improvement to downstream natural language processing tasks and cannot solve the problem of word polysemy. ELMo proposed a context-sensitive text representation method that can effectively handle polysemy. Since then, pre-trained language models such as GPT and BERT have been proposed one after another. Among them, the BERT model significantly improved performance on many typical downstream tasks, greatly promoting technological development in the field and ushering natural language processing into the era of dynamic pre-training technology. A large number of pre-trained language models based on BERT and XLNet have since continued to emerge, and pre-training has become an indispensable mainstream technology in natural language processing. This article first gives an overview of pre-training technology and its development history, and introduces in detail the classic pre-training techniques in the field, including early static pre-training techniques and classic dynamic pre-training techniques; it then briefly surveys a series of subsequent pre-training techniques, including improved models based on BERT and XLNet. On this basis, it analyzes the problems faced by current pre-training technology research and, finally, looks ahead to the future development trends of pre-training technology.
Keywords: natural language processing, pre-training, language model, word vectors
Procedia PDF Downloads 57
2905 A Graph Library Development Based on the Service-Oriented Architecture: Used for Representation of the Biological Systems in the Computer Algorithms
Authors: Mehrshad Khosraviani, Sepehr Najjarpour
Abstract:
Considering the use of graph-based approaches in systems and synthetic biology, and the various types of graphs they employ, a comprehensive graph library based on the three-tier architecture (3TA) was previously introduced for full representation of biological systems. Despite having proposed a 3TA-based graph library, the following three reasons motivated us to redesign the library based on the service-oriented architecture (SOA). (1) Maintaining the accuracy of the data related to an input graph (its edges, vertices, topology, etc.) without involving the end user: since, in the case of 3TA, the library files are available to the end users, they may be used incorrectly, and consequently invalid graph data will be provided to the computer algorithms. With the SOA, the operation of graph registration is specified as a service by encapsulation of the library files; in other words, the overall control operations needed for registration of valid data are the responsibility of the services. (2) Partitioning of the library product into different parts: with 3TA, the library was provided as a whole, whereas here the product can be divided into smaller ones, such as an AND/OR graph drawing service, each of which can be provided individually. As a result, the end user is able to add any part of the library product to a project, instead of all features. (3) Reduction of complexity: while 3TA requires several additional libraries, for example for connecting to the database, in the SOA-based graph library the provision of the needed library resources is entrusted to the services themselves; therefore, the end user of the graph library is not exposed to this complexity. In the end, to make the library easier to control in the system and to restrict end-user access to the files, the service-oriented architecture (SOA) was preferred over the three-tier architecture (3TA), and the previously proposed graph library was redeveloped based on it.
Keywords: Bio-Design Automation, Biological System, Graph Library, Service-Oriented Architecture, Systems and Synthetic Biology
Procedia PDF Downloads 311
2904 Incorporating Spatial Selection Criteria with Decision-Maker Preferences of A Precast Manufacturing Plant
Authors: M. N. A. Azman, M. S. S. Ahamad
Abstract:
The Construction Industry Development Board of Malaysia has been actively promoting the use of precast manufacturing in the local construction industry over the last decade. In an era of rapid technological change, precast manufacturing contributes significantly to improving construction activities and ensuring sustainable economic growth. Current studies on the location decision of precast manufacturing plants aimed at enhancing local economic development are scarce. To address this gap, the present research establishes a new set of spatial criteria, such as attribute maps and preference weights, derived from a survey of local industry decision makers. These data represent the input parameters for the MCE-GIS site selection model, for which the weighted linear combination method is used. Verification tests on the model were conducted to determine potential precast manufacturing sites in the state of Penang, Malaysia. The tests yield a predicted area of 12.87 acres located within a designated industrial zone. Although the model was developed specifically for precast manufacturing plants, it can nevertheless be employed for other types of industries by following the methodology and guidelines proposed in the present research.
Keywords: geographical information system, multi criteria evaluation, industrialised building system, civil engineering
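The weighted linear combination at the heart of the MCE-GIS model reduces to a weighted sum of standardized criterion rasters, masked by Boolean constraints. The sketch below is a minimal illustration; the criteria, weights, and constraint are invented placeholders, not the study's survey-derived values.

```python
# Weighted linear combination sketch: suitability = sum_i(w_i * x_i) * constraint.
import numpy as np

rng = np.random.default_rng(1)
criteria = rng.random((3, 50, 50))       # 3 standardized attribute maps in [0, 1]
weights = np.array([0.5, 0.3, 0.2])      # decision-maker preference weights (sum to 1)
constraint = rng.random((50, 50)) > 0.3  # Boolean mask, e.g., designated industrial zones

suitability = np.tensordot(weights, criteria, axes=1) * constraint
best = np.unravel_index(np.argmax(suitability), suitability.shape)
print("most suitable cell:", best)
```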
Procedia PDF Downloads 287
2903 Exploring the Applications of Neural Networks in the Adaptive Learning Environment
Authors: Baladitya Swaika, Rahul Khatry
Abstract:
Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which item selection and ability estimation use the statistical methods of maximum-information selection (or selection from the posterior) and maximum-likelihood (ML) or maximum a posteriori (MAP) estimation, respectively. This study aims at combining the classical and Bayesian approaches to IRT to create a dataset, which is then fed to a neural network that automates the process of ability estimation, and at comparing it to traditional CAT models designed using IRT. The study uses Python as the base coding language, pymc for statistical modelling of the IRT, and scikit-learn for the neural network implementation. On creating the models and comparing them, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse, the neural network model can be used beneficially in back-ends for reducing time complexity: the IRT model has to re-calculate the ability every time it receives a request, whereas the prediction from a trained neural network regressor is a single step. This study also proposes a new kind of framework whereby the neural network model incorporates feature sets beyond the normal IRT feature set and uses a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features, such as test type, could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, yielding models that are not trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by incorporating newer and better datasets, which would eventually lead to higher quality testing.
Keywords: computer adaptive tests, item response theory, machine learning, neural networks
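The IRT machinery referred to above, response probability, Fisher information for maximum-information item selection, and ML ability estimation, can be sketched compactly under the two-parameter logistic (2PL) model; the item parameters and responses below are illustrative, and a real CAT would restrict the information search to unadministered items.

```python
# 2PL IRT sketch: response model, item information, grid ML ability estimate,
# and a maximum-information pick for the next item. Parameters are illustrative.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    p = p_correct(theta, a, b)
    return a**2 * p * (1 - p)

a = np.array([1.2, 0.8, 1.5]); b = np.array([-0.5, 0.0, 1.0])   # small item bank
responses = np.array([1, 1, 0])                                  # observed answers

thetas = np.linspace(-4, 4, 801)
P = p_correct(thetas[:, None], a, b)
loglik = (responses * np.log(P) + (1 - responses) * np.log(1 - P)).sum(axis=1)
theta_hat = thetas[np.argmax(loglik)]                     # ML ability estimate
next_item = np.argmax(item_information(theta_hat, a, b))  # max-information selection
print(theta_hat, next_item)
```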
Procedia PDF Downloads 175
2902 Investigating the Effective Parameters in Determining the Type of Traffic Congestion Pricing Schemes in Urban Streets
Authors: Saeed Sayyad Hagh Shomar
Abstract:
Traffic congestion pricing, a travel demand management strategy for reducing traffic congestion, air pollution, and noise pollution in urban areas, has drawn much attention. Despite the promising findings for this method, there are still problems in determining the best functional congestion pricing scheme for a given situation, and these problems can result in further complications and even scheme failure. That is why proper knowledge of the significance of congestion pricing schemes and of the effective factors in choosing them can lead to the success of this strategy. In this study, first, a variety of traffic congestion pricing schemes and their components are introduced; then, their functional usage is discussed. Next, by analyzing and comparing the barriers, limitations, and advantages, the selection criteria for pricing schemes are described. The results show that the selection of the best scheme depends on various parameters. Finally, based on an examination of the effective parameters, it is concluded that the implementation of area-based schemes (cordon and zonal) has been more successful in avoiding traffic diversion: considering the topology of cities and the fact that traffic congestion is often concentrated in city centers, area-based schemes are notably functional and appropriate.
Keywords: congestion pricing, demand management, flat toll, variable toll
Procedia PDF Downloads 390