Search results for: form feature
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7495

6985 Mandible Fractures: A Simple Adjunct to Informed Consent

Authors: Emma Carr, Bilal Aslam-Pervez, David Laraway

Abstract:

Litigation against surgeons and hospitals continues to increase in Western countries. While verbal consent is all that is required legally, it has long been considered that written consent offers proof of discussion and interaction between the surgeon and the patient. Inadequate consenting of patients continues in the United Kingdom, leaving surgeons and Health Trusts open to litigation. We present a standardised consent form which improves patient autonomy and engagement. The General Medical Council recommends that all material risks relevant to the patient are discussed and recorded prior to surgery, regardless of how likely they are to occur. The current literature was reviewed to evaluate complications associated with the surgical management of mandible fractures. The risks documented on 52 consent forms within the Glasgow OMFS department were analysed, leading to a procedure-specific form being designed and implemented. This audit showed that the documentation of risks on consent forms was extremely variable, with uncommon risks often going unrecorded. Notably, not a single consent form was found that highlighted all the risks associated with mandible fractures. Our re-audit data confirm that 100% of risks are discussed when a procedure-specific form is utilised. Our hope is to introduce further forms for inclusion on the BAOMS website and for peripheral distribution. The forms are quick and easy to print and leave more time for consultation with the patient. Whilst we are under no illusion that the forms will decrease the incidence of attempted litigation, we are confident that they will decrease the chances of it being successful.

Keywords: consent, litigation, mandible fracture, surgery

Procedia PDF Downloads 188
6984 Design for Sentiment-ancy: Conceptual Framework to Improve User’s Well-being Through Fostering Emotional Attachment in the Use Experience with Their Assistive Devices

Authors: Seba Quqandi

Abstract:

This study investigates the bond that people form with their assistive devices and the tactics applied during the product design process to improve the user experience, leading to a long-term product relationship. The aim is to develop a conceptual framework with which to describe and analyze the bond people form with their assistive devices and to integrate human emotions as a factor in the product design process. The focus is on the assistive technology market, namely the Aid-For-Daily-Living market for situational impairments, to increase the quality of wellbeing. The findings will help us better understand the real issues of the product experience concerning people's interaction throughout the product's performance, establish awareness of the emotional effects in the daily interactions that foster product attachment, and help product developers and future designers create a connection between users and their assistive devices. The research concludes by discussing the implications of these findings for professionals and academics, in the form of experiments, in order to identify new areas that can stimulate new or further-developed design directions.

Keywords: experience design, interaction design, emotion, design psychology, assistive tools, customization, user-centred design

Procedia PDF Downloads 230
6983 On the Approximate Solution of Continuous Coefficients for Solving Third Order Ordinary Differential Equations

Authors: A. M. Sagir

Abstract:

This paper derives four new schemes, which are combined to form an accurate and efficient block method for the parallel or sequential solution of third-order ordinary differential equations of the form y''' = f(x, y, y', y''), with y(α) = y_0, y'(α) = β, y''(α) = μ and associated initial or boundary conditions. The implementation strategies of the derived method show that the block method is consistent and zero-stable, and hence convergent. The derived schemes were tested on stiff and non-stiff ordinary differential equations, and the numerical results obtained compare favorably with the exact solutions.
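
The coefficients of the derived block schemes are not given in the abstract; as a generic illustration of the problem class, a third-order IVP of the stated form can be reduced to a first-order system u = (y, y', y'') and advanced with a standard fixed-step scheme (classical RK4 here, standing in for the block method):

```python
import numpy as np

def solve_third_order(f, alpha, y0, beta, mu, h, n_steps):
    """Integrate y''' = f(x, y, y', y'') with y(a)=y0, y'(a)=beta, y''(a)=mu.

    The equation is rewritten as a first-order system u = (y, y', y'')
    and advanced with classical RK4 (a stand-in for the block method)."""
    def rhs(x, u):
        return np.array([u[1], u[2], f(x, u[0], u[1], u[2])])

    x, u = alpha, np.array([y0, beta, mu], dtype=float)
    for _ in range(n_steps):
        k1 = rhs(x, u)
        k2 = rhs(x + h / 2, u + h / 2 * k1)
        k3 = rhs(x + h / 2, u + h / 2 * k2)
        k4 = rhs(x + h, u + h * k3)
        u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return x, u  # final x and (y, y', y'')

# Test problem y''' = y' with y(0)=0, y'(0)=1, y''(0)=0, exact solution y = sinh(x)
x_end, u_end = solve_third_order(lambda x, y, yp, ypp: yp, 0.0, 0.0, 1.0, 0.0, 0.01, 100)
```

On this test problem the computed y(1) agrees with sinh(1) to well below 1e-6, which is the kind of comparison against the exact solution the abstract describes.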

Keywords: block method, hybrid, linear multistep, self-starting, third order ordinary differential equations

Procedia PDF Downloads 271
6982 DC Bus Voltage Ripple Control of Photovoltaic Inverter in Low Voltage Ride-Through Operation

Authors: Afshin Kadri

Abstract:

The use of Renewable Energy Sources (RES) as a type of DG unit is growing in distribution systems. The connection of these generation units to existing AC distribution systems changes the structure and some operational aspects of these grids. Most RES require power electronic-based interfaces for connection to AC systems, and these interfaces include at least one DC/AC conversion unit. Nowadays, grid-connected inverters must be able to support the grid under voltage sag conditions. Two curves define this requirement: the magnitude of the reactive current component as a function of the voltage drop, and the minimum time for which the inverter must remain connected to the grid. This feature is named low-voltage ride-through (LVRT). Implementing it causes problems in the operation of the inverter, among them an increased amplitude of the high-frequency components of the injected current and, in inverters connected to photovoltaic panels, operation away from the maximum power point. The important phenomenon in these conditions is ripple in the DC bus voltage, which affects the operation of the inverter both directly and indirectly. The associated losses in the DC bus capacitors, which are electrolytic, raise their temperature and shorten their lifespan. In addition, if the inverter is connected directly to the photovoltaic panels and performs maximum power point tracking, the ripple causes oscillations around the operating point and reduces the generated energy. The traditional method of eliminating this ripple uses a bidirectional converter on the DC bus, operating as a buck-boost converter, to transfer the ripple to its own DC bus. Despite eliminating the ripple on the main DC bus, this method does not solve the reliability problem, because it still uses an electrolytic capacitor on its own DC bus.
In this work, a control method is proposed that uses the bidirectional converter as the fourth leg of the inverter and eliminates the DC bus ripple by injecting unbalanced currents into the grid. The proposed method works on the basis of constant power control: in addition to supporting the grid voltage amplitude, it stabilizes the frequency by injecting active power. The proposed method can also eliminate the DC bus ripple during deep voltage drops, which would otherwise push the reference current amplitude above the inverter's nominal current. In these conditions, the amplitude of the injected current in the faulty phases is kept at the nominal value, and its phase, together with the phases and amplitudes of the other phases, is adjusted so that the DC bus ripple is eliminated, although the generated power decreases.
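
The double-frequency ripple at issue can be seen directly from instantaneous power: with balanced three-phase voltages and currents the 2ω terms cancel, while any unbalance (such as a sag on some phases) leaves a 2ω component that appears on the DC bus. A minimal numerical check with idealised unit-amplitude sinusoids (amplitudes and grid frequency are illustrative assumptions, not the paper's operating point):

```python
import numpy as np

f_grid = 50.0                       # Hz, assumed grid frequency
t = np.linspace(0, 0.04, 4000)      # two fundamental cycles
w = 2 * np.pi * f_grid
phases = [0.0, -2 * np.pi / 3, 2 * np.pi / 3]

def instantaneous_power(i_amps):
    """p(t) = sum over phases of v_k(t) * i_k(t), with unit voltage amplitude."""
    p = np.zeros_like(t)
    for amp, ph in zip(i_amps, phases):
        v = np.cos(w * t + ph)          # phase voltage
        i = amp * np.cos(w * t + ph)    # in-phase current of given amplitude
        p += v * i
    return p

p_balanced = instantaneous_power([1.0, 1.0, 1.0])     # equal phase currents
p_unbalanced = instantaneous_power([1.0, 0.6, 0.6])   # two sagged phases

ripple_bal = p_balanced.max() - p_balanced.min()      # ~0: the 2w terms cancel
ripple_unbal = p_unbalanced.max() - p_unbalanced.min()  # nonzero 2w ripple
```

The balanced case gives a constant total power, while the unbalanced case leaves a peak-to-peak oscillation at twice the grid frequency, which is the component the proposed fourth-leg control must absorb or cancel.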

Keywords: renewable energy resources, voltage drop value, DC bus ripples, bidirectional converter

Procedia PDF Downloads 76
6981 Two-Photon-Exchange Effects in the Electromagnetic Production of Pions

Authors: Hui-Yun Cao, Hai-Qing Zhou

Abstract:

High-precision measurements and experiments play an increasingly important role in particle physics and atomic physics. To analyse precise experimental data sets, correspondingly precise and reliable theoretical calculations are necessary. Even now, the form factors of hadronic constituents such as the pion and the proton remain attractive issues in Quantum Chromodynamics (QCD). In this work, the two-photon-exchange (TPE) effects in ep→enπ⁺ at small -t are discussed within a hadronic model. Under the pion dominance approximation and in the limit mₑ→0, the TPE contribution to the amplitude can be described by a scalar function. We calculate the TPE contributions to the amplitude and to the unpolarized differential cross section, considering only the elastic intermediate state. The results show that the TPE corrections to the unpolarized differential cross section range from about -4% to -20% at Q² = 1-1.6 GeV². After applying the TPE corrections to the experimental data sets for the unpolarized differential cross section, we analyze the TPE corrections to the separated cross sections σ(L,T,LT,TT). We find that the TPE corrections (at Q² = 1-1.6 GeV²) to σL range from about -10% to -30%, to σT are about 20%, and to σ(LT,TT) are much larger. From these analyses, we conclude that the TPE contributions in ep→enπ⁺ at small -t are important for extracting the separated cross sections σ(L,T,LT,TT) and the electromagnetic form factor of π⁺ in experimental analyses.

Keywords: differential cross section, form factor, hadronic, two-photon

Procedia PDF Downloads 133
6980 A Phenomenological Expression for the Self-Attractive Energy of Single-Layer Graphene Sheets

Authors: Bingjie Wu, C. Q. Ru

Abstract:

The present work studies several reasonably expected candidate integral forms for the self-attractive potential energy of a free monolayer graphene sheet. The admissibility of one specific integral form for ripple formation is verified, while all the other candidate integral forms are rejected based on the non-existence of stable periodic ripples. Based on the selected integral form of the self-attractive potential energy, some mechanical behaviors of a free monolayer graphene sheet, including ripple formation and buckling, are discussed in detail.

Keywords: graphene, monolayer, ripples, van der Waals energy

Procedia PDF Downloads 392
6979 Multi-Stream Graph Attention Network for Recommendation with Knowledge Graph

Authors: Zhifei Hu, Feng Xia

Abstract:

In recent years, graph neural networks have been widely used in knowledge-graph-based recommendation. Existing recommendation methods based on graph neural networks extract information from the knowledge graph through entities and relations, which may not be an efficient way of extracting information. In order to better surface entity information that is useful for the current recommendation task, we propose an end-to-end neural network model based on a multi-stream graph attention mechanism (MSGAT), which can effectively integrate the knowledge graph into the recommendation system by evaluating the importance of entities from both the user and the item side. Specifically, we use an attention mechanism from the user's perspective to distil information from the neighbourhood of the predicted item in the knowledge graph, enhancing the item-side information and generating the feature representation of the predicted item. Since a user's historical click items reflect the user's interest distribution, we propose a multi-stream attention mechanism that, based on the user's preference for entities and relations and on the similarity between the predicted item and entities, aggregates the neighbourhood entity information of the user's historical click items in the knowledge graph and generates the user's feature representation. We evaluate our model on three real recommendation datasets: MovieLens-1M (ML-1M), LFM-1B 2015 (LFM-1B), and Amazon-Book (AZ-book). Experimental results show that, compared with the most advanced models, our proposed model better captures the entity information in the knowledge graph, which demonstrates its validity and accuracy.
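
The full MSGAT architecture is not reproduced in the abstract; the attention aggregation it builds on can be sketched as a single dot-product attention head over a node's knowledge-graph neighbours (toy random embeddings and an assumed dimension, not the model's actual layers):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # embedding dimension (assumed)
user = rng.normal(size=d)               # query: user representation
neighbors = rng.normal(size=(5, d))     # entity embeddings of item neighbours

def attention_aggregate(query, keys):
    """Softmax-normalised dot-product attention over neighbour entities:
    each neighbour is scored against the query, and the scores weight
    the neighbour embeddings into one summary vector."""
    scores = keys @ query                      # relevance of each neighbour
    scores -= scores.max()                     # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights, weights @ keys             # weighted neighbour summary

weights, item_repr = attention_aggregate(user, neighbors)
```

A multi-stream variant, as in the paper, would run several such heads (e.g. one per relation type or preference signal) and combine their summaries.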

Keywords: graph attention network, knowledge graph, recommendation, information propagation

Procedia PDF Downloads 116
6978 A Conceptual Analysis of Right of Taxpayers to Claim Refund in Nigeria

Authors: Hafsat Iyabo Sa'adu

Abstract:

A salient feature of Nigerian tax law is the right of the taxpayer to demand a refund where excess tax is paid. Section 23 of the Federal Inland Revenue Service (Establishment) Act, 2007 vests the Federal Inland Revenue Service with the power to make tax refunds and to set guidelines and requirements for the refund process from time to time. In addition, Section 61 of the Act empowers the Federal Inland Revenue Service to issue information circulars to acquaint stakeholders with the policy on the refund process. A circular was issued to that effect to correct the earlier position that excess payments could only be refunded after the annual audit of the Service. It is notable, however, that no such circular exists under the states' laws; hence, there are inconsistencies in the tax system in Nigeria. This study therefore sets out to examine the concept of tax refund in Nigeria. To achieve this objective, a doctrinal study was undertaken, in which both federal and state laws were consulted, along with journals and textbooks. The research revealed that the law should specify the time frame within which a refund must be made. It further revealed that it is essential to put in place a legal framework that recognizes excess payment as a debt due from the state. This would provide a foundation for the relationship between taxpayers and the Federal Inland Revenue Service and promote effective tax administration in all the states of the federation. Several recommendations were made, especially relating to the legislative passage of a 'Refund Circular Bill' at the state level, pursuant to the Federal Inland Revenue Service (Establishment) Act, 2007.

Keywords: claim, Nigeria, refund, right

Procedia PDF Downloads 118
6977 Proposing an Algorithm to Cluster Ad Hoc Networks, Modulating Two Levels of Learning Automaton and Nodes Additive Weighting

Authors: Mohammad Rostami, Mohammad Reza Forghani, Elahe Neshat, Fatemeh Yaghoobi

Abstract:

An ad hoc network consists of wireless mobile devices that connect to each other without any infrastructure. The best way to form a hierarchical structure in such a network is clustering, and various clustering methods can form more or less stable clusters depending on the nodes' mobility. In this research we propose an algorithm that allocates a weight to each node based on factors such as link stability and power reduction rate. Based on the weights allocated in the first phase, a cellular learning automaton picks out, in the second phase, the nodes that are candidates for cluster head. In the third phase, the learning automaton selects the cluster-head nodes and member nodes and forms the clusters. The automaton thus learns from its environment and can form clusters that are optimized in terms of power consumption and link stability. The proposed algorithm was simulated in OMNeT++ 4.2.2. Simulation results indicate that the newly formed clusters have a longer lifetime than those of previous algorithms and strongly decrease network overhead by reducing the update rate.
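
The exact weighting coefficients are not given in the abstract; a hypothetical additive weighting of the two stated factors (link stability and power reduction rate), followed by picking the highest-weight node as the cluster-head candidate, could look like the following. The learning-automaton phases are not reproduced here, only the first-phase weighting:

```python
# Hypothetical additive node weighting; the real algorithm couples this
# with cellular learning automata, which are not reproduced here.
W_STABILITY = 0.6   # assumed weight for link stability (range 0..1)
W_POWER = 0.4       # assumed weight for a low power-reduction rate

nodes = {
    "n1": {"link_stability": 0.9, "power_reduction_rate": 0.2},
    "n2": {"link_stability": 0.5, "power_reduction_rate": 0.1},
    "n3": {"link_stability": 0.8, "power_reduction_rate": 0.7},
}

def node_weight(metrics):
    """Higher link stability and lower power drain yield a larger weight."""
    return (W_STABILITY * metrics["link_stability"]
            + W_POWER * (1.0 - metrics["power_reduction_rate"]))

weights = {name: node_weight(m) for name, m in nodes.items()}
cluster_head = max(weights, key=weights.get)   # best candidate this round
```

Node n1 wins here because it combines high stability with a moderate power drain, which is the trade-off the additive weighting is meant to capture.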

Keywords: mobile Ad Hoc networks, clustering, learning automaton, cellular automaton, battery power

Procedia PDF Downloads 411
6976 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques

Authors: Imed Feki, Faouzi Msahli

Abstract:

Selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first uses a fuzzy sensitivity criterion, exploiting from experimental data the relationship between the physical parameters and all the sensory quality features for each evaluator; an OWA aggregation procedure is then applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. By applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key idea of the percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It generates the threshold automatically, which effectively reduces the human subjectivity and arbitrariness of manually chosen thresholds. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. We applied the percolation technique to a real example, a well-known finished textile product, stonewashed denim, usually considered among the most important quality criteria in jeans evaluation, and separated the relevant physical features from the irrelevant ones for each sensory descriptor.
The originality and performance of the proposed relevant-feature selection method are shown by the variability in the number of physical features in the selected sets. Instead of selecting identical numbers of features with a predefined threshold, the proposed method adapts to the specific nature of the complex relations between sensory descriptors and physical features, proposing lists of relevant features of different sizes for different descriptors. To obtain more reliable results, the percolation technique was applied to combine the fuzzy global relevancy and OWA global relevancy criteria, clearly distinguishing the scores of the relevant physical features from those of the irrelevant ones.
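
The OWA aggregation step can be made concrete: an OWA operator sorts its arguments in descending order and applies a fixed weight vector to the sorted values, so the result depends on ranked positions rather than on which evaluator gave which score (the example weight vectors are illustrative, not the paper's):

```python
def owa(values, weights):
    """Ordered Weighted Averaging: weights apply to ranked positions,
    not to specific evaluators, so the result depends only on the
    multiset of scores."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    ranked = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ranked))

# Three evaluators score one physical feature's relevancy in [0, 1].
scores = [0.4, 0.9, 0.6]
w_mean = [1/3, 1/3, 1/3]        # reduces to the arithmetic mean
w_optimistic = [0.7, 0.2, 0.1]  # emphasises the highest score
agg_mean = owa(scores, w_mean)
agg_opt = owa(scores, w_optimistic)
```

Choosing the weight vector tunes the aggregation between "and-like" (emphasise low scores) and "or-like" (emphasise high scores) behaviour, which is what makes OWA useful for combining evaluator rankings.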

Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique

Procedia PDF Downloads 605
6975 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, owing to people's busy lives, the consumption of fast food is increasing; therefore, diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each colon cancer patient, the oncologist must know the stage of the tumor. The most common method of determining the tumor stage is the TNM staging system, in which M indicates the presence of metastasis, N the extent of spread to the lymph nodes, and T the size of the tumor. Clearly, to determine all three of these parameters, an imaging method must be used, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, the use of X-rays entails a high absorbed dose and cancer risk for the patient, while access to PET/CT is limited by its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first (pre-processing) step, histogram equalization was applied to improve image quality and the images were resized to a uniform size. Two expert radiologists, each with more than 21 years of experience with colon cancer cases, segmented the images and extracted the tumor regions. The next steps are feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network is used to perform both tasks, i.e., feature extraction and classification. This network has 13 convolutional layers for feature extraction and three fully connected layers with a softmax activation function for classification.
To validate the proposed method, 10-fold cross-validation was used in such a way that the data were randomly divided into three parts: training (70% of the data), validation (10% of the data), and the rest for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average of the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the test dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features for staging colon cancer patients are among the study's advantages.
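
The evaluation protocol described above (ten repetitions of a random 70/10/20 train/validation/test split, with metrics averaged over the repetitions) can be sketched generically; a trivial majority-class baseline on toy labels stands in for VGG-16, since the point here is the protocol, not the model:

```python
import random

def repeated_split_eval(samples, labels, evaluate, n_repeats=10, seed=0):
    """Average an accuracy-style metric over repeated random
    70/10/20 train/validation/test splits, as in the paper's protocol."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    accs = []
    for _ in range(n_repeats):
        rng.shuffle(idx)
        n_tr = int(0.7 * len(idx))
        n_va = int(0.1 * len(idx))
        train, val, test = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
        accs.append(evaluate(train, val, test))
    return sum(accs) / len(accs)

# Toy 3-class labels standing in for T0N0 / T3N1 / T3N2 cases.
labels = [0] * 60 + [1] * 25 + [2] * 15
samples = list(range(len(labels)))

def majority_baseline(train, val, test):
    """Predict the most frequent training class for every test sample."""
    counts = {c: sum(labels[i] == c for i in train) for c in set(labels)}
    pred = max(counts, key=counts.get)
    return sum(labels[i] == pred for i in test) / len(test)

mean_acc = repeated_split_eval(samples, labels, majority_baseline)
```

With a real model, `evaluate` would train on the training indices, tune on the validation indices, and report the test metric, exactly as the averaging loop expects.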

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 59
6974 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement

Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini

Abstract:

Investigating the possible capacities of visual functions, where adapted mechanisms can enhance the capability of sports trainees, is a promising area of research, not only from the cognitive viewpoint but also in terms of its many applications in sports training. In this paper, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players were processed in a pilot study. Two groups of amateur and trained subjects were asked to imagine themselves in the state of receiving a ball while being shown a simulated volleyball field. The proposed method is based on a set of time-frequency features, extracted from the VEP signals using algorithms such as the Gabor filter, the continuous wavelet transform, and multi-stage wavelet decomposition, that can be indicative of a subject being amateur or trained. A linear discriminant classifier achieves accuracy, sensitivity, and specificity of 100% when the average over repetitions of the task-related signal is used. The main purpose of this study is to investigate the feasibility of fast, robust, and reliable feature/model determination as a neurofeedback parameter to be utilized for improving volleyball players' performance. The proposed measure has potential applications in brain-computer interface technology, where a real-time biomarker is needed.
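
Of the time-frequency transforms listed (the short-time Fourier transform appears among the keywords), the STFT is the simplest to sketch: frame the signal, window each frame, and take magnitude spectra. A minimal Hann-windowed version in plain NumPy, with an arbitrary frame length and hop rather than the study's settings:

```python
import numpy as np

def stft_magnitude(x, frame_len=64, hop=32):
    """Hann-windowed STFT magnitudes: one spectrum per overlapping frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop: i * hop + frame_len] for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames * window, axis=1))

fs = 512.0                                  # assumed sampling rate, Hz
t = np.arange(1024) / fs
signal = np.sin(2 * np.pi * 40.0 * t)       # toy 40 Hz "evoked" component
spec = stft_magnitude(signal)               # (frames, frequency bins)

# Each frame's peak bin should sit at the 40 Hz component.
freqs = np.fft.rfftfreq(64, d=1 / fs)
peak_freqs = freqs[spec.argmax(axis=1)]
```

Flattening `spec` (or statistics over it) gives the kind of time-frequency feature vector that a linear discriminant classifier can then be trained on.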

Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis

Procedia PDF Downloads 138
6973 Using the Minnesota Multiphasic Personality Inventory-2 and Mini Mental State Examination-2 in Cognitive Behavioral Therapy: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

From a psychological perspective, psychopathology is the area of clinical psychology that has psychological assessment and psychotherapy at its core. In day-to-day clinical practice, psychodiagnosis and psychotherapy are used independently, according to their intended purpose and their specific methods of application. This paper explores how the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and the Mini Mental State Examination-2 (MMSE-2) contribute to enhancing the effectiveness of cognitive behavioral psychotherapy (CBT). This combined approach, psychotherapy in conjunction with assessment of personality and cognitive functions, is illustrated by two cases: a severe depressive episode with psychotic symptoms and a mixed anxiety-depressive disorder. The order in which CBT, the MMPI-2, and the MMSE-2 were used in the diagnostic and therapeutic process was determined by the particularities of each case. In the first case, the sequence started with psychotherapy, followed by administration of the blue form MMSE-2, the MMPI-2, and the red form MMSE-2. In the second case, cognitive screening with the blue form MMSE-2 led to a personality assessment using the MMPI-2, followed by the red form MMSE-2, reapplication of the MMPI-2 due to invalidation of the first profile, and, finally, psychotherapy. The MMPI-2 protocols gathered useful information that directed the steps of the therapeutic intervention: a detailed symptom picture, including potentially self-destructive thoughts and behaviors otherwise undetected during the interview. The memory loss and poor concentration were confirmed by the MMSE-2 cognitive screening. This combined approach, psychotherapy with psychological assessment, aligns with the trend of adapting psychological services to the everyday life of contemporary people and paves the way for deepening and developing the field.

Keywords: assessment, cognitive behavioral psychotherapy, MMPI-2, MMSE-2, psychopathology

Procedia PDF Downloads 326
6972 Knitting Stitches’ Manipulation for Catenary Textile Structures

Authors: Virginia Melnyk

Abstract:

This paper explores the design of catenary structures using knitted textiles. Using Grasshopper and the Kangaroo parametric physics solver to simulate and pre-design the overall form, the design is then translated into a pattern that can be made with hand-manipulated stitches on a knitting machine. The textile takes advantage of the structure of knitted materials and their ability to stretch. Using different types of stitches to control how much stretch can occur in portions of the textile generates the overall formal design. The textile is hardened in an upside-down hanging position and then flipped right-side up, becoming a structural catenary form. The resulting design is used as a small cat house for a cat to sit inside and climb on top of.
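
The hanging form being simulated follows, in the idealised uniform-chain case, the catenary y = a·cosh(x/a); a small sampler makes the tension parameter's effect on sag explicit (a knitted textile's varying stiffness and stretch will of course deviate from this ideal, which is why the paper uses a physics solver instead):

```python
import math

def catenary(x, a):
    """Height of an ideal hanging chain at horizontal position x;
    'a' is horizontal tension divided by weight per unit length."""
    return a * math.cosh(x / a)

# Sample a half-span of 1.0 for two tension parameters.
xs = [i / 10 for i in range(11)]
slack = [catenary(x, a=0.5) for x in xs]   # low tension: deep sag
taut = [catenary(x, a=2.0) for x in xs]    # high tension: shallow sag

sag_slack = slack[-1] - slack[0]   # rise from the lowest point to the support
sag_taut = taut[-1] - taut[0]
```

Hardening the hanging form and flipping it, as the paper describes, turns this tension-only curve into a compression-only arch, which is what makes the catenary structurally attractive.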

Keywords: architectural materials, catenary structures, knitting fabrication, textile design

Procedia PDF Downloads 183
6971 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recursive neural networks has shown significant improvements in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectral energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the achievable classification accuracy. Feature extraction by a neural network is nonlinear and involves a variety of transformations and mathematical optimizations, while principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information is preprocessed by finding the frequency components relative to time and using them as the training dataset. The Fourier transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4, and the readout electronic noise was simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, further improved the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction approach achieved high accuracy with deep learning. The findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes.
Tuning the deep learning models by hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
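
The ensemble step mentioned above, hard majority voting across several trained models, is straightforward to sketch independently of the networks themselves (the per-model predictions here are toy values, not simulation output):

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-sample class predictions from several models by
    taking the most frequent label for each sample."""
    n_samples = len(predictions_per_model[0])
    combined = []
    for i in range(n_samples):
        votes = [preds[i] for preds in predictions_per_model]
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Three hypothetical classifiers labelling events gamma ("g") or neutron ("n").
model_a = ["g", "n", "g", "n"]
model_b = ["g", "n", "n", "n"]
model_c = ["n", "n", "g", "g"]
ensemble = majority_vote([model_a, model_b, model_c])
```

Voting helps when the individual models make uncorrelated errors: a wrong vote from one model is outvoted as long as the other two agree on the correct class.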

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 150
6970 Solutions for Comfort and Safety on Vibrations Resulting from the Action of the Wind on the Building in the Form of Portico with Four Floors

Authors: G. B. M. Carvalho, V. A. C. Vale, E. T. L. Cöuras Ford

Abstract:

With the aim of increasing the comfort and safety levels of structures, the study of dynamic loads on buildings has been one of the focuses of control engineering, civil engineering, and architecture. This work presents a simulation-based study of the dynamics of portico-type buildings subjected to wind action, and presents a passive control approach that uses the dynamics of the structure itself, thereby representing a system appropriate with respect to environmental concerns. These control systems are named dynamic vibration absorbers.
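
The passive device named here, the tuned dynamic vibration absorber, works by matching the absorber's natural frequency √(k₂/m₂) to the excitation frequency, which in the undamped ideal drives the primary mass's steady-state amplitude to zero. A quick frequency-domain check on a two-degree-of-freedom model with illustrative parameters (not values from the paper's four-floor portico):

```python
import numpy as np

m1, k1 = 10.0, 4000.0     # primary structure mass/stiffness (assumed)
m2, k2 = 1.0, 400.0       # absorber tuned so sqrt(k2/m2) = 20 rad/s
F = 1.0                   # amplitude of a harmonic wind-like force on m1

def primary_amplitude(omega):
    """Steady-state |X1| for F*cos(omega*t) on the undamped 2-DOF model,
    from the frequency-domain stiffness equations."""
    A = np.array([[k1 + k2 - m1 * omega**2, -k2],
                  [-k2, k2 - m2 * omega**2]])
    x1, _ = np.linalg.solve(A, np.array([F, 0.0]))
    return abs(x1)

tuned = primary_amplitude(20.0)     # excitation at the absorber frequency
detuned = primary_amplitude(14.0)   # off-tune excitation for comparison
```

At the tuned frequency the absorber mass moves so as to cancel the applied force on the primary mass, which is the mechanism the paper exploits for wind-induced vibration.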

Keywords: dynamic vibration absorber, structure, comfort, safety, wind behavior

Procedia PDF Downloads 407
6969 Research on Container Housing: A New Form of Informal Housing on Urban Temporary Land

Authors: Lufei Mao, Hongwei Chen, Zijiao Chai

Abstract:

Informal housing is a widespread phenomenon in developing countries. In many newly emerging cities in China, rapid urbanization has led to an influx of population as well as a housing shortage. Against this background, container housing, a new form of informal housing, has gradually appeared on a small scale on urban temporary land in recent years. Container housing, as its name implies, transforms shipping containers into small houses in which migrant workers can live. Scholars in other countries have established sound theoretical frameworks for the study of informal housing, but research on this small-scale housing form remains rather limited. Unlike cases in developed countries, these houses, which fall outside urban planning, raise various environmental, economic, social, and governance issues. To understand this new housing form, a survey focusing on two container housing settlements in Hangzhou, China, was carried out. Based on this survey, the paper summarizes the features and problems of the settlements' infrastructure, environment, and social communication. The results show that the containers lack basic facilities and are confined to small, disordered plots of temporary land. Moreover, because of deficient management, the rental rights of the containers' occupants may not be guaranteed. The paper then analyzes the factors affecting the formation and evolution of container housing settlements, finding that institutional and policy factors, market factors, and social factors are the three main ones. Finally, the paper proposes suggestions for the governance of container housing and for the use of urban temporary land.

Keywords: container housing, informal housing, urban temporary land, urban governance

Procedia PDF Downloads 257
6968 Thermal Simulation for Urban Planning in Early Design Phases

Authors: Diego A. Romero Espinosa

Abstract:

Thermal simulations are used to evaluate the comfort and energy consumption of buildings. However, the performance of different urban forms cannot be assessed in isolation when an environmental control system and user schedules are included in the model: the outcome of such an analysis combines the building's use, operation, services, envelope, orientation, and the density of the urban fabric. The influence of these factors varies over the life cycle of a building. The orientation, as well as the surroundings, can be considered constant during the building's lifetime. The structure affects the thermal inertia and has the longest lifespan of all the building components. The building envelope, on the other hand, is the most frequently renovated component, since it has a great impact on energy performance and comfort, while building services have a shorter lifespan and are replaced regularly. To address the performance of an urban form with a specific orientation and density, a thermal simulation method was developed. Solar irradiation is taken into consideration in relation to the outdoor temperature: incoming irradiation at low outdoor temperatures has a positive impact, raising the indoor temperature, while overheating results from the combination of high outdoor temperature and high irradiation on the façade. On this basis, the indoor temperature is simulated for a specific orientation of the evaluated urban form. Thermal inertia and envelope performance are additionally considered as the materiality of the building. The results for the different thermal zones are summarized using the 'degree day method' for cooling and heating. During the early phase of a design process, such as a masterplan, conclusions regarding urban form, density, and materiality can be drawn by means of this analysis.
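
The 'degree day method' used to summarize the zone results reduces a temperature series to heating and cooling demand indicators by summing daily deviations from base temperatures; a minimal version follows (the base temperatures are conventional assumptions, not values from the paper):

```python
HEATING_BASE = 18.0   # deg C, assumed heating base temperature
COOLING_BASE = 24.0   # deg C, assumed cooling base temperature

def degree_days(daily_mean_temps):
    """Sum, over days, of how far the mean temperature falls below the
    heating base (heating degree days, HDD) or rises above the cooling
    base (cooling degree days, CDD)."""
    hdd = sum(max(0.0, HEATING_BASE - t) for t in daily_mean_temps)
    cdd = sum(max(0.0, t - COOLING_BASE) for t in daily_mean_temps)
    return hdd, cdd

# One toy week of simulated zone mean temperatures, deg C.
week = [12.0, 15.0, 19.0, 26.0, 28.0, 22.0, 10.0]
hdd, cdd = degree_days(week)
```

Comparing HDD/CDD totals across candidate urban forms gives exactly the kind of early-phase heating-versus-cooling summary the method is intended for.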

Keywords: building envelope, density, masterplanning, urban form

Procedia PDF Downloads 145
6967 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study Utilizing Supervised Machine Learning

Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz

Abstract:

Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, its underlying pathophysiology is still incompletely understood. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coded for IBS from the UK BioBank cohort and randomly selected patients without an IBS code to create a total sample size of 18,000. We selected the comorbidity codes for these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method. We applied the different models to the top 10% of features, which numbered 50. Gradient Boosting, Logistic Regression and XGBoost yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular disease), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety).
This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms in predicting the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
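The feature-selection step described above, keeping only the top 10% of comorbidity features by importance, can be sketched as follows. The importance scores and feature names here are hypothetical stand-ins; the paper derives its scores from XGBoost feature importance on the UK BioBank data.

```python
# Hypothetical sketch of importance-based feature selection: rank
# comorbidity features by an importance score and keep the top 10%.
def select_top_features(importances, fraction=0.10):
    # importances: dict mapping feature name -> importance score
    k = max(1, int(len(importances) * fraction))
    ranked = sorted(importances, key=importances.get, reverse=True)
    return ranked[:k]

# Toy scores only; real values would come from a trained model.
scores = {"haemorrhoids": 0.21, "asthma": 0.14, "anxiety": 0.12,
          "depression": 0.11, "hypertension": 0.03}
print(select_top_features(scores))  # ['haemorrhoids']
```

The selected subset would then be fed back into the candidate classifiers, as the study does with its 50 retained features.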

Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics

Procedia PDF Downloads 118
6966 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for disease diagnosis. The Active Shape Model (ASM) is a widely used approach for LV segmentation, but it suffers from the drawback that the initialization of the shape model may not be sufficiently close to the target, especially when dealing with the abnormal shapes that occur in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. First, a robust and efficient detector based on a Hough forest is proposed to localize cardiac feature points, and these points are used to predict the initial fit of the LV shape model. Second, to achieve a more accurate and detailed segmentation, the ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, most of which show abnormal shapes. The proposed method is compared to several combinations of the ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of ASM fitting loops required, thus speeding up the whole segmentation process. The proposed method therefore achieves more accurate and efficient segmentation results and is applicable to unusually shaped hearts with cardiac diseases, such as left atrial enlargement.

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 339
6965 Evaluation of Corrosion by Impedance Spectroscopy of Embedded Steel in an Alternative Concrete Exposed to Chloride Ions

Authors: E. Ruíz, W. Aperador

Abstract:

This article evaluates the protective effect of an alternative concrete, obtained from fly ash and iron and steel slag mixed in binary form, cast around ASTM A706 structural steel. The study was conducted comparatively with specimens exposed to natural, chloride-free conditions. The effect of chloride ions on the specimens was generated in accelerated form under controlled conditions (3.5% NaCl at a temperature of 25 °C). Impedance data were acquired over a range of 1 mHz to 100 kHz. At high frequencies, the response of the exposure medium-concrete interface is found, and at low frequencies the response corresponding to the concrete-steel interface.
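The separation of the two interface responses by frequency can be illustrated with a simple Randles-type equivalent circuit, a common interpretation model in rebar impedance spectroscopy: a series resistance Rs for the electrolyte/concrete path and a charge-transfer resistance Rct at the steel in parallel with a double-layer capacitance Cdl. The parameter values below are hypothetical, not fitted data from this study.

```python
import math

# Illustrative Randles-type equivalent circuit: Z = Rs + (Rct || 1/(j*w*Cdl)).
# At high frequency the capacitor shorts Rct, so |Z| -> Rs (concrete path);
# at low frequency the capacitor blocks, so |Z| -> Rs + Rct (steel interface).
def impedance(freq_hz, rs=100.0, rct=10_000.0, cdl=1e-4):
    omega = 2 * math.pi * freq_hz
    z_cdl = 1 / (1j * omega * cdl)            # capacitor impedance
    return rs + (rct * z_cdl) / (rct + z_cdl)  # series + parallel branch

print(round(abs(impedance(1e5))))   # high-frequency limit: 100
print(round(abs(impedance(1e-3))))  # low-frequency limit: 10100
```

Sweeping this function across 1 mHz to 100 kHz reproduces the qualitative behaviour the abstract describes for the two interfaces.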

Keywords: alternative concrete, corrosion, alkaline activation, impedance spectroscopy

Procedia PDF Downloads 359
6964 Influence of Brazing Process Parameters on the Mechanical Properties of Nickel Based Superalloy

Authors: M. Zielinska, B. Daniels, J. Gabel, A. Paletko

Abstract:

A common nickel-based superalloy, Inconel 625, was brazed with a Ni-base braze filler material (AMS4777) containing melting-point depressants such as B and Si. Different braze gaps, brazing times and forms of braze filler material were tested. It was determined that the melting-point depressants B and Si tend to form hard and brittle phases in the joint during the braze cycle. Brittle phases significantly reduce the mechanical properties (e.g. tensile strength) of the joint. It is therefore important to define optimal process parameters to achieve high-strength joints free of brittle phases. High ultimate tensile strength (UTS) values can be obtained if the joint area is free of brittle phases, which is equivalent to a complete isothermal solidification of the joint. Isothermal solidification takes place only if the concentration of the melting-point depressant in the braze filler material of the joint is continuously reduced by diffusion into the base material. For a given brazing temperature, long brazing times and small braze filler material volumes (small braze gaps) are beneficial for isothermal solidification. On the basis of the obtained results, it can be stated that the form of the braze filler material has an additional influence on joint quality. Better properties can be achieved by using braze filler material in the form of foil instead of paste, due to a reduced amount of voids and a more homogeneous braze filler composition in the braze gap.
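The benefit of small braze gaps can be rationalized with a standard diffusion argument: depletion of the melting-point depressant is diffusion-controlled, so the holding time needed for complete isothermal solidification scales roughly with the square of the gap width. The following sketch expresses only that scaling relation, under this parabolic-diffusion assumption; it is not a model from the paper.

```python
# Rough scaling sketch: if depressant removal is diffusion-controlled,
# the time to deplete the braze gap scales with (gap width)^2.
# The reference gap of 50 um is an arbitrary illustrative choice.
def relative_solidification_time(gap_um, ref_gap_um=50.0):
    return (gap_um / ref_gap_um) ** 2

# Halving the gap quarters the required holding time;
# doubling it quadruples the time.
print(relative_solidification_time(25.0))   # 0.25
print(relative_solidification_time(100.0))  # 4.0
```

This is consistent with the paper's observation that long brazing times and small gaps favour brittle-phase-free joints.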

Keywords: diffusion brazing, microstructure, superalloy, tensile strength

Procedia PDF Downloads 364
6963 Development of a Sequential Multimodal Biometric System for Web-Based Physical Access Control into a Security Safe

Authors: Babatunde Olumide Olawale, Oyebode Olumide Oyediran

Abstract:

A security safe is a place or building where classified documents and precious items are kept. Many technologies have been used to prevent unauthorised persons from gaining access to such safes, but frequent reports of unauthorised persons gaining access to security safes and removing documents and items indicate that there are still security gaps in the technologies currently used for access control. In this paper, we address this problem by developing a multimodal biometric system for physical access control into a security safe using face and voice recognition. The safe is accessed by the combination of face and speech pattern recognition, in that sequential order. User authentication is achieved through a camera/sensor unit and a microphone unit, both attached to the door of the safe. The user's face is captured by the camera/sensor, while the speech is captured by the microphone unit. The Scale Invariant Feature Transform (SIFT) algorithm was used to train images to form templates for the face recognition system, while the Mel-Frequency Cepstral Coefficients (MFCC) algorithm was used to train the speech recognition system to recognise authorised users' speech. Both algorithms were hosted on two separate web-based servers, and for automatic analysis the developed system was simulated in a MATLAB environment. The results obtained show that the developed system was able to grant access to authorised users while denying unauthorised persons access to the security safe.
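The sequential decision logic described above, face first, then voice, amounts to an AND rule in which the second stage only runs if the first succeeds. The sketch below shows that logic only; the score names and thresholds are hypothetical, standing in for the SIFT and MFCC matchers, which are not reproduced here.

```python
# Minimal sketch of sequential (AND-rule) multimodal authentication:
# the voice check runs only after the face check succeeds.
def sequential_authenticate(face_score, voice_score,
                            face_threshold=0.8, voice_threshold=0.8):
    if face_score < face_threshold:
        return False          # face stage fails: voice stage never runs
    return voice_score >= voice_threshold

print(sequential_authenticate(0.9, 0.85))  # True  (both stages pass)
print(sequential_authenticate(0.9, 0.40))  # False (voice stage fails)
print(sequential_authenticate(0.5, 0.99))  # False (face stage fails first)
```

A design advantage of the sequential order is that the cheaper or less intrusive modality can gate the more expensive one.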

Keywords: access control, multimodal biometrics, pattern recognition, security safe

Procedia PDF Downloads 335
6962 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods are seeing increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most prominently Abstract Syntax Trees and Code Property Graphs, have recently received some use in this task; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information, but little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in its nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select the features that are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
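The idea of pruning a code graph to vulnerability-relevant information can be illustrated with a breadth-first neighbourhood cut around seed nodes (for example, calls to risky APIs). This is a generic illustration of graph pruning, not SCEVD's semantic feature selection; the graph, seed choice, and hop limit are all hypothetical.

```python
from collections import deque

# Keep only nodes within max_hops of "vulnerability-relevant" seed nodes,
# shrinking the graph handed to the embedding model.
def prune_to_relevant(adjacency, seeds, max_hops=1):
    keep, frontier = set(seeds), deque((s, 0) for s in seeds)
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_hops:
            continue
        for nbr in adjacency.get(node, []):
            if nbr not in keep:
                keep.add(nbr)
                frontier.append((nbr, dist + 1))
    return keep

# Toy call/data-flow graph; "strcpy" is the hypothetical risky seed.
graph = {"main": ["strcpy", "printf"], "strcpy": ["buf"], "printf": []}
print(sorted(prune_to_relevant(graph, {"strcpy"})))  # ['buf', 'strcpy']
```

The retained subgraph is what a structure-plus-node-content model would then embed.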

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 158
6961 Environmental Degradation and Globalization with Special Reference to Developing Economies

Authors: Indira Sinha

Abstract:

According to the Oxford Advanced Learner's Dictionary of Current English, the environment is the complex of physical, chemical and biotic factors that act upon an organism or an ecological community and ultimately determine its form and survival; it is defined as the conditions and circumstances affecting people's lives. Environmental degradation means the deterioration of the environment through the depletion of resources such as air, water and soil, the destruction of ecosystems and the extinction of wildlife. Globalization is a significant feature of recent world history. The aim of this phenomenon is to integrate societies, economies and cultures through a common link of trading policies, technology and communication. It has undoubtedly opened up the world economy at very high speed, but at the same time it has had an adverse impact on the environment. The purpose of the present study is to investigate the impact of globalization on environmental conditions. This paper gives an overview of what the forces of globalization have in store for the environment as large numbers of industries are constructed and large areas of forest are destroyed. The forces of globalization have created many serious environmental problems, such as rising temperatures, the extinction of many species of plants and animals, and the release of poisonous chemicals from industries. This study reveals that in developing economies these problems are more critical: in developing countries like India, many factories are built under weaker environmental regulations, while developed economies maintain more positive environmental practices. The present study is a micro-level study which employs a combination of theoretical, descriptive, empirical and analytical approaches, in addition to the time-tested case method.

Keywords: globalization, trade policies, environmental degradation, developing economies, large industries

Procedia PDF Downloads 239
6960 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis

Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha

Abstract:

Face recognition systems find many applications in surveillance and human-computer interaction. As these applications are of great importance and demand more accuracy, greater robustness is expected of the face recognition system, with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelets and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for the Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class space and maximize the inter-class space. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt(2D)2LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. The approach also aims at a better recognition rate using a smaller number of features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database and the faces94 database. The proposed HGWLDA approach is found to provide better results than the existing Gabor approach.
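One member of the Gabor filter bank mentioned above can be sketched directly from the standard real Gabor kernel formula (Gaussian envelope times an oriented cosine carrier). The parameter values are illustrative defaults, not the paper's settings; a full bank would repeat this over several scales (lambd, sigma) and orientations (theta).

```python
import math

# One real Gabor kernel: Gaussian envelope modulated by an oriented cosine.
# theta = orientation, lambd = wavelength, sigma = envelope width,
# gamma = spatial aspect ratio, psi = phase offset.
def gabor_kernel(size=5, sigma=2.0, theta=0.0, lambd=4.0, psi=0.0, gamma=0.5):
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotated coords
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            row.append(envelope * math.cos(2 * math.pi * xr / lambd + psi))
        kernel.append(row)
    return kernel

k = gabor_kernel()
print(len(k), len(k[0]))   # 5 5
print(round(k[2][2], 3))   # centre value: 1.0
```

Convolving the (dimension-reduced) face image with each kernel in the bank yields the illumination-invariant Gabor features that feed the LDA stage.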

Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier

Procedia PDF Downloads 467
6959 Application of Federated Learning in the Health Care Sector for Malware Detection and Mitigation Using Software-Defined Networking Approach

Authors: A. Dinelka Panagoda, Bathiya Bandara, Chamod Wijetunga, Chathura Malinda, Lakmal Rupasinghe, Chethana Liyanapathirana

Abstract:

This research builds on the concepts of Federated Learning and Software-Defined Networking (SDN) to introduce an efficient malware detection technique and a mitigation mechanism, yielding a resilient and automated healthcare-sector network system with the added feature of extended privacy preservation. With new malware attacks on hospital Integrated Clinical Environments (ICEs) emerging daily, the healthcare industry faces constant uncertainty about its operational continuity. The risks concealed behind the array of indispensable opportunities that new medical device inventions and their connected coordination offer daily are not yet entirely understood by most healthcare operators and patients. This solution involves four clients, in the form of hospital networks with different geographical participation, to build the federated learning experimental architecture and reach the most reasonable accuracy rate with privacy preservation. While logistic regression with cross-entropy performs the detection, SDN comes into play in the second half of the research to support the initial development phases of the system with malware mitigation based on policy implementation. The overall evaluation yields a system that demonstrates accuracy with added privacy. There is no longer a need to continue with traditional centralized systems that offer almost everything except privacy.
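The aggregation step across the four hospital clients can be sketched as federated averaging: each client trains locally on its own traffic, and only model weights, weighted by local sample counts, are shared, so raw patient data never leaves the hospital. The weights and counts below are toy values; the abstract does not specify the aggregation rule, so plain FedAvg is assumed here.

```python
# Sketch of one federated-averaging round: combine per-client model
# weights into a global model, weighted by each client's sample count.
def federated_average(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Four hypothetical hospital clients, two model parameters each.
weights = [[0.2, 0.4], [0.4, 0.2], [0.3, 0.3], [0.1, 0.5]]
sizes = [100, 100, 100, 100]
print([round(v, 2) for v in federated_average(weights, sizes)])  # [0.25, 0.35]
```

The averaged model is then redistributed to the clients for the next local training round.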

Keywords: software-defined networking, federated learning, privacy, integrated clinical environment, decentralized learning, malware detection, malware mitigation

Procedia PDF Downloads 187
6958 The Numerical and Experimental Analysis of Compressed Composite Plate in Asymmetrical Arrangement of Layers

Authors: Katarzyna Falkowicz

Abstract:

The work focuses on an original concept for a thin-walled plate element with a cut-out, for use as a spring or load-bearing element. The subjects of the study were rectangular plates with a cut-out of variable geometrical parameters and with a variable fiber arrangement angle, made of a carbon-epoxy composite with high strength properties in an asymmetrical layup, subjected to uniform compression. The influence of the geometrical parameters of the cut-out and the fiber arrangement angle on the critical load of the structure and the buckling mode was investigated. Uniform thin plates are relatively cheap to manufacture; however, due to their low bending stiffness, they can carry only relatively small loads. The lowest buckling mode of such a plate, the bending mode, leads to its rapid destruction through large increases in deflection under only a slight increase in compressive load, i.e. low structural rigidity. The stiffness characteristics of the structure change significantly, however, when the plate is forced to work in the higher, flexural-torsional buckling mode. The plate is then able to carry a much higher compressive load while maintaining much stiffer working characteristics in the post-critical range. Earlier calculations show that plates with a forced higher buckling mode are characterized by stable, progressive post-critical equilibrium paths, enabling their use as elastic elements. The characteristics of such elements can be designed over a wide range by changing the geometrical parameters of the cut-out, i.e. its height and width, as well as by changing the fiber arrangement angle. The commercial ABAQUS program, using the finite element method, was used to develop the discrete model and perform the numerical calculations. The obtained results are of significant practical importance in the design of structures with elastic elements, allowing the required operating characteristics of the device to be achieved.

Keywords: buckling mode, numerical method, unsymmetrical laminates, thin-walled elastic elements

Procedia PDF Downloads 105
6957 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four kinds of Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
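The LSV enrichment step, concatenating each medical term's Lexical Semantic Vector with its word2vec embedding before the CNN, can be sketched as follows. The vectors and term names are toy stand-ins; real LSVs and embeddings would come from the semantic resource and a trained word2vec model.

```python
# Sketch of the feature-enrichment step: concatenate a term's word2vec
# embedding with its Lexical Semantic Vector (LSV) along the feature axis.
def enrich(word_vec, lsv):
    return word_vec + lsv  # list concatenation = feature concatenation

# Toy 2-d word2vec embeddings and 3-d LSVs for two hypothetical terms.
w2v = {"fever": [0.1, 0.3], "cough": [0.2, 0.4]}
lsv = {"fever": [1.0, 0.0, 0.0], "cough": [0.0, 1.0, 0.0]}
features = {term: enrich(w2v[term], lsv[term]) for term in w2v}
print(features["fever"])  # [0.1, 0.3, 1.0, 0.0, 0.0]
```

The concatenated 5-d vectors (in the real model, much wider) form the input rows of the CNN's embedding matrix.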

Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision

Procedia PDF Downloads 125
6956 Ecotoxicity Evaluation and Suggestion of Remediation Method of ZnO Nanoparticles in Aqueous Phase

Authors: Hyunsang Kim, Younghun Kim, Younghee Kim, Sangku Lee

Abstract:

We investigated the ecotoxicity of ZnO nanoparticles and performed an experiment for removing them from water. A short-term hatching test using fertilized eggs (O. latipes) showed deformities in a 5 ppm ZnO nanoparticle solution, and delayed hatching was observed in a 10 ppm solution. A chemical precipitation method is suggested herein for removing ZnO nanoparticles from water. The precipitated ZnO nanoparticles took the form of ZnS after the addition of Na2S, and of Zn3(PO4)2 after the addition of Na2HPO4. The removal efficiency of ZnO nanoparticles from water was close to 100% in both cases. In the ecotoxicity evaluation, the as-precipitated ZnS and Zn3(PO4)2 did not cause any acute toxicity to D. magna. It is noted that this precipitation treatment of ZnO is effective in reducing its potential cytotoxicity.
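A back-of-envelope stoichiometric estimate for the sulfide route can illustrate the dosing involved: assuming a 1:1 Zn:S mole ratio for converting ZnO to ZnS, the Na2S requirement per gram of ZnO follows from standard molar masses. The paper does not report dosing, so this calculation is purely illustrative.

```python
# Stoichiometric Na2S dose per gram of ZnO, assuming 1:1 Zn:S mole ratio.
# Molar masses (g/mol) are standard values.
M_ZNO, M_NA2S = 81.38, 78.04

def na2s_dose_g(zno_g):
    return zno_g / M_ZNO * M_NA2S  # moles of ZnO -> equal moles of Na2S

print(round(na2s_dose_g(1.0), 3))  # ~0.959 g Na2S per g ZnO
```

In practice an excess over the stoichiometric dose is typically used to drive precipitation to the near-100% removal reported.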

Keywords: ZnO nanoparticles, ZnS, Zn3(PO4)2, ecotoxicity evaluation, chemical precipitation

Procedia PDF Downloads 278