Search results for: crow search algorithm
1970 Impact of Lobular Carcinoma in situ on Local Recurrence in Breast Cancer Treated with Breast Conservation Therapy: A Systematic Review and Meta-Analysis
Authors: Christopher G. Harris, Guy D. Eslick
Abstract:
Purpose: Lobular carcinoma in situ (LCIS) is a known risk factor for breast cancer, but its significance when detected in association with invasive carcinoma is unclear. This meta-analysis aims to determine the impact of LCIS on local recurrence risk for individuals with breast cancer treated with breast conservation therapy, to help guide appropriate treatment strategies. Methods: We identified relevant studies from five electronic databases. Studies were deemed suitable for inclusion where they compared patients with invasive breast cancer and concurrent LCIS to those with breast cancer alone, all patients underwent breast conservation therapy (lumpectomy with adjuvant radiation therapy), and local recurrence was evaluated. Recurrence data were pooled using a random-effects model. Results: From 1488 citations screened by our search, 8 studies were deemed suitable for inclusion. These studies comprised 908 cases and 10,638 controls. Median follow-up time was 90 months. There was a significantly increased overall risk of local breast cancer recurrence for individuals with LCIS in association with breast cancer following breast conservation therapy [pOR 1.87; 95% CI 1.14-3.04; p = 0.012]. The risk of local recurrence was non-significantly increased at 5 years [pOR 1.09; 95% CI 0.48-2.48; p = 0.828] and 10 years [pOR 1.90; 95% CI 0.89-4.06; p = 0.096]. Conclusions: Individuals with LCIS in association with invasive breast cancer have an increased risk of local recurrence following breast conservation therapy. This supports consideration of aggressive local control of LCIS by way of completion mastectomy or re-excision for certain high-risk patients.
Keywords: breast cancer, breast conservation therapy, lobular carcinoma in situ, lobular neoplasia, local recurrence, meta-analysis
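The random-effects pooling of odds ratios described above can be sketched with a small DerSimonian-Laird routine. The per-study log odds ratios and variances below are illustrative placeholders, not the study's data:

```python
import math

def pooled_or_random_effects(log_ors, variances):
    """DerSimonian-Laird random-effects pooling of study log odds ratios."""
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))   # Cochran's Q
    df = len(log_ors) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]                # random-effects weights
    mu = sum(wi * y for wi, y in zip(w_star, log_ors)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return math.exp(mu), (math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se))

# illustrative (not the paper's) per-study log ORs and their variances
por, (lo, hi) = pooled_or_random_effects([0.5, 0.8, 0.3], [0.04, 0.09, 0.06])
```

The pooled estimate sits between the individual study effects, with a 95% CI derived from the random-effects standard error.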
Procedia PDF Downloads 160
1969 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines
Authors: Alexander Guzman Urbina, Atsushi Aoyama
Abstract:
The sustainability of traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. When making decisions related to the safety of industrial infrastructure, accidental risk values become relevant points of discussion. However, the challenge lies in the reliability of the models employed to obtain the risk data: such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today’s societies and firms are facing. Beyond that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. In this regard, we argue that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector critical infrastructure because of the large economic impact of a failure, industrial safety has become a critical issue for society. Accordingly, pipeline operators and regulators have been performing risk assessments in an attempt to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures.
However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the method of deep learning for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters using polynomial theory.
Keywords: deep learning, risk assessment, neuro fuzzy, pipelines
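The GMDH regression mentioned above builds networks of low-order polynomial partial models fitted by least squares. A minimal single-variable sketch of that core step, fitting a quadratic partial model via the normal equations, is shown below; the data and function names are illustrative assumptions, not the paper's implementation:

```python
def solve3(A, b):
    # Cramer's rule for a 3x3 linear system
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for j in range(3):
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = b[i]
        out.append(det(m) / d)
    return out

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 (a GMDH-style partial model)."""
    n = len(xs)
    s = lambda p: sum(x ** p for x in xs)
    sy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[n, s(1), s(2)], [s(1), s(2), s(3)], [s(2), s(3), s(4)]]
    rhs = [sy(0), sy(1), sy(2)]
    return solve3(A, rhs)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 2.0, 5.0, 10.0, 17.0]   # generated exactly by y = 1 + x^2
a, b_, c = fit_quadratic(xs, ys)
```

In a full GMDH layer, many such partial models over pairs of inputs would be fitted and the best kept by validation error.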
Procedia PDF Downloads 292
1968 Message Passing Neural Network (MPNN) Approach to Multiphase Diffusion in Reservoirs for Well Interconnection Assessments
Authors: Margarita Mayoral-Villa, J. Klapp, L. Di G. Sigalotti, J. E. V. Guzmán
Abstract:
Automated learning techniques are widely applied in the energy sector to address challenging problems from a practical point of view. To this end, we discuss the implementation of a Message Passing algorithm (MPNN) within a Graph Neural Network (GNN) to leverage the neighborhood of a set of nodes during the aggregation process. This approach enables the characterization of multiphase diffusion processes in the reservoir, such that the flow paths underlying the interconnections between multiple wells may be inferred from previously available data on flow rates and bottomhole pressures. The results thus obtained compare favorably with the predictions produced by the Reduced Order Capacitance-Resistance Models (CRM) and suggest the potential of MPNNs to enhance the robustness of the forecasts while improving the computational efficiency.
Keywords: multiphase diffusion, message passing neural network, well interconnection, interwell connectivity, graph neural network, capacitance-resistance models
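The neighborhood aggregation at the heart of an MPNN can be illustrated with a toy, dependency-free sketch. The well names, scalar features, and the 50/50 self-vs-neighborhood mixing rule below are illustrative assumptions, not the paper's model:

```python
def message_passing_step(features, edges):
    """One round of mean-aggregation message passing over an undirected graph."""
    neighbors = {n: [] for n in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    updated = {}
    for node, h in features.items():
        msgs = [features[m] for m in neighbors[node]]
        agg = sum(msgs) / len(msgs) if msgs else 0.0   # mean of neighbor messages
        updated[node] = 0.5 * h + 0.5 * agg            # mix self state with neighborhood
    return updated

# toy well graph: two interconnected wells and one isolated well
h = {"w1": 1.0, "w2": 3.0, "w3": 5.0}
out = message_passing_step(h, [("w1", "w2")])
```

Connected wells pull each other's states together, while the isolated well only decays toward the zero default, which is how interconnection structure shows up in the learned representations.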
Procedia PDF Downloads 149
1967 Location Detection of Vehicular Accident Using Global Navigation Satellite Systems/Inertial Measurement Units Navigator
Authors: Neda Navidi, Rene Jr. Landry
Abstract:
Vehicle tracking and accident recognition are of interest to many industries, such as insurance and vehicle rental companies. The main goal of this paper is to detect the location of a car accident by combining different methods. The methods considered in this paper are Global Navigation Satellite Systems/Inertial Measurement Units (GNSS/IMU)-based navigation and vehicle accident detection algorithms. They operate on a set of raw measurements obtained from a purpose-designed integrator black box using GNSS and inertial sensors. Another concern of this paper is the definition of an accident detection algorithm based on vehicle jerk to identify the position of the accident. In fact, the results convinced us that, even in GNSS blockage areas, the position of the accident can be detected by GNSS/INS integration with a 50% improvement compared to GNSS alone.
Keywords: driver behavior monitoring, integration, IMU, GNSS, monitoring, tracking
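A jerk-based detector of the kind described can be sketched as a finite-difference threshold rule on the acceleration signal. The sampling rate, threshold, and data below are illustrative assumptions only:

```python
def detect_accident(accels, dt, jerk_threshold):
    """Flag sample indices where |jerk| (time derivative of acceleration) exceeds a threshold."""
    events = []
    for i in range(1, len(accels)):
        jerk = (accels[i] - accels[i - 1]) / dt   # backward finite difference, m/s^3
        if abs(jerk) > jerk_threshold:
            events.append(i)
    return events

# longitudinal accelerations in m/s^2 sampled at 10 Hz; the spike mimics a collision
accel = [0.1, 0.2, 0.1, -9.0, 0.3]
events = detect_accident(accel, dt=0.1, jerk_threshold=50.0)
```

Once an event index is flagged, the corresponding GNSS/INS position fix at that timestamp would give the accident location.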
Procedia PDF Downloads 234
1966 Packaging in the Design Synthesis of Novel Aircraft Configuration
Authors: Paul Okonkwo, Howard Smith
Abstract:
This study estimates the size of the cabin and major aircraft components, and detects and avoids interference between internally placed components and the external surface, during the conceptual design synthesis and optimisation used to explore the design space of a BWB. Sizing of components follows the Bradley cabin sizing and rubber engine scaling procedures to size the cabin and engine, respectively. The interference detection and avoidance algorithm relies on the ability of the Class Shape Transform parameterisation technique to generate polynomial functions of the surfaces of a BWB aircraft configuration from the sizes of the cabin and internal objects using few variables. Interference detection is essential in the packaging of non-conventional configurations like the BWB because of the non-uniform airfoil-shaped sections and the resulting variation in internal space. The unique configuration increases the need for a methodology that prevents objects from being placed in locations that do not sufficiently enclose them within the geometry.
Keywords: packaging, optimisation, BWB, parameterisation, aircraft conceptual design
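The Class Shape Transform mentioned above represents a surface as a class function multiplied by a Bernstein shape polynomial. A minimal 2D sketch follows; the round-nose/sharp-trailing-edge class exponents (N1 = 0.5, N2 = 1.0) are the conventional airfoil choice, and the shape coefficients are purely illustrative:

```python
from math import comb

def cst(psi, coeffs, n1=0.5, n2=1.0):
    """Class-Shape Transform: class function times a Bernstein shape polynomial."""
    n = len(coeffs) - 1
    class_fn = (psi ** n1) * ((1 - psi) ** n2)          # airfoil-like class function
    shape = sum(a * comb(n, i) * psi ** i * (1 - psi) ** (n - i)
                for i, a in enumerate(coeffs))           # Bernstein expansion
    return class_fn * shape

# evaluate an illustrative upper-surface curve at 11 chordwise stations
ys = [cst(x / 10, [0.2, 0.3, 0.2]) for x in range(11)]
```

With only three coefficients the whole curve is controlled, which is the "few variables" property the abstract relies on for packaging checks.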
Procedia PDF Downloads 463
1965 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics
Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca
Abstract:
The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. 237 samples were used; 108 of them were authentic honey, and 129 corresponded to honey adulterated with 1-10% HFCS. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface was used, under MATLAB version 5.3, to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) was chosen together with Partial Least Squares Linear Discriminant Analysis (PLS-DA).
Different estimators of the predictive capacity of the model were compared, obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to study the validation samples. The calibrated model combining the potential function method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved on the training samples. By use of Potential Functions (PF) and Partial Least Squares Linear Discriminant Analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.
Keywords: adulteration, multivariate analysis, potential functions, regression
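The Kennard-Stone split used to build the calibration set selects samples by a max-min distance rule: seed with the two most distant samples, then repeatedly add the sample farthest from its nearest already-selected neighbour. A dependency-free sketch (point data and function name are illustrative):

```python
def kennard_stone(points, k):
    """Select k calibration samples by the Kennard-Stone max-min distance rule."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    n = len(points)
    # seed with the two mutually most distant samples
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda ij: d2(points[ij[0]], points[ij[1]]))
    chosen = [i0, j0]
    while len(chosen) < k:
        rest = [i for i in range(n) if i not in chosen]
        # pick the sample farthest from its nearest chosen neighbour
        nxt = max(rest, key=lambda i: min(d2(points[i], points[c]) for c in chosen))
        chosen.append(nxt)
    return chosen

pts = [(0.0, 0.0), (10.0, 0.0), (5.0, 5.0), (0.1, 0.1)]
sel = kennard_stone(pts, 3)
```

The unselected samples (here the near-duplicate point) form the validation set, which is why KS splits cover the calibration space evenly.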
Procedia PDF Downloads 125
1964 Settlement Prediction for Tehran Subway Line-3 via FLAC3D and ANFIS
Authors: S. A. Naeini, A. Khalili
Abstract:
Nowadays, tunnels are developed for different applications, most of them subway tunnels. The excavation of shallow tunnels passing under municipal utilities is very important, and surface settlement control is an important design factor. This study sought to analyze the settlement and to find an appropriate model to predict the behavior of the tunnel in Tehran subway line 3. The displacement in these sections is determined using numerical analyses and numerical modeling. In addition, the Adaptive Neuro-Fuzzy Inference System (ANFIS) method is utilized with a hybrid training algorithm. The database pertinent to the optimum network was obtained from 46 subway tunnels in Iran and Turkey, constructed by the New Austrian Tunneling Method (NATM) with similar parameters based on their soil type. The surface settlement was measured, and the acquired results were compared to the predicted values. The results disclosed that computational intelligence is a good substitute for numerical modeling.
Keywords: settlement, subway line, FLAC3D, ANFIS method
Procedia PDF Downloads 233
1963 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption
Authors: Waziri Victor Onomza, John K. Alhassan, Idris Ismaila, Noel Dogonyaro Moses
Abstract:
This paper describes the problem of building secure computational services for encrypted information in cloud computing without decrypting the encrypted data; it thereby addresses the need for a computational encryption model that can enhance the security of big data in terms of privacy, confidentiality, and availability for users. The cryptographic model applied for the computational processing of the encrypted data is the Fully Homomorphic Encryption Scheme. We contribute theoretical presentations of high-level computational processes based on number theory and algebra that can easily be integrated and leveraged in cloud computing, with detailed theoretical mathematical treatment of fully homomorphic encryption models. This contribution enhances the full implementation of a big-data-analytics-based cryptographic security algorithm.
Keywords: big data analytics, security, privacy, bootstrapping, homomorphic, homomorphic encryption scheme
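Fully homomorphic schemes permit arbitrary computation on ciphertexts; the simpler additive homomorphism underlying the number-theoretic constructions can be demonstrated with a toy Paillier cryptosystem. The tiny hardcoded primes below make it purely illustrative and completely insecure:

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

# toy Paillier keypair (tiny primes, for illustration only -- not secure)
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = lcm(p - 1, q - 1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# additive homomorphism: the product of ciphertexts decrypts to the sum of plaintexts
c = (encrypt(20) * encrypt(22)) % n2
total = decrypt(c)
```

A cloud service holding only `c` can thus produce an encryption of the sum without ever seeing 20 or 22; fully homomorphic schemes extend this idea to multiplication as well, at the cost of bootstrapping.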
Procedia PDF Downloads 380
1962 Ensemble-Based SVM Classification Approach for miRNA Prediction
Authors: Sondos M. Hammad, Sherin M. ElGokhy, Mahmoud M. Fahmy, Elsayed A. Sallam
Abstract:
In this paper, an ensemble-based Support Vector Machine (SVM) classification approach is proposed for miRNA prediction. Three problems commonly associated with previous approaches are alleviated. These problems arise from imposed assumptions on the secondary structure of pre-miRNA, the imbalance between the numbers of laboratory-verified miRNAs and pseudo-hairpins, and the use of training data sets that do not cover the variety of samples across different species. We aggregate the predicted outputs of three well-known SVM classifiers, namely Triplet-SVM, Virgo and Mirident, weighted by their variant features and without any structural assumptions. An additional SVM layer is used to aggregate the final output. The proposed approach is trained and then tested with balanced data sets. The results of the proposed approach outperform the three base classifiers, achieving an 88.88% F-score, 92.73% accuracy, 90.64% precision, 96.64% specificity, 87.2% sensitivity, and an area under the ROC curve of 0.91.
Keywords: miRNAs, SVM classification, ensemble algorithm, assumption problem, imbalanced data
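The paper's aggregation layer is itself an SVM, which is beyond a short sketch, but the underlying idea of combining weighted base-classifier decision scores can be shown with a simplified weighted-combination stand-in. Scores, weights, and threshold below are illustrative assumptions, not the paper's trained values:

```python
def ensemble_predict(scores, weights, threshold=0.0):
    """Weighted combination of base-classifier decision scores; sign gives the class."""
    combined = sum(w * s for w, s in zip(weights, scores))
    return (1 if combined > threshold else 0), combined

# hypothetical decision scores from three base SVMs (positive = real miRNA)
label, score = ensemble_predict([0.8, -0.2, 0.5], [0.4, 0.3, 0.3])
```

In the actual approach, the meta-SVM learns this combination from the base classifiers' outputs instead of using fixed weights.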
Procedia PDF Downloads 349
1961 Query Task Modulator: A Computerized Experimentation System to Study Media-Multitasking Behavior
Authors: Premjit K. Sanjram, Gagan Jakhotiya, Apoorv Goyal, Shanu Shukla
Abstract:
In psychological research, laboratory experiments often face a trade-off between experimental control and mundane realism. With the advent of Immersive Virtual Environment Technology (IVET), this issue seems to be at bay. However, there is a growing challenge within IVET itself to design and develop systems or software that capture the psychological phenomena of everyday life. One such phenomenon of growing interest is 'media-multitasking'. To aid laboratory research on media-multitasking, this paper introduces the Query Task Modulator (QTM), a computerized experimentation system for studying media-multitasking behavior in a controlled laboratory environment. The system provides a computerized platform on which experimenters can study media-multitasking while participants are involved in a query task. The system has Instant Messaging, E-mail, and Voice Call features. The answers to queries are provided in the left-hand information panel, where participants have to search for them and feed the information into the respective communication media blocks as fast as possible. On the whole, the system collects multitasking behavioral data. To analyze performance, a separate output table records the reaction times and responses of each participant individually. The information panel and all the media blocks appear in a single window in order to ensure the multi-modality of media-multitasking and equal emphasis on all the tasks (thus avoiding prioritization of a particular task). The paper discusses the development of QTM in the light of current techniques for studying media-multitasking.
Keywords: experimentation system, human performance, media-multitasking, query-task
Procedia PDF Downloads 557
1960 Designing of Content Management Systems (CMS) for Web Development
Authors: Abdul Basit Kiani, Maryam Kiani
Abstract:
Content Management Systems (CMS) have transformed the landscape of web development by providing an accessible and efficient platform for creating and managing digital content. This abstract explores the key features and benefits of CMS in web development, highlighting its impact on website creation and maintenance. CMS offers a user-friendly interface that empowers individuals to create, edit, and publish content without requiring extensive technical knowledge. With customizable templates and themes, users can personalize the design and layout of their websites, ensuring a visually appealing online presence. Furthermore, CMS facilitates efficient content organization through categorization and tagging, enabling visitors to navigate and search for information effortlessly. It also supports version control, allowing users to track and manage revisions effectively. Scalability is a notable advantage of CMS, as it offers a wide range of plugins and extensions to integrate additional features into websites. From e-commerce functionality to social media integration, CMS adapts to evolving business needs. Additionally, CMS enhances collaborative workflows by allowing multiple user roles and permissions. This enables teams to collaborate effectively on content creation and management, streamlining processes and ensuring smooth coordination. In conclusion, CMS serves as a powerful tool in web development, simplifying content creation, customization, organization, scalability, and collaboration. With CMS, individuals and businesses can create dynamic and engaging websites, establishing a strong online presence with ease.
Keywords: web development, content management systems, information technology, programming
Procedia PDF Downloads 85
1959 Artificial Steady-State-Based Nonlinear MPC for Wheeled Mobile Robot
Authors: M. H. Korayem, Sh. Ameri, N. Yousefi Lademakhi
Abstract:
To ensure the stability of closed-loop nonlinear model predictive control (NMPC) within a finite horizon, appropriately designed terminal ingredients are needed, which can be a time-consuming and challenging effort. Otherwise, to ensure the stability of the control system, an infinite prediction horizon must be considered; increasing the prediction horizon increases the computational demand and slows down the implementation of the method. In this study, a new technique is proposed to ensure system stability without terminal ingredients. This technique has been employed in the design of the NMPC algorithm, reducing both the complexity of designing terminal ingredients and the computational burden. The studied system is a wheeled mobile robot (WMR) subject to non-holonomic constraints. Simulations were investigated for two problems: trajectory tracking and adjustment mode.
Keywords: wheeled mobile robot, nonlinear model predictive control, stability, without terminal ingredients
Procedia PDF Downloads 91
1958 Multiscale Modelization of Multilayered Bi-Dimensional Soils
Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R. Bennaceur
Abstract:
Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the fields of hydrology and agronomy. The second parameter in interaction with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean stationary Gaussian random processes, with roughness behavior characterized by statistical parameters like the root mean square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor due to the large variability of the correlation function; as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each one having a spatial scale. Multiscale roughness is characterized by two parameters: the first is proportional to the RMS height, and the second is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm to describe natural surfaces more correctly. We characterize the soil surfaces and sub-surfaces by a three-layer geo-electrical model.
The upper layer is described by its dielectric constant, its thickness, a multiscale bi-dimensional surface roughness model (using the wavelet transform and the Mallat algorithm), and volume scattering parameters. The lower layer is divided into three fictive layers separated by an assumed plane interface. These three layers are modeled as an effective medium characterized by an apparent effective dielectric constant that takes into account the presence of air pockets in the soil. We adopted the 2D multiscale three-layer small perturbation model, including first air pockets in the soil sub-structure and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the backscattering coefficient's dependence on multiscale roughness and soil moisture was performed. Later, we proposed to change the dielectric constant of the multilayer medium so that it takes into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we studied the behavior of the radar backscattering coefficient for a soil with a vegetation layer in its surface structure.
Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets
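One level of the Mallat decomposition used for the bi-dimensional roughness description amounts to filtering rows and then columns into approximation and detail coefficients. A minimal sketch with the Haar wavelet follows; the toy "surface" heights are illustrative only:

```python
def haar1d(v):
    # one Haar analysis step: pairwise averages (approximation) then differences (detail)
    avg = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    dif = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return avg + dif

def haar2d_level(img):
    """One level of the 2D Mallat decomposition: filter rows, then columns."""
    rows = [haar1d(r) for r in img]
    cols = [haar1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]   # transpose back to row-major order

surface = [[1.0, 1.0, 2.0, 2.0],
           [1.0, 1.0, 2.0, 2.0],
           [3.0, 3.0, 4.0, 4.0],
           [3.0, 3.0, 4.0, 4.0]]
coeffs = haar2d_level(surface)
```

The top-left quadrant holds the coarse (LL) approximation of the surface and the other quadrants hold the horizontal, vertical, and diagonal detail sub-bands; repeating the step on the LL quadrant yields the multiscale pyramid.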
Procedia PDF Downloads 125
1957 Importance of Flexibility Training for Older Adults: A Narrative Review
Authors: Andrej Kocjan
Abstract:
Introduction: Mobility has been shown to play an important role in the health and quality of life of older adults. Falls, which are often related to decreased mobility as well as to neuromuscular deficits, represent the most common injury among older adults, and fall risk has been shown to increase with reduced lower extremity flexibility. The aim of the paper is to assess the importance of flexibility training for joint range of motion and functional performance in the elderly population. Methods: We performed a literature search on PubMed and evaluated articles published until 2000. The articles found through the search strategy were also added. The population of interest included older adults (≥ 65 years of age). Results: Flexibility training programs still represent an important part of several rehabilitation programs. Static stretching and proprioceptive neuromuscular facilitation are the most frequently used techniques to improve the length of the muscle-tendon complex. Although the effectiveness of the type of stretching seems to be related to age and gender, static stretching is the more appropriate technique to enhance shoulder, hip, and ankle range of motion in older adults. Stretching should be performed in multiple sets with holds of more than 60 seconds per muscle group. Conclusion: The literature suggests that flexibility training is an effective method to increase joint range of motion in older adults. To improve functional outcomes, activities such as strengthening, balance, and aerobic exercises should be incorporated into a training program for older people. Due to the relatively little published literature, it is still not possible to prescribe detailed recommendations regarding flexibility training for older adults.
Keywords: elderly, exercise, flexibility, falls
Procedia PDF Downloads 186
1956 A Survey on Lossless Compression of Bayer Color Filter Array Images
Authors: Alina Trifan, António J. R. Neves
Abstract:
Although most digital cameras acquire images in a raw format, based on a Color Filter Array (CFA) that arranges RGB color filters on a square grid of photosensors, most image compression techniques do not use the raw data; instead, they use the RGB result of an interpolation algorithm applied to the raw data. This approach is inefficient: by performing a lossless compression of the raw data, followed by pixel interpolation, digital cameras could be more power efficient and provide images with increased resolution, given that the interpolation step could be shifted to an external processing unit. In this paper, we conduct a survey on the use of lossless compression algorithms with raw Bayer images. Moreover, in order to reduce the effect of the transitions between colors, which increase the entropy of the raw Bayer image, we split the image into three new images corresponding to the red, green, and blue channels, and we study the same compression algorithms applied to each one individually. This simple pre-processing stage allows an improvement of more than 15% in prediction-based methods.
Keywords: Bayer image, CFA, lossless compression, image coding standards
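The channel-splitting pre-processing described above can be sketched for an RGGB mosaic: samples are routed to separate R, G, and B planes according to their position, and each plane is then compressed on its own. Mosaic layout and pixel values below are illustrative:

```python
def split_bayer_rggb(mosaic):
    """Split an RGGB Bayer mosaic into R, G, B sample lists before compression."""
    r, g, b = [], [], []
    for y, row in enumerate(mosaic):
        for x, v in enumerate(row):
            if y % 2 == 0 and x % 2 == 0:
                r.append(v)        # R sits at even row, even column
            elif y % 2 == 1 and x % 2 == 1:
                b.append(v)        # B sits at odd row, odd column
            else:
                g.append(v)        # the two G positions per 2x2 cell
    return r, g, b

mosaic = [[10, 20, 11, 21],
          [30, 40, 31, 41],
          [12, 22, 13, 23],
          [32, 42, 33, 43]]
r, g, b = split_bayer_rggb(mosaic)
```

Within each plane, neighbouring samples come from the same color filter, so predictive coders no longer see the artificial color-to-color transitions that inflate the entropy of the interleaved mosaic.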
Procedia PDF Downloads 321
1955 CMPD: Cancer Mutant Proteome Database
Authors: Po-Jung Huang, Chi-Ching Lee, Bertrand Chin-Ming Tan, Yuan-Ming Yeh, Julie Lichieh Chu, Tin-Wen Chen, Cheng-Yang Lee, Ruei-Chi Gan, Hsuan Liu, Petrus Tang
Abstract:
Whole-exome sequencing, which focuses on the protein-coding regions of disease/cancer-associated genes based on a priori knowledge, is the most cost-effective method to study the association between genetic alterations and disease. Recent advances in high-throughput sequencing technologies and proteomic techniques have provided an opportunity to integrate genomics and proteomics, allowing the ready detection of mutated peptides corresponding to mutated genes. Since sequence database search is the most widely used method for protein identification with mass spectrometry (MS)-based proteomics technology, a mutant proteome database is required to better approximate the real protein pool and improve the identification of disease-associated mutated proteins. Large-scale whole exome/genome sequencing studies have been launched by the National Cancer Institute (NCI), the Broad Institute, and The Cancer Genome Atlas (TCGA), which provide not only comprehensive reports on the analysis of coding variants in diverse samples and cell lines but also an invaluable resource for the wider research community. However, no existing database collects the mutant protein sequences related to the variants identified in these studies. CMPD is designed to address this issue, serving as a bridge between genomic data and proteomic studies and focusing on protein sequence-altering variations originating from both germline and cancer-associated somatic variations.
Keywords: TCGA, cancer, mutant, proteome
Procedia PDF Downloads 593
1954 Investigation of Overarching Effects of Artificial Intelligence Implementation into Education Through Research Synthesis
Authors: Justin Bin
Abstract:
Artificial intelligence (AI) has been rapidly rising in usage recently, and it is already active in the daily lives of millions, from prominent AIs like the popular ChatGPT or Siri to more inconspicuous AIs like those used in social media or internet search engines. As upcoming generations grow immersed in emerging technology, AI will play a vital role in their development. Notably, the education sector, an influential portion of a person’s early life as a student, faces a vast ocean of possibilities concerning the implementation of AI. The main purpose of this study is to analyze the effect that AI will have on the future of the educational field. More particularly, this study delves deeper into the following three categories: school admissions, the productivity of students, and ethical concerns (the role of human teachers, the purpose of schooling itself, and the significance of diplomas). This study synthesizes research and data on the current effects of AI on education from various published literature sources and journals, as well as estimates of further AI potential, in order to determine the main, overarching effects it will have on the future of education. For this study, data were systematically organized within each area of significance by type (quantitative vs. qualitative), the magnitude of the effect implicated, and other similar factors. The results of the study suggest that AI stands to change all the aforementioned subgroups; however, its specific effects vary in magnitude and favorability (beneficial or harmful) and will be further discussed. The results will reveal to those affiliated with the education field, such as teachers, counselors, or parents of students, valuable information on not just the projected possibilities of AI in education but also the effects of those changes moving forward.
Keywords: artificial intelligence, education, schools, teachers
Procedia PDF Downloads 522
1953 Determining Earthquake Performances of Existing Reinforced Concrete Buildings by Using ANN
Authors: Musa H. Arslan, Murat Ceylan, Tayfun Koyuncu
Abstract:
In this study, an artificial-intelligence-based (ANN-based) analytical method has been developed for analyzing the earthquake performance of reinforced concrete (RC) buildings. 66 RC buildings with four to ten storeys were subjected to performance analysis according to parameters covering the existing material, loading, and geometrical characteristics of the buildings; these parameters are thought to affect the performance of RC buildings. In the performance analysis stage of the study, the level of performance these buildings could show in case of an earthquake was determined on the basis of the 4-grade performance levels specified in the Turkish Earthquake Code 2007 (TEC-2007). After obtaining the 4-grade performance level, 23 selected parameters of each building were matched with the performance level. In this stage, the ANN-based fast evaluation algorithm provided an economic and rapid evaluation of the four to ten storey RC buildings. According to the study, the prediction accuracy of the ANN was found to be about 74%.
Keywords: artificial intelligence, earthquake, performance, reinforced concrete
Procedia PDF Downloads 463
1952 Psychosocial Determinants of Quality of Life After Treatment for Breast Cancer - A Systematic Review
Authors: Lakmali Anthony, Madeline Gillies
Abstract:
Purpose: Decreasing mortality has led to an increased focus on patient-reported outcomes such as quality of life (QoL) in breast cancer. Breast cancer patients often have decreased QoL even after treatment is complete. This systematic review of the literature aims to identify psychosocial factors associated with decreased QoL in post-treatment breast cancer patients. Methodology: This systematic review was performed in accordance with the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses recommendations. The search was conducted in MEDLINE, EMBASE, and PsycINFO using MeSH headings. The two authors screened studies for relevance and extracted data. Results: Seventeen studies were identified, including 3,150 total participants (mean sample size = 197) with a mean age of 51.9 years. There was substantial heterogeneity in measures of QoL; the most common was the European Organisation for Research and Treatment of Cancer QLQ-C30 (n=7, 41.1%). Most studies (n=12, 70.5%) found that emotional distress correlated with poor QoL, while 3 found no significant association. The most common measure of emotional distress was the Hospital Anxiety and Depression Scale (n=12, 70.5%). Other psychosocial factors associated with QoL were unmet needs, problematic social support, and negative affect. Clinicopathologic determinants included mastectomy without reconstruction, stage IV disease, and adjuvant chemotherapy. Conclusion: This systematic review provides a summary of the psychosocial determinants of poor QoL in post-treatment breast cancer patients, as well as the most commonly reported measures of these. An understanding of these potentially modifiable determinants of poor outcome is pivotal to the provision of quality, patient-centred care in surgical oncology. Keywords: breast cancer, quality of life, psychosocial determinants, cancer surgery
Procedia PDF Downloads 77
1951 An Experimental Testbed Using Virtual Containers for Distributed Systems
Authors: Parth Patel, Ying Zhu
Abstract:
Distributed systems have become ubiquitous and continue to grow through a range of services. With advances in resource virtualization technology such as virtual machines (VMs) and software containers, developers no longer require high-end servers to test and develop distributed software. Even in commercial production, virtualization has streamlined the process of rapid deployment and service management. This paper introduces a distributed systems testbed that utilizes virtualization to enable distributed systems development on commodity computers. The testbed can be used to develop new services, implement theoretical distributed systems concepts for understanding, and experiment with virtual network topologies. We show its versatility through two case studies that utilize the testbed for implementing a theoretical algorithm and for developing our own methodology to find high-risk edges. The results of these use cases demonstrate the effectiveness and versatility of the testbed across a range of scenarios. Keywords: distributed systems, experimental testbed, peer-to-peer networks, virtual container technology
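As an illustration of the kind of experiment such a testbed supports, the sketch below ranks "high-risk" edges in a small peer-to-peer topology by how many node pairs lose connectivity when a given edge fails. This criterion is an assumption chosen for demonstration, not the methodology developed in the paper:

```python
from collections import deque
from itertools import combinations

# Illustrative sketch only: score each edge by the number of node pairs that
# become disconnected if that edge fails. This stand-in criterion is an
# assumption, not the paper's own high-risk-edge methodology.

def reachable_pairs(nodes, edges):
    """Count unordered node pairs connected by some path."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    count = 0
    for s, t in combinations(nodes, 2):
        seen, q = {s}, deque([s])
        while q:                      # BFS from s
            n = q.popleft()
            for m in adj[n]:
                if m not in seen:
                    seen.add(m); q.append(m)
        if t in seen:
            count += 1
    return count

def edge_risk(nodes, edges):
    """Risk of each edge = connected pairs lost when that edge fails."""
    base = reachable_pairs(nodes, edges)
    return {e: base - reachable_pairs(nodes, [f for f in edges if f != e])
            for e in edges}

# A triangle (0,1,2) with a tail 2-3-4: tail edges are the risky ones.
nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]
risk = edge_risk(nodes, edges)
```

On this toy topology the bridge (2, 3) scores highest, while the triangle edges score zero because an alternative path always remains.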
Procedia PDF Downloads 146
1950 Simulation of Wave Propagation in Multiphase Medium
Authors: Edip Kemal, Sheshov Vlatko, Bojadjieva Julijana, Bogdanovic Aleksandra, Gjorgjeska Irena
Abstract:
The wave propagation phenomenon in porous domains is of great importance in the field of geotechnical earthquake engineering. In these kinds of problems, elastic waves propagate from the interior to the exterior domain and require special treatment at the computational level since, apart from displacement in the solid phase, there is a pressure wave that takes place in the pore water phase. In this paper, a study on the implementation of multiphase finite elements is presented. The proposed algorithm is implemented in the ANSYS finite element software and tested on one-dimensional wave propagation, considering both pore pressure wave propagation and displacement fields. In the simulation of porous media such as soils, the behavior is governed largely by the interaction of the solid skeleton with water and/or air in the pores. Therefore, coupled problems of fluid flow and deformation of the solid skeleton are considered in a detailed way. Keywords: wave propagation, multiphase model, numerical methods, finite element method
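A minimal sketch of the underlying idea, reduced from the coupled multiphase problem to a single-phase 1-D wave equation solved by explicit finite differences (the ANSYS multiphase element itself is not reproduced here):

```python
import numpy as np

# Minimal single-phase sketch, not the paper's coupled multiphase element:
# explicit finite differences for u_tt = c^2 u_xx on a 1-D elastic column,
# showing how a displacement pulse propagates between fixed ends.

c, L, nx = 1.0, 1.0, 201
dx = L / (nx - 1)
dt = 0.5 * dx / c                  # satisfies the CFL stability condition
r2 = (c * dt / dx) ** 2            # Courant number squared

x = np.linspace(0.0, L, nx)
u_prev = np.exp(-((x - 0.5) / 0.05) ** 2)   # initial Gaussian pulse
u = u_prev.copy()                            # zero initial velocity

for _ in range(200):
    u_next = np.empty_like(u)
    # central-difference update in space and time
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0             # fixed (reflecting) boundaries
    u_prev, u = u, u_next
```

The pulse splits into two half-amplitude waves travelling in opposite directions; in the multiphase case a second, coupled field (pore pressure) would be updated alongside the displacement.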
Procedia PDF Downloads 164
1949 Applying Spanning Tree Graph Theory for Automatic Database Normalization
Authors: Chetneti Srisa-an
Abstract:
In the knowledge and data engineering field, the relational database is the principal repository for storing real-world data; it has been in use around the world for more than eight decades. Normalization is the most important process for the analysis and design of relational databases. It aims at creating a set of relational tables with minimum data redundancy that preserve consistency and facilitate correct insertion, deletion, and modification. Despite its importance, very few algorithms have been developed for use in commercial automatic normalization tools, and normalization is rarely performed automatically rather than manually. Moreover, for today's large and complex databases, manual normalization is harder still. This paper presents a new, fully automated relational database normalization method. It first produces a directed graph and its spanning tree, then proceeds to generate the 2NF, 3NF, and BCNF normal forms. The benefit of this new algorithm is that it can cope with a large set of complex functional dependencies. Keywords: relational database, functional dependency, automatic normalization, primary key, spanning tree
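Any automatic normalization tool must repeatedly compute attribute closures under the given functional dependencies, for example to test candidate keys before decomposing tables. A minimal sketch of that classical step follows; the relation and dependencies are illustrative examples, not from the paper:

```python
# Classical attribute-closure computation, a building block of any automatic
# normalization method. The schema and dependencies below are illustrative.

def closure(attrs, fds):
    """Closure of an attribute set under functional dependencies.

    fds is a list of (lhs, rhs) pairs of attribute sets: lhs -> rhs.
    """
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # if the left side is already determined, absorb the right side
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Example relation R(A, B, C, D, E) with A -> B, B -> C, CD -> E.
fds = [({"A"}, {"B"}), ({"B"}, {"C"}), ({"C", "D"}, {"E"})]
all_attrs = {"A", "B", "C", "D", "E"}

# {A, D} determines every attribute, so it is a candidate key here.
is_key = closure({"A", "D"}, fds) == all_attrs
```

A normalization algorithm then uses such closures to detect partial and transitive dependencies when generating the 2NF, 3NF, and BCNF decompositions.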
Procedia PDF Downloads 353
1948 The Development of GPS Buoy for Ocean Surface Monitoring: Initial Results
Authors: Anuar Mohd Salleh, Mohd Effendi Daud
Abstract:
This study presents a kinematic positioning approach that uses a GPS buoy for precise ocean surface monitoring. GPS buoy data from two experiments were processed using a precise, medium-range differential kinematic technique. In each case the data were collected for more than 24 hours at a nearby coastal site at a high rate (1 Hz), along with measurements from neighboring tidal stations, to verify the estimated sea surface heights. Kinematic coordinates of the GPS buoy were estimated using epoch-wise pre-elimination and the backward substitution algorithm. Test results show that centimeter-level accuracy in sea surface height determination can be achieved using the proposed technique. The centimeter-level agreement between the two methods also suggests the possibility of using this inexpensive and more flexible GPS buoy equipment to enhance (or even replace) the current use of tide gauge stations. Keywords: global positioning system, kinematic GPS, sea surface height, GPS buoy, tide gauge
Procedia PDF Downloads 545
1947 Determination of Frequency Relay Setting during Distributed Generators Islanding
Authors: Tarek Kandil, Ameen Ali
Abstract:
Distributed generation (DG) has recently gained a lot of momentum in the power industry due to market deregulation and environmental concerns. One of the most significant technical challenges facing DGs is islanding of distributed generators. The current industry practice is to disconnect all distributed generators immediately after the occurrence of islands, within 200 to 350 ms after loss of main supply. To achieve this goal, each DG must be equipped with an islanding detection device. Frequency relays are one of the most commonly used loss-of-mains detection methods. However, distribution utilities may be faced with concerns related to false operation of these frequency relays due to improper settings, as commercially available frequency relays typically adopt a standard tight setting. This paper investigates some factors related to relays' internal algorithms that contribute to their different operating responses. Further, relay operation in the presence of multiple distributed generators on the same network is analyzed. Finally, the relay setting can be accurately determined based on this investigation and analysis. Keywords: frequency relay, distributed generation, islanding detection, relay setting
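The basic under/over-frequency relay logic whose settings are at issue can be sketched in a few lines. The nominal frequency, deadband, and pickup count below are illustrative assumptions, not any vendor's internal algorithm:

```python
# Toy sketch of under/over-frequency relay logic, not a vendor algorithm:
# trip when the measured frequency stays outside a band around nominal for
# a set number of consecutive cycles. Settings below are illustrative.

def frequency_relay(samples, f_nom=50.0, band=0.5, pickup_cycles=3):
    """Return the sample index at which the relay trips, or None."""
    run = 0
    for i, f in enumerate(samples):
        if abs(f - f_nom) > band:
            run += 1                 # deviation persists
            if run >= pickup_cycles:
                return i             # trip: sustained excursion
        else:
            run = 0                  # deviation cleared; reset the counter
    return None

# Hypothetical island event: frequency drifts down after loss of mains.
trace = [50.0, 50.1, 49.9, 50.0, 49.3, 49.2, 49.1, 49.0]
trip_at = frequency_relay(trace)     # trips on the third out-of-band sample
```

Tightening `band` or lowering `pickup_cycles` speeds detection but increases the risk of false operation on normal frequency excursions, which is the trade-off the paper's setting analysis addresses.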
Procedia PDF Downloads 534
1946 Optimal Design of Reference Node Placement for Wireless Indoor Positioning Systems in Multi-Floor Building
Authors: Kittipob Kondee, Chutima Prommak
Abstract:
In this paper, we propose an optimization technique that can be used to optimize the placement of reference nodes and improve location determination performance for multi-floor buildings. The proposed technique, called MSMR-M, is based on the Simulated Annealing (SA) algorithm. The performance study in this work is based on simulation. We compare node-placement techniques found in the literature with the optimal node-placement solutions obtained from our optimization. The results show that using the optimal node placement obtained by our proposed technique can reduce positioning error distances by up to 20% compared with the other techniques. The proposed technique can provide an average error distance within 1.42 meters. Keywords: indoor positioning system, optimization system design, multi-floor building, wireless sensor networks
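A generic simulated annealing sketch for node placement follows, using mean distance to the nearest reference node as a stand-in cost for positioning error; the actual MSMR-M formulation and its multi-floor cost model are not reproduced here:

```python
import math
import random

# Generic SA sketch for reference-node placement on a single floor. The cost
# (mean distance to the nearest reference node) is an assumed stand-in for
# positioning error; the paper's MSMR-M formulation is not reproduced.

random.seed(1)
POINTS = [(x, y) for x in range(10) for y in range(10)]  # coverage targets

def cost(placement):
    return sum(min(math.dist(p, r) for r in placement) for p in POINTS) / len(POINTS)

def anneal(n_nodes=3, steps=2000, t0=2.0, alpha=0.995):
    current = [(random.uniform(0, 9), random.uniform(0, 9)) for _ in range(n_nodes)]
    best, best_c, t = list(current), cost(current), t0
    for _ in range(steps):
        cand = list(current)
        i = random.randrange(n_nodes)            # perturb one node slightly
        cand[i] = (cand[i][0] + random.gauss(0, 0.5),
                   cand[i][1] + random.gauss(0, 0.5))
        dc = cost(cand) - cost(current)
        # accept improvements always, worse moves with Boltzmann probability
        if dc < 0 or random.random() < math.exp(-dc / t):
            current = cand
            if cost(current) < best_c:
                best, best_c = list(current), cost(current)
        t *= alpha                               # geometric cooling schedule
    return best, best_c

placement, err = anneal()
```

The cooling rate `alpha` and step size trade exploration against convergence speed, which is the usual tuning burden of SA-based placement methods.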
Procedia PDF Downloads 246
1945 Efficient Ground Targets Detection Using Compressive Sensing in Ground-Based Synthetic-Aperture Radar (SAR) Images
Authors: Gherbi Nabil
Abstract:
Detection of ground targets in SAR images is an important area of radar information processing. Various algorithms have been discussed in the literature in this context; however, most offer low robustness and accuracy. To this end, we discuss target detection in SAR images based on compressive sensing. Firstly, traditional SAR image target detection algorithms are discussed and their limitations highlighted. Secondly, a compressive sensing method is proposed based on the sparsity of SAR images. Next, the detection problem is solved using a Multiple Measurement Vector (MMV) configuration. Furthermore, a robust Alternating Direction Method of Multipliers (ADMM) is developed to solve the optimization problem. Finally, the detection results obtained using raw complex data are presented. Experimental results on real SAR images have verified the effectiveness of the proposed algorithm. Keywords: compressive sensing, raw complex data, synthetic aperture radar, ADMM
Procedia PDF Downloads 21
1944 Analysis of Key Factors Influencing Muslim Women’s Buying Intentions of Clothes: A Study of UK’s Ethnic Minorities and Modest Fashion Industry
Authors: Nargis Ali
Abstract:
Although the modest fashion market is growing in the UK, researchers and marketers still understand relatively little about Muslim consumers. The present study is therefore designed to explore critical factors influencing Muslim women's intention to purchase clothing and to identify differences in the purchase intention of ethnic minority groups in the UK. The conceptual framework is designed using the theory of planned behavior and social identity theory. To satisfy the research objectives, a structured online questionnaire was published on Facebook from 20 November to 21 March; 1087 usable questionnaires were received and used to assess the fit of the proposed model through structural equation modeling. Results revealed that social media influences the purchase intention of Muslim women. Muslim women search for stylish clothes that provide comfort during summer, and they prefer soft and subdued colors. Furthermore, religious knowledge, religious practice, and fashion uniqueness strongly influence their purchase intention, while hybrid identity is negatively related to the purchase intention of Muslim women. This research contributes to the literature on Muslim consumers at a time when the UK's large retailers are seeking to attract Muslim consumers through modestly designed outfits. It will also be helpful for formulating or revising product and marketing strategies according to the tastes and needs of Muslim women in the UK. Keywords: fashion uniqueness, hybrid identity, religiosity, social media, social identity theory, structural equation modeling, theory of planned behavior
Procedia PDF Downloads 227
1943 Improving Our Understanding of the in vivo Modelling of Psychotic Disorders
Authors: Zsanett Bahor, Cristina Nunes-Fonseca, Gillian L. Currie, Emily S. Sena, Lindsay D.G. Thomson, Malcolm R. Macleod
Abstract:
Psychosis is ranked as the third most disabling medical condition in the world by the World Health Organization. Despite a substantial amount of research in recent years, available treatments are not universally effective and have a wide range of adverse side effects. Since many clinical drug candidates are identified through in vivo modelling, a deeper understanding of these models, and of their strengths and limitations, might help us understand the difficulties in psychosis drug development. To provide an unbiased summary of the preclinical psychosis literature, we performed a systematic electronic search of PubMed for publications modelling a psychotic disorder in vivo, identifying 14,721 relevant studies. Double screening of 11,000 publications from this dataset has so far established 2,403 animal studies of psychosis, with the most commonly modelled condition being schizophrenia (95%). 61% of these models are induced using pharmacological agents, and across all the models only 56% of publications test a therapeutic treatment. We propose a systematic review of these studies to assess the prevalence of reporting of measures to reduce risk of bias, and a meta-analysis to assess the internal and external validity of these animal models. Our findings are likely to be relevant to future preclinical studies of psychosis, as this generation of strong empirical evidence has the potential to identify weaknesses and areas for improvement and to suggest refinements of experimental design. Such a detailed understanding of the data that inform what we think we know will help improve the current attrition rate between bench and bedside in psychosis research. Keywords: animal models, psychosis, systematic review, schizophrenia
Procedia PDF Downloads 290
1942 A Security Cloud Storage Scheme Based Accountable Key-Policy Attribute-Based Encryption without Key Escrow
Authors: Ming Lun Wang, Yan Wang, Ning Ruo Sun
Abstract:
With the development of cloud computing, more and more users have started to use cloud storage services. However, several issues exist: 1) the cloud server steals the shared data, 2) sharers collude with the cloud server to steal the shared data, 3) the cloud server tampers with the shared data, and 4) sharers and the key generation center (KGC) conspire to steal the shared data. In this paper, we use the advanced encryption standard (AES), hash algorithms, and accountable key-policy attribute-based encryption without key escrow (WOKE-AKP-ABE) to build a secure cloud storage scheme. The data are encrypted to protect privacy, and hash algorithms are used to prevent the cloud server from tampering with the data uploaded to the cloud. Analysis results show that this scheme can resist collusion attacks. Keywords: cloud storage security, sharing storage, attributes, hash algorithm
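The hash-based tamper-detection component alone can be sketched in a few lines: the data owner records a digest before upload and re-checks it on download. SHA-256 is an assumed choice here, and the AES and WOKE-AKP-ABE layers of the full scheme are omitted:

```python
import hashlib

# Minimal sketch of the tamper-detection idea only: the owner keeps a SHA-256
# digest of the data before upload and re-verifies it after download. The AES
# encryption and WOKE-AKP-ABE access-control layers of the scheme are omitted.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"shared file contents"
stored_digest = digest(original)      # retained by the data owner, not the cloud

# Later: two downloads, one intact and one modified by a misbehaving server.
downloaded_ok = b"shared file contents"
downloaded_bad = b"shared file contents!"

intact = digest(downloaded_ok) == stored_digest       # verification passes
tampered = digest(downloaded_bad) != stored_digest    # any change is detected
```

Because the digest is kept by the owner rather than the server, a tampering cloud server cannot forge a matching value without breaking the hash function's collision resistance.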
Procedia PDF Downloads 390
1941 Estimation of Structural Parameters in Time Domain Using One Dimensional Piezo Zirconium Titanium Patch Model
Authors: N. Jinesh, K. Shankar
Abstract:
This article presents a method for using a one-dimensional piezoelectric patch-on-beam model for structural identification. A hybrid element composed of a one-dimensional beam element and a PZT sensor is used with reduced material properties; this model is convenient and simple for the identification of beams. The accuracy of this element is first verified against a corresponding 3D finite element model (FEM). The structural identification is carried out as an inverse problem whereby parameters are identified by minimizing the deviation between the predicted and measured voltage responses of the patch when subjected to excitation. A non-classical optimization algorithm, Particle Swarm Optimization (PSO), is used to minimize this objective function. The signals are polluted with 5% Gaussian noise to simulate experimental noise. The proposed method is applied to a beam structure, and the identified parameters are stiffness and damping. The model is also validated experimentally. Keywords: inverse problem, particle swarm optimization, PZT patches, structural identification
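A generic PSO sketch of the kind of minimization described above follows. The objective here is a simple quadratic stand-in for the voltage-response mismatch, since the patch model and measured signals are not reproduced; the swarm parameters are common textbook defaults, not the paper's settings:

```python
import random

# Generic PSO sketch. The objective is a quadratic stand-in for the paper's
# voltage-response mismatch; its minimizer (2, -1) plays the role of the true
# stiffness/damping pair. Swarm parameters are textbook defaults.

def mismatch(p):
    return (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    random.seed(0)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=f)[:]                # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(mismatch)
```

In the identification setting, evaluating `f` means running the hybrid patch-on-beam model forward for candidate stiffness and damping values and comparing the predicted voltage with the measured one.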
Procedia PDF Downloads 309