Search results for: network coding signature
3058 Digital Forensic Exploration Framework for Email and Instant Messaging Applications
Authors: T. Manesh, Abdalla A. Alameen, M. Mohemmed Sha, A. Mohamed Mustaq Ahmed
Abstract:
Email and instant messaging applications are among the most widely used electronic communication methods in this era of information explosion. Users exchange information through front-end applications from various service providers, and almost all such communications are now secured with SSL or TLS over HTTP. At the same time, cyber criminals and terrorists have started exchanging information through these channels. Because the communication is encrypted end-to-end, tracing significant forensic details and recovering the actual content of messages remain severe challenges that available forensic tools leave unaddressed. These challenges seriously hamper the procurement of substantial evidence against such criminals from their working environments. This paper presents a forensic exploration and architectural framework that not only decrypts intercepted network sessions but also reconstructs the actual message content of email and instant messaging applications. The framework can be deployed on proxy servers and individual computers, and it aims to perform forensic reconstruction followed by analysis of webmail and ICQ messaging applications. The framework is versatile, as it is equipped with high-speed packet-capturing hardware and a well-designed packet-manipulation algorithm. It regenerates message content over regular as well as SSL-encrypted SMTP, POP3 and IMAP protocols, and streamlines the forensic presentation procedure for the prosecution of cyber criminals by producing solid evidence of their actual communication, admissible under the law of specific countries.
Keywords: forensics, network sessions, packet reconstruction, packet reordering
Procedia PDF Downloads 347
3057 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks
Authors: Sulemana Ibrahim
Abstract:
Food security remains a paramount global challenge, with vulnerable regions grappling with issues of hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimizing supply chains, and enhancing food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models harnessing remote sensing data, historical crop yield records, and meteorological data to foresee crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies intend to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. This study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security. The proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, bestowing actionable insights to mitigate the ordeal of food insecurity. 
The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks
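The network-analysis component described above can be sketched minimally as a shortest-path search over a distribution graph. The node names and travel costs below are invented for illustration; the study's actual networks and tooling are not specified here.

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra's algorithm over a dict-of-dicts weighted graph."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            break
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Reconstruct the route from target back to source.
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], list(reversed(path))

# Hypothetical distribution network: travel cost between hubs.
network = {
    "farm": {"depot_a": 4, "depot_b": 7},
    "depot_a": {"market": 6, "depot_b": 2},
    "depot_b": {"market": 3},
}
cost, route = shortest_path(network, "farm", "market")
print(cost, route)  # 9 ['farm', 'depot_a', 'depot_b', 'market']
```

In practice, the same search over a graph weighted by transport cost or travel time identifies the cheapest feeder routes in a distribution network.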
Procedia PDF Downloads 66
3056 Cybersecurity Strategies for Protecting Oil and Gas Industrial Control Systems
Authors: Gaurav Kumar Sinha
Abstract:
The oil and gas industry is a critical component of the global economy, relying heavily on industrial control systems (ICS) to manage and monitor operations. However, these systems are increasingly becoming targets for cyber-attacks, posing significant risks to operational continuity, safety, and environmental integrity. This paper explores comprehensive cybersecurity strategies for protecting oil and gas industrial control systems. It delves into the unique vulnerabilities of ICS in this sector, including outdated legacy systems, integration with IT networks, and the increased connectivity brought by the Industrial Internet of Things (IIoT). We propose a multi-layered defense approach that includes the implementation of robust network security protocols, regular system updates and patch management, advanced threat detection and response mechanisms, and stringent access control measures. We illustrate the effectiveness of these strategies in mitigating cyber risks and ensuring the resilient and secure operation of oil and gas industrial control systems. The findings underscore the necessity for a proactive and adaptive cybersecurity framework to safeguard critical infrastructure in the face of evolving cyber threats.
Keywords: cybersecurity, industrial control systems, oil and gas, cyber-attacks, network security, IoT, threat detection, system updates, patch management, access control, cybersecurity awareness, critical infrastructure, resilience, cyber threats, legacy systems, IT integration, multi-layered defense, operational continuity, safety, environmental integrity
Procedia PDF Downloads 53
3055 Resilience with Spontaneous Volunteers in Disasters: Coordination Using an IT System
Authors: Leo Latasch, Mario Di Gennaro
Abstract:
Introduction: The goal of this project was to increase the resilience of the population as well as of rescue organizations, making both quality- and time-related improvements in handling crises. A helper network was created for this purpose. Methods: Social questions regarding the structure and purpose of helper networks were considered, specifically with regard to helper motivation, the level of commitment, and collaboration between the population and agencies. The exchange of information, the coordinated use of volunteers, and the distribution of available resources are ensured through defined communication and cooperation routines. Helper smartphones are also used to provide a picture of the situation on the ground. Results: The helper network was established and deployed based on the RESIBES information technology system. It consists of a service platform, a web portal and a smartphone app. The service platform is the central element for collaboration between the various rescue organizations, as well as for persons, associations, and companies from the population offering voluntary aid. The platform was used for registering helpers and resources and then requesting and assigning them in case of a disaster. These services allow the population's resources to be organized. The service platform also allows for a secure data exchange between services and external systems. Conclusions: The social and technical work priorities have allowed us to cover a full cycle of advance structural work, gaining an overview, damage management, evaluation, and feedback on experiences. This cycle allows experiences gained while handling a crisis to feed back into the cycle and improve preparations and management strategies.
Keywords: coordination, disaster, resilience, volunteers
Procedia PDF Downloads 146
3054 A Comparative Study on Deep Learning Models for Pneumonia Detection
Authors: Hichem Sassi
Abstract:
Pneumonia, a respiratory infection, has garnered global attention due to its rapid transmission and relatively high mortality rates. Timely detection and treatment play a crucial role in significantly reducing the mortality associated with pneumonia. Presently, X-ray diagnosis stands out as a reasonably effective method. However, the manual scrutiny of a patient's chest X-ray radiograph by a proficient practitioner usually requires 5 to 15 minutes. Where cases are concentrated, this places immense pressure on clinicians to diagnose in a timely manner. Relying solely on the visual acumen of imaging doctors is inefficient, particularly given the low speed of manual analysis. Therefore, the integration of artificial intelligence into the clinical image diagnosis of pneumonia becomes imperative. Moreover, AI recognition is notably rapid, with convolutional neural networks (CNNs) demonstrating superior performance compared to human counterparts in image identification tasks. To conduct our study, we utilized a dataset of chest X-ray images obtained from Kaggle, encompassing a total of 5,216 training images and 624 test images, categorized into two classes: normal and pneumonia. Employing five mainstream network algorithms, we undertook a comprehensive analysis to classify the images in the dataset, and subsequently compared the results. The integration of artificial intelligence, particularly through improved network architectures, stands as a transformative step towards more efficient and accurate clinical diagnoses across various medical domains.
Keywords: deep learning, computer vision, pneumonia, models, comparative study
Procedia PDF Downloads 66
3053 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms
Authors: Abdul Rehman, Bo Liu
Abstract:
Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce secondary flow loss. In this paper, the construction and optimization of non-axisymmetric endwall profiles for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves, with each cut, carrying multiple control points, created along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated randomly from values automatically chosen for the control points defined during parameterization. The optimization was carried out with two algorithms: a stochastic algorithm and a gradient-based algorithm. In the stochastic case, a genetic algorithm coupled with an artificial neural network was used in order to approach the global optimum; the successive design iterations were evaluated by the artificial neural network before being passed to the flow solver. In the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming because it exploits derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant.
The performance was quantified using a multi-objective function. Beyond these two classes of optimization methods, four optimization cases were considered: the hub only, the shroud only, the hub and shroud combined, and the shroud optimized starting from the optimized hub geometry. The hub optimization increased efficiency owing to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization increased efficiency while reducing both total pressure loss and entropy. The combination of hub and shroud did not match the results achieved in the individual hub and shroud cases, possibly because there were too many control variables. The fourth case showed the best result because the optimized hub was used as the initial geometry for optimizing the shroud: efficiency increased more than in the individual cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.
Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization
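The gradient-based branch of the study relies on conjugate gradient search. As a minimal, generic illustration (not the NUMECA workflow), the linear conjugate gradient method can be sketched in pure Python on a toy two-parameter quadratic objective standing in for the endwall parameterization:

```python
def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Linear conjugate gradient: minimizes 0.5*x^T A x - b^T x
    (equivalently solves A x = b) for symmetric positive-definite A."""
    n = len(b)
    x = list(x0)
    # r = b - A x is the negative gradient of the objective.
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    p = list(r)
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # New search direction is conjugate to the previous ones.
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Toy quadratic with a known optimum at (1, 2): A x = b has solution x = (1, 2).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = conjugate_gradient(A, b, [0.0, 0.0])
```

The key property, visible in the direction update, is that successive search directions are mutually conjugate with respect to A, so an n-parameter quadratic converges in at most n steps, which is why derivative-based search needs far fewer solver calls than a genetic algorithm.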
Procedia PDF Downloads 226
3052 Identification of Common Indicators of Family Environment of Pupils of Alternative Schools
Authors: Yveta Pohnětalová, Veronika Nováková, Lucie Hrašová
Abstract:
The paper presents the results of research in which we looked for common characteristics of the family environment of students in alternative and innovative education systems. The topic is timely because several civic and parental initiatives in the Czech Republic currently aim to establish schools for their children. The goal of our research was to reveal key aspects of these families and to identify their common indicators. Among other things, we were interested in the reasons that lead parents to enroll their child in a non-standard form of education. The survey was qualitative, with eighteen respondents who were parents of alternative schools' pupils. The qualitative design was chosen for the opportunity to gain deeper insight into the essence of the phenomena and to obtain detailed information that would become the basis for subsequent quantitative research. Semi-structured interviews with the respondents were conducted, recorded and transcribed. By analyzing the data (through categorization and coding), we found that the common indicators among our respondents are higher education and a higher economic level. This issue deserves research attention because there is a lack of analyses comparing standard and alternative schools in the Czech Republic, especially with regard to quality of education. Based on the results, we ask whether these parents' attitudes towards standard education stem from their own experience or from a lack of knowledge of the current goals and objectives of the education policy of the Czech Republic.
Keywords: alternative schools, family environment, quality of education, parents' approach
Procedia PDF Downloads 353
3051 Scrutiny and Solving Analytically Nonlinear Differential at Engineering Field of Fluids, Heat, Mass and Wave by New Method AGM
Authors: Mohammadreza Akbari, Sara Akbari, Davood Domiri Ganji, Pooya Solimani, Reza Khalili
Abstract:
As experts know, most engineering systems behave nonlinearly in practice (especially in heat, fluid and mass transfer), and solving these problems analytically (rather than numerically) is difficult, complex and sometimes impossible; fluid and gas wave problems, for example, cannot be solved numerically when no boundary conditions are available. Accordingly, in this symposium we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODE, PDE) with high accuracy and a simple solution procedure. This is demonstrated by comparing the obtained solutions with a numerical method (4th-order Runge-Kutta), with other methods such as HPM and ADM, and with exact solutions. We expect the AGM method to prove valuable to researchers, professors and students in engineering and basic science worldwide: thanks to the AGM coding system, the accompanying software can analytically solve complicated linear and nonlinear differential equations, so that solving nonlinear ODEs and PDEs presents no difficulty. In this paper, we investigate and solve four types of nonlinear differential equations with the AGM method: (1) heat and fluid problems, (2) unsteady nonlinear partial differential equations, (3) coupled nonlinear partial differential equations in the wave equation, and (4) nonlinear integro-differential equations.
Keywords: new method AGM, sets of coupled nonlinear equations at engineering field, waves equations, integro-differential, fluid and thermal
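Since the AGM solutions are validated against 4th-order Runge-Kutta, a minimal RK4 integrator of the kind used as the numerical baseline can be sketched as follows. The test problem is an arbitrary ODE with a known exact solution, not one of the paper's four cases:

```python
import math

def rk4(f, t0, y0, t_end, h):
    """Classical 4th-order Runge-Kutta for the scalar IVP y' = f(t, y)."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        # Weighted average of the four slope estimates.
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Benchmark: y' = -2*t*y, y(0) = 1, whose exact solution is exp(-t^2).
y_num = rk4(lambda t, y: -2 * t * y, 0.0, 1.0, 1.0, 0.01)
print(abs(y_num - math.exp(-1)))  # global error is O(h^4), i.e. tiny here
```

An analytical candidate solution (from AGM or any other method) can be validated the same way: evaluate it on a grid and compare against the RK4 trajectory.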
Procedia PDF Downloads 550
3050 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite
Abstract:
Wind speed forecasting is an important issue for planning wind power generation facilities. Accurate wind speed prediction allows good performance of wind turbines for electricity generation. A model based on artificial neural networks is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, México, was used to train the artificial neural network. The data was downloaded from the web page of the National Meteorological Service of the Mexican government. The records were gathered over three months, at ten-minute intervals. This dataset was used in an iterative algorithm to create 1,110 ANNs with different configurations, ranging from one to three hidden layers and from 1 to 10 neurons per hidden layer. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which learns the relationship between input and output values. The best-performing model contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r² = 0.9414, and the root mean squared error was 1.0559. In summary, the ANN approach is suitable for predicting the wind speed in Pachuca City, because the r² value denotes a good fit to the gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.
Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination
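The two reported evaluation metrics, the coefficient of determination r² and the RMSE, can be computed as below. The observed and forecast values are invented for illustration; they are not the Pachuca records:

```python
import math

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1 - ss_res / ss_tot

def rmse(actual, predicted):
    """Root mean squared error of the predictions."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Illustrative wind-speed records (m/s) and hypothetical ANN predictions.
observed = [5.1, 6.3, 4.8, 7.2, 5.9]
forecast = [5.0, 6.1, 5.0, 7.0, 6.2]
print(round(r_squared(observed, forecast), 4), round(rmse(observed, forecast), 4))
```

r² close to 1 means the predictions track the variance of the observations; RMSE is in the same units as the target (m/s here), which is why the paper reports both.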
Procedia PDF Downloads 127
3049 A Multidimensional Genetic Algorithm Applicable for Our VRP Variant Dealing with the Problems of Infrastructure Defaults SVRDP-CMTW: “Safety Vehicle Routing Diagnosis Problem with Control and Modified Time Windows”
Authors: Ben Mansour Mouin, Elloumi Abdelkarim
Abstract:
We discuss the problem of routing a fleet of different vehicles from a central depot to different types of infrastructure defaults, with dynamic maintenance requests, modified time windows, and control of the maintained defaults. For this purpose, we propose a modified metaheuristic to solve our mathematical model. SVRDP-CMTW is a VRP variant that produces an optimal vehicle plan to facilitate the maintenance of different types of infrastructure defaults. Each maintained default is then monitored based on its priority, the degree of danger associated with it, and the neighborhood of the black spots. In this paper, we present a multidimensional genetic algorithm (MGA), detailing its characteristics, the proposed mechanisms, and their roles in our work. The coding of this algorithm represents the parameters that characterize each infrastructure default, with the objective of minimizing a combination of cost, distance and maintenance times while satisfying the priority levels of the most urgent defaults. The developed algorithm allows the dynamic integration of newly detected defaults at execution time, and the result is displayed in our interactive system at routing time. This multidimensional genetic algorithm replaces N genetic algorithms for solving P different types of infrastructure-default problems: instead of running N separate algorithms for P problems, a single multidimensional algorithm solves all of them simultaneously.
Keywords: mathematical model, VRP, multidimensional genetic algorithm, metaheuristics
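The authors' MGA itself is not reproduced here, but the basic genetic-algorithm machinery it builds on (encoding, selection, crossover, mutation) can be sketched with a standard single-objective binary GA. The onemax fitness below is only a stand-in for the real routing cost:

```python
import random

def genetic_algorithm(fitness, length, pop_size=30, generations=60,
                      mutation_rate=0.05, seed=42):
    """Minimal binary GA: tournament selection, one-point crossover,
    bit-flip mutation. Maximizes `fitness` over bit strings of `length`."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def tournament():
        # Pick two individuals, keep the fitter one.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]              # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective standing in for the routing cost: maximize the number of ones.
best = genetic_algorithm(fitness=sum, length=20)
```

In a VRP setting the chromosome would instead encode a visit order and vehicle assignment, and the fitness would combine cost, distance and maintenance-time penalties, which is exactly the combination the abstract describes minimizing.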
Procedia PDF Downloads 201
3048 Nucleotide Diversity and Bacterial Endosymbionts of the Black Cherry Aphid Myzus cerasi (Fabricius, 1775) (Hemiptera: Aphididae) from Turkey
Authors: Burcu Inal, Irfan Kandemir
Abstract:
Sequences of the mitochondrial cytochrome oxidase I (COI) gene were obtained from twenty-five Turkish populations and one Greek population of Myzus cerasi (Fabricius) (Hemiptera: Aphididae) collected from Prunus avium and Prunus cerasus. The partial COI coding region studied is 605 bp in all populations; 565 nucleotides were conserved and 40 were variable, of which 37 were singleton sites and 3 were parsimony-informative. Four haplotypes were identified based on nucleotide substitutions, and the mean intraspecific divergence was calculated to be 0.3%. Phylogenetic trees were constructed using Maximum Likelihood, Minimum Evolution, Neighbor-Joining, and the Unweighted Pair Group Method with Arithmetic Averages (UPGMA), with Myzus persicae (Sulzer) and Myzus borealis Ossiannilsson included as outgroups. The population of M. cerasi from Isparta diverged from the rest of the groups and formed a clade (Haplotype B) with Myzus borealis. The remaining haplotype diversity comprises Haplotypes A and C, with individuals characterized as Myzus cerasi pruniavium, and Haplotype D, with Myzus cerasi cerasi. M. cerasi thus diverges into two subspecies, and it must be re-evaluated whether this pest is monophagous or oligophagous in terms of host-plant dependence. The obligate endosymbiont Buchnera aphidicola was also found during this research, but no facultative symbionts were detected. Further studies are expected to be required for complete barcoding and a full account of the diversity of the bacterial endosymbionts present.
Keywords: bacterial endosymbionts, barcoding, black cherry aphid, nucleotide diversity
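The site categories reported above (conserved, singleton, parsimony-informative, with variable sites being the union of the last two) can be computed directly from an alignment. The toy alignment below is invented and far shorter than the 605 bp COI fragment:

```python
def classify_sites(alignment):
    """Classify each column of an aligned set of equal-length sequences.
    A column is parsimony-informative when at least two character states
    each occur in at least two sequences; a variable column that is not
    informative is a singleton site."""
    counts = {"conserved": 0, "singleton": 0, "parsimony_informative": 0}
    for col in zip(*alignment):
        freqs = {}
        for base in col:
            freqs[base] = freqs.get(base, 0) + 1
        if len(freqs) == 1:
            counts["conserved"] += 1
        elif sum(1 for n in freqs.values() if n >= 2) >= 2:
            counts["parsimony_informative"] += 1
        else:
            counts["singleton"] += 1
    return counts

# Toy 6 bp alignment of four sequences.
seqs = ["ACGTAC",
        "ACGTTC",
        "ACGAAC",
        "CCGATC"]
print(classify_sites(seqs))
```

Applied to the real 605 bp alignment, such counts reproduce the breakdown reported in the abstract (conserved + singleton + informative = total length).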
Procedia PDF Downloads 176
3047 Network Analysis of Genes Involved in the Biosynthesis of Medicinally Important Naphthodianthrone Derivatives of Hypericum perforatum
Authors: Nafiseh Noormohammadi, Ahmad Sobhani Najafabadi
Abstract:
Hypericins (hypericin and pseudohypericin) are natural naphthodianthrone derivatives produced by Hypericum perforatum (St. John's Wort) with many medicinal properties, including antitumor, antineoplastic, antiviral, and antidepressant activities. Production and accumulation of hypericin in the plant are influenced by both genetic and environmental conditions. Despite the existence of various high-throughput datasets for the plant, the genetic dimensions of hypericin biosynthesis have not yet been completely understood. In this research, 21 high-quality RNA-seq datasets from different parts of the plant were integrated with metabolic data to reconstruct a coexpression network. Results showed that a cluster of 30 transcripts was correlated with total hypericin. The identified transcripts were divided into three main groups based on their functions: hypericin biosynthesis genes, transporters and detoxification genes, and transcription factors (TFs). In the biosynthetic group, different isoforms of polyketide synthases (PKSs) and phenolic oxidative coupling proteins (POCPs) were identified. Phylogenetic analysis of protein sequences, integrated with gene expression analysis, showed that some of the POCPs appear to be particularly important in the hypericin biosynthetic pathway. In the TF group, six TFs were correlated with total hypericin; qPCR analysis confirmed that three of them were highly correlated. The genes identified in this research are a rich resource for further studies on the molecular breeding of H. perforatum in order to obtain varieties with high hypericin production.
Keywords: hypericin, St. John's Wort, data mining, transcription factors, secondary metabolites
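A coexpression cluster of the kind reported above is typically found by correlating transcript expression profiles against the metabolite across samples. A minimal sketch with invented expression values and a simple Pearson threshold follows; the study's actual network-construction method is not detailed here:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression of three transcripts across five samples,
# correlated against total hypericin content in the same samples.
hypericin = [0.2, 0.5, 1.1, 1.8, 2.4]
transcripts = {
    "PKS_like":  [1.0, 2.1, 4.0, 6.2, 8.1],   # tracks hypericin
    "POCP_like": [0.9, 1.8, 3.5, 5.9, 7.6],   # tracks hypericin
    "unrelated": [5.0, 4.9, 5.1, 5.0, 4.8],   # flat
}
module = {g for g, prof in transcripts.items() if pearson(prof, hypericin) > 0.9}
print(sorted(module))
```

Thresholding pairwise correlations in this way yields the edges of a coexpression network; a cluster whose members all correlate with total hypericin is a candidate biosynthesis module.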
Procedia PDF Downloads 96
3046 Timetabling for Interconnected LRT Lines: A Package Solution Based on a Real-world Case
Authors: Huazhen Lin, Ruihua Xu, Zhibin Jiang
Abstract:
In this real-world case, timetabling the LRT network as a whole is challenging for the operator: planners must manually create a timetable that avoids various route conflicts while satisfying a given interval and number of rolling stocks, and the outcome has not been satisfactory. The operator therefore adopted a computerized timetabling tool, the Train Plan Maker (TPM), to cope with this problem. However, with the various constraints in the dual-line network, it is still difficult to find an adequate pairing of turnback time, interval and number of rolling stocks, which requires extra manual intervention. To address these problems, a one-off timetabling model is presented in this paper to simplify the timetabling procedure. Before timetabling starts, the paper shows how the dual-line system, consisting of a ring and several branches, is reduced to a simpler structure. A non-linear programming model is then presented in two stages. In the first stage, the model sets a series of constraints to calculate a proper timing for coordinating the two lines by adjusting the turnback time at the termini. Then, based on the result of the first stage, the model introduces a series of inequality constraints to avoid various route conflicts. With this model, an analysis is conducted to reveal the relation between the ratio of trains in different directions and the minimum feasible interval, showing that the more imbalanced the ratio, the harder it is to provide frequent service under such strict constraints.
Keywords: light rail transit (LRT), non-linear programming, railway timetabling, timetable coordination
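The route-conflict constraints of the second stage can be illustrated with a simple feasibility check on a shared segment: given each line's entry offset and interval, verify a minimum circular time separation over one timetable cycle. All numbers below are illustrative, not the case study's data:

```python
def entry_times(offset, interval, cycle):
    """Times at which a line's trains enter the shared segment within one cycle."""
    return sorted((offset + k * interval) % cycle for k in range(cycle // interval))

def conflict_free(times_a, times_b, cycle, min_sep):
    """True if every train of line A is at least min_sep minutes away
    (circularly, since the timetable repeats) from every train of line B."""
    for a in times_a:
        for b in times_b:
            gap = abs(a - b) % cycle
            if min(gap, cycle - gap) < min_sep:
                return False
    return True

# Two lines on a 12-minute cycle, one train every 6 minutes each,
# requiring 2 minutes of separation on the shared segment.
cycle, interval, min_sep = 12, 6, 2
line_a = entry_times(offset=0, interval=interval, cycle=cycle)   # [0, 6]
line_b = entry_times(offset=3, interval=interval, cycle=cycle)   # [3, 9]
print(conflict_free(line_a, line_b, cycle, min_sep))                          # True
print(conflict_free(line_a, entry_times(1, interval, cycle), cycle, min_sep)) # False
```

In the full model, the offsets are decision variables (shifted via turnback times at the termini) and these separation conditions become the inequality constraints of the non-linear program.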
Procedia PDF Downloads 94
3045 Branding and Posting Strategy on Facebook Pages of Higher Education Institutions in Ontario, Canada in 2019-2020: A Quantitative and Qualitative Investigation
Authors: Mai To
Abstract:
Higher education institutions (HEIs) in Ontario, Canada have invested in their social media presence for multiple purposes, such as branding, student engagement, and recruitment. To obtain a full picture of the social media strategies implemented by HEIs in Ontario, this study used a mixed-method approach to analyze the characteristics and content of Facebook posts. A total of 1,789 Facebook posts from September 2019 to April 2020 from six selected HEIs were collected for analysis and coded according to five pre-determined branding positions: Elite, Nurturing, Campus, Outcome, and Commodity. In addition, the study calculated the engagement rate for each social media practice to measure its effectiveness. The results show few differences among HEIs in practices such as posting frequency, length, type, and timing. However, the distribution of branding positions and of content targeting future versus current students varied, even though the HEIs employed all five branding positions and targeted the same audiences. Some practices, such as evening posts for colleges and nurturing branding for universities, attracted significantly higher engagement. This study reviews current social media practices and branding strategies and identifies the practices that better engage audiences.
Keywords: branding, higher education, social media, student engagement, student recruitment
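Engagement rate is commonly computed as interactions divided by audience size; the exact formula used in the study is not stated, so the sketch below assumes one common definition, with invented counts:

```python
def engagement_rate(post, followers):
    """Engagement rate as (reactions + comments + shares) / followers, in percent.
    This is one common definition; the study's exact formula may differ."""
    interactions = post["reactions"] + post["comments"] + post["shares"]
    return 100 * interactions / followers

# Hypothetical posts tagged with a branding position.
posts = [
    {"position": "Nurturing", "reactions": 120, "comments": 30, "shares": 10},
    {"position": "Elite",     "reactions": 40,  "comments": 5,  "shares": 5},
]
followers = 20000
for p in posts:
    print(p["position"], round(engagement_rate(p, followers), 2))
```

Averaging this rate per branding position, posting time, or post type is what allows practices (e.g. evening posts) to be compared for effectiveness.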
Procedia PDF Downloads 130
3044 FACTS Based Stabilization for Smart Grid Applications
Authors: Adel. M. Sharaf, Foad H. Gandoman
Abstract:
Nowadays, photovoltaic (PV) farms/parks and large PV-smart grid interface schemes are emerging and commonly utilized in renewable-energy distributed generation. However, PV hybrid DC-AC schemes using interfacing power electronic converters usually have a negative impact on power quality and on the stabilization of the modern electrical network under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages, as well as to limit switching/fault inrush current conditions. FACTS devices are also used in smart grid-battery interface and storage schemes with PV-battery storage hybrid systems, as an elegant alternative for renewable energy utilization with backup battery storage, supporting electric utility energy and demand-side management by providing the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-ion battery storage interface scheme for low-voltage distribution/utilization, using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are carried out in the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding hybrid linear, motorized-inrush and nonlinear loads from a DC-AC interface VSC 6-pulse inverter fed from the PV park/farm with a backup Li-ion storage battery.
Keywords: AC FACTS, smart grid, stabilization, PV-battery storage, switched filter-compensation (SFC)
Procedia PDF Downloads 414
3043 A Pipeline for Detecting Copy Number Variation from Whole Exome Sequencing Using Comprehensive Tools
Authors: Cheng-Yang Lee, Petrus Tang, Tzu-Hao Chang
Abstract:
Copy number variations (CNVs) play an important role in many human diseases, such as autism, schizophrenia and a number of cancers. Many disease-associated variants are found in genome coding regions, and whole exome sequencing (WES) is a cost-effective and powerful technology for detecting variants that are enriched in exons, with potential applications in the clinical setting. Although several algorithms have been developed to detect CNVs using WES, and have been compared against other algorithms on their authors' own samples, there has been no consistent dataset across algorithms with which to evaluate CNV detection ability. Moreover, most algorithms use a command-line interface, which may greatly limit the analysis capability of many laboratories. We created a series of simulated WES datasets from UCSC hg19 chromosome 22 and then evaluated the CNV detection ability of 19 algorithms from the OMICtools database on these datasets, computing the sensitivity, specificity and accuracy of each algorithm for validation of the exome-derived CNVs. After comparing the 19 algorithms, we constructed a platform that installs all of them in a virtual machine, such as VirtualBox, which can be conveniently set up on local computers, and created a simple script that makes it easy to detect CNVs with the algorithms selected by users. We also built a table detailing properties such as input requirements and CNV detection ability for all of the algorithms, providing users with a specification for choosing the optimal algorithm.
Keywords: whole exome sequencing, copy number variations, omictools, pipeline
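The three validation metrics can be computed directly from confusion-matrix counts. The counts below are hypothetical, not results from any of the 19 evaluated callers:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # fraction of simulated CNVs detected
    specificity = tn / (tn + fp)            # fraction of CNV-free regions kept clean
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical result of one caller against the simulated truth set:
# 45 of 50 simulated CNVs found, 10 false calls over 950 CNV-free regions.
sens, spec, acc = confusion_metrics(tp=45, fp=10, tn=940, fn=5)
print(sens, spec, acc)
```

Because simulated datasets fix the ground truth, every call made by a caller can be labeled TP/FP/TN/FN unambiguously, which is what makes a consistent cross-algorithm comparison possible.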
Procedia PDF Downloads 323
3042 Understanding Tourism Innovation through Fuzzy Measures
Authors: Marcella De Filippo, Delio Colangelo, Luca Farnia
Abstract:
In recent decades, hyper-competition in the tourism scenario has driven the maturation of many businesses, attributing a central role to innovative processes and their dissemination in company management. At the same time, it has created the need to monitor the application of innovations in order to govern and improve the performance of companies and destinations. The study aims to analyze and define innovation in the tourism sector. The research involved, on the one hand, in-depth interviews with experts, identifying innovation in terms of process and product, digitalization, and sustainability policies, and, on the other hand, an evaluation of the interaction between these factors, in terms of substitutability and complementarity in management scenarios, in order to identify which ones are essential to be competitive in the global scenario. Fuzzy measures and the Choquet integral were used to elicit the experts' preferences. This method allows not only the evaluation of the relative importance of each pillar but also, more interestingly, the level of interaction, ranging from complementarity to substitutability, between pairs of factors. The results of the survey are as follows: in terms of Shapley values, experts assert that innovation is the most important factor (32.32), followed by digitalization (31.86), network (20.57), and sustainability (15.25). In terms of interaction indices, given the low degree of consensus among experts, the interaction between pairs of criteria could on average be ignored; however, it is worth noting that innovation and digitalization are the factors for which experts express the highest degree of interaction. For some experts, these factors have a moderate level of complementarity (with a peak of 57.14), while others consider them moderately substitutive (with a peak of -39.58). Another, though outlying, example is the interaction between network and digitalization, in which one expert considers them markedly substitutive (-77.08).
Keywords: innovation, business model, tourism, fuzzy
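The preference-aggregation machinery named above can be sketched as a discrete Choquet integral with respect to a fuzzy measure on the criteria; the measure and scores below are illustrative values, not the study's elicited ones.

```python
# Hedged sketch of a discrete Choquet integral w.r.t. a fuzzy measure.
# The measure `mu` and the scores are made-up illustrative values.

def choquet(scores, mu):
    """Choquet integral of criterion scores w.r.t. fuzzy measure mu.
    mu maps frozensets of criteria to [0, 1], mu(empty) = 0, mu(all) = 1."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for criterion, value in items:
        total += (value - prev) * mu[frozenset(remaining)]
        prev = value
        remaining.discard(criterion)
    return total

# An additive measure (no interaction): the integral reduces to a weighted mean.
mu = {frozenset(): 0.0,
      frozenset({"innovation"}): 0.5,
      frozenset({"digitalization"}): 0.5,
      frozenset({"innovation", "digitalization"}): 1.0}
scores = {"innovation": 0.8, "digitalization": 0.6}
print(round(choquet(scores, mu), 3))  # 0.7
```

Setting the measure of the pair above the sum of the singleton measures models complementarity between the two criteria; setting it below models substitutability, which is exactly the interaction range discussed in the abstract.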
Procedia PDF Downloads 274
3041 Air Quality Assessment for a Hot-Spot Station by Neural Network Modelling of the near-Traffic Emission-Immission Interaction
Authors: Tim Steinhaus, Christian Beidl
Abstract:
Urban air quality and climate protection are two major challenges for future mobility systems. Despite the steady reduction of pollutant emissions from vehicles over past decades, the local immission load within cities still partially reaches levels that are considered hazardous to human health. Although traffic-related emissions account for a major part of overall urban pollution, modeling the exact interaction remains challenging. In this paper, a novel approach for determining the emission-immission interaction on the basis of neural network modeling of the traffic-induced NO2 immission load within a near-traffic hot-spot scenario is presented. In a detailed sensitivity analysis, the significance of relevant influencing variables on the prevailing NO2 concentration is first analyzed. Based on this, the generation process of the model is described, in which not only environmental influences but also the vehicle fleet composition, including its associated segment- and certification-specific real driving emission factors, are derived and used as input quantities. The validity of this approach, which has been presented in the past, is re-examined in this paper using updated data on vehicle emissions and recent immission measurement data. Within the framework of a final scenario analysis, the future development of the immission load is forecast for different developments in the vehicle fleet composition. It is shown that immission levels of less than half of today's yearly average limit values are technically feasible in hot-spot situations.
Keywords: air quality, emission, emission-immission-interaction, immission, NO2, zero impact
Procedia PDF Downloads 128
3040 Next-Gen Solutions: How Generative AI Will Reshape Businesses
Authors: Aishwarya Rai
Abstract:
This study explores the transformative influence of generative AI on startups, businesses, and industries. We explore how large businesses can benefit in the area of customer operations, where AI-powered chatbots can improve self-service and agent effectiveness, greatly increasing efficiency. In marketing and sales, generative AI could transform businesses by automating content development, data utilization, and personalization, resulting in a substantial increase in marketing and sales productivity. In software-engineering-focused startups, generative AI can streamline activities, significantly impacting coding processes and work experiences. It can be extremely useful in product R&D for market analysis, virtual design, simulations, and test preparation, altering old workflows and increasing efficiency. Zooming into the retail and CPG industry, industry findings suggest a 1-2% increase in annual revenues, equating to $400 billion to $660 billion. By automating customer service, marketing, sales, and supply chain management, generative AI can streamline operations, optimizing personalized offerings and presenting itself as a disruptive force. While celebrating this economic potential, we acknowledge challenges like external inference and adversarial attacks. Human involvement remains crucial for quality control and security in the era of generative AI-driven transformative innovation. This talk provides a comprehensive exploration of generative AI's pivotal role in reshaping businesses, recognizing its strategic impact on customer interactions, productivity, and operational efficiency.
Keywords: generative AI, digital transformation, LLM, artificial intelligence, startups, businesses
Procedia PDF Downloads 80
3039 Social Networks in Business: The Complex Concept of Wasta and the Impact of Islam on the Perception of This Practice
Authors: Sa'ad Ali
Abstract:
This study explores wasta as an example of a social network and how it impacts business practice in the Arab Middle East, drawing links with social network impact in different regions of the world. In doing so, particular attention will be paid to the socio-economic and cultural influences on business practice. In exploring relationships in business, concepts such as social network analysis, social capital and group identity are used to explore the different forms of social networks and how they influence business decisions and practices in the regions and countries where they prevail. The use of social networks to achieve objectives is known as guanxi in China, wasta in the Arab Middle East and blat in ex-Soviet countries. Wasta can be defined as favouritism based on tribal and family affiliation and is a widespread practice that has a substantial impact on political, social and business interactions in the Arab Middle East. Within the business context, it is used in several ways, such as to secure a job or promotion or to cut through bureaucracy in government interactions. The little research available is fragmented, and most studies reveal a negative attitude towards its usage in business. Paradoxically, while wasta is widely practised, people from the Arab Middle East often deny its influence. Moreover, despite the regular exhibition of a negative opinion on the practice of wasta, it can also be a source of great pride. This paper addresses this paradox by conducting a positional literature review, exploring the current literature on wasta and identifying how the identified paradox can be explained. The findings highlight how wasta, to a large extent, has been treated as an umbrella concept, whilst it is a highly complex practice which has evolved from intermediary wasta to intercessory wasta and therefore from bonding social capital relationships to more bridging social capital relationships. 
In addition, the research found that Islam, as the predominant religion in the region and the main source of ethical guidance for the majority of people from the region, plays a substantial role in this paradox. Specifically, it is submitted that wasta can be viewed positively in Islam when it is practised to aid others without breaking Islamic ethical guidelines, whilst it can be viewed negatively when it is used in contradiction with the teachings of Islam. As such, the unique contribution to knowledge of this study is that it ties together the fragmented literature on wasta, highlighting and helping us understand its complexity. In addition, it sheds light on the role of Islam in wasta practices, aiding our understanding of the paradoxical nature of the practice.
Keywords: Islamic ethics, social capital, social networks, Wasta
Procedia PDF Downloads 149
3038 Scientific Production on Lean Supply Chains Published in Journals Indexed by SCOPUS and Web of Science Databases: A Bibliometric Study
Authors: T. Botelho de Sousa, F. Raphael Cabral Furtado, O. Eduardo da Silva Ferri, A. Batista, W. Augusto Varella, C. Eduardo Pinto, J. Mimar Santa Cruz Yabarrena, S. Gibran Ruwer, F. Müller Guerrini, L. Adalberto Philippsen Júnior
Abstract:
Lean Supply Chain Management (LSCM) is an emerging research field in Operations Management (OM). As a strategic model that focuses on reducing cost and waste while fulfilling the needs of customers, LSCM attracts great interest among researchers and practitioners. The purpose of this paper is to present an overview of the Lean Supply Chain literature, based on a bibliometric analysis of 57 papers published in journals indexed by the SCOPUS and/or Web of Science databases. The results indicate that the last three years (2015, 2016, and 2017) were the most productive for LSCM discussion, especially in the journals Supply Chain Management and International Journal of Lean Six Sigma. India, the USA, and the UK are the most productive countries; moreover, cross-country studies based on collaboration among researchers were detected, by social network analysis, as a research practice that appears to play an increasingly important role in LSCM studies. Despite existing limitations, such as the restricted indexed journal database, bibliometric analysis helps to enlighten ongoing efforts in LSCM research, including the most used technical procedures and collaboration networks, and reveals important research gaps, especially for researchers from developing countries.
Keywords: Lean Supply Chains, Bibliometric Study, SCOPUS, Web of Science
Procedia PDF Downloads 349
3037 Constructions of Linear and Robust Codes Based on Wavelet Decompositions
Authors: Alla Levina, Sergey Taranov
Abstract:
The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are signal denoising, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first class is based on the multiplicative inverse in a finite field; in the second construction, the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.
Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability
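The second robust-code construction mentioned above (redundancy equal to the cube of the information part) can be sketched as follows. Note this is an illustrative stand-in, not the paper's exact construction: real schemes of this kind typically work in a binary extension field GF(2^m), whereas a small prime field is used here only to keep the arithmetic self-contained.

```python
# Illustrative sketch (not the paper's exact construction): a robust-code
# style check symbol computed as the cube of the information symbol in a
# finite field. A prime field GF(P) replaces the usual GF(2^m) for simplicity.

P = 251  # field size chosen for illustration only

def encode(x):
    """Append the redundancy symbol x^3 mod P to the information symbol x."""
    return (x, pow(x, 3, P))

def check(word):
    """A codeword is valid iff its redundancy equals the cube of its data part."""
    x, r = word
    return pow(x, 3, P) == r

cw = encode(17)
assert check(cw)
# a nonzero additive error on the redundancy part is detected:
corrupted = (cw[0], (cw[1] + 1) % P)
print(check(corrupted))  # False
```

Because the check function is nonlinear, no fixed error pattern is masked for all information words, which is the "uniform protection" property the abstract refers to.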
Procedia PDF Downloads 492
3036 Transformation of Positron Emission Tomography Raw Data into Images for Classification Using Convolutional Neural Network
Authors: Paweł Konieczka, Lech Raczyński, Wojciech Wiślicki, Oleksandr Fedoruk, Konrad Klimaszewski, Przemysław Kopka, Wojciech Krzemień, Roman Shopa, Jakub Baran, Aurélien Coussat, Neha Chug, Catalina Curceanu, Eryk Czerwiński, Meysam Dadgar, Kamil Dulski, Aleksander Gajos, Beatrix C. Hiesmayr, Krzysztof Kacprzak, łukasz Kapłon, Grzegorz Korcyl, Tomasz Kozik, Deepak Kumar, Szymon Niedźwiecki, Dominik Panek, Szymon Parzych, Elena Pérez Del Río, Sushil Sharma, Shivani Shivani, Magdalena Skurzok, Ewa łucja Stępień, Faranak Tayefi, Paweł Moskal
Abstract:
This paper develops the transformation of non-image data into 2-dimensional matrices as a preparation stage for classification based on convolutional neural networks (CNNs). In positron emission tomography (PET) studies, a CNN may be applied directly to the reconstructed distribution of radioactive tracers injected into the patient's body as a pattern recognition tool. Nonetheless, much PET data still exists in non-image format, and this fact raises the question of whether it can be used for training CNNs. The main focus of this contribution is the problem of processing vectors with a small number of features in comparison to the number of pixels in the output images. The proposed methodology was applied to the classification of PET coincidence events.
Keywords: convolutional neural network, kernel principal component analysis, medical imaging, positron emission tomography
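The general idea described above can be sketched minimally as packing a short feature vector into a zero-padded square matrix suitable for a CNN input; the exact mapping used by the authors is not specified here, so this is only an assumed simple layout.

```python
# Minimal sketch (assumed layout, not the authors' mapping): pack a feature
# vector into the smallest zero-padded square matrix that holds it, so it
# can be fed to a CNN as a 2-D input.
import math

def vector_to_matrix(features):
    side = math.ceil(math.sqrt(len(features)))
    padded = list(features) + [0.0] * (side * side - len(features))
    return [padded[i * side:(i + 1) * side] for i in range(side)]

m = vector_to_matrix([1.0, 2.0, 3.0, 4.0, 5.0])
print(len(m), len(m[0]))  # 3 3
```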
Procedia PDF Downloads 149
3035 Functional Profiling of a Circular RNA from the Huntingtin (HTT) Gene
Authors: Laura Gantley, Vanessa M. Conn, Stuart Webb, Kirsty Kirk, Marta Gabryelska, Duncan Holds, Brett W. Stringer, Simon J. Conn
Abstract:
Trinucleotide repeat disorders comprise ~20 severe, inherited human neuromuscular and neurodegenerative disorders, which result from an abnormal expansion of repetitive sequences in the DNA. The most common of these, Huntington's disease, results from the expansion of the CAG repeat region in exon 1 of the HTT gene via an unknown mechanism. Non-coding RNAs have been implicated in the initiation and progression of many diseases; thus, we focus on one circular RNA (circRNA) molecule arising from non-canonical splicing (back splicing) of HTT pre-mRNA. This circRNA and its mouse orthologue were transgenically overexpressed in human cells (SHSY-5Y and HEK293T) and mouse cells (Mb1), respectively. High-content imaging and flow cytometry demonstrated that overexpression of this circRNA reduces cell proliferation, reduces nuclear size independent of cellular size, and alters cell cycle progression. Analysis of protein by western blot and immunofluorescence demonstrated no change to HTT protein levels but an altered nuclear-cytoplasmic distribution, without impacting the expansion of the HTT repeat region. As these phenotypic and genotypic changes are found in Huntington's disease patients, these results suggest that this circRNA may play a functional role in the progression of Huntington's disease.
Keywords: cell biology, circular RNAs, Huntington's disease, molecular biology, neurodegenerative disorders
Procedia PDF Downloads 102
3034 Korean Smart Cities: Strategic Foci, Characteristics and Effects
Authors: Sang Ho Lee, Yountaik Leem
Abstract:
This paper reviews Korean cases of smart cities through an analysis framework of strategic foci, characteristics, and effects. Firstly, national strategies, including the c(cyber), e(electronic), u(ubiquitous), and s(smart) Korea strategies, were considered from a strategic angle. Secondly, the characteristics of smart cities in Korea were examined through examples such as Seoul, Busan, Songdo, and Sejong, from the perspective of STIM (Service, Technology, Infrastructure and Management) analysis. Finally, the effects of smart cities on socio-economies were investigated from an industrial perspective using the input-output model and structural path analysis. Korean smart city strategies revealed different kinds of strategic foci. The c-Korea strategy focused on building information and communications networks and on user IT literacy. The e-Korea strategy encouraged e-government and e-business by utilizing the high-speed information and communications network. The u-Korea strategy introduced ubiquitous services as well as integrated information and communication operations centers. The s-Korea strategy is propelling the 4th industrial platform. Smart cities in Korea showed their own features and trends, such as eco-intelligence, high-efficiency and low-cost oriented IoT, the citizen-sensored city, and the big data city. Smart city progress created new production chains fostering ICTs (Information Communication Technologies) and knowledge intermediate inputs to industries.
Keywords: Korean smart cities, Korean smart city strategies, STIM, smart service, infrastructure, technologies, management, effect of smart city
Procedia PDF Downloads 370
3033 A Comparative Study for Various Techniques Using WEKA for Red Blood Cells Classification
Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBCs) are the most common type of blood cell and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as "anemia". Abnormalities in RBCs affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying red blood cells as normal or abnormal (anemic) using WEKA. WEKA is an open-source suite of machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
Keywords: red blood cells, classification, radial basis function neural networks, support vector machine, k-nearest neighbors algorithm
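One of the compared classifiers, the k-nearest neighbors rule, can be sketched as below on toy geometric features; the feature names, values, and labels are invented for illustration, whereas the actual study ran WEKA's implementations on RBC image features.

```python
# Hedged sketch of a plain k-nearest-neighbours classifier on invented
# geometric features (e.g. circularity, relative area). Not the WEKA
# implementation or the study's real features.
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_tuple, label); query: feature tuple."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.95, 1.0), "normal"), ((0.93, 1.1), "normal"),
         ((0.60, 0.7), "abnormal"), ((0.55, 0.8), "abnormal")]
print(knn_predict(train, (0.9, 1.05)))  # normal
```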
Procedia PDF Downloads 482
3032 Processing and Modeling of High-Resolution Geophysical Data for Archaeological Prospection, Nuri Area, Northern Sudan
Authors: M. Ibrahim Ali, M. El Dawi, M. A. Mohamed Ali
Abstract:
In this study, the magnetic gradient survey and geoelectrical ground methods were used together to explore archaeological features in the Nuri pyramids area. The research methods, procedures, and methodologies applied are fully described. The magnetic survey method was used to search for archaeological features using a Geoscan FM36 fluxgate gradiometer. The study area was divided into a number of exactly equal squares (grids) of 20 × 20 meters, which were merged at the end of the study to give a major network for each region. Each grid was in turn sampled at intervals of typically 0.25 × 0.50 meters, in order to resolve archaeological features more specifically, including some small bipolar anomalies caused by buildings built from fired bricks. This resolution is important for detecting many archaeological features, such as rooms. The resulting network gives us an integrated map that is easy to present and allows all required operations using Geoscan Geoplot software. Parallel traverses were the main way of taking the magnetic survey readings, in order to obtain high-quality data. The study area is very rich in old buildings that vary from small to very large; owing to the sand dunes and loose soil, most of these buildings are not visible from the surface. Because of the dry, sandy soil, there was no electrical coupling between the ground surface and the electrodes. We tried to obtain electrical readings by adding salty water to the soil but, unfortunately, failed to confirm the magnetic readings with electrical readings as previously planned.
Keywords: archaeological features, independent grids, magnetic gradient, Nuri pyramid
Procedia PDF Downloads 485
3031 A Proposal to Tackle Security Challenges of Distributed Systems in the Healthcare Sector
Authors: Ang Chia Hong, Julian Khoo Xubin, Burra Venkata Durga Kumar
Abstract:
Distributed systems offer many benefits to the healthcare industry. From big data analysis to business intelligence, the increased computational power and efficiency of distributed systems serve as an invaluable resource for the healthcare sector. However, as the usage of these distributed systems increases, many issues arise. The main focus of this paper is on security issues. Many security issues stem from distributed systems in the healthcare industry, particularly in information security. Personal data is especially sensitive in the healthcare industry: if important information gets leaked (e.g., IC, credit card number, address), a person's identity, financial status, and safety might be compromised. This results in the responsible organization losing a lot of money compensating these people, with even more resources expended trying to fix the fault. Therefore, a framework for a blockchain-based healthcare data management system is proposed. In this framework, a blockchain network is used to store the encryption key of the patient's data, while the data itself is encrypted and the resulting ciphertext is stored on a cloud storage platform. Furthermore, some issues have to be emphasized and tackled in future improvements, such as proposing a multi-user scheme, tackling authentication issues, or migrating the backend processes into the blockchain network. Due to the nature of blockchain technology, the data will be tamper-proof, and its read-only function can only be accessed by authorized users such as doctors and nurses. This guarantees the confidentiality and immutability of the patient's data.
Keywords: distributed, healthcare, efficiency, security, blockchain, confidentiality and immutability
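The data flow described above (key on chain, ciphertext in the cloud) can be sketched as a toy end-to-end example. Everything here is a stand-in: the XOR keystream "cipher" replaces a real scheme such as AES, and plain Python containers mock the blockchain and the cloud store; none of it is the paper's actual design and it must not be used for real security.

```python
# Toy illustration only: patient data is encrypted, the ciphertext goes to
# (mock) cloud storage, and the key is recorded on a (mock) blockchain.
# The hash-counter XOR keystream is a stand-in for a real cipher like AES.
import hashlib, secrets

def keystream(key, n):
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key, data):
    """Symmetric: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)
record = b"patient: 123; diagnosis: ..."

cloud_storage = {"rec-1": xor_crypt(key, record)}          # ciphertext only
blockchain = [{"rec": "rec-1", "key": key.hex()}]          # key kept on chain

# an authorized reader fetches the key from the chain and decrypts
reader_key = bytes.fromhex(blockchain[0]["key"])
print(xor_crypt(reader_key, cloud_storage["rec-1"]) == record)  # True
```

The design point this illustrates is the separation of concerns in the framework: the cloud provider never sees plaintext, while the (access-controlled) chain holds only the small key material.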
Procedia PDF Downloads 188
3030 Application of Raman Spectroscopy for Ovarian Cancer Detection: Comparative Analysis of Fresh, Formalin-Fixed, and Paraffin-Embedded Samples
Authors: Zeinab Farhat, Nicolas Errien, Romuald Wernert, Véronique Verriele, Frédéric Amiard, Philippe Daniel
Abstract:
Ovarian cancer, also known as the silent killer, is the fifth most common cancer among women worldwide, and its death rate is higher than that of other gynecological cancers. The low survival rate of women with high-grade serous ovarian carcinoma highlights the critical need for new methods of early detection and diagnosis of the disease. The aim of this study was to evaluate whether Raman spectroscopy combined with chemometric methods, such as Principal Component Analysis (PCA), could differentiate between cancerous and normal tissues across different types of samples: paraffin-embedded, chemically deparaffinized, formalin-fixed, and fresh samples of the same normal and malignant ovarian tissue. The method was applied specifically to two critical spectral regions: the signature region (860-1000 cm⁻¹) and the high-frequency region (2800-3100 cm⁻¹). The mean spectra of paraffin-embedded normal and malignant tissues showed almost identical intensities. On the other hand, the mean spectra of normal and cancer tissues from chemically deparaffinized, formalin-fixed, and fresh samples show significant intensity differences. These spectral differences reflect variations in the molecular composition of the tissues, particularly lipids and proteins. PCA, applied to distinguish cancer from normal tissues, was performed on whole spectra and on the selected regions. The PCA score plot of paraffin-embedded samples shows considerable overlap between the two groups; however, the PCA scores of chemically deparaffinized, formalin-fixed, and fresh samples showed good discrimination of the tissue types. Our findings were validated by analyzing a set of samples whose status (normal or cancerous) was not previously known. The results of this study suggest that Raman spectroscopy combined with PCA methods has the capacity to provide clinically significant differentiation between normal and cancerous ovarian tissues.
Keywords: Raman spectroscopy, ovarian cancer, signal processing, Principal Component Analysis, classification
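The PCA score-plot idea described above can be sketched as follows. The "spectra" here are synthetic stand-ins (Gaussian noise with a constant offset between groups), not Raman measurements, and the projection is computed via SVD rather than any particular chemometrics package.

```python
# Minimal PCA sketch on synthetic stand-in "spectra" (not Raman data):
# mean-centre, take an SVD, and project onto the first two principal
# components to obtain score-plot coordinates.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, (10, 50))          # 10 synthetic "normal" spectra
cancer = rng.normal(0.0, 0.1, (10, 50)) + 1.0    # 10 offset "cancer" spectra
X = np.vstack([normal, cancer])

Xc = X - X.mean(axis=0)                          # mean-centre
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                           # PC1/PC2 score-plot coordinates

# the two groups separate cleanly along PC1
separation = abs(scores[:10, 0].mean() - scores[10:, 0].mean())
print(separation > 1.0)  # True
```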
Procedia PDF Downloads 32
3029 Mathematical Modelling and AI-Based Degradation Analysis of the Second-Life Lithium-Ion Battery Packs for Stationary Applications
Authors: Farhad Salek, Shahaboddin Resalati
Abstract:
The production of electric vehicles (EVs) featuring lithium-ion battery technology has substantially escalated over the past decade, demonstrating a steady and persistent upward trajectory. The imminent retirement of EV batteries after approximately eight years underscores the critical need for their redirection towards recycling, a task complicated by the current inadequacy of recycling infrastructures globally. A potential solution to such concerns involves extending the operational lifespan of EV batteries through their utilization in stationary energy storage systems in secondary applications. Such adoption, however, requires addressing the safety concerns associated with the batteries' knee points and thermal runaway. This paper develops an accurate mathematical model representative of second-life battery packs at cell-to-pack scale using an equivalent circuit model (ECM) methodology. Neural network algorithms are employed to forecast the degradation parameters based on the EV batteries' aging history in order to develop a degradation model. The degradation model is integrated with the ECM to reflect the impacts of the cycle aging mechanism on battery parameters during operation. The developed model is tested under real-life load profiles to evaluate the lifespan of the batteries in various operating conditions. The methodology and the algorithms introduced in this paper can be considered the basis for Battery Management System (BMS) design and techno-economic analysis of such technologies.
Keywords: second life battery, electric vehicles, degradation, neural network
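An ECM of the general kind referenced above can be sketched as a first-order Thevenin model (one series resistance plus one RC pair) stepped in discrete time. The parameter values and the pulse profile are illustrative assumptions, not fitted to any battery or taken from the paper.

```python
# Hedged sketch of a first-order equivalent circuit model (ECM): series
# resistance R0 plus one RC pair. All parameter values are illustrative.
import math

def ecm_step(v_rc, i_load, dt, r0=0.01, r1=0.015, c1=2000.0, ocv=3.7):
    """Advance the RC-branch voltage one step; return (v_rc, terminal voltage).

    Uses the exact discretization of dv/dt = -v/(R1*C1) + i/C1 over dt.
    """
    tau = r1 * c1
    a = math.exp(-dt / tau)
    v_rc = v_rc * a + r1 * (1 - a) * i_load
    v_term = ocv - i_load * r0 - v_rc       # terminal voltage under load
    return v_rc, v_term

v_rc = 0.0
for _ in range(60):                          # 60 s of a constant 10 A discharge
    v_rc, v_term = ecm_step(v_rc, 10.0, 1.0)
print(round(v_term, 3))  # 3.47
```

Degradation modelling of the sort the abstract describes would then amount to letting a learned model update parameters such as `r0` and `r1` as functions of the pack's aging history.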
Procedia PDF Downloads 69