Search results for: secure hashing algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4223

2333 An Effective Route to Control of the Safety of Accessing and Storing Data in the Cloud-Based Data Base

Authors: Omid Khodabakhshi, Amir Rozdel

Abstract:

Cloud computing security research faces a number of challenges because data centers hold large volumes of sensitive private information and are constantly exposed to the risk of information disclosure through hacker attacks or malicious insiders. Accordingly, the security of virtual machines in the cloud computing infrastructure layer is very important. So far, many software solutions have been proposed to improve the security of virtual machines, but software alone is not enough to solve the security problems. The purpose of this article is to examine the challenges and security requirements for accessing and storing data in an insecure cloud environment. In other words, this article proposes a structure for executing highly isolated, security-sensitive code using trusted computing hardware in virtual environments. It also allows remote validation of the code together with its inputs and outputs. We provide these security features even in situations where the BIOS, the operating system, and even the hypervisor are compromised. To achieve these goals, we use the hardware support provided by recent Intel and AMD processors, as well as the TPM security chip. In conclusion, the use of these technologies establishes a dynamic root of trust and reduces the trusted computing base (TCB) to the security-sensitive code itself.
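
The measured-launch idea behind a dynamic root of trust can be illustrated with a short sketch: each security-sensitive code region is hashed and folded into a TPM Platform Configuration Register (PCR) by the standard extend operation, so a remote verifier can later compare the accumulated digest against an expected reference value. The Python sketch below is a minimal software illustration of that extend-and-verify flow; the code regions and the expected digest are hypothetical, and a real deployment would use the TPM hardware (for example, via Intel TXT or AMD SVM) rather than plain software hashing.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def measure(code_blob: bytes) -> bytes:
    """Measurement of a security-sensitive code region (its SHA-256 digest)."""
    return hashlib.sha256(code_blob).digest()

# Hypothetical security-sensitive code regions loaded into the virtual machine.
sensitive_regions = [b"attestation stub", b"key-management routine"]

pcr = bytes(32)  # PCR starts zeroed when the dynamic root of trust is established
for region in sensitive_regions:
    pcr = pcr_extend(pcr, measure(region))

# A remote verifier holding the expected digest can now validate the launch.
expected_pcr = pcr  # in practice obtained from a trusted reference measurement
print("attestation ok" if pcr == expected_pcr else "attestation failed")
```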

Keywords: code, cloud computing, security, virtual machines

Procedia PDF Downloads 188
2332 Food Traceability for Small and Medium Enterprises Using Blockchain Technology

Authors: Amit Kohli, Pooja Lekhi, Gihan Adel Amin Hafez

Abstract:

Blockchain is a distributed ledger technology that has extended to many fields and achieved remarkable success. Blockchain technology is a vital enabling technique that strengthens the food supply chain traceability process. Traceability is at the core of the food supply chain, yet it remains a complex system that must mitigate the exceptional risks of food contamination, foodborne illness, food waste, and food fraud. In addition, the growing variance and variety of food supply chain data in the traceability system require complete transparency and a secure, steadfast, sustainable, and efficient approach to face food supply chain challenges. On the other hand, the technical aspects of blockchain, together with a detailed implementation plan and the advantages and challenges of food traceability, have not been much elucidated for small and medium enterprises (SMEs). This paper demonstrates the advantages and challenges of applying blockchain in SMEs, combined with success stories of firms implementing blockchain, to cover this gap. Moreover, a blockchain architecture for SMEs is presented, and it is shown how the technology-organization-environment framework can support successful blockchain implementation.

Keywords: blockchain technology, small and medium enterprises, food traceability, blockchain architecture

Procedia PDF Downloads 185
2331 A New Approach in a Problem of a Supersonic Panel Flutter

Authors: M. V. Belubekyan, S. R. Martirosyan

Abstract:

Using the example of an elastic rectangular plate streamlined by a supersonic gas flow overrunning its free edge, we investigate the phenomena of divergence and panel flutter under the assumption of concentrated inertial masses and moments at the free edge. We apply a new approach to solving these problems, developed on the basis of an algorithm for finding analytical solutions. This algorithm is easy to use in theoretical studies for a wide class of nonconservative problems of linear elastic stability. We establish the relation between the characteristics of the natural vibrations of the plate and the velocity of the streamlining gas flow, which makes it possible to draw conclusions on the stability of the disturbed motion of the plate depending on the parameters of the plate-flow system. The solution shows that either divergence, or localized divergence, or flutter instability is possible. The regions of stability and instability in the parameter space of the problem are identified. We investigate the dynamic behavior of the disturbed motion of the panel near the boundaries of the stability region. The safe and dangerous boundaries of the stability region are found. Crossing a safe boundary of the stability region leads to divergence or localized divergence arising in the vicinity of the free edge of the rectangular plate. Crossing a dangerous boundary leads to panel flutter. The deformations arising at flutter are more dangerous to the skin of modern aircraft and rockets, resulting in loss of strength and the appearance of fatigue cracks.

Keywords: stability, elastic plate, divergence, localized divergence, supersonic panel flutter

Procedia PDF Downloads 458
2330 THz Phase Extraction Algorithms for a THz Modulating Interferometric Doppler Radar

Authors: Shaolin Allen Liao, Hual-Te Chien

Abstract:

Various THz phase extraction algorithms have been developed for a novel THz Modulating Interferometric Doppler Radar (THz-MIDR) recently developed by the author. The THz-MIDR differs from the well-known FTIR technique in that it introduces a continuously modulating reference branch, in contrast to the time-consuming discrete stepping reference branch of FTIR. This change allows real-time tracking of a moving object and capture of its Doppler signature. The working principle of the THz-MIDR is similar to the FTIR technique: the incoming THz emission from the scene is split by a beam splitter/combiner; one of the beams is continuously modulated by a vibrating mirror or phase modulator, while the other split beam is reflected by a reflection mirror; finally, both the modulated reference beam and the reflected beam are combined by the same beam splitter/combiner and detected by a THz intensity detector (for example, a pyroelectric detector). In order to extract the THz phase from the single intensity measurement signal, we have derived rigorous mathematical formulas for three Frequency Banded (FB) signals: 1) the DC Low-Frequency Banded (LFB) signal; 2) the Fundamental Frequency Banded (FFB) signal; and 3) the Harmonic Frequency Banded (HFB) signal. The THz phase extraction algorithms are then developed based on combinations of two or all three of these FB signals, together with efficient methods such as the Levenberg-Marquardt nonlinear fitting algorithm. Numerical simulation has also been performed in Matlab with simulated THz-MIDR interferometric signals at various signal-to-noise ratios (SNR) to verify the algorithms.
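
As a rough illustration of the fitting step, the sketch below simulates a single-detector interferometric intensity produced by a sinusoidally modulated reference phase and recovers the unknown scene phase with the Levenberg-Marquardt algorithm (SciPy's `curve_fit` with `method='lm'`). The signal model, modulation depth, and noise level are illustrative assumptions, not the rigorous frequency-banded formulas derived in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def midr_intensity(t, amplitude, phase, offset, f_mod=1e3, beta=2.0):
    """Toy THz-MIDR detector model: interference of a modulated reference
    beam (phase beta*sin(2*pi*f_mod*t)) with a scene beam of unknown phase."""
    return offset + amplitude * np.cos(beta * np.sin(2 * np.pi * f_mod * t) - phase)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5e-3, 2000)
true_params = (1.0, 0.7, 2.0)                        # amplitude, phase (rad), offset
signal = midr_intensity(t, *true_params)
noisy = signal + 0.05 * rng.standard_normal(t.size)  # SNR set by the noise scale

# Levenberg-Marquardt nonlinear fit of the unknown amplitude, phase, and offset.
popt, _ = curve_fit(midr_intensity, t, noisy, p0=(0.8, 0.5, 1.5), method="lm")
print(f"estimated phase = {popt[1]:.3f} rad (true = {true_params[1]:.3f})")
```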

Keywords: algorithm, modulation, THz phase, THz interferometry Doppler radar

Procedia PDF Downloads 339
2329 Efficient Fuzzy Classified Cryptographic Model for Intelligent Encryption Technique towards E-Banking XML Transactions

Authors: Maher Aburrous, Adel Khelifi, Manar Abu Talib

Abstract:

Transactions performed by financial institutions on a daily basis require XML encryption on a large scale. Fully encrypting large volumes of messages results in both performance and resource issues. In this paper, a novel approach is presented for securing financial XML transactions using classification data mining (DM) algorithms. Our strategy defines the complete process of classifying XML transactions by using a set of classification algorithms; the classified XML documents are then processed at a later stage using element-wise encryption. Classification algorithms were used to identify the XML transaction rules and factors in order to classify the message content and fetch the important elements within it. We implemented four classification algorithms to fetch the importance level value within each XML document. Classified content is processed using element-wise encryption for selected parts with "High", "Medium", or "Low" importance level values. Element-wise encryption is performed using the AES symmetric encryption algorithm and a proposed modified AES algorithm to overcome the problem of computational overhead, in which the substitute-byte and shift-row steps remain as in the original AES while the mix-column operation is replaced by a 128-bit permutation operation followed by the add-round-key operation. An implementation has been conducted using a data set fetched from an e-banking service to demonstrate system functionality and efficiency. Results from our implementation showed a clear improvement in the processing time for encrypting XML documents.
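
A compressed illustration of the element-wise idea is given below: XML elements are first assigned an importance label (here by a hypothetical keyword rule standing in for the trained classifiers), and only the "High" and "Medium" elements are encrypted. The sketch uses standard AES-GCM from the `cryptography` package rather than the modified AES round described in the paper, so it should be read as an outline of the pipeline, not the authors' cipher.

```python
import os
import xml.etree.ElementTree as ET
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def classify(element: ET.Element) -> str:
    """Stand-in for the data-mining classifiers: label elements by importance."""
    sensitive = {"account", "amount", "beneficiary"}      # hypothetical rule
    return "High" if element.tag in sensitive else "Low"

def encrypt_selected(xml_text: str, key: bytes) -> str:
    """Element-wise encryption: only high-importance element content is encrypted."""
    aead = AESGCM(key)
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        if classify(elem) in ("High", "Medium") and elem.text:
            nonce = os.urandom(12)
            ciphertext = aead.encrypt(nonce, elem.text.encode(), None)
            elem.text = (nonce + ciphertext).hex()
            elem.set("encrypted", "true")
    return ET.tostring(root, encoding="unicode")

key = AESGCM.generate_key(bit_length=128)
tx = "<transaction><account>12345</account><amount>250.00</amount><note>rent</note></transaction>"
print(encrypt_selected(tx, key))
```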

Keywords: XML transaction, encryption, Advanced Encryption Standard (AES), XML classification, e-banking security, fuzzy classification, cryptography, intelligent encryption

Procedia PDF Downloads 405
2328 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently motivated widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers, and very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at the parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable knowledge of transport data is also important for the optimized design of processes and apparatus in various engineering and science fields (thermoelectric devices); in particular, the provision of precise data for the parameters of heat, mass, and momentum transport is required. One promising computational technique, homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is overviewed with special emphasis on its application to transport problems of complex liquids. This work is particularly motivated by being, to our knowledge, the first to recast the heat conduction equations into a polynomial velocity- and temperature-profile algorithm for the investigation of transport properties and their nonlinear behavior in NICDPLs. The aim of the proposed work is to implement a NEMD (Poiseuille flow) algorithm and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble ≡ NVT). The output steps are taken between 3.0×10⁵/ωₚ and 1.5×10⁵/ωₚ simulation time steps for the computation of the λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters and that the position of the minimum value λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from the known plasma λ₀ by 2%-20%, depending on Γ and κ. It is shown that the results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. This algorithm shows that the new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
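
For readers unfamiliar with Yukawa (dusty-plasma) liquids, the sketch below shows the screened pair interaction that such simulations integrate, written in the reduced units commonly used for these systems (distances in Wigner-Seitz radii). It is only the potential/force kernel for an assumed screening parameter κ; the full HNEMD machinery (Gaussian thermostat, perturbed equations of motion, heat-current evaluation) is well beyond a few lines.

```python
import numpy as np

def yukawa_pair(r, kappa):
    """Reduced Yukawa pair potential u(r) = exp(-kappa*r)/r and its radial force -du/dr."""
    u = np.exp(-kappa * r) / r
    f = (1.0 + kappa * r) * np.exp(-kappa * r) / r**2
    return u, f

def total_potential_energy(positions, kappa):
    """Sum of pair energies for a small particle configuration (no periodic images)."""
    energy = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            u, _ = yukawa_pair(r, kappa)
            energy += u
    return energy

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(50, 3))     # 50 particles in a 10 x 10 x 10 box
print("U =", total_potential_energy(pos, kappa=2.0))
```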

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 270
2327 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing, which consist of mixing, component preparation, building, and curing, the mixing process is an essential and important step because the main component of a tire, called the compound, is formed at this step. The compound, a rubber synthesis with various characteristics, plays its own role required for the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, there are few research works dealing with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research. As a performance measure, we define an error rate that evaluates the difference between two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes such as building and curing. We can also extend the current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
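
The particle-update loop at the heart of such a method can be summarized by a generic PSO skeleton like the one below; the objective here is a placeholder makespan function over a job order decoded from continuous particle positions (the random-key idea), not the authors' full encoding with machine allocation and sequence-dependent setup times.

```python
import numpy as np

rng = np.random.default_rng(42)
proc_times = np.array([4.0, 2.0, 7.0, 3.0, 5.0, 6.0])   # hypothetical job times
n_jobs, n_machines = len(proc_times), 2

def makespan(position):
    """Decode a continuous position into a job order (random keys), then
    assign jobs greedily to the earliest-free machine and return the makespan."""
    order = np.argsort(position)
    finish = np.zeros(n_machines)
    for job in order:
        m = np.argmin(finish)
        finish[m] += proc_times[job]
    return finish.max()

n_particles, n_iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
x = rng.uniform(0, 1, (n_particles, n_jobs))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([makespan(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iters):
    r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
    x = x + v                                                    # position update
    vals = np.array([makespan(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best makespan:", pbest_val.min())
```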

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 261
2326 Assessment of Exploitation Vulnerability of Quantum Communication Systems with Phase Encryption

Authors: Vladimir V. Nikulin, Bekmurza H. Aitchanov, Olimzhon A. Baimuratov

Abstract:

Quantum communication technology takes advantage of the intrinsic properties of laser carriers, such as very high data rates and low power requirements, to offer unprecedented data security. Quantum processes at the physical layer of encryption are used for signal encryption with very competitive performance characteristics. The ultimate range of applications for QC systems spans from fiber-based to free-space links and from secure banking operations to mobile airborne and space-borne networking where they are subjected to channel distortions. Under practical conditions, the channel can alter the optical wave front characteristics, including its phase. In addition, phase noise of the communication source and photo-detection noises alter the signal to bring additional ambiguity into the measurement process. If quantized values of photons are used to encrypt the signal, exploitation of quantum communication links becomes extremely difficult. In this paper, we present the results of analysis and simulation studies of the effects of noise on phase estimation for quantum systems with different number of encryption bases and operating at different power levels.
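
The effect of phase and detection noise on recovering an encoded phase can be illustrated with a short Monte Carlo sketch: a phase symbol drawn from one of several encryption bases is perturbed by Gaussian phase noise, measured through noisy in-phase and quadrature intensities, and the resulting decoding error rate is tallied. The signal model and noise parameters below are illustrative assumptions, not the analysis carried out in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def symbol_error_rate(n_bases, phase_noise_std, detector_noise_std, n_trials=20000):
    """Fraction of phase symbols decoded incorrectly under phase and detector noise."""
    symbols = rng.integers(0, n_bases, n_trials)
    phi = 2 * np.pi * symbols / n_bases
    phi_noisy = phi + rng.normal(0.0, phase_noise_std, n_trials)      # source/channel phase noise
    i_meas = np.cos(phi_noisy) + rng.normal(0.0, detector_noise_std, n_trials)
    q_meas = np.sin(phi_noisy) + rng.normal(0.0, detector_noise_std, n_trials)
    phi_est = np.mod(np.arctan2(q_meas, i_meas), 2 * np.pi)
    decoded = np.rint(phi_est / (2 * np.pi / n_bases)).astype(int) % n_bases
    return np.mean(decoded != symbols)

for bases in (2, 4, 8):
    print(bases, "bases:", symbol_error_rate(bases, 0.15, 0.1))
```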

Keywords: encryption, phase distortion, quantum communication, quantum noise

Procedia PDF Downloads 550
2325 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and the oil-immersed transformer is widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. There is a need for accurate prediction of incipient faults in transformer oil in order to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer faults. In this study, a machine learning technique was employed using gradient descent optimization and a Support Vector Machine (SVM) to predict incipient transformer faults. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach consists of two phases: a training phase and a testing phase. The gradient descent algorithm is trained with a training dataset, while the learned model is applied to a set of new data. These two datasets are used to evaluate the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on a Support Vector Machine (SVM) and gradient descent optimization is presented, with a satisfactory diagnostic capability and a higher accuracy in predicting incipient transformer faults than existing diagnostic methods.
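
The training setup described above, an SVM fitted by gradient descent on dissolved-gas features, can be sketched with scikit-learn's `SGDClassifier` (hinge loss, i.e. a linear SVM trained by stochastic gradient descent). The gas concentrations and fault labels below are synthetic placeholders, not DGA data from the study.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic dissolved-gas features (e.g. H2, CH4, C2H2, C2H4, C2H6 in ppm)
n = 400
healthy = rng.normal([50, 40, 1, 30, 40], 10, (n // 2, 5))
faulty = rng.normal([300, 120, 60, 150, 60], 30, (n // 2, 5))
X = np.vstack([healthy, faulty])
y = np.array([0] * (n // 2) + [1] * (n // 2))    # 0 = normal, 1 = incipient fault

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# Linear SVM (hinge loss) trained by stochastic gradient descent.
model = make_pipeline(StandardScaler(), SGDClassifier(loss="hinge", max_iter=2000, tol=1e-4))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```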

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 317
2324 Digital Platform for Psychological Assessment Supported by Sensors and Efficiency Algorithms

Authors: Francisco M. Silva

Abstract:

Technology is evolving, creating an impact on our everyday lives and on the telehealth industry. Telehealth encapsulates the provision of healthcare services and information via a technological approach. There are several benefits of using web-based methods to provide healthcare help; nonetheless, few health and psychological help approaches combine this method with wearable sensors. This paper aims to create an online platform for users to receive self-care help and information using wearable sensors. In addition, researchers developing a similar project obtain a solid foundation as a reference. This study provides descriptions and analyses of the software and hardware architecture. It exhibits and explains a dynamic and efficient heart rate algorithm that continuously calculates the desired sensor values, and presents diagrams that illustrate the website deployment process and the way the web server handles the sensors' data. The goal is to create a working project using Arduino-compatible hardware. Heart rate sensors send their data values to an online platform: a microcontroller board uses an algorithm to calculate the heart rate from the sensor readings and outputs it to a web server. The platform visualizes the sensor data, summarizes it in a report, and creates alerts for the user. Results showed a solid project structure and reliable communication between the hardware and software. The web server displays the conveyed heart rate sensor data on the online platform, presenting observations and evaluations.
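
The heart-rate step of such an algorithm is easy to outline: detect beats as rising threshold crossings of the pulse-sensor waveform and convert the average inter-beat interval into beats per minute. The sketch below does this over a synthetic sample stream; on the actual platform this logic would run on the Arduino-compatible microcontroller before the value is posted to the web server, and the threshold and sampling rate are assumptions.

```python
import numpy as np

def beats_per_minute(samples, sample_rate_hz, threshold):
    """Estimate BPM from rising threshold crossings of a pulse-sensor waveform."""
    above = samples > threshold
    rising_edges = np.flatnonzero(~above[:-1] & above[1:])   # beat onsets
    if len(rising_edges) < 2:
        return 0.0
    mean_interval_s = np.mean(np.diff(rising_edges)) / sample_rate_hz
    return 60.0 / mean_interval_s

# Synthetic 10 s pulse waveform at 72 BPM, sampled at 100 Hz with a little noise.
rng = np.random.default_rng(3)
fs, bpm_true = 100, 72
t = np.arange(0, 10, 1 / fs)
waveform = np.sin(2 * np.pi * (bpm_true / 60) * t) + 0.05 * rng.standard_normal(t.size)
print("estimated BPM:", round(beats_per_minute(waveform, fs, threshold=0.5), 1))
```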

Keywords: Arduino, heart rate BPM, microcontroller board, telehealth, wearable sensors, web-based healthcare

Procedia PDF Downloads 119
2323 Scheduling Method for Electric Heater in HEMS considering User’s Comfort

Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim

Abstract:

Home Energy Management Systems (HEMS), which enable residential consumers to contribute to demand response, have been attracting attention in recent years. The aim of a HEMS is to minimize the electricity cost by controlling the use of appliances according to the electricity price. The use of appliances in a HEMS may be affected by conditions such as the external temperature and the electricity price. Therefore, the user's usage pattern of appliances should be modeled according to the external conditions, and the resulting usage pattern is related to the user's comfort in using each appliance. This paper proposes a methodology to model the usage pattern based on historical data with the copula function. Through the copula function, the usage range of each appliance can be obtained so as to satisfy an appropriate level of user comfort for the external conditions of the next day. Within this usage range, optimal scheduling of the appliances is conducted so as to minimize the electricity cost while considering user comfort. Among home appliances, the electric heater (EH) is a representative appliance that is affected by the external temperature. In this paper, an optimal scheduling algorithm for an electric heater (EH) is addressed based on the branch and bound method. As a result, scenarios for EH usage are obtained according to the user's comfort level, from which the residential consumer can select the best scenario. The case study shows the effects of the proposed algorithm compared with the traditional operation of the EH, and it also shows the impact of the comfort level on the scheduling result.

Keywords: load scheduling, usage pattern, user’s comfort, copula function, branch and bound, electric heater

Procedia PDF Downloads 579
2322 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mixture of power outputs of all generating units in the power system network such that the total fuel cost is minimized while the operational limits remain satisfied across all dispatch periods. Many optimization techniques have been proposed to solve this problem. A famous one is Quadratic Programming (QP). QP is a very simple and fast method, but, like other gradient-based methods, it may become trapped in local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified on three- and six-generator test systems. The obtained results are compared to existing results based on QP, GAs, and PSO. The developed results show that differential evolution is superior in obtaining a combination of power outputs that fulfills the problem constraints and minimizes the total fuel cost. DE was found to converge quickly to the optimal power generation levels and to be capable of handling the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission, and maximize the reliability of the power provided to the customers.
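
A compact way to reproduce the idea on a toy case is SciPy's `differential_evolution` applied to a three-generator quadratic fuel-cost function with the demand balance enforced by a penalty term. The cost coefficients, generator limits, and demand below are illustrative values, not the test systems used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical quadratic fuel-cost coefficients a*P^2 + b*P + c for three units
a = np.array([0.008, 0.009, 0.007])
b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0])
bounds = [(100, 600), (50, 200), (50, 300)]      # generator limits in MW
demand = 700.0                                   # total load in MW

def total_cost(p):
    fuel = np.sum(a * p**2 + b * p + c)
    imbalance = abs(np.sum(p) - demand)          # power-balance constraint
    return fuel + 1e4 * imbalance                # penalty for constraint violation

result = differential_evolution(total_cost, bounds, seed=3, tol=1e-8)
print("dispatch (MW):", np.round(result.x, 1), " cost:", round(result.fun, 1))
```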

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 280
2321 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization coupled with changes in land use patterns results in increasing peak discharge and a shortening of the catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise, so least-cost design of storm water networks assumes significance, particularly when the funds available are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components such as open or closed conduits, storage units, and pumps. In this paper, a methodology for least-cost design of storm water drainage systems is proposed. The methodology consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's storm water management model (SWMM), which is linked with the Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation that minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost-effective design of open-conduit-based storm water networks.

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 245
2320 A Conceptual Framework for the Adoption of Information and Communication Technology for Anti-Corruption in the DR Congo

Authors: Itulelo Matiyabu Imaja, Patrick Ndayizigamiye, Manoj Maharaj

Abstract:

There are many catalysts of corruption, including, amongst others, a lack of effective control measures to deter or detect corrupt behaviour. Literature suggests that ICT could assist in curbing corruption through the implementation of automated systems, citizens' engagement through e-government, and online media, to name a few. In the Democratic Republic of Congo, a lack of transparency and accountability in public funds collection and allocation contributes to corruption through funds mismanagement. Using accountability theory and the available literature, this paper analyses how Democratic Republic of Congo (DRC) institutions could be strengthened through ICT in order to deter instances of corruption. Findings reveal that the DRC lacks reliable control, monitoring, and evaluation mechanisms that could identify potentially corrupt behaviour. In addition, citizens and civil society organizations, who are meant to hold the institutions accountable, are not given a secure platform to express their views and potentially flag corrupt behaviour. Hence, the paper presents a preliminary conceptual framework that depicts how ICT could be used to strengthen current institutions and deter corrupt behaviour in public funds management in the DRC.

Keywords: corruption, ICT adoption, transparency, DR Congo

Procedia PDF Downloads 181
2319 Lightweight Cryptographically Generated Address for IPv6 Neighbor Discovery

Authors: Amjed Sid Ahmed, Rosilah Hassan, Nor Effendy Othman

Abstract:

The limited capabilities of Internet Protocol version 4 (IPv4) necessitated the development of the next-generation Internetworking Protocol (IPng) to overcome its challenges. The IPng, also referred to as Internet Protocol version 6 (IPv6), includes the Neighbor Discovery Protocol (NDP). The latter performs the roles of address auto-configuration, Router Discovery (RD), and Neighbor Discovery (ND). Furthermore, the role of the NDP also entails redirecting the service, detecting duplicate addresses, and detecting unreachable services. Although NDP assumes that the nodes on a link trust one another, several crucial attacks may affect the protocol. The Internet Engineering Task Force (IETF) has therefore recommended the implementation of the Secure Neighbor Discovery (SEND) protocol to tackle the security issues in NDP. The SEND protocol is mainly used for the validation of address rights, techniques for inhibiting malicious responses, and router certification procedures. For the routine running of these tasks, SEND relies on the following options: Cryptographically Generated Address (CGA), RSA Signature, Nonce, and Timestamp. The CGA is produced at a very high computational cost, which is the most notable disadvantage of SEND. In this paper, a clear description of the constituents of the CGA and its operation is given, together with recommendations for improvements in its generation.
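
The expensive part of CGA generation is the brute-force search over the modifier: RFC 3972 requires the leftmost 16×Sec bits of a second hash (Hash2) to be zero before the interface identifier is derived from the first hash (Hash1). The simplified sketch below captures that structure with SHA-1 over a raw public-key blob; the DER encoding of the key, the collision-count retry loop, and several other details of the real specification are omitted, and the prefix and key bytes are placeholders.

```python
import hashlib
import os

def generate_cga_interface_id(public_key: bytes, subnet_prefix: bytes, sec: int) -> bytes:
    """Simplified CGA generation (after RFC 3972): brute-force the modifier until
    Hash2 has 16*sec leading zero bits, then derive the interface ID from Hash1."""
    while True:
        modifier = os.urandom(16)
        hash2 = hashlib.sha1(modifier + bytes(9) + public_key).digest()
        if int.from_bytes(hash2, "big") >> (160 - 16 * sec) == 0:
            break                                   # Sec condition satisfied
    collision_count = bytes([0])
    hash1 = hashlib.sha1(modifier + subnet_prefix + collision_count + public_key).digest()
    iid = bytearray(hash1[:8])                      # leftmost 64 bits of Hash1
    iid[0] = (iid[0] & 0x1C) | (sec << 5)           # encode Sec, clear u/g bits
    return bytes(iid)

prefix = bytes.fromhex("20010db800000001")          # example 2001:db8:0:1::/64 prefix
pubkey = os.urandom(140)                            # stand-in for an encoded RSA public key
print("interface id:", generate_cga_interface_id(pubkey, prefix, sec=1).hex())
```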

Keywords: CGA, IPv6, NDP, SEND

Procedia PDF Downloads 381
2318 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants, that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and build a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some properties, but they are also very different in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
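
Readers wanting to experiment with the same mining step can use the `fpgrowth` implementation in the `mlxtend` package: genotype patterns are encoded as items ("variant=genotype"), frequent itemsets are enumerated, and the confidence P(case | X) of each pattern is computed from the itemset supports. The toy transactions below are invented for illustration, and the permutation-based significance testing described above is not shown.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Each transaction: genotype items for two variants plus the phenotype label.
transactions = [
    ["rs1=Aa", "rs2=Bb", "case"], ["rs1=Aa", "rs2=Bb", "case"],
    ["rs1=Aa", "rs2=bb", "case"], ["rs1=AA", "rs2=Bb", "control"],
    ["rs1=aa", "rs2=BB", "control"], ["rs1=Aa", "rs2=BB", "control"],
    ["rs1=AA", "rs2=bb", "control"], ["rs1=Aa", "rs2=Bb", "case"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

itemsets = fpgrowth(onehot, min_support=0.2, use_colnames=True)
support = {frozenset(s): v for s, v in zip(itemsets["itemsets"], itemsets["support"])}

# Confidence P(case | X) for every frequent pattern X that co-occurs with "case".
for items, sup in support.items():
    if "case" in items and len(items) > 1:
        antecedent = items - {"case"}
        confidence = sup / support[antecedent]   # antecedent is frequent by downward closure
        if confidence >= 0.9:
            print(set(antecedent), "-> case  confidence =", round(confidence, 2))
```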

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 117
2317 Smart Side View Mirror Camera for Real Time System

Authors: Nunziata Ivana Guarneri, Arcangelo Bruna, Giuseppe Spampinato, Antonio Buemi

Abstract:

In the last decade, automotive companies have invested a lot in innovation around automatic driver assistance systems. One innovation is the use of a smart camera placed on the car's side mirror for monitoring the rear and lateral road situation. A common road scenario is overtaking the preceding car; in this case, a brief distraction or loss of concentration can lead the driver to undertake this action even if another vehicle is already overtaking, leading to serious accidents. A smart camera system that automatically analyzes the road scenario and warns the driver when another vehicle is overtaking can be a valid support for safe driving. This paper describes a method for monitoring the side view of a vehicle by using camera optical flow motion vectors. The proposed solution detects the presence of incoming vehicles, assesses their distance from the host car, and warns the driver through different levels of alert according to the estimated distance. Due to its low complexity and computational cost, the proposed system ensures real-time performance.
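
The motion cue itself can be obtained with a few OpenCV calls: dense Farneback optical flow between consecutive side-mirror frames yields per-pixel motion vectors, and the average flow inside a region of interest can be thresholded into alert levels. The ROI, sign convention, video file name, and thresholds below are placeholders, not the calibrated values of the real system.

```python
import cv2
import numpy as np

def overtaking_alert(prev_frame, frame, roi=(slice(200, 480), slice(0, 320))):
    """Return an alert level from the mean horizontal optical flow inside a ROI."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_dx = float(np.mean(flow[roi][..., 0]))   # horizontal motion in the ROI
    if mean_dx > 2.0:
        return "warning: vehicle approaching fast"
    if mean_dx > 0.5:
        return "caution: vehicle detected"
    return "clear"

cap = cv2.VideoCapture("side_mirror.mp4")          # hypothetical recorded sequence
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    print(overtaking_alert(prev, frame))
    prev = frame
```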

Keywords: camera calibration, ego-motion, Kalman filters, object tracking, real time systems

Procedia PDF Downloads 222
2316 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load

Authors: Sanjin Kršćanski, Josip Brnić

Abstract:

The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and as such can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack and subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node, and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of the finite element size on the accuracy of the calculated SIF was also studied. The final part of this paper includes a comparison of the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. The results calculated by this VCCT-based algorithm were found to be in good correlation with the results obtained from the mentioned analytical expressions.
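
For a 2D mode-I case the VCCT evaluation itself reduces to two formulas: the energy release rate follows from the nodal force at the crack tip and the relative opening displacement one element behind it, G_I = F_y·Δv/(2·B·Δa), and the stress intensity factor follows from K_I = sqrt(E'·G_I). The sketch below applies these expressions to hypothetical values read from a finite element solution; E' = E assumes plane stress.

```python
import math

def vcct_mode_i_sif(f_y, delta_v, delta_a, thickness, youngs_modulus,
                    plane_stress=True, poisson=0.3):
    """Mode-I SIF from 2D VCCT: G_I = F_y*dv/(2*B*da), K_I = sqrt(E'*G_I)."""
    g_i = f_y * delta_v / (2.0 * thickness * delta_a)        # energy release rate [N/mm]
    e_eff = youngs_modulus if plane_stress else youngs_modulus / (1.0 - poisson**2)
    return math.sqrt(e_eff * g_i)                            # K_I in MPa*sqrt(mm)

# Hypothetical values from a finite element solution (units: N, mm, MPa)
k_i = vcct_mode_i_sif(f_y=120.0, delta_v=1.1e-3, delta_a=0.5,
                      thickness=5.0, youngs_modulus=210e3)
print(f"K_I = {k_i:.1f} MPa*sqrt(mm)")
```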

Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending

Procedia PDF Downloads 301
2315 Factors Affecting Consumers’ Online Shopping Behavior in Vietnam during the COVID-19 Pandemic: A Case Study of Tiki

Authors: Thi Hai Anh Nguyen, Pantea Aria

Abstract:

Tiki is one of the leading e-commerce companies in Vietnam. Since the beginning of 2020, COVID-19 has been spreading around the world. As a result of the pandemic, the Tiki platform has developed many strengths and has also faced many threats. Customer behaviour was expected to change during the COVID-19 pandemic. The aims of this investigation are (1) to identify factors affecting the online consumer behaviour of Tiki customers in Ho Chi Minh City, Vietnam, (2) to measure the level of impact of these factors, and (3) to provide recommendations for Tiki to improve its business strategy for the next stage. This research studies eight factors and collected 378 online survey responses for analysis. Analysis in SPSS identified five factors, including product, price, reliability, and web design, that positively influence customer behaviour. The COVID-19 factor does not significantly impact Tiki's customer behaviour. This research also conducted some qualitative interviews to understand shopping experiences and customers' expectations. One of the main points of these interviews is that Tiki's customers have high trust in the Tiki brand and its high-quality products. Based on the results, the Tiki corporation should safeguard its core values, and Tiki's employees and logistics systems should be well trained and optimized to improve the customer experience.

Keywords: COVID-19, e-commerce, impact, pandemic, Vietnam

Procedia PDF Downloads 158
2314 Present State of Local Public Transportation Service in Local Municipalities of Japan and Its Effects on Population

Authors: Akiko Kondo, Akio Kondo

Abstract:

Japan is facing regional problems related to its low birth rate and increasing longevity. Under these circumstances, some local municipalities are losing their vitality. The aims of this study are to clarify the present state of local public transportation services in local municipalities and to quantify the relation between local public transportation services and population. We conducted a questionnaire survey concerning the regional agenda in all local municipalities in Japan and obtained responses concerning the present state of convenience in the use of public transportation and local public transportation services. Based on the data gathered from the survey, it is apparent that some sort of measures concerning public transportation services should be taken. Convenience in the use of public transportation has become an object of public concern in many rural regions. It is also clarified that some local municipalities have introduced demand buses for the purpose of promoting administrative and financial efficiency. They have also introduced demand taxis in order to secure transportation for transportation-disadvantaged people and to eliminate areas without public transportation services. In addition, we construct a population model that includes explanatory variables describing the present state of local public transportation services. From this result, we can quantify the relation between public transportation services and population.

Keywords: public transportation, local municipality, regional analysis, regional issue

Procedia PDF Downloads 398
2313 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocation, and facility configurations so as to minimize the total cost (CT) of the entire network. These facilities can be manufacturing units (MUs), distribution centres (DCs), and retailers/end-users (REs), but are not limited to them. To address this problem, three major tasks are undertaken. First, a mixed integer nonlinear programming (MINLP) mathematical model is developed. Then, the system's behaviour under different conditions is observed using a simulation modeling tool. Finally, the optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem, and the uncertainties involved in finding the optimal solution, an integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of the genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 310
2312 Introduction to Multi-Agent Deep Deterministic Policy Gradient

Authors: Xu Jie

Abstract:

As a key network security service, cryptographic services must cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workload. Their complexity and dynamics also make it difficult for traditional static security policies to cope with the ever-changing landscape of cyber threats and environments. Traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay, and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.

Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents

Procedia PDF Downloads 16
2311 Investigation of Public Perception of Air Pollution and Life Quality in Tehran

Authors: Roghayeh Karami, Ahmad Gharaei

Abstract:

Background and objectives: This study was undertaken at four different sites in Tehran (polluted north, polluted south, healthy south, and healthy north) in order to examine whether there was a relationship between publicly available air quality data and the public's perception of air quality, and to suggest some guidelines for reducing air pollution. Materials and Methods: A total of 200 randomly selected people filled out the research questionnaires at the mentioned sites, and air quality data were obtained simultaneously from the Air Quality Control Department. Data were analyzed in Excel and SPSS software. Results: Clean air and a secure job were of great importance to people compared with other pleasant aspects of life. Air pollution and fear of dangerous diseases were among people's most important concerns. Air quality index boards and newspaper reports were little used by the public as a means of obtaining information on air pollution. Using public transportation and avoiding unessential journeys were seen as the most important ways of reducing air pollution. Conclusion: The results reveal that the public's perception of air quality is not a reliable indicator of the actual levels of air pollution. Current top-down actions are not effective enough in reducing air pollution; therefore, participatory management and public participation appear to be a suitable guideline.

Keywords: air pollution, quality of life, opinion poll, public participation

Procedia PDF Downloads 484
2310 Israel versus Palestine: Politological and Depth-Psychological Aspects

Authors: Harald Haas, Andrea Plaschke

Abstract:

Many of the contemporary major conflicts on this earth have not been solved so far; they are either perpetuated or reignited again and again. Efforts at purely political conflict management or resolution aim merely at the symptoms of conflict, not at its roots. These roots are, in almost every case, also psychological ones. Thus, this contribution aims to shed light on the roots of one of the best-known and longest-lasting conflicts: the Palestinian-Israeli one. The methodologies used were the compilation of existing scientific resources, field research in Palestine and Israel, and tests conducted with the Adult Attachment Projective in Palestine and Israel. Findings show that the majority of Palestinian as well as Israeli test participants show a disorganised attachment pattern which, in connection with the assumption of collective traumatization, seems to be a major obstacle to a lasting and peaceful conflict resolution between these two peoples. There appears to be no short-term solution for this conflict, especially not within the range of usual Western legislative periods. Both sides ought to be provided with a kind of 'safe haven' over a long period of time, accompanied by a framework of various arrangements for coping with trauma, building lasting and secure relationships, and raising and educating present and future generations of Palestinians and Israelis for peace and co-operation with each other.

Keywords: conflict-management, trauma, political psychology, attachment theory

Procedia PDF Downloads 201
2309 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least 1 AI-ECG screen for AF pre-transplant above .10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084 on account of n=20 missing postoperative ECG), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08, the AI-ECG algorithm had a 98% (95% CI: 97 – 99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation on transplant patients with a positive AI-ECG screening. Further development and refinement of a post-transplant-specific algorithm may be warranted further to enhance the diagnostic yield of the ECG-based screening.

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 129
2308 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals also have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to every other point within the same cluster, and to all data points in the closest other cluster, is determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance on each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between individual electrodes for personal authentication (p<0.01); and (3) there is no significant difference in authentication performance among the feature sets (except feature PE). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
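
The clustering and quality-assessment step maps directly onto scikit-learn: fit k-means on the entropy feature matrix and score each partition with the mean silhouette coefficient. The feature matrix below is random stand-in data with a similar shape (samples × entropy features), not the recorded EEG features from the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(5)
# Stand-in feature matrix: rows are authentication samples, columns are entropy
# features (SE, FE, AE, PE) over a few electrodes; three simulated subjects.
features = np.vstack([rng.normal(loc=mu, scale=0.3, size=(30, 12))
                      for mu in (0.0, 1.5, 3.0)])

for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    print(f"k={k}: silhouette score = {silhouette_score(features, labels):.3f}")
```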

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 278
2307 Blockchain for IoT Security and Privacy in Healthcare Sector

Authors: Umair Shafique, Hafiz Usman Zia, Fiaz Majeed, Samina Naz, Javeria Ahmed, Maleeha Zainab

Abstract:

The Internet of Things (IoT) has become a hot topic over the last couple of years. This innovative technology has shown promising progress in various areas, and the world has witnessed exponential growth in multiple application domains. Researchers are working to investigate its capabilities and to harness its true potential. At the same time, however, IoT networks open up new vulnerabilities and physical threats to data integrity, privacy, and confidentiality. This is due to centralized control, a data-silo approach to handling information, and a lack of standardization in IoT networks. Blockchain is a technology for creating secure distributed ledgers to store and communicate data. Some of its benefits include resiliency, integrity, anonymity, decentralization, and autonomous control. The potential for blockchain technology to provide the key to managing and controlling IoT has created a new wave of excitement around the idea of putting that data back into the hands of the end-users. In this manuscript, we propose a model that combines blockchain and IoT networks to address potential security and privacy issues in the healthcare domain. We then describe various application areas, challenges, and future directions in the healthcare sector where blockchain platforms merge with IoT networks.

Keywords: IoT, blockchain, cryptocurrency, healthcare, consensus, data

Procedia PDF Downloads 170
2306 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUID) and has considerable advantages over electrocardiography (ECG). However, the magnetocardiography (MCG) signal is buried in noise and is difficult to extract, which is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, a Total Variation (TV) regularization method is used to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement consists of three main parts. First, higher-order TV is applied to reduce the staircase effect, with the corresponding second-order derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving the signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
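
For orientation, the classical first-order MM iteration that the paper improves upon can be written in a few lines: the TV cost 0.5‖y−x‖² + λ‖Dx‖₁ is majorized with the current |Dx| values as weights, giving the closed-form update x ← y − Dᵀ((1/λ)Λ + DDᵀ)⁻¹Dy. The sketch below runs this on a synthetic piecewise-constant signal; the higher-order difference operator and the peak-adaptive constraint parameters proposed in the abstract are not included, and λ and the signal are illustrative.

```python
import numpy as np

def tv_denoise_mm(y, lam, n_iter=50):
    """Classical 1-D TV denoising via majorization-minimization (first-order D)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                 # first-order difference operator
    DDT = D @ D.T
    x = y.copy()
    for _ in range(n_iter):
        Lam = np.diag(np.abs(D @ x))               # majorizer weights |D x_k|
        # x_{k+1} = y - D^T ((1/lam) * Lam + D D^T)^{-1} D y
        x = y - D.T @ np.linalg.solve(Lam / lam + DDT, D @ y)
    return x

rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(80), np.ones(60) * 2.0, np.ones(60) * 0.5])
noisy = clean + 0.3 * rng.standard_normal(clean.size)
denoised = tv_denoise_mm(noisy, lam=1.5)
print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```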

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 149
2305 Morphology, Chromosome Numbers and Molecular Evidences of Three New Species of Begonia Section Baryandra (Begoniaceae) from Panay Island, Philippines

Authors: Rosario Rivera Rubite, Ching-I Peng, Che-Wei Lin, Mark Hughes, Yoshiko Kono, Kuo-Fang Chung

Abstract:

The flora of Panay Island is under-collected compared with the other islands of the Philippines. In a joint expedition to the island, botanists from Taiwan and the Philippines found three unknown Begonia and compared them with potentially allied species. The three species are clearly assignable to Begonia section Baryandra which is largely endemic to the Philippines. Studies of literature, herbarium specimens, and living plants support the recognition of the three new species: Begonia culasiensis, Begonia merrilliana, and Begonia sykakiengii. Somatic chromosomes at metaphase were determined to be 2n=30 for B. culasiensis and 2n=28 for both B. merrilliana and B. sykakiengii, which are congruent with those of most species in sect. Baryandra. Molecular phylogenetic evidence is consistent with B. culasiensis being a relict from the late Miocene, and with B. merrilliana and B. sykakiengii being younger species of Pleistocene origin. The continuing discovery of endemic Philippine species means the remaining fragments of both primary and secondary native vegetation in the archipelago are of increasing value in terms of natural capital. A secure future for the species could be realized through ex-situ conservation collections and raising awareness with community groups.

Keywords: conservation, endemic, herbarium, limestone, phylogenetics, taxonomy

Procedia PDF Downloads 214
2304 The Influence of Advertising in the Respect of the Right to Adequate Food: Some Notes regarding the Portuguese Legal Framework

Authors: Susana Almeida

Abstract:

The right to adequate food is a human right protected under several international human rights treaties of universal or regional application. In addition, this social right is, as we intend to demonstrate, guaranteed under the Portuguese Constitution. Therefore, in order to ensure the protection of this right, the Portuguese State must not only abstain from interfering with this human right (negative obligation) but also take action to secure it (positive obligation). In this context, the Portuguese State has developed several governmental policies, such as taxing sugary drinks, setting the maximum amount of salt in bread, and creating the National Program for the Promotion of Healthy Food. Nevertheless, we intend to demonstrate that special attention should be given to advertising, as advertisements have a strong influence on consumers' decisions and hence on food decisions. In this paper, besides explaining the cross-construction of the human right to adequate food, we aim to examine the Portuguese Advertising Code and to study the several provisions that could be invoked by the Portuguese consumer to challenge advertisements that violate the right to health and the right to adequate food. Moreover, bearing in mind the influence of advertising on food decisions and the serious problems that unhealthy food may bring (e.g., child obesity), one should ask whether this legal framework should not be reviewed in order to lay out some restrictions on advertising, namely warnings like those required in alcohol advertisements.

Keywords: advertising code, consumer law, right to adequate food, social human right

Procedia PDF Downloads 165