Search results for: mathematical algorithms of targeting and persecution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4251

2511 Image Compression Using Block Power Method for SVD Decomposition

Authors: El Asnaoui Khalid, Chawki Youness, Aksasse Brahim, Ouanan Mohammed

Abstract:

In recent decades, the important and fast growth in the development and demand of multimedia products has contributed to an insufficiency in device bandwidth and network storage memory. Consequently, the theory of data compression becomes more significant for reducing data redundancy in order to save more data transfer and storage. In this context, this paper addresses the problem of lossless and near-lossless image compression. The proposed method is based on the Block SVD Power Method, which overcomes the disadvantages of Matlab's SVD function. The experimental results show that the proposed algorithm has better compression performance compared with existing compression algorithms that use Matlab's SVD function. In addition, the proposed approach is simple and can provide different degrees of error resilience, which gives better image compression in a short execution time.
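
The abstract gives no pseudocode; purely as an illustration of the underlying idea, the sketch below approximates the top-k singular triplets with a block (subspace) power iteration in NumPy and uses them for rank-k image compression. All names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_power_svd(A, k, n_iter=100, seed=0):
    """Approximate the top-k singular triplets of A with a block power
    method: iterate A A^T on a block of k vectors and re-orthonormalise
    with a QR decomposition at every step (subspace iteration)."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.standard_normal((m, k)))[0]
    for _ in range(n_iter):
        Z = A @ (A.T @ Q)           # one block power step on A A^T
        Q, _ = np.linalg.qr(Z)      # re-orthonormalise the block
    # Recover singular values and right vectors from the converged block.
    B = Q.T @ A                     # k x n projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub, s, Vt

# Rank-k image compression: keep only k singular triplets of the pixel matrix.
img = np.random.rand(256, 256)      # stand-in for a grayscale image
U, s, Vt = block_power_svd(img, k=20)
img_compressed = (U * s) @ Vt       # low-rank reconstruction
mse = np.mean((img - img_compressed) ** 2)
```

Storing U, s, and Vt requires k(m+n+1) numbers instead of mn, which is where the compression comes from; increasing k trades storage for reconstruction error.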

Keywords: image compression, SVD, block SVD power method, lossless compression, near lossless

Procedia PDF Downloads 367
2510 Performance Analysis of Artificial Neural Network Based Land Cover Classification

Authors: Najam Aziz, Nasru Minallah, Ahmad Junaid, Kashaf Gul

Abstract:

Landcover classification using automated techniques on remotely sensed multi-spectral imagery is one of the promising areas of research. Land conditions at different times are captured by satellite and monitored by applying classification algorithms in a specific environment. In this paper, a SPOT-5 image provided by SUPARCO has been studied and classified in the Environment for Visual Interpretation (ENVI), a tool widely used in remote sensing. Then, an Artificial Neural Network (ANN) classification technique is used to detect land cover changes in Abbottabad district. The obtained results are compared with a pixel-based distance classifier. The results show that the ANN gives a better overall accuracy of 99.20% and a Kappa coefficient of 0.98 compared with the Mahalanobis distance classifier.
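
As a rough sketch of the workflow (not the authors' ENVI pipeline), a per-pixel ANN classifier with the two reported metrics can be set up in scikit-learn as follows; the data shapes and network size are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

# Hypothetical data: each row holds one pixel's multi-spectral band values
# (SPOT-5 has 4 bands); each label is a land-cover class index.
X = np.random.rand(5000, 4)
y = np.random.randint(0, 5, size=5000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
ann.fit(X_train, y_train)
pred = ann.predict(X_test)

print("overall accuracy:", accuracy_score(y_test, pred))
print("kappa coefficient:", cohen_kappa_score(y_test, pred))
```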

Keywords: landcover classification, artificial neural network, remote sensing, SPOT 5

Procedia PDF Downloads 521
2509 The Plight of the Rohingyas: Design Guidelines to Accommodate Displaced People in Bangladesh

Authors: Nazia Roushan, Maria Kipti

Abstract:

The sensitive issue of a large-scale entry of Rohingya refugees into Bangladesh has arisen again since August 2017. Incited by ethnic and religious conflict, the Rohingyas—an ethnic group concentrated in the north-west state of Rakhine in Myanmar—have been fleeing to what is now Bangladesh from as early as the late 1700s in four main exoduses. This long-standing persecution has recently escalated, and accommodating the recent wave of exodus has been especially challenging due to the sheer volume of a million refugees concentrated in refugee camps in two small administrative units (upazilas) in the south-east of the country: the host area. This drastic change in the host area's social fabric is putting great strain on the country's economic, demographic, and environmental stability, and on its security. Although Bangladesh's long-term experience with disaster management has enabled it to respond rapidly to the crisis, the government is failing to cope with this enormous problem and has taken insufficient steps towards improving living conditions to inhibit the inflow of more refugees. On top of that, the absence of a comprehensive national refugee policy and the density of the camp structures are constricting the upgrading of the shelters to international standards. As of December 2016, the combined number of internally displaced persons (IDPs) due to conflict and violence (stock) and new displacements due to disasters (flow) in Bangladesh had exceeded 1 million. These numbers have increased dramatically in the last few months. Moreover, by 2050, Bangladesh will have as many as 25 million climate refugees from its coastal districts alone. To enhance the resilience of the vulnerable, it is crucial to methodically apportion further interventions between Disaster Risk Reduction for Resilience (DRR) and the concept of Building Back Better (BBB) in the rehabilitation-reconstruction period. Considering these points, this paper provides a palette of options for design guidelines related to the living spaces and infrastructures for refugees. This will encourage the development of national standards for refugee camps and national- and local-level rehabilitation-reconstruction practices. Unhygienic living conditions, vulnerability, and a general lack of control over life are pervasive throughout the camps. This paper, therefore, proposes site-specific strategic and physical planning and design for refugee shelters in Bangladesh that will lead to sustainable living environments through the following: a) site surveys of two existing registered refugee camps and one makeshift unregistered camp to document and study their physical conditions; b) questionnaires and semi-structured focus group discussions carried out among the refugees and stakeholders to understand their lived experiences and needs; and c) combining the findings with international minimum standards for shelter and settlement from the International Federation of Red Cross and Red Crescent Societies (IFRC), Médecins Sans Frontières (MSF), and the United Nations High Commissioner for Refugees (UNHCR). These proposals include temporary shelter solutions that balance lived spaces with regimented, repetitive plans using readily available and cheap materials, erosion control and slope stabilization strategies, and, most importantly, coping mechanisms for the refugees to be self-reliant and resilient.

Keywords: architecture, Bangladesh, refugee camp, resilience, Rohingya

Procedia PDF Downloads 220
2508 Globalisation and the Resulting Labour Exploitation in Business Operations and Supply Chains

Authors: Akilah A. Jardine

Abstract:

The integration and expansion of the global economy have indeed brought about a number of positive changes, such as access to new goods and services and the opportunity for individuals and businesses to migrate, communicate, and work globally. Nevertheless, the interconnectedness of world economies is not without its negative and shameful side effects. The resulting overabundance of goods and services has heightened competition among firms and their supply chains, fuelling the exploitation of impoverished and vulnerable individuals who are unable to share equally in the benefits of the integrated economy. To maintain their position in a highly competitive arena, many businesses have adopted unethical and unscrupulous practices to maximise profit, often targeting the most marginalised members of society. Simultaneously, in a consumerist society preoccupied with the consumption and accumulation of material wealth, the demand for goods and services greatly contributes to the pressure on firms, thus bolstering the exploitation of labour. This paper aims to examine the impact of business operations on the practice of labour exploitation. It explores corrupt business practices that firms adopt and key labour-exploitative conditions outlined by the International Labour Organization, particularly paying workers low wages and forcing individuals to work in abusive and unsafe conditions, and considers the issue of individuals' consent to exploitative environments. Further, it considers the role of consumers in creating the high demand for goods and services, which in turn fosters the exploitation of labour. This paper illustrates that the practice of labour exploitation in the economy is a by-product of both globally competitive business operations and heightened consumer consumption.

Keywords: globalisation, labour exploitation, modern slavery, sweatshops, unethical business practices

Procedia PDF Downloads 124
2507 Machine Learning Techniques to Develop Traffic Accident Frequency Prediction Models

Authors: Rodrigo Aguiar, Adelino Ferreira

Abstract:

Road traffic accidents are the leading cause of unnatural death and injury worldwide, representing a significant road safety problem. In this context, the use of artificial intelligence with advanced machine learning techniques has gained prominence as a promising approach to predicting traffic accidents. This article investigates the application of machine learning algorithms to develop traffic accident frequency prediction models. Models are evaluated based on performance metrics, enabling a comparative analysis with traditional prediction approaches. The results suggest that machine learning can provide a powerful tool for accident prediction, which will contribute to more informed decisions regarding road safety.
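
The abstract does not specify the algorithms compared; as an illustrative sketch only, the snippet below contrasts a traditional count model (a Poisson GLM, a common baseline for accident frequency) with a machine learning regressor on hypothetical road-segment data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical road-segment features (traffic volume, speed limit, lane
# count, curvature); the target is a yearly accident count per segment.
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
y = rng.poisson(lam=1 + 3 * X[:, 0])   # counts driven mainly by volume

for name, model in [("Poisson GLM (traditional)", PoissonRegressor()),
                    ("Random forest (ML)", RandomForestRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_mean_absolute_error")
    print(name, "MAE:", -scores.mean())
```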

Keywords: machine learning, artificial intelligence, frequency of accidents, road safety

Procedia PDF Downloads 68
2506 An Overview of New Era in Food Science and Technology

Authors: Raana Babadi Fathipour

Abstract:

The strict prerequisites of scientific journals, which require authors to demonstrate whether experimental data are statistically (in)significant, have driven a steep increase in the use and development of statistical software. Indeed, the use of mathematical and statistical methods, including chemometrics and many other statistical methods/algorithms, in food science and technology has increased steeply over the last 20 years. The computational tools available can be used not only to run statistical analyses such as univariate and bivariate tests, multivariate calibration, and the development of complex models, but also to run simulations of different scenarios given a set of inputs, or simply to make predictions for particular data sets or conditions. A quick search in the most reputable scientific databases (PubMed, ScienceDirect, Scopus) shows that statistical methods have gained an enormous place in many areas.
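
As a tiny, hedged illustration of the two kinds of analysis the abstract mentions (a univariate significance test and a multivariate calibration step), with entirely made-up food-quality data:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Univariate test: compare a quality attribute between two production batches.
batch_a = rng.normal(5.0, 0.4, 30)
batch_b = rng.normal(5.3, 0.4, 30)
t_stat, p_value = stats.ttest_ind(batch_a, batch_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # significant if p < 0.05

# Multivariate step: compress spectral measurements before calibration.
spectra = rng.random((60, 200))                  # 60 samples x 200 wavelengths
scores = PCA(n_components=3).fit_transform(spectra)
```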

Keywords: food science, food technology, food safety, computational tools

Procedia PDF Downloads 49
2505 Phylogenetic Differential Separation of Environmental Samples

Authors: Amber C. W. Vandepoele, Michael A. Marciano

Abstract:

Biological analyses frequently focus on single organisms; however, many times the biological sample consists of more than the target organism. For example, human microbiome research targets bacterial DNA, yet most samples consist largely of human DNA. Therefore, there would be an advantage to removing these contaminating organisms. Conversely, some analyses focus on a single organism but would greatly benefit from additional information regarding the other organismal components of the sample. Forensic analysis is one such example: in most forensic casework, human DNA is targeted; however, it typically exists in complex, non-pristine sample substrates such as soil or unclean surfaces. These complex samples are commonly comprised of not just human tissue but also microbial and plant life, and these organisms may help gain more forensically relevant information about a specific location or interaction. This project aims to optimize a 'phylogenetic' differential extraction method that will separate mammalian, bacterial, and plant cells in a mixed sample. This is accomplished through size-exclusion separation, whereby the different cell types are separated through multiple filtrations using 5 μm filters. The components are then lysed via differential enzymatic sensitivities among the cells and extracted with minimal contribution from the preceding component. This extraction method will then allow complex DNA samples to be more easily interpreted through non-targeted sequencing, since the data will not be skewed toward the smaller and usually more numerous bacterial DNAs. This research project has demonstrated that this 'phylogenetic' differential extraction method successfully separated epithelial and bacterial cells from each other with minimal cell loss. We will take this one step further, showing that when plant cells are added into the mixture, they will be separated and extracted from the sample. Research is ongoing, and results are pending.

Keywords: DNA isolation, geolocation, non-human, phylogenetic separation

Procedia PDF Downloads 100
2504 The Sub-Optimality of the Electricity Subsidy on Tube Wells in Balochistan (Pakistan): An Analysis Based on Socio-Cultural and Policy Distortions

Authors: Rameesha Javaid

Abstract:

Agriculture is the backbone of the economy of the province of Balochistan, which is known as the 'fruit basket' of Pakistan. Its climate zones, comprising highlands and plateaus dependent on rain water, are more suited to the production of deciduous fruit. The vagaries of weather, and more so the persistent droughts, prompted the government to announce flat monthly electricity rates irrespective of the size of the farm, the quantum of water used, and the category of crop group. That has, no doubt, resulted in increased cropping intensity, more production, and employment, but has enormously burdened the official exchequer, which picks up the residual bills in certain percentages shared amongst the federal and provincial governments and the local electricity company. This study tests the desirability of continuing the subsidy in its present mode. Optimization of the social welfare of farmers has been the focus of the study, with emphasis on the contribution of positive externalities and the distortions caused in terms of negative externalities. By using an optimization technique with due allowance for distortions, it has been established that the subsidy calls for limiting policy distortions, as they cause sub-optimal utilization of the tube well subsidy, and for improved policy programming. A sensitivity analysis with changed rankings of the variables contributing to social welfare does not significantly change the result. The study therefore recommends significantly reducing the subsidy size, correcting and curtailing policy distortions, and targeting the subsidy grant more towards small farmers, so as to generate more welfare by saving a sizeable amount from the subsidy for investment in the wellbeing of farmers in rural Balochistan.

Keywords: distortion, policy distortion, socio-cultural distortion, social welfare, subsidy

Procedia PDF Downloads 275
2503 Molecular Epidemiology of Circulating Adenovirus Types in Acute Conjunctivitis Cases in Chandigarh, North India

Authors: Mini P. Singh, Jagat Ram, Archit Kumar, Tripti Rungta, Jasmine Khurana, Amit Gupta, R. K. Ratho

Abstract:

Introduction: Human adenovirus is the most common agent involved in viral conjunctivitis. The clinical manifestations vary with different serotypes. Identification of the circulating strains, followed by phylogenetic analysis, can be helpful in understanding the origin and transmission of the disease. The present study aimed to carry out molecular epidemiology of the adenovirus types in patients with conjunctivitis presenting to the eye centre of a tertiary care hospital in North India. Materials and Methods: Conjunctival swabs were collected from 23 suspected adenoviral conjunctivitis patients between April and August 2014 and transported in viral transport media. The samples were subjected to nested PCR targeting the hexon gene of human adenovirus. The band of 956 bp was eluted, and 8 representative positive samples were subjected to sequencing. The sequences were analyzed using CLUSTALX2.1 and MEGA 5.1 software. Results: The male-to-female ratio was found to be 3.6:1. The mean age of the presenting patients was 43.95 years (±17.2). Approximately 52.1% (12/23) of patients presented with bilateral involvement, while 47.8% (11/23) presented with unilateral involvement of the eye. Human adenovirus DNA could be detected in 65.2% (15/23) of the patients. The phylogenetic analysis revealed the presence of serotype 8 in 7 patients and serotype 4 in one patient. The serotype 8 sequences showed 99-100% identity with Tunisian, Indian, and Japanese strains. The adenovirus serotype 4 strains had 100% identity with strains from Tunisia, China, and the USA. Conclusion: Human adenovirus was found to be an important etiological agent of conjunctivitis in our setting. The phylogenetic analysis showed that the predominant circulating strains in our epidemic keratoconjunctivitis cases were serotypes 8 and 4.

Keywords: conjunctivitis, human adenovirus, molecular epidemiology, phylogenetics

Procedia PDF Downloads 265
2502 Transverse Momentum Dependent Factorization and Evolution for Spin Physics

Authors: Bipin Popat Sonawane

Abstract:

After the 1988 European Muon Collaboration (EMC) announcement of its measurement of the spin-dependent structure function, it became clear that the spin structure of the hadron needs to be understood. In the study of the three-dimensional spin structure of the proton, we need to understand the foundation of quantum field theory in terms of the electroweak and strong theories, using rigorous mathematical theories and models. In the process of understanding the inner dynamical structure of the proton, we need to understand the mathematical formalism of perturbative quantum chromodynamics (pQCD). For QCD processes like proton-proton collisions at high energy, we calculate cross sections using conventional collinear factorization schemes. In these calculations, parton distribution functions (PDFs) and fragmentation functions (FFs) are used, which provide information about the probability density of finding quarks and gluons (partons) inside the proton and the probability density of finding the final hadronic state from the initial partons. Transverse momentum dependent (TMD) PDFs and FFs, collectively called TMDs, account for the intrinsic transverse motion of partons. TMD factorization in the calculation of cross sections provides a scheme of hadronic and partonic states in a given QCD process. In this study, we review the TMD factorization scheme using the Collins-Soper-Sterman (CSS) formalism. The CSS formalism considers the transverse momentum dependence of the partons; in this formalism, the cross section is written as a Fourier transform over a transverse position variable which has a physical interpretation as an impact parameter. We also compare this formalism with the improved CSS formalism, study the TMD evolution schemes, and compare them with other schemes. This would provide a description of the measurement of the transverse single spin asymmetry (TSSA) in hadro-production and electro-production of the J/psi meson at RHIC, LHC, and ILC energy scales, and would surely help us to understand the J/psi production mechanism, which is an appropriate test of QCD.
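
For reference, the CSS-style factorized cross section mentioned above is usually written schematically as a Fourier transform from impact-parameter space; the following is the generic form familiar from the CSS literature for Drell-Yan-like processes, not a formula quoted from this abstract:

```latex
\frac{d\sigma}{dQ^{2}\,dy\,d^{2}q_{T}}
  \;=\; \int \frac{d^{2}b}{(2\pi)^{2}}\;
        e^{\,i\,\vec{q}_{T}\cdot\vec{b}}\;
        \widetilde{W}(b,\,Q,\,x_{A},\,x_{B})
  \;+\; Y(q_{T},\,Q,\,x_{A},\,x_{B})
```

Here the term W-tilde resums the large logarithms of bQ that dominate at small transverse momentum q_T, while the Y-term restores the fixed-order perturbative result at large q_T.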

Keywords: QCD, PDF, TMD, CSS

Procedia PDF Downloads 50
2501 Design and Performance Analysis of Advanced B-Spline Algorithm for Image Resolution Enhancement

Authors: M. Z. Kurian, M. V. Chidananda Murthy, H. S. Guruprasad

Abstract:

An approach to super-resolving a low-resolution (LR) image is presented in this paper; it is very useful in multimedia communication, medical image enhancement, and satellite image enhancement for obtaining a clear view of the information in the image. The proposed Advanced B-Spline method generates a high-resolution (HR) image from a single LR image and tries to retain the higher-frequency components, such as edges, in the image. The method uses the B-spline technique and crispening. The work is evaluated qualitatively and quantitatively using Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR). The method is also suitable for real-time applications. Different combinations of decimation and super-resolution algorithms are tested in the presence of different noises and noise factors.
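
The paper's exact algorithm is not reproduced here; as a minimal sketch of the same ingredients (cubic B-spline interpolation, a crispening/unsharp-masking step, and PSNR evaluation), assuming SciPy's spline-based zoom as the interpolator:

```python
import numpy as np
from scipy import ndimage

def bspline_super_resolve(lr_image, factor=2, sharpen=0.6):
    """Upscale with cubic B-spline interpolation (order=3), then
    'crispen' edges with a simple unsharp mask (high-frequency boost)."""
    hr = ndimage.zoom(lr_image, factor, order=3)
    blurred = ndimage.gaussian_filter(hr, sigma=1.0)
    return hr + sharpen * (hr - blurred)   # unsharp masking

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

hr_ref = np.random.rand(128, 128) * 255    # stand-in reference image
lr = ndimage.zoom(hr_ref, 0.5, order=3)    # decimation step
hr_est = bspline_super_resolve(lr, factor=2)
print("PSNR (dB):", psnr(hr_ref, hr_est))  # shapes match after down/up by 2x
```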

Keywords: advanced b-spline, image super-resolution, mean square error (MSE), peak signal to noise ratio (PSNR), resolution down converter

Procedia PDF Downloads 386
2500 Comparison of Machine Learning and Deep Learning Algorithms for Automatic Classification of 80 Different Pollen Species

Authors: Endrick Barnacin, Jean-Luc Henry, Jimmy Nagau, Jack Molinie

Abstract:

Palynology is a field of interest in many disciplines due to its multiple applications: chronological dating, climatology, allergy treatment, and honey characterization. Unfortunately, the analysis of a pollen slide is a complicated and time-consuming task that requires the intervention of experts in the field, who are becoming increasingly rare due to economic and social conditions. That is why the need for automation of this task is urgent. Many studies have investigated the subject using different standard image processing descriptors, and sometimes hand-crafted ones. In this work, we make a comparative study between classical feature extraction methods (shape, GLCM, LBP, and others) and deep learning (CNN, autoencoders, transfer learning) on a recognition task over 80 regional pollen species. It has been found that transfer learning appears to be more precise than the other approaches.
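
To make the "classical" side of the comparison concrete, here is a hedged sketch of GLCM and LBP texture features (two of the descriptors named above) feeding a conventional classifier; the image data, feature choices, and classifier are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def texture_features(gray_image):
    """Classical descriptors: GLCM statistics plus an LBP histogram."""
    glcm = graycomatrix(gray_image, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(gray_image, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, hist])

# Hypothetical dataset: one grayscale crop per pollen grain, 80 classes.
images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8)
          for _ in range(400)]
labels = np.random.randint(0, 80, 400)
X = np.array([texture_features(im) for im in images])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
```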

Keywords: pollen identification, feature extraction, pollen classification, automated palynology

Procedia PDF Downloads 120
2499 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within acceptable error bands to meet the safety demands of the RTP. Currently, the system's power tracking performance could be considered unsatisfactory, and there is significant room for improvement. Hence, a new core power control design is very important for improving the current tracking and regulating performance by controlling the movement of the control rods in a way that suits the demands of highly sensitive nuclear reactor power control. In this paper, a Model Predictive Control (MPC) law is applied to control the core power. The model for core power control is based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core are based on a point kinetics model, thermal hydraulic models, and reactivity models. The proposed MPC is presented in a transfer function model of the reactor core derived according to perturbation theory. The transfer function model-based predictive control (TFMPC) is developed to design the core power control with predictions based on a T-filter, towards real-time implementation of MPC on hardware. This paper introduces sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. Comparisons of both tracking and regulating performance between the conventional controller and TFMPC are made using MATLAB and analysed. In conclusion, the proposed TFMPC shows satisfactory performance in tracking and regulating core power, allowing the nuclear reactor to be controlled with high reliability and safety.
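
The paper's reactor models are not reproduced here. Purely to illustrate receding-horizon control on a transfer-function model, the sketch below applies unconstrained MPC to a hypothetical first-order discrete model; the coefficients, horizon, and penalty are stand-in assumptions, not the RTP model:

```python
import numpy as np

# Toy discrete model of power vs. rod-drive input: y[k+1] = a*y[k] + b*u[k]
a, b = 0.95, 0.08
N = 15        # prediction horizon
lam = 0.1     # input penalty weight

def mpc_step(y_now, setpoint):
    """One receding-horizon step: minimise sum (y - r)^2 + lam*u^2 over
    the horizon, then apply only the first input (receding horizon)."""
    # For this first-order model: y[k+j] = a^j y_now + sum_i a^(j-1-i) b u[i]
    Phi = np.array([[a ** (j - 1 - i) * b if i < j else 0.0
                     for i in range(N)] for j in range(1, N + 1)])
    free = np.array([a ** j * y_now for j in range(1, N + 1)])
    r = np.full(N, setpoint)
    H = Phi.T @ Phi + lam * np.eye(N)      # unconstrained quadratic programme
    u = np.linalg.solve(H, Phi.T @ (r - free))
    return u[0]

y = 0.2
for k in range(100):
    u = mpc_step(y, setpoint=1.0)
    y = a * y + b * u                      # plant update (model-matched here)
print("final power fraction:", round(y, 3))
```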

Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC

Procedia PDF Downloads 224
2498 Scalable Learning of Tree-Based Models on Sparsely Representable Data

Authors: Fares Hedayatit, Arnauld Joly, Panagiotis Papadimitriou

Abstract:

Many machine learning tasks, such as text annotation, usually require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than a function of the number of non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude, and it has been incorporated in the current version of scikit-learn.org, the most popular open source Python machine learning library.
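
From the user's side, exploiting this support amounts to passing a SciPy sparse matrix straight into a scikit-learn tree ensemble; the data sizes and hyperparameters below are illustrative assumptions:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical bag-of-words style data: 2k samples, 20k features,
# ~0.1% non-zeros, stored in CSR format so only non-zeros are visited.
X = sparse_random(2_000, 20_000, density=0.001, format="csr", random_state=0)
y = np.random.randint(0, 2, size=2_000)

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)   # the tree splitters accept CSR input directly
```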

Keywords: big data, sparsely representable data, tree-based models, scalable learning

Procedia PDF Downloads 247
2497 Empirical Mode Decomposition Based Denoising by Customized Thresholding

Authors: Wahiba Mohguen, Raïs El’hadi Bekka

Abstract:

This paper presents a denoising method called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). Then, all the noisy IMFs are thresholded by applying the presented thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared with EMD-based signal denoising methods using soft and hard thresholding. The results show the superior performance of the proposed EMD-Custom denoising over the traditional approaches. Performance was evaluated in terms of SNR (in dB) and Mean Square Error (MSE).
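
A rough sketch of the overall pipeline follows, assuming the third-party PyEMD package (pip name "EMD-signal") for the decomposition; the threshold function below is a generic smooth compromise between hard and soft thresholding, standing in for the paper's customized function, whose exact form is not reproduced here:

```python
import numpy as np
from PyEMD import EMD   # assumption: PyEMD ("EMD-signal") is installed

def custom_threshold(imf, thr, alpha=0.5):
    """Stand-in threshold between hard and soft: values above thr are
    shrunk by alpha*thr instead of thr (soft) or 0 (hard)."""
    return np.where(np.abs(imf) > thr,
                    np.sign(imf) * (np.abs(imf) - alpha * thr), 0.0)

t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.randn(t.size)

imfs = EMD()(noisy)                      # adaptive decomposition into IMFs
den = np.zeros_like(noisy)
for imf in imfs:
    sigma = np.median(np.abs(imf)) / 0.6745          # per-IMF noise level
    thr = sigma * np.sqrt(2 * np.log(imf.size))      # universal threshold
    den += custom_threshold(imf, thr)

snr_db = 10 * np.log10(np.sum(clean**2) / np.sum((den - clean)**2))
print("output SNR (dB):", round(snr_db, 2))
```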

Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft-thresholding

Procedia PDF Downloads 292
2496 A New Computational Package for Using in CFD and Other Problems (Third Edition)

Authors: Mohammad Reza Akhavan Khaleghi

Abstract:

This paper presents changes made to the Reduced Finite Element Method (RFEM) whose result, the author argues, is the most powerful numerical method proposed so far (some forms of the method can treat the most complex equations as simply as the Laplace equation). The Finite Element Method (FEM) is a powerful numerical method that has been used successfully for the solution of existing problems in various scientific and engineering fields, including applications in CFD. Many algorithms have been formulated based on FEM, but none have been used in popular CFD software. In this domain, the Finite Volume Method (FVM) holds a full monopoly, owing to its better efficiency and adaptability to the physics of the problems in comparison with FEM. It does not seem that FEM could compete with FVM unless it were fundamentally changed. This paper presents those changes; the result is a powerful method with much better performance than FVM and other computational methods across all subjects. This method is intended not to compete with the finite volume method but to replace it.

Keywords: reduced finite element method, new computational package, new finite element formulation, new higher-order form, new isogeometric analysis

Procedia PDF Downloads 97
2495 Study on the Situation between France and the South China Sea from the Perspective of Balance of Power Theory

Authors: Zhenyi Chen

Abstract:

With the rise of China and the escalation of tension between China and the United States, European countries led by Great Britain, France, and Germany pay increasing attention to the regional situation in the Asia-Pacific (now known as the 'Indo-Pacific'). Among the areas concerned, the South China Sea (SCS) is one of the main areas disputed by China, the United States, Southeast Asian countries, and some European countries. Western countries worry that the rise of China's military power will break the stability of the situation in the SCS and alter the balance of power among the major powers. Therefore, they try to balance China's rise through alliances. In its Indo-Pacific strategy, France aims to build a regional order with the alliance of France, India, and Australia at its core, and regularly carries out military exercises targeting the SCS with the United States, Japan, and Southeast Asian countries. For China, instability in the SCS could also threaten the security of the southeast coastal areas and Taiwan, affect China's peaceful development process, and pose a threat to China's territorial sovereignty. This paper aims to study the activities and motivations of France in the South China Sea and to place the situation in the SCS within the perspective of balance of power theory, focusing on China, America, and France. More specifically, this paper first briefly introduces balance of power theory, then describes France's new trends in recent years, followed by an analysis of the motivations behind France's increasing involvement in the SCS, and finally analyzes the situation in the SCS from the perspective of balance of power theory. It is argued that the great powers are carefully maintaining the balance of military power in the SCS, and it is highly possible that this trend will last in the medium and long term, particularly via military deployments and strategic alliances.

Keywords: South China Sea, France, China, balance of power theory, Indo-Pacific

Procedia PDF Downloads 160
2494 The Antecedents of Internet Addiction toward Smartphone Usage

Authors: Pui-Lai To, Chechen Liao, Hen-Yi Huang

Abstract:

Twenty years after the development of the Internet, scholars have started to identify the negative impacts it brings. Overuse of the Internet can develop into Internet dependency and, in turn, cause addictive behavior. Understanding the phenomenon of Internet addiction is therefore important. With the joint efforts of experts and scholars, Internet addiction has been officially listed as a symptom that affects public health, and the diagnosis, causes, and treatment of the symptom have also been explored. On the other hand, in the area of smartphone Internet usage, most studies still focus on the motivational factors of smartphone usage; not much research has been done on smartphone Internet addiction. In view of the increasing adoption of smartphones, this paper aims to find out whether smartphone Internet addiction exists in modern society. This study adopted an online survey targeting users with smartphone Internet experience. A total of 434 effective samples were collected. For data analysis, Partial Least Squares (PLS) in Structural Equation Modeling (SEM) was used for sample analysis and research model testing. The software chosen for statistical analysis was SPSS 20.0 for Windows and SmartPLS 2.0. The results show that smartphone users who access Internet services via smartphone can indeed develop smartphone Internet addiction. Factors including flow experience, depression, virtual social support, smartphone Internet affinity, and maladaptive cognition all have a significant and positive influence on smartphone Internet addiction. In the scenario of smartphone Internet use, descriptive norm has a positive and significant influence on perceived playfulness, while perceived playfulness in turn has a significant and positive influence on flow experience. Depression, on the other hand, is negatively influenced by actual social support and positively influenced by virtual social support.

Keywords: internet addiction, smartphone usage, social support, perceived playfulness

Procedia PDF Downloads 226
2493 Simulation Approach for a Comparison of Linked Cluster Algorithm and Clusterhead Size Algorithm in Ad Hoc Networks

Authors: Ameen Jameel Alawneh

Abstract:

A mobile ad-hoc network (MANET) is a collection of wireless mobile hosts that dynamically form a temporary network without the aid of a system administrator; it has neither fixed infrastructure nor pre-established wireless sessions. It inherently reaches several nodes with a single transmission, and each node functions as both a host and a router. The network may be represented as a set of clusters, each managed by a clusterhead. The cluster size is not fixed; it depends on the movement of nodes. We propose a clusterhead size algorithm (CHSize). This clustering algorithm can be used by several routing algorithms for ad hoc networks. An elected clusterhead is assigned for communication with all other clusters. Analysis and simulation of the algorithm, implemented using the GloMoSim network simulator, MATLAB, and Maple 11, show that the proposed algorithm achieves its goals.
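
CHSize itself is not specified in the abstract; as a hedged sketch of the general clusterhead-election idea it builds on, the snippet below runs a simple highest-degree election over randomly placed nodes (node layout, radio range, and the election rule are all illustrative assumptions):

```python
import random

def elect_clusterheads(nodes, radio_range):
    """Highest-degree election: each node counts its one-hop neighbours;
    uncovered nodes with the locally highest degree become clusterheads."""
    def neighbours(n):
        return [m for m in nodes if m is not n and
                (n["x"] - m["x"]) ** 2 + (n["y"] - m["y"]) ** 2
                <= radio_range ** 2]

    degree = {n["id"]: len(neighbours(n)) for n in nodes}
    heads, covered = [], set()
    for n in sorted(nodes, key=lambda n: -degree[n["id"]]):
        if n["id"] not in covered:
            heads.append(n["id"])                     # n becomes a clusterhead
            covered.add(n["id"])
            covered.update(m["id"] for m in neighbours(n))
    return heads

nodes = [{"id": i, "x": random.uniform(0, 100), "y": random.uniform(0, 100)}
         for i in range(30)]
print("clusterheads:", elect_clusterheads(nodes, radio_range=25))
```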

Keywords: simulation, MANET, Ad-hoc, cluster head size, linked cluster algorithm, loss and dropped packets

Procedia PDF Downloads 376
2492 Modeling and Design of a Solar Thermal Open Volumetric Air Receiver

Authors: Piyush Sharma, Laltu Chandra, P. S. Ghoshdastidar, Rajiv Shekhar

Abstract:

Metals processing operations such as melting and heat treatment are energy-intensive, requiring temperatures greater than 500°C. The desired temperature in these industrial furnaces is attained by circulating electrically heated air, and in most of these furnaces, electricity produced by captive coal-based thermal power plants is used. Solar thermal energy could be a viable heat source in these furnaces. A retrofitted solar convective furnace (SCF) concept, which uses solar thermally generated hot air, has been proposed. Critical to the success of an SCF is the design of an open volumetric air receiver (OVAR), which can heat air in excess of 800°C. The OVAR is placed on top of a tower and receives concentrated solar radiation from a heliostat field. Absorbers, the mixer assembly, and the return air flow chamber (RAFC) are its major components. The absorber is a porous structure that transfers heat from concentrated solar radiation to ambient air, referred to as primary air. The mixer ensures a uniform air temperature at the receiver exit. The flow of relatively cooler return air in the RAFC ensures that the absorbers do not fail by overheating. In an earlier publication, the detailed design basis, fabrication, and characterization of a 2 kWth OVAR-based laboratory solar air tower simulator were presented. The major objective of this investigation has been the development of an experimentally validated, CFD-based mathematical model which can ultimately be used for the design and scale-up of an OVAR. In contrast to the published literature, where flow and heat transfer have been modeled primarily in a single absorber module, the present study has modeled the entire receiver assembly, including the RAFC. Flow and heat transfer calculations have been carried out in ANSYS using the LTNE model. The complex return air flow pattern in the RAFC requires complicated meshes and is computation- and time-intensive. Hence, a simple, realistic 1-D mathematical model, which circumvents the need for detailed flow and heat transfer calculations, has also been proposed. Several important results have emerged from this investigation. Circumferential electrical heating of absorbers can mimic frontal heating by concentrated solar radiation reasonably well when testing and characterizing the performance of an OVAR; circumferential heating therefore obviates the need for expensive high-solar-concentration simulators. Predictions suggest that the ratio of power on aperture (POA) to mass flow rate of air (MFR) is a normalizing parameter for characterizing the thermal performance of an OVAR. Increasing POA/MFR increases the maximum air temperature but decreases the thermal efficiency of an OVAR. Predictions of the 1-D mathematical model are within 5% of the ANSYS predictions, and the computation time is reduced from ~5 hours to a few seconds.
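
The paper's 1-D model is not reproduced here; as a minimal stand-in that shows why POA/MFR acts as the driver of the air temperature rise, consider a lumped energy balance on the primary air stream (optical efficiency, loss term, and all numbers below are assumptions for illustration):

```python
def absorber_outlet_temperature(poa, mfr, t_in=300.0, eta_opt=0.8,
                                cp=1005.0, q_loss=0.0):
    """Lumped energy balance on a volumetric receiver: absorbed power
    heats the primary air stream, so T_out - T_in scales with POA/MFR."""
    q_absorbed = eta_opt * poa - q_loss      # W, crude loss model
    return t_in + q_absorbed / (mfr * cp)    # K

# Sweeping MFR at fixed POA: larger POA/MFR ratios give hotter outlet air.
for mfr in (0.002, 0.004, 0.008):            # kg/s
    t_out = absorber_outlet_temperature(poa=2000.0, mfr=mfr)
    print(f"POA/MFR = {2000.0 / mfr:9.0f} W*s/kg -> T_out = {t_out:6.1f} K")
```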

Keywords: absorbers, mixer assembly, open volumetric air receiver, return air flow chamber, solar thermal energy

Procedia PDF Downloads 179
2491 Cryptography and Cryptosystem a Panacea to Security Risk in Wireless Networking

Authors: Modesta E. Ezema, Chikwendu V. Alabekee, Victoria N. Ishiwu, Ifeyinwa NwosuArize, Chinedu I. Nwoye

Abstract:

The advent of wireless networking in computing technology cannot be overemphasized: it opened up easy access to information resources, made networking easier, and brought Internet accessibility to our doorsteps. Despite all this, it also brought problems that are causing mayhem in today's overall information security. Cyber criminals will always compromise the integrity of a message that is not encrypted or that is encrypted with a weak algorithm. To address these problems, this study focuses on cryptosystems and cryptography, which ensure end-to-end encrypted messaging. Various cryptographic algorithms, as well as the techniques and applications of cryptography for efficiency, were all considered in the work. Present and future applications of cryptography were dealt with as well, and quantum cryptography was presented as the current and future direction in the development of cryptography. An empirical study was conducted to collect data from network users.
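
As a tiny illustration of message integrity and confidentiality with a modern symmetric cryptosystem (not a scheme from the paper), the Python cryptography library's Fernet recipe combines AES encryption with an authentication tag, so tampered ciphertexts are rejected on decryption:

```python
from cryptography.fernet import Fernet  # authenticated symmetric encryption

key = Fernet.generate_key()      # shared secret; distribute over a safe channel
cipher = Fernet(key)

token = cipher.encrypt(b"meter reading 42 over the wireless link")
print(cipher.decrypt(token))     # b'meter reading 42 over the wireless link'
# Any modification of `token` makes decrypt() raise InvalidToken.
```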

Keywords: algorithm, cryptography, cryptosystem, network

Procedia PDF Downloads 329
2490 Malware Detection in Mobile Devices by Analyzing Sequences of System Calls

Authors: Jorge Maestre Vidal, Ana Lucila Sandoval Orozco, Luis Javier García Villalba

Abstract:

With the increase in popularity of mobile devices, new and varied forms of malware have emerged. Consequently, cyberdefense organizations have echoed the need to deploy more effective defensive schemes adapted to the challenges posed by these new monitoring environments. In order to contribute to their development, this paper presents a malware detection strategy for mobile devices based on sequence alignment algorithms. Unlike previous proposals, only the system calls performed during the startup of applications are studied. In this way, it is possible to efficiently study in depth the sequences of system calls executed by applications just downloaded from app stores, initializing them in a secure and isolated environment. As demonstrated in the experiments performed, most of the analyzed malicious activities were successfully identified in their boot processes.
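
The paper's specific alignment algorithm and scoring are not reproduced here; as a generic sketch of the idea, the snippet below computes a Needleman-Wunsch global alignment score between a boot-time system-call trace and a hypothetical malware signature sequence, where a high score would flag the trace as suspicious:

```python
def alignment_score(seq_a, seq_b, match=2, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score between two system-call
    sequences, computed with standard dynamic programming."""
    n, m = len(seq_a), len(seq_b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (
                match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    return score[n][m]

boot_trace = ["open", "read", "socket", "connect", "write", "close"]
malware_sig = ["open", "socket", "connect", "write"]   # hypothetical signature
print(alignment_score(boot_trace, malware_sig))
```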

Keywords: android, information security, intrusion detection systems, malware, mobile devices

Procedia PDF Downloads 286
2489 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
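
A hedged sketch of the model-combination idea follows: the three algorithm families named above are averaged via soft voting in scikit-learn. The features and labels are entirely synthetic placeholders, not the study's Los Angeles data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical features: season, weekday/weekend flag, forecast temperature
# and wind, yesterday's pollutant levels; label = good/poor AQI category.
rng = np.random.default_rng(0)
X = rng.random((2000, 6))
y = (X[:, 4] + 0.3 * rng.standard_normal(2000) > 0.5).astype(int)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("nn", MLPClassifier(max_iter=1000, random_state=0))],
    voting="soft")   # average predicted probabilities across the models
print("accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```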

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 103
2488 The Boundary Element Method in Excel for Teaching Vector Calculus and Simulation

Authors: Stephen Kirkup

Abstract:

This paper discusses the implementation of the boundary element method (BEM) on an Excel spreadsheet and how it can be used in teaching vector calculus and simulation. There are two separate spreadsheets, within which Laplace's equation is solved by the BEM in two dimensions (LIBEM2) and in axisymmetric three dimensions (LBEMA). The main algorithms are implemented in Excel's associated programming language, Visual Basic for Applications (VBA). The BEM only requires a boundary mesh, and hence it is a relatively accessible method. The BEM in the open spreadsheet environment is demonstrated to be useful as an aid to teaching and learning. The application of the BEM implemented on a spreadsheet for educational purposes in introductory vector calculus and simulation is explored. The development of assignment work is discussed, and sample results from student work are given. The spreadsheets were found to be useful tools in developing the students' understanding of vector calculus and in simulating heat conduction.

Keywords: boundary element method, Laplace’s equation, vector calculus, simulation, education

Procedia PDF Downloads 146
2487 Cohabitation, Ethnicities, and Tolerance: An Anthropologic Approach of Political Conflicts in Mozambique

Authors: Samuel Francisco Ngovene

Abstract:

Mozambique is a country with cultural segregation along its rivers, dividing the main ethnic groups of Machangana, Macena, and Macua across the South, Centre, and North, respectively. This division has led to internal conflicts, seemingly rooted in ethnicity. The aim of this study is to analyze the tolerance of the main ethnic groups in Mozambique in terms of cohabitation, sharing opportunities, and political power. The study utilizes participant observation in the field, group discussions, and a questionnaire targeting 150 respondents, split into 50 for each ethnic group. The study finds that people in Mozambique are generally tolerant of cohabiting or marrying individuals from different ethnic groups. However, when it comes to sharing opportunities such as employment or business, there is a perception that individuals from different ethnic groups may be taking away opportunities. Similarly, each ethnic group believes that having a president from their own group would lead to better opportunities for their community. The study highlights the importance of addressing this intolerance, as it can be a source of internal political conflicts; the anthropological approach provides a valuable tool for diplomatic channels to ensure long-lasting peace. Analysis procedures: the data collected through participant observation and group discussions are analytically cross-checked by comparing the opinions of people from different ethnic groups, while the data from the questionnaire are analyzed statistically to understand the level of tolerance among the ethnic groups and their perceptions of sharing opportunities and political power. The study addresses the question of whether the main ethnic groups in Mozambique are tolerant of cohabitation, sharing opportunities, and political power among themselves. It concludes that while there is overall tolerance for cohabitation and marriage across ethnic groups, there is also a perception that individuals from different ethnic groups may take away opportunities. The study suggests that cultural education from a young age may be an effective way to promote tolerance.

Keywords: cohabitation, ethnicities, Mozambique, political conflicts, tolerance

Procedia PDF Downloads 34
2486 A Structure-Switching Electrochemical Aptasensor for Rapid, Reagentless and Single-Step, Nanomolar Detection of C-Reactive Protein

Authors: William L. Whitehouse, Louisa H. Y. Lo, Andrew B. Kinghorn, Simon C. C. Shiu, Julian. A. Tanner

Abstract:

C-reactive protein (CRP) is an acute-phase reactant and a sensitive indicator for sepsis and other life-threatening pathologies, including systemic inflammatory response syndrome (SIRS). Currently, clinical turnaround times for established CRP detection methods range from 30 minutes to hours, or even days, from centralized laboratories. Here, we report the development of an electrochemical biosensor using redox-probe-tagged DNA aptamers functionalized onto cheap, commercially available screen-printed electrodes. Binding-induced conformational switching of the CRP-targeting aptamer induces a specific and selective signal-ON event, which enables single-step and reagentless detection of CRP in as little as 1 minute. The aptasensor dynamic range spans 5-1000 nM (R=0.97), or 5-500 nM (R=0.99) in 50% diluted human serum, with a LOD of 3 nM, corresponding to a sensitivity 2 orders of magnitude below the clinically relevant cut-off for CRP. The sensor is stable for up to one week and can be reused numerous times, as judged from repeated real-time dosing and dose-response assays. By decoupling binding events from the signal induction mechanism, structure-switching electrochemical aptamer-based sensors (SS-EABs) provide considerable advantages over their adsorption-based counterparts. Our work expands the repertoire of such sensors reported in the literature and is the first instance of an SS-EAB for reagentless CRP detection. We hope this study can inspire further investigations into the suitability of SS-EABs for diagnostics, which will aid translational R&D toward fully realized devices aimed at point-of-care applications or for broader use by the public.

Keywords: structure-switching, C-reactive protein, electrochemical, biosensor, aptasensor

Procedia PDF Downloads 47
2485 Nanopharmaceutical: A Comprehensive Appearance of Drug Delivery System

Authors: Mahsa Fathollahzadeh

Abstract:

The various nanoparticles employed in drug delivery applications include micelles, liposomes, solid lipid nanoparticles, polymeric nanoparticles, functionalized nanoparticles, nanocrystals, cyclodextrins, dendrimers, and nanotubes. Micelles, composed of amphiphilic block copolymers, can encapsulate hydrophobic molecules, allowing for targeted delivery. Liposomes, vesicular structures made up of phospholipids, can encapsulate both hydrophobic and hydrophilic molecules, providing a flexible platform for delivering therapeutic agents. Solid lipid nanoparticles (SLNs) and nanostructured lipid carriers (NLCs) are designed to improve the stability and bioavailability of lipophilic drugs. Polymeric nanoparticles, such as those based on poly(lactic-co-glycolic acid) (PLGA), are biodegradable and can be engineered to release drugs in a controlled manner. Functionalized nanoparticles, coated with targeting ligands or antibodies, can specifically target diseased cells or tissues. Nanocrystals, engineered to have specific surface properties, can enhance the solubility and bioavailability of poorly soluble drugs. Cyclodextrins, doughnut-shaped molecules with hydrophobic cavities, can be complexed with hydrophobic molecules, improving their solubility and bioavailability. Dendrimers, branched polymers with a central core, can be designed to deliver multiple therapeutic agents simultaneously. Nanotubes and metallic nanoparticles, such as gold nanoparticles, offer real-time tracking capabilities and can be used to detect biomolecular interactions. The use of these nanoparticles has revolutionized the field of drug delivery, enabling targeted and controlled release of therapeutic agents, reduced toxicity, and improved patient outcomes.

Keywords: nanotechnology, nanopharmaceuticals, drug-delivery, proteins, ligands, nanoparticles, chemistry

Procedia PDF Downloads 24
2484 Empirical Evaluation of Gradient-Based Training Algorithms for Ordinary Differential Equation Networks

Authors: Martin K. Steiger, Lukas Heisler, Hans-Georg Brachtendorf

Abstract:

Deep neural networks and their variants form the backbone of many AI applications. Based on so-called residual networks, a continuous formulation of such models as ordinary differential equations (ODEs) has proven advantageous, since different techniques may be applied that significantly increase learning speed while enabling controlled trade-offs against the resulting error. For the evaluation of such models, high-performance numerical differential equation solvers are used, which also provide the gradients required for training. However, whether classical gradient-based methods are even applicable, and which one yields the best results, has not been discussed yet. This paper aims to remedy this situation by providing empirical results for different applications.
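
A minimal sketch of the setup being evaluated follows: an ODE block (the continuous analogue of a residual block) integrated with a fixed-step RK4 scheme in PyTorch, trained once with SGD and once with Adam. Everything here (network size, task, step count) is an illustrative assumption, not the paper's benchmark; dedicated libraries add adaptive solvers and adjoint gradients on top of this idea.

```python
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    """Residual block in continuous form: dy/dt = f(y), integrated over
    t in [0, 1] with a fixed-step fourth-order Runge-Kutta scheme."""
    def __init__(self, dim, steps=4):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(),
                               nn.Linear(32, dim))
        self.steps = steps

    def forward(self, y):
        h = 1.0 / self.steps
        for _ in range(self.steps):
            k1 = self.f(y)
            k2 = self.f(y + 0.5 * h * k1)
            k3 = self.f(y + 0.5 * h * k2)
            k4 = self.f(y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        return y

def train(optimiser_cls, **kw):
    torch.manual_seed(0)
    X = torch.randn(256, 2)
    y = (X[:, 0] * X[:, 1] > 0).float()        # toy XOR-like labels
    model = nn.Sequential(ODEBlock(2), nn.Linear(2, 1))
    opt = optimiser_cls(model.parameters(), **kw)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()                        # gradients flow through RK4
        opt.step()
    return loss.item()

for cls, kw in [(torch.optim.SGD, {"lr": 0.1}),
                (torch.optim.Adam, {"lr": 0.01})]:
    print(cls.__name__, "final loss:", round(train(cls, **kw), 4))
```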

Keywords: deep neural networks, gradient-based learning, image processing, ordinary differential equation networks

Procedia PDF Downloads 144
2483 An Enhanced Particle Swarm Optimization Algorithm for Multiobjective Problems

Authors: Houda Abadlia, Nadia Smairi, Khaled Ghedira

Abstract:

Multiobjective Particle Swarm Optimization (MOPSO) has shown effective performance in solving test functions and real-world optimization problems. However, the method has a premature convergence problem, which may lead to a lack of diversity. In order to improve its performance, this paper presents a hybrid approach which embeds MOPSO in the island model and integrates a local search technique, Variable Neighborhood Search, to enhance diversity in the swarm. Experiments on two series of test functions show the effectiveness of the proposed approach. A comparison with other evolutionary algorithms shows that the proposed approach performs well in solving multiobjective optimization problems.

Keywords: particle swarm optimization, migration, variable neighborhood search, multiobjective optimization

Procedia PDF Downloads 155
2482 Load Balancing Algorithms for SIP Server Clusters in Cloud Computing

Authors: Tanmay Raj, Vedika Gupta

Abstract:

With its groundbreaking and substantial power, cloud computing is one of today's most popular breakthroughs. It is a form of Internet-based computing that allows users to request and receive numerous services in a cost-effective manner. Virtualization, grid computing, and utility computing are the most widely employed emerging technologies in cloud computing, making it the most powerful. However, cloud computing still has a number of key challenges, such as security, load balancing, and non-critical failure adaptation, to name a few. The massive growth of cloud computing will put an undue strain on servers, and as a result network performance will deteriorate. A good load balancing scheme can make cloud computing more productive and increase client satisfaction. Load balancing is an important part of cloud computing because it prevents certain nodes from being overwhelmed while others are idle or have little work to perform. Response time, cost, throughput, performance, and resource usage are all parameters that may be improved using load balancing.
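
The abstract does not name a specific algorithm; as a generic illustration of one common policy for SIP server clusters, the sketch below dispatches each incoming call to the server with the fewest active sessions, falling back to round-robin on ties (server names and the policy choice are illustrative assumptions):

```python
import itertools

class LeastConnectionsBalancer:
    """Dispatch each incoming SIP call to the server currently holding
    the fewest active sessions; ties are broken round-robin."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}
        self._rr = itertools.cycle(servers)

    def assign(self):
        least = min(self.active.values())
        candidates = {s for s, n in self.active.items() if n == least}
        server = next(s for s in self._rr if s in candidates)
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1   # call ended, free a session slot

lb = LeastConnectionsBalancer(["sip-1", "sip-2", "sip-3"])
print([lb.assign() for _ in range(6)])   # calls spread evenly across servers
```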

Keywords: cloud computing, load balancing, computing, SIP server clusters

Procedia PDF Downloads 104