Search results for: urban computation.
364 Tracing Syrian Refugees' Urban Mobilities: The Case of Egypt and Canada
Authors: N. Elgendy, N. Hussein
Abstract:
The current Syrian crisis has caused unprecedented practices of global mobility. The process of forced eviction and the resettlement of refugees can be seen through the insights of the “new mobilities paradigm”. The mobility of refugees, in terms of both meaning and practice, is a subject that calls for further study. There is a need to develop an approach to human mobility in order to understand a practice that is turning into a defining phenomenon of the 21st century. This paper aims at studying, from a qualitative point of view, the process of movement within the six constituents of mobility, defined as the first phase of a refugee's journey. The second phase covers the process of settling in and re-defining the host country as a new “home” for refugees. The change in the refugee's state of mind, and the crossing of physical and mental borders from “foreigner” to citizen, is encouraged both by governmental policies and by the local communities' efforts to embrace these newcomers. The paper focuses on these policies of social and economic integration. The concept of integration connotes the idea that refugees would enjoy the opportunities, rights and services available to the citizens of their new community. This paper therefore examines the concept by showcasing the two hosting countries of Canada and Egypt, as they provide two contrasting situations in terms of cultural, geographical, economic and political backgrounds. The analysis highlights the specific policies defined towards refugees, including mass communication, media calls, and access to employment. This research is part of a qualitative research project on the process of urban mobility practiced by Syrian refugees, drawing on conversational interviews with new settlers who have moved from their homes in Syria to the different hosting countries. It explores these immigrants' practical and emotional relationships with the process of movement and settlement, and uses the conversational interviews as a tool to document, analyse and draw relationships, in an attempt to establish an understanding of the factors that contribute to the new settlers' feeling of home and integration within the new community.
Keywords: Mobility, refugees, home, integration.
363 Relevance for Traditional Medicine in South Africa: Experiences of Urban Traditional Healers, Izinyanga
Authors: Ntokozo Mthembu
Abstract:
Access to relevant health care indicates people's likelihood of survival, and this includes the craft of indigenous healing and its related practitioners, izinyanga. The emergence of the dreaded novel coronavirus, COVID-19, which has engulfed almost the whole world, has necessitated a revisiting of the state of traditional healers in South Africa. This circumstance has tended to expose the reality of social settings in various social structures and related policies, including the manner in which coloniality reveals itself in the unequal treatment of Western-based and African-based therapeutic practices in this country. In attempting to gain a better understanding of such experiences, primary and secondary sources were consulted when collecting data, including a perusal of various literature and face-to-face interviews with traditional healers working on the streets of Tshwane Municipality in South Africa. Preliminary findings revealed that the emergence of this deadly virus coincided with the moment when the government agenda was focused on fulfilling its promise of addressing past inequitable practices, including the transformation of the medical sector. This scenario can be witnessed in the manner in which the government and related agencies, such as the health department, keep undermining indigenous healing practice, irrespective of its historical record in the healing profession and in fighting various diseases since before the time of Imhotep, the father of medicine. Based on these preliminary findings, it is recommended that the government hasten the incorporation of African knowledge systems, especially medicine, to offer alternative and diverse options and to draw on the underutilised indigenous African therapeutic approaches and relevant skills that could be useful in combating ailments such as COVID-19. Plural medical systems should be recognised and related policies formulated to guarantee mutual respect among citizens and the incorporation of indigenous healing practices in the health sector of South Africa, Africa and the broader global community.
Keywords: Indigenous healing practice, inyanga, COVID-19, therapeutic, urban, experience.
362 Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis
Authors: Isao Taguchi, Yasuo Sugai
Abstract:
This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output-layer unit. Compared with more recent neural networks, such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When the learning objects are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics to an output-layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors classifies the learning data for function approximation problems in a phased manner, focusing on their learnability prior to learning in the multilayered neural network, and its validity is demonstrated. Specifically, this paper verifies by computer experiments that both learning accuracy and learning time are improved when the BP method is used as the learning rule within the multi-stage learning method. During learning, the oscillatory phenomena of the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which these oscillatory phenomena occur during learning. Furthermore, by observing behaviors during learning, the authors discuss the reasons why the errors of some data remain large even after learning.
Keywords: data selection, function approximation problem, multi-stage learning, neural network, voluntary oscillation.
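As a concrete illustration of the staged idea described in this abstract, the following is a minimal sketch that assumes scikit-learn's MLPRegressor as a stand-in for the authors' BP-trained layered network; the split of the data by per-sample error and the median threshold are illustrative assumptions, not the authors' procedure.

```python
# Minimal sketch of staged training for a function-approximation problem.
# Assumptions (not from the paper): scikit-learn MLPRegressor stands in for
# the authors' BP-trained layered network; the error threshold is illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=500)   # target function

net = MLPRegressor(hidden_layer_sizes=(20,), solver="sgd", learning_rate_init=0.05,
                   max_iter=300, warm_start=True, random_state=0)

# Stage 1: a preliminary pass over all data to identify the hard samples.
net.fit(X, y)
errors = np.abs(net.predict(X) - y)
easy = errors <= np.median(errors)        # samples the network already fits well

# Stage 2: refine on the easy subset first, then reintroduce the hard samples.
net.fit(X[easy], y[easy])                 # warm_start continues from stage 1
net.fit(X, y)                             # final stage over the full data set

print("final mean absolute error:", np.abs(net.predict(X) - y).mean())
```

The point of the sketch is only the control flow: a cheap preliminary pass ranks the samples by learnability, and the network then sees them in stages rather than all at once.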
361 Multipath Routing Protocol Using Basic Reconstruction Routing (BRR) Algorithm in Wireless Sensor Network
Authors: K. Rajasekaran, Kannan Balasubramanian
Abstract:
A sensor network consists of multiple detection locations called sensor nodes, each of which is tiny, lightweight and portable. Single-path routing protocols in wireless sensor networks can lead to holes in the network, since only the nodes on the single path are used for data transmission. Apart from advantages such as reduced computation, complexity and resource utilization, they have drawbacks such as reduced throughput, increased traffic load and delays in data delivery. Therefore, multipath routing protocols are preferred for WSNs. Distributing the traffic among multiple paths increases the network lifetime. We propose a scheme in which data are transmitted through a dominant path to save energy. In order to obtain a high delivery ratio, a basic route reconstruction protocol is utilized to reconstruct the path whenever a failure is detected. A basic reconstruction routing (BRR) algorithm is proposed, in which a node can leap over a path failure by using the routing information already existing in its neighbourhood while the collected data are transmitted from the source to the sink. In order to save energy and attain a high data delivery ratio, data are transmitted along multiple paths, which is achieved by the BRR algorithm whenever a failure is detected. Further, an analysis of how the proposed protocol overcomes the drawbacks of existing protocols is presented. The performance of our protocol is compared to AOMDV and the energy efficient node-disjoint multipath routing protocol (EENDMRP). The system is implemented using NS-2.34. The simulation results show that the proposed protocol achieves a high delivery ratio with low energy consumption.
Keywords: Multipath routing, WSN, energy efficient routing, alternate route, assured data delivery.
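The route-repair idea, leaping over a failed link by reusing routing information already present in the neighbourhood, can be illustrated with a toy example; the topology, data structures and function below are assumptions made purely for illustration and are not the BRR specification from the paper.

```python
# Illustrative sketch of repairing a path around a failed link by reusing
# neighbours' existing routing information. Topology and names are assumed
# for illustration only; this is not the BRR algorithm from the paper.
routing_table = {            # node -> known next hops toward the sink
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["SINK"],
    "E": ["SINK"],
}
failed_links = {("B", "D")}  # a detected link failure

def forward(node, path):
    """Greedily forward toward the sink, leaping over failed links."""
    if node == "SINK":
        return path
    for nxt in routing_table.get(node, []):
        if (node, nxt) in failed_links or nxt in path:
            continue                     # skip broken links and loops
        result = forward(nxt, path + [nxt])
        if result:
            return result
    return None                          # no neighbour could repair the route

print(forward("A", ["A"]))               # e.g. ['A', 'C', 'D', 'SINK']
```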
360 A Traditional Settlement in a Modernized City: Yanbu, Saudi Arabia
Authors: Hisham Mortada
Abstract:
Transition in the urban configuration of Arab cities has never been as radical and visible as it has been since the turn of the last century. The emergence of new cities near the historical settlements of Arabia has spawned a series of developments in and around the old city precincts. The new developments are based on advanced technology and conform to globally prevalent standards of city planning, superseding the vernacular arrangements based on traditional norms that guided so-called ‘city planning’. Evidence of this fact is the extant Arab buildings present at the urban core of modern cities, which inform us about intricate spatial organization. This organization subscribed to multiple norms, such as satisfying gender segregation and socialization, economic sustainability, and ensuring security and environmental coherence within settlement compounds. Several participating factors achieved harmony in such an inclusive city, an organization that was challenged and apparently replaced by the new planning order in the face of the growing needs of globalized, economy-centric and high-tech models of development. Communities found it difficult to acclimatize to the new Western planning models that were implemented on a very large scale throughout the Kingdom, and which later underwent spatial re-structuring to suit users' needs. A closer look at the ancient city of Yanbu, now flanked by such new developments, allows us to differentiate and track the beginnings of this unprecedented transition in settlement formation. This paper aims to elaborate the Arabian context offered to both the ‘traditional’ and ‘modern’ planning approaches, in order to understand the challenges and solutions offered by each at different times. In the process it also establishes the inconsistencies and conflicts that arose with the shift in planning paradigm, from traditional ‘cultural norms’ to modern ‘physical planning’, in the Arabian context. Thus, by distinguishing the two divergent planning philosophies, their impact on Arabian morphology, their relevance to lifestyle and their suitability to the biophysical environment, it concludes with a perspective on sustainability, particularly in the case of Yanbu.
Keywords: Yanbu, traditional architecture, Hijaz, coral building, Saudi Arabia.
359 Heuristics Analysis for Distributed Scheduling using MONARC Simulation Tool
Authors: Florin Pop
Abstract:
Simulation is a very powerful method for high-performance and high-quality design in distributed systems, and presently perhaps the only one, considering the heterogeneity, complexity and cost of distributed systems. In Grid environments, for example, it is hard and even impossible to perform scheduler performance evaluation in a repeatable and controllable manner, as resources and users are distributed across multiple organizations with their own policies. In addition, Grid test-beds are limited, and creating an adequately sized test-bed is expensive and time consuming. Scalability, reliability and fault-tolerance become important requirements for distributed systems in order to support distributed computation. A distributed system with such characteristics is called dependable. Large environments, like the Cloud, offer unique advantages, such as low cost and dependability, and satisfy QoS for all users. Resource management in large environments requires performant scheduling algorithms guided by QoS constraints. This paper presents the performance evaluation of scheduling heuristics guided by different optimization criteria. The algorithms for distributed scheduling are analyzed in order to satisfy user constraints while at the same time considering the independent capabilities of resources. This analysis acts as a profiling step for algorithm calibration. The performance evaluation is based on simulation. The simulator is MONARC, a powerful tool for large-scale distributed systems simulation. The novelty of this paper consists in synthetic analysis results that offer guidelines for scheduler service configuration and sustain empirically based decisions. The results could be used in decisions regarding optimizations to existing Grid DAG scheduling and for selecting the proper algorithm for DAG scheduling in various actual situations.
Keywords: Scheduling, Simulation, Performance Evaluation, QoS, Distributed Systems, MONARC.
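As a point of reference for what such a scheduling heuristic looks like, below is a generic sketch of the classic min-min rule on assumed task and resource data; it is not the configuration evaluated with MONARC in the paper.

```python
# Generic min-min scheduling heuristic: repeatedly assign the task whose
# earliest completion time is smallest. Task and resource numbers are assumed
# for illustration; this is not the MONARC experiment configuration.
tasks = {"t1": 4.0, "t2": 9.0, "t3": 2.5, "t4": 6.0}      # task length (arbitrary units)
speed = {"r1": 1.0, "r2": 2.0}                            # resource processing speed
ready = {r: 0.0 for r in speed}                           # time each resource frees up

schedule = []
unassigned = dict(tasks)
while unassigned:
    # completion time of every (task, resource) pair; min-min picks the global minimum
    finish, t, r = min((ready[r] + length / speed[r], t, r)
                       for t, length in unassigned.items() for r in speed)
    ready[r] = finish
    schedule.append((t, r, finish))
    del unassigned[t]

print(schedule)   # e.g. [('t3', 'r2', 1.25), ('t1', 'r2', 3.25), ...]
```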
358 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem
Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães
Abstract:
This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm consists of randomly expanding a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor node of the new node is connected to it if, and only if, the length of the path between the root node and that neighbor node, with this new connection, is less than the current length of the path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan less extensive routes while spending a smaller number of iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean theorem. Its advantage is that the resulting structure allows the curve length to be computed exactly, without the need for quadrature techniques to resolve integrals.
Keywords: Path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart.
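The connection rule described above, re-parenting a neighbour through the new node only when this shortens its path from the root, can be sketched compactly. The toy RRT* loop below assumes an obstacle-free 2-D workspace and omits the smart sampling near obstacle vertices and the PH-curve smoothing.

```python
# Compact sketch of the RRT* connection/rewiring rule: a neighbour is
# re-parented through the new node only when that shortens its path cost
# from the root. Obstacle checks, smart sampling near obstacle vertices and
# the PH-curve smoothing described in the paper are omitted for brevity.
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

random.seed(1)
root = (0.0, 0.0)
parent = {root: None}          # node -> parent in the tree
cost = {root: 0.0}             # node -> path cost from the root
RADIUS, STEP = 2.0, 1.0

for _ in range(300):
    q = (random.uniform(0, 10), random.uniform(0, 10))        # random sample
    nearest = min(parent, key=lambda n: dist(n, q))
    d = dist(nearest, q)
    if d == 0:
        continue
    step = min(STEP, d)
    new = (nearest[0] + step * (q[0] - nearest[0]) / d,
           nearest[1] + step * (q[1] - nearest[1]) / d)       # steer toward the sample
    near = [n for n in parent if dist(n, new) <= RADIUS]
    best = min(near, key=lambda n: cost[n] + dist(n, new))    # cheapest parent
    parent[new] = best
    cost[new] = cost[best] + dist(best, new)
    for n in near:                                            # rewire a neighbour only if
        if cost[new] + dist(new, n) < cost[n]:                # the detour shortens its path
            parent[n] = new
            cost[n] = cost[new] + dist(new, n)

print(len(parent), "nodes in the tree; farthest path cost:", round(max(cost.values()), 2))
```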
357 Online Web Service based Solution for Urban Traffic Management
Authors: A. Ionita, A. Zafiu, C. Ghita
Abstract:
In this article, we present a web-server-based solution for implementing an intelligent navigation system. In this solution we use real-time collected data and traffic history to establish the best route for navigation. This is a low-cost solution that is easy to implement and extend. No infrastructure is needed at the road network level other than a device that collects data about traffic at key road crossings. The presented solution creates a strong base for traffic pursuit and offers an infrastructure for navigation applications.
Keywords: navigation, real time, route, traffic pursuit, web service.
356 Neutronic Study of Two Reactor Cores Cooled with Light and Heavy Water Using Computation Method
Authors: Z. Gholamzadeh, A. Zali, S. A. H. Feghhi, C. Tenreiro, Y. Kadi, M. Rezazadeh, M. Aref
Abstract:
Most HWRs currently use natural uranium fuel. Using enriched uranium fuel results in a significant improvement in fuel cycle costs and uranium utilization. On the other hand, the reactivity changes of HWRs over the full range of operating conditions, from cold shutdown to full power, are small. This reduces the required reactivity worth of control devices and minimizes local flux distribution perturbations, minimizing potential problems due to transient local overheating of the fuel. Analyzing the effect of heavy water on neutronic parameters such as enrichment requirements, peaking factor and reactivity is important and should be treated as a primary concept of HWR core design. Two reactor cores, a CANDU-type core of 33 fuel assemblies and a hexagonal-type core of 19 assemblies with a pitch-to-diameter ratio of 1.04, have been simulated using the MCNP-4C code. Heavy water and light water moderators have been compared with the aim of achieving lower reactivity insertion and enrichment requirements. Two fuel matrices, (232Th/235U)O2 and (238U/235U)O2, have been compared to achieve a more economical and safe design. Heavy water not only decreased the enrichment needs, but also resulted in negative reactivity insertions during moderator density variations. Thorium oxide fuel assemblies of 2.3% enrichment loaded into the core with a heavy water moderator resulted in a fission-to-absorption ratio of 0.751 and a peaking factor of 1.7. Heavy water not only provides negative reactivity insertion during temperature rises, which change the moderator density, but also results in a 2 to 10 kg reduction of enrichment requirements, depending on the geometry type.
Keywords: MCNP-4C, Reactor core, Multiplication factor, Reactivity, Peaking factor.
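For reference, the standard textbook definitions of the multiplication factor and reactivity reported in such studies are as follows (general relations, not quantities derived in this particular paper):

```latex
% Standard textbook definitions, not quantities computed in this particular study.
k_{\mathrm{eff}} = \frac{\text{neutrons produced in one generation}}
                        {\text{neutrons lost by absorption and leakage in the preceding generation}},
\qquad
\rho = \frac{k_{\mathrm{eff}} - 1}{k_{\mathrm{eff}}}
```

so that a negative reactivity (ρ < 0) corresponds to the multiplication factor falling below unity.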
355 A Trends Analysis of Image Processing in Unmanned Aerial Vehicle
Authors: Jae-Neung Lee, Keun-Chang Kwak
Abstract:
This paper describes an analysis of domestic and international trends in image processing for data from UAVs (unmanned aerial vehicles) and also gives an overview of UAVs and quadcopters. Overseas examples of image processing using UAVs include image processing for counting the total number of vehicles, edge/target detection, detection and evasion algorithms, image processing using SIFT (scale-invariant feature transform) matching, and the application of median filtering and thresholding. In Korea, many studies are underway, including the visualization of new urban buildings.
Keywords: Image Processing, UAV, Quadcopter, Target detection.
354 Quantitative Analysis of Nutrient Inflow from River and Groundwater to Imazu Bay in Fukuoka, Japan
Authors: Keisuke Konishi, Yoshinari Hiroshiro, Kento Terashima, Atsushi Tsutsumi
Abstract:
Imazu Bay plays an important role for endangered species such as horseshoe crabs and black-faced spoonbills that stay in the bay for spawning or for passing the winter. However, this bay is semi-enclosed, with slow water exchange, which could lead to eutrophication under conditions of excess nutrient inflow. Therefore, quantification of the nutrient inflow is of great importance. Generally, analyses of nutrient inflow to bays consider nutrient inflow from the river only, but inflow from groundwater should not be ignored if more accurate results are to be obtained. The main objective of this study is to estimate the amounts of nutrient inflow from river and groundwater to Imazu Bay by analyzing the water budget in the Zuibaiji River Basin and the loads of T-N, T-P, NO3-N and NH4-N. The water budget computation in the basin is performed using a groundwater recharge model and a quasi three-dimensional two-phase groundwater flow model, and multiplying the measured nutrient concentrations by the computed discharge gives the total amount of nutrient inflow to the bay. In addition, in order to evaluate the nutrient inflow to the bay, the result is compared with the nutrient inflow from geologically similar river basins. The results show that the discharge is 3.50×10⁷ m³/year from the river and 1.04×10⁷ m³/year from groundwater. The submarine groundwater discharge accounts for approximately 23% of the total discharge, which is large compared to the other river basins. It is also revealed that the total nutrient inflow is not particularly large. The sum of the NO3-N and NH4-N loadings from groundwater is less than 10% of that from the river because of denitrification in the groundwater. The Shin Seibu Sewage Treatment Plant located below the observation points discharges 15,400 m³/day of treated water and plans to increase this volume. However, the T-N and T-P concentrations in the treated water are 3.9 mg/L and 0.19 mg/L, so it does not contribute much to eutrophication.
Keywords: Eutrophication, groundwater recharge model, nutrient inflow, quasi three-dimensional two-phase groundwater flow model, submarine groundwater discharge.
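The quoted share of submarine groundwater discharge follows directly from the two discharge figures:

```latex
\frac{Q_{\mathrm{groundwater}}}{Q_{\mathrm{river}} + Q_{\mathrm{groundwater}}}
  = \frac{1.04\times10^{7}}{3.50\times10^{7} + 1.04\times10^{7}} \approx 0.23
```

i.e. roughly 23% of the total freshwater discharge reaching the bay.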
353 CQAR: Closed Quarter Aerial Robot Design for Reconnaissance, Surveillance and Target Acquisition Tasks in Urban Areas
Authors: Paul Y. Oh, William E. Green
Abstract:
This paper describes a prototype aircraft that can fly slowly and safely and transmit wireless video for tasks like reconnaissance, surveillance and target acquisition. The aircraft is designed to fly in closed quarters like forests, buildings, caves and tunnels, which are often spacious but where GPS reception is poor. It is envisioned that a small, safe and slow-flying vehicle can assist in performing dull, dangerous and dirty tasks like disaster mitigation, search-and-rescue and structural damage assessment.
Keywords: Unmanned aerial vehicles, autonomous collision avoidance, optic flow, near-Earth environments.
352 Causes of Slum Emergence from Decently Built Government's Affordable Housing Projects in Enugu, Nigeria: The Experts’ Perspectives
Authors: Anthony Ikechukwu Agboeze, Walter Timo de Vries, Pamela Durán-Díaz
Abstract:
Since attaining urban status, the population of Enugu, Nigeria, has continued to grow rapidly, leading to growing demand for housing from a teeming population that is predominantly low income. Several government dispensations have developed various affordable housing projects to help deliver decent housing to the Enugu populace. However, over a long period of use, some of those housing projects in Enugu are deteriorating unabated into slums alongside rising housing deficits, which most Nigerian urban centers have found problematic to address. Emerging from a literature review, this research posits that the link between slums and affordable housing is that both affordable housing and slum housing are sought by low-income earners. This research further investigated the possible causes of slum emergence from decently built affordable housing projects in Enugu, Nigeria. To do so, we first analyzed the Nigerian housing policy to examine how the policy addresses slum prevention. We then conducted semi-structured expert interviews (qualitative) to sample the views of private housing developers on the degeneration of government housing projects into slums in Enugu, Nigeria. Findings from the housing policy analysis suggest that the housing policy itself is not legally binding on anybody to implement. A consequence of this non-compulsory nature of the housing policy is its poor or non-implementation, leading to a constant tendency for government developers (contractors) to deliver potential slums. The expert respondents corroborated this viewpoint by suggesting that poor planning (including the designs of the housing units and the master plan) and poor management (including non-maintenance, poor documentation, and inaccurate housing inventories) are germane to the emergence of slums from affordable housing. This research recommends periodic auditing of delivered housing projects to evaluate the developers' adherence to the housing policy guidelines, and proposes incentives for policy adherents since the housing policy is not legally binding. We also recommend participatory management that engages the occupants in the monitoring and reporting of breakdowns in the housing properties, to help improve the quality of management and maintenance and achieve slum-free settlements.
Keywords: Affordable housing, Enugu, low income, Nigeria, slum.
351 Lightweight and Seamless Distributed Scheme for the Smart Home
Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro
Abstract:
Security of the smart home, in terms of behavior activity pattern recognition, is an entirely distinct issue compared to the security issues of other scenarios. Sensor devices (low capacity and high capacity) interact and negotiate with each other by detecting the daily behavior activities of individuals in order to execute common tasks. Once a device (e.g., surveillance camera, smartphone or light detection sensor) is compromised, an adversary can gain access to that specific device and can damage the daily behavior activity by altering data and commands. In this scenario, a group of common instruction processes may become involved and generate a deadlock. Therefore, an effective and suitable security solution is required for the smart home architecture. This paper proposes a seamless distributed scheme that fortifies low-computation wireless devices for secure communication. The proposed scheme is based on a lightweight key-session process that upholds a cryptographic link for the trajectory by recognizing individuals' behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Analysis of the experiments reveals that the proposed scheme strengthens the devices against device seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of the LCS and keeps the network free of deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme maintains efficiency in terms of computation and communication.
Keywords: Authentication, key-session, security, wireless sensors.
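A generic illustration of a lightweight token-based session-key derivation, using only Python standard-library primitives, is given below; this is not the authors' scheme, and the message layout, key sizes and device names are assumptions made for illustration.

```python
# Generic illustration of deriving a per-session key from a pre-shared
# authentication token using only standard-library primitives. This is NOT
# the paper's scheme; message layout, key sizes and names are assumptions.
import hmac, hashlib, secrets

PRESHARED_TOKEN = secrets.token_bytes(16)   # provisioned on LCS/HCS and the service unit

def start_session(device_id: str):
    """Device side: create a nonce and derive a session key bound to it."""
    nonce = secrets.token_bytes(12)
    session_key = hmac.new(PRESHARED_TOKEN, device_id.encode() + nonce,
                           hashlib.sha256).digest()
    return nonce, session_key

def verify_session(device_id: str, nonce: bytes, session_key: bytes) -> bool:
    """Service-provider side: recompute the key and compare in constant time."""
    expected = hmac.new(PRESHARED_TOKEN, device_id.encode() + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, session_key)

nonce, key = start_session("light-sensor-07")
print(verify_session("light-sensor-07", nonce, key))   # True
```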
350 SIPINA Induction Graph Method for Seismic Risk Prediction
Authors: B. Selma
Abstract:
The aim of this study is to test the feasibility of the SIPINA method for predicting the harmfulness parameters controlling the seismic response. The approach developed takes into consideration both the focal depth and the peak ground acceleration. The parameter to be determined is the displacement. The data used for the learning of this method and the nonlinear seismic analysis are described and applied to a class of damage models for some typical structures of the existing urban infrastructure of Jassy, Romania. The results obtained indicate an influence of the focal depth and the peak ground acceleration on the displacement.
Keywords: SIPINA method, seism, focal depth, peak ground acceleration, displacement.
349 Improving Fake News Detection Using K-means and Support Vector Machine Approaches
Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy
Abstract:
Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, as well as fake likes, views and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain better results in detecting false information with less computation time and complexity, the dimensions need to be reduced. One of the best techniques for reducing data size is feature selection. The aim of this technique is to choose a feature subset from the original set to improve classification performance. In this paper, a feature selection method is proposed with the integration of K-means clustering and Support Vector Machine (SVM) approaches, which works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several specific benchmark datasets, and the outcome showed a better classification of false information for our work. The detection performance was improved in two respects. On the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensions.
Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.
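A minimal sketch of the four-step pipeline described above, using scikit-learn, might look as follows; the feature-similarity step is approximated here by clustering standardized feature columns, and the synthetic data and cluster count are illustrative assumptions rather than the paper's setup.

```python
# Minimal sketch of the four-step pipeline described above using scikit-learn.
# The similarity step is approximated by clustering standardized feature
# columns; the cluster count and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           n_redundant=20, random_state=0)   # stand-in for news features
Xs = StandardScaler().fit_transform(X)

# Steps 1-2: measure feature similarity and group the features into clusters.
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(Xs.T)

# Step 3: keep one representative feature per cluster (closest to its centroid).
selected = []
for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(Xs.T[members] - km.cluster_centers_[c], axis=1)
    selected.append(members[np.argmin(d)])

# Step 4: classify with an SVM on the reduced feature subset.
score = cross_val_score(SVC(kernel="rbf"), Xs[:, selected], y, cv=5).mean()
print("selected features:", sorted(selected), "CV accuracy:", round(score, 3))
```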
348 A Study on Inference from Distance Variables in Hedonic Regression
Authors: Yan Wang, Yasushi Asami, Yukio Sadahiro
Abstract:
In urban areas, several landmarks may affect housing prices and rents, and hedonic analysis should employ distance variables corresponding to each landmark. Unfortunately, the estimated effects of distances to landmarks on housing prices are generally not consistent with the true prices. These distance variables may cause magnitude errors in the regression, pointing to a problem of spatial multicollinearity. In this paper, we provide some approaches for obtaining samples with less bias and a method for locating the specific sampling area so as to avoid the multicollinearity problem in the case of two specific landmarks.
Keywords: Landmarks, hedonic regression, distance variables, collinearity, multicollinearity.
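A small illustration on synthetic data of how distances to two nearby landmarks become collinear regressors in a hedonic model is given below; the data-generating process is assumed purely for illustration, and statsmodels' variance inflation factor is used as the collinearity diagnostic.

```python
# Small illustration, on synthetic data, of how distances to two nearby
# landmarks become collinear regressors in a hedonic model. The data-
# generating process is an assumption made purely for illustration.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(400, 2))                 # dwelling locations
lm1, lm2 = np.array([2.0, 2.0]), np.array([2.5, 2.2])  # two landmarks close together
d1 = np.linalg.norm(xy - lm1, axis=1)
d2 = np.linalg.norm(xy - lm2, axis=1)
area = rng.uniform(40, 120, size=400)
price = 3000 * area - 20000 * d1 - 15000 * d2 + rng.normal(0, 50000, 400)

X = sm.add_constant(np.column_stack([area, d1, d2]))
fit = sm.OLS(price, X).fit()
print(fit.params)                                      # estimated hedonic coefficients
# Large VIFs on the two distance variables reveal the multicollinearity problem.
for i, name in enumerate(["const", "area", "dist_lm1", "dist_lm2"]):
    print(name, round(variance_inflation_factor(X, i), 1))
```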
347 Information Delivery and Advanced Traffic Information Systems in Istanbul
Authors: Kevser Simsek, Rahime Gunay
Abstract:
In this paper, we focus primarily on Istanbul data gathered by using intelligent transportation systems (ITS), and consider the developments in traffic information delivery and the future applications that are being planned for implementation. Since traffic congestion is increasing and travel times are becoming less consistent and less predictable, traffic information delivery has become a critical issue. Considering the fuel consumption and time wasted in traffic, advanced traffic information systems are becoming increasingly valuable, enabling travelers to plan their trips more accurately and easily.
Keywords: Data Fusion, Istanbul, ITS, Real Time Information, Traffic Information, Travel Time, Urban Mobility.
346 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, the security of identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, which include Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We realize that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on the unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the parameter counts and mean average error rates of the high-accuracy models, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than the other CNN models, it is proven to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.
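A condensed sketch of such a loss-and-optimizer sweep with tf.keras is shown below; the tiny CNN and random stand-in data are assumptions for illustration, only built-in Keras losses are used (Center Loss would need a custom implementation), and this is not one of the paper's AlexNet/VGG/ResNet configurations.

```python
# Condensed sketch of sweeping loss functions and optimizers with tf.keras.
# The tiny CNN and random stand-in data are assumptions for illustration;
# only built-in Keras losses appear here (Center Loss needs a custom layer),
# and this is not one of the paper's AlexNet/VGG/ResNet configurations.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 64, 64, 1).astype("float32")                 # stand-in patches
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, 256), 2)   # live vs. spoof

def tiny_cnn():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

losses = ["categorical_crossentropy", "categorical_hinge", "cosine_similarity"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

results = {}
for loss in losses:
    for opt in optimizers:
        model = tiny_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        hist = model.fit(x, y, epochs=2, batch_size=32, verbose=0)
        results[(loss, opt)] = hist.history["accuracy"][-1]

best = max(results, key=results.get)
print("best (loss, optimizer) on this toy run:", best, results[best])
```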
345 Sustainable Walkability and Place Identity
Authors: Marichela Sepe
Abstract:
The sustainability of a place depends on a series of factors that contribute to its quality of life, sense of place and recognition of identity. An activity like walking, which in itself is obviously “sustainable”, can become non-sustainable if the context in which it is carried out does not meet the conditions for an adequate quality of life. This work proposes the PlaceMaker analytical method to identify the elements that do not feature in traditional mapping and which constitute the contemporary identity of places, and the relative complex map to represent those elements and support sustainable urban identity design. The method's potential for areas with a predominantly pedestrian vocation is illustrated by means of the case study of the Ramblas in Barcelona.
Keywords: Place-identity, PlaceMaker, sustainability, walkability.
344 Magnitude and Determinants of Overweight and Obesity among High School Adolescents in Addis Ababa, Ethiopia
Authors: Mulugeta Shegaze, Mekitie Wondafrash, Alemayehu A. Alemayehu, Shikur Mohammed, Zewdu Shewangezaw, Mukerem Abdo, Gebresilasea Gendisha
Abstract:
Background: The 2004 World Health Assembly called for specific actions to halt the overweight and obesity epidemic that is currently penetrating urban populations in the developing world. Adolescents require particular attention due to their vulnerability to developing obesity and the fact that adolescent weight tracks strongly into adulthood. However, there is a scarcity of information on the modifiable risk factors to be targeted for primary intervention among urban adolescents in Ethiopia. This study aimed to determine the magnitude and risk factors of overweight and obesity among high school adolescents in Addis Ababa. Methods: An institution-based cross-sectional study was conducted in February and March 2014 on 456 randomly selected adolescents from 20 high schools in Addis Ababa city. Demographic data and other risk factors of overweight and obesity were collected using a self-administered structured questionnaire, whereas anthropometric measurements of weight and height were taken using calibrated equipment and standardized techniques. The WHO STEPS instrument for chronic disease risk was applied to assess dietary habits and physical activity. Overweight and obesity status was determined based on the BMI-for-age percentiles of the WHO 2007 reference population. Results: The prevalence rates of overweight, obesity, and overall overweight/obesity among high school adolescents in Addis Ababa were 9.7% (95%CI = 6.9-12.4%), 4.2% (95%CI = 2.3-6.0%), and 13.9% (95%CI = 10.6-17.1%), respectively. Overweight/obesity prevalence was highest among female adolescents, in private schools, and in the higher wealth category. In the multivariable regression model, being female [AOR(95%CI) = 5.4(2.5,12.1)], being from a private school [AOR(95%CI) = 3.0(1.4,6.2)], having >3 regular meals [AOR(95%CI) = 4.0(1.3,13.0)], consumption of sweet foods [AOR(95%CI) = 5.0(2.4,10.3)] and spending >3 hours/day sitting [AOR(95%CI) = 3.5(1.7,7.2)] were found to increase overweight/obesity risk, whereas a high total physical activity level [AOR(95%CI) = 0.21(0.08,0.57)] and better nutrition knowledge [AOR(95%CI) = 0.16(0.07,0.37)] were found to be protective. Conclusions: More than one in ten of the high school adolescents were affected by overweight/obesity, with dietary habits and physical activity being important modifiable risk factors. A well-tailored nutrition education program targeting lifestyle change should be initiated, with more emphasis on female adolescents and students in private schools.
Keywords: Adolescents, NCDs, overweight, obesity.
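For readers unfamiliar with the AOR notation used above, the short sketch below shows how adjusted odds ratios and their 95% confidence intervals fall out of a multivariable logistic regression; the data are synthetic and the model is assumed purely for illustration, not the study's dataset or covariate set.

```python
# Short sketch of how adjusted odds ratios (AOR) with 95% CIs fall out of a
# multivariable logistic regression. Synthetic data, not the study's dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 456
female = rng.integers(0, 2, n)
private_school = rng.integers(0, 2, n)
sweet_foods = rng.integers(0, 2, n)
# Assumed data-generating model purely for illustration.
logit = -2.5 + 1.7 * female + 1.1 * private_school + 1.6 * sweet_foods
overweight = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([female, private_school, sweet_foods]))
fit = sm.Logit(overweight, X).fit(disp=False)

aor = np.exp(fit.params)            # adjusted odds ratios
ci = np.exp(fit.conf_int())         # 95% confidence intervals on the OR scale
for i, name in enumerate(["const", "female", "private_school", "sweet_foods"]):
    print(f"{name}: AOR={aor[i]:.2f} (95%CI {ci[i, 0]:.2f}-{ci[i, 1]:.2f})")
```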
343 Meta Model Based EA for Complex Optimization
Authors: Maumita Bhattacharya
Abstract:
Evolutionary algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems require finding the optimal solution to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. The use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta-models, or to use an approximation of the actual fitness functions to be evaluated. These meta-models are orders of magnitude cheaper to evaluate compared to the actual function evaluation. Many regression and interpolation tools are available to build such meta-models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks that involve the use of meta-models for fitness function evaluation. The first framework, namely the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by the controlled use of meta-models (in this case an approximate model generated by Support Vector Machine regression) to partially replace the actual function evaluation with approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. This does not take into account uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks using several benchmark functions demonstrate their efficiency.
Keywords: Meta model, evolutionary algorithm, stochastic technique, fitness function, optimization, support vector machine.
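The general surrogate-assisted idea, in which an SVM-regression meta-model screens candidates so that only the most promising ones receive an exact (expensive) evaluation, can be sketched as follows; this is a generic illustration, not the DAFHEA or DAFHEA-II algorithms.

```python
# Compact sketch of surrogate-assisted evolutionary optimization: an SVR
# meta-model screens offspring so only the most promising candidates get an
# exact (expensive) evaluation. Generic illustration, not DAFHEA/DAFHEA-II.
import numpy as np
from sklearn.svm import SVR

def expensive_fitness(x):                      # stand-in for a costly simulation
    return np.sum(x**2)

rng = np.random.default_rng(0)
dim, pop_size, generations = 10, 20, 30
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fit = np.array([expensive_fitness(x) for x in pop])
archive_X, archive_y = list(pop), list(fit)    # all exactly evaluated points

for _ in range(generations):
    surrogate = SVR(kernel="rbf", C=10.0).fit(np.array(archive_X), np.array(archive_y))
    offspring = pop + rng.normal(0, 0.5, size=pop.shape)     # Gaussian mutation
    # Screen with the cheap meta-model; exactly evaluate only the best half.
    predicted = surrogate.predict(offspring)
    chosen = offspring[np.argsort(predicted)[: pop_size // 2]]
    exact = np.array([expensive_fitness(x) for x in chosen])
    archive_X.extend(chosen); archive_y.extend(exact)
    # (mu + lambda) survivor selection on exactly evaluated individuals only.
    all_X = np.vstack([pop, chosen]); all_f = np.concatenate([fit, exact])
    keep = np.argsort(all_f)[:pop_size]
    pop, fit = all_X[keep], all_f[keep]

print("best exact fitness found:", fit.min())
```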
342 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kr. Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on Infrared (IR) and Visible Image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, some image fusion algorithms based upon a Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approaches, we observed several challenges with the popular image fusion methods. While the high computational cost and complex processing steps of image fusion algorithms provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time operation, high flexibility and low computational capability. The methods presented in this paper offer good results with minimum time complexity.
Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.
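A minimal pixel-level sketch of a multi-scale-transform fusion with an averaged approximation band and a max-absolute rule for detail coefficients is given below, written in Python with PyWavelets for illustration (the paper's implementation is in MATLAB); the region-based selection and consistency verification are omitted, and random arrays stand in for registered source images.

```python
# Minimal sketch of multi-scale (wavelet) fusion with an average rule for the
# approximation band and a max-absolute rule for detail coefficients, using
# PyWavelets. The paper's region-based selection and consistency verification
# are omitted; random arrays stand in for registered visible/IR images.
import numpy as np
import pywt

rng = np.random.default_rng(0)
visible = rng.random((128, 128))        # stand-in for the registered visible image
thermal = rng.random((128, 128))        # stand-in for the registered IR image

levels = 3
cv = pywt.wavedec2(visible, "db2", level=levels)
ct = pywt.wavedec2(thermal, "db2", level=levels)

fused = [(cv[0] + ct[0]) / 2.0]         # approximation band: simple averaging
for (hv, vv, dv), (ht, vt, dt) in zip(cv[1:], ct[1:]):
    # detail bands: keep the coefficient with the larger absolute value
    fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                       for a, b in ((hv, ht), (vv, vt), (dv, dt))))

fused_image = pywt.waverec2(fused, "db2")
print(fused_image.shape)                # same size as the source images
```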
341 Mobility Analysis of the Population of Rabat-Salé-Zemmour-Zaer
Authors: F. Ghaiti
Abstract:
In this paper, we present the origin-destination and pricing survey that we carried out in fall 2006 in the Moroccan region of Rabat-Salé-Zemmour-Zaer. The survey concerns people's characteristics, their displacement behavior and the price they would be willing to pay for a tramway ticket. The main objective is to study a set of features relating to the households, their displacement habits and their choices between public and private transport modes. A comparison between the results of this survey and those of the 1996 survey is made. A pricing scheme is also given according to the tram capacity. (The Rabat-Salé tramway is under construction at the time of writing and will be operational at the beginning of 2010.)
Keywords: Matrix O/D, Theory of pricing, Urban transport survey.
340 Water Quality from a Mixed Land-Use Catchment in Miri, Sarawak
Authors: Carrie Ho, Darshana J. Kumar
Abstract:
Urbanization has been found to impact stormwater runoff quantity and quality. A study catchment with mixed residential and industrial land use was investigated, and the water discharged from the catchment was sampled and tested for four basic water quality parameters: BOD5, NH3-N, NO3-N and P. One dry-weather flow and several stormwater runoff events were sampled. The results were compared to the USEPA stormwater quality benchmark values and the Interim National Water Quality Standards for Malaysia (INWQS). The concentrations of the parameters were found to vary significantly between storms, and the pollutant of concern was found to be NO3-N.
Keywords: Mixed land-use, urban runoff, water quality.
339 Back Analysis of Tehran Metro Tunnel Construction Using FLAC-3D
Authors: M. Mahdi, N. Shariatmadari
Abstract:
An important aspect of planning for shallow tunneling under urban areas is the determination of likely surface movements and the interaction with existing structures. Back analysis of built tunnels whose settlement magnitudes are available could help designers achieve greater accuracy in future projects.
In this paper, a single Tehran Metro tunnel (west of Hor Square, on Jang University Street) was selected. First, the surface settlements of this tunnel were measured in situ. Then the tunnel was modeled using the commercial finite difference software FLAC-3D. Finally, the results of the modeling and the in situ measurements were compared for verification.
Keywords: Shallow Tunnel, Back Analysis, Surface Movement, Numerical Modeling.
338 A Novel GNSS Integrity Augmentation System for Civil and Military Aircraft
Authors: Roberto Sabatini, Terry Moore, Chris Hill
Abstract:
This paper presents a novel Global Navigation Satellite System (GNSS) Avionics Based Integrity Augmentation (ABIA) system architecture suitable for civil and military air platforms, including Unmanned Aircraft Systems (UAS). Taking the move from previous research on high-accuracy Differential GNSS (DGNSS) systems design, integration and experimental flight test activities conducted at the Italian Air Force Flight Test Centre (CSV-RSV), our research focused on the development of a novel approach to the problem of GNSS ABIA for mission- and safety-critical air vehicle applications and for multi-sensor avionics architectures based on GNSS. Detailed mathematical models were developed to describe the main causes of GNSS signal outages and degradation in flight, namely: antenna obscuration, multipath, fading due to adverse geometry and Doppler shift. Adopting these models in association with suitable integrity thresholds and guidance algorithms, the ABIA system is able to generate integrity cautions (predictive flags) and warnings (reactive flags), as well as providing steering information to the pilot and electronic commands to the aircraft/UAS flight control systems. These features allow real-time avoidance of safety-critical flight conditions and fast recovery of the required navigation performance in case of GNSS data losses. In other words, this novel ABIA system addresses all three cornerstones of GNSS integrity augmentation in mission- and safety-critical applications: prediction (caution flags), reaction (warning flags) and correction (alternate flight path computation).
Keywords: Global Navigation Satellite Systems (GNSS), Integrity Augmentation, Unmanned Aircraft Systems, Aircraft Based Augmentation, Avionics Based Integrity Augmentation, Safety-Critical Applications.
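As a point of reference for the Doppler-shift effect mentioned above, the textbook relation between the carrier Doppler shift and the satellite-receiver range rate is (a general relation, not the detailed ABIA degradation model developed in the paper):

```latex
% Textbook relation, not the detailed ABIA degradation model of the paper.
f_{d} = -\frac{\dot{\rho}}{\lambda},
\qquad
\dot{\rho} = \left(\mathbf{v}_{\mathrm{sat}} - \mathbf{v}_{\mathrm{rx}}\right)\cdot\hat{\mathbf{u}}
```

Here f_d is the carrier Doppler shift, the range rate is the projection of the satellite-receiver relative velocity onto the receiver-to-satellite line-of-sight unit vector, and λ is the carrier wavelength.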
337 Proposal of Data Collection from Probes
Authors: M. Kebisek, L. Spendla, M. Kopcek, T. Skulavik
Abstract:
In our paper we describe the security capabilities of data collection. Data are collected with probes located in the near and distant surroundings of the company. Considering the numerous obstacles, e.g. forests, hills and urban areas, data collection is realized in several ways. Data are collected via wireless communication, a LAN network and the GSM network, and in certain areas data are collected by using vehicles. In order to ensure the connection to the server, most of the probes have the ability to communicate in several ways. The collected data are archived and subsequently used in supervisory applications. To ensure the collection of the required data, it is necessary to propose algorithms that allow the probes to select a suitable communication channel.
Keywords: Communication, computer network, data collection, probe.
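One simple way to realize the channel-selection algorithm called for above is a prioritized fallback, sketched below; the channel names, priority order and probe interface are assumptions made for illustration, not the paper's implementation.

```python
# Simple prioritized-fallback illustration of the channel-selection algorithm
# called for above. Channel names, ordering and the probe interface are
# assumptions for illustration, not the paper's implementation.
from typing import Callable, Dict, List, Optional

def select_channel(channels: List[str],
                   is_available: Dict[str, Callable[[], bool]]) -> Optional[str]:
    """Return the first available channel in priority order, else None."""
    for name in channels:
        check = is_available.get(name)
        if check is not None and check():
            return name
    return None                      # buffer locally until a vehicle collects the data

# Example: LAN preferred, then the wireless link, then GSM.
priority = ["lan", "wireless", "gsm"]
availability = {"lan": lambda: False,       # e.g. wired link down
                "wireless": lambda: False,  # e.g. blocked by a hill or forest
                "gsm": lambda: True}

print(select_channel(priority, availability))   # -> 'gsm'
```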
336 Critical Assessment of Scoring Schemes for Protein-Protein Docking Predictions
Authors: Dhananjay C. Joshi, Jung-Hsin Lin
Abstract:
Protein-protein interactions (PPI) play a crucial role in many biological processes such as cell signalling, transcription, translation, replication, signal transduction, and drug targeting. Structural information about protein-protein interactions is essential for understanding the molecular mechanisms of these processes. Structures of protein-protein complexes are still difficult to obtain by biophysical methods such as NMR and X-ray crystallography, and therefore protein-protein docking computation is considered an important approach for understanding protein-protein interactions. However, reliable prediction of protein-protein complexes is still under way. In the past decades, several grid-based docking algorithms based on the Katchalski-Katzir scoring scheme were developed, e.g., FTDock, ZDOCK, HADDOCK, RosettaDock, and HEX. However, the success rate of protein-protein docking prediction is still far from ideal. In this work, we first propose a more practical measure for evaluating the success of protein-protein docking predictions, the rate of first success (RFS), which is similar to the concept of mean first passage time (MFPT). Accordingly, we have assessed the ZDOCK bound and unbound benchmarks 2.0 and 3.0. We also created a new benchmark set for protein-protein docking predictions, in which the complexes have experimentally determined binding affinity data. We performed free energy calculations based on the solution of the non-linear Poisson-Boltzmann equation (nlPBE) to improve the binding mode prediction. We used the well-studied barnase-barstar system to validate the parameters for the free energy calculations. In addition, nlPBE-based free energy calculations were conducted for the cases badly predicted by ZDOCK and ZRANK. We found that direct molecular mechanics energetics cannot be used to discriminate the native binding pose from the decoys. Our results indicate that nlPBE-based calculations appear to be one of the promising approaches for improving the success rate of binding pose predictions.
Keywords: protein-protein docking, protein-protein interaction, molecular mechanics energetics, Poisson-Boltzmann calculations
335 Computation and Validation of the Stress Distribution around a Circular Hole in a Slab Undergoing Plastic Deformation
Authors: S. D. El Wakil, J. Rice
Abstract:
The aim of the current work was to employ the finite element method to model a slab, with a small hole across its width, undergoing plastic plane-strain deformation. The computational model had, however, to be validated by comparing its results with those obtained experimentally. Since they were in good agreement, the finite element method can be considered a reliable tool that can help gain a better understanding of the mechanism of ductile failure in structural members having stress raisers. The finite element software used was ANSYS, and the PLANE183 element was utilized. It is a higher-order 2-D, 8-node or 6-node element with quadratic displacement behavior. A bilinear stress-strain relationship was used to define the material properties, with constants similar to those of the material used in the experimental study. The model was run for several tensile loads in order to observe the progression of the plastic deformation region, and the stress concentration factor was determined in each case. The experimental study involved the visioplasticity technique, where a circular mesh (each circle 0.5 mm in diameter, with 0.05 mm line thickness) was initially printed on the side of an aluminum slab having a small hole across its width. Tensile loading was then applied to produce a small increment of plastic deformation. Circles in the plastic region became ellipses, where the directions of the principal strains and stresses coincided with the major and minor axes of the ellipses. Next, we were able to determine the directions of the maximum and minimum shear stresses at the center of each ellipse, and the slip-line field was then constructed. We were then able to determine the stress at any point in the plastic deformation zone, and hence the stress concentration factor. The experimental results were found to be in good agreement with the analytical ones.
Keywords: Finite element method, slab undergoing plastic deformation, stress distribution around a circular hole, visioplasticity.
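For context, the classical elastic (Kirsch) result for a small circular hole in a wide plate under remote uniaxial tension provides the baseline against which plastic stress redistribution is usually judged (a textbook relation, not a result computed in the paper):

```latex
% Classical elastic (Kirsch) benchmark, not a result computed in the paper.
\sigma_{\theta\theta}\big|_{r=a,\;\theta=\pm 90^{\circ}} = 3\,\sigma_{\infty},
\qquad
K_t = \frac{\sigma_{\max}}{\sigma_{\infty}} = 3
```

where a is the hole radius, σ∞ the remote tensile stress and θ is measured from the loading axis; plastic flow redistributes the stress and generally lowers the effective concentration below this elastic value.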