Search results for: trans-european transport network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6426


966 Human-Centred Data Analysis Method for Future Design of Residential Spaces: Coliving Case Study

Authors: Alicia Regodon Puyalto, Alfonso Garcia-Santos

Abstract:

This article presents a method to analyze the use of indoor spaces based on data analytics obtained from in-built digital devices. The study uses the data generated by the in-place devices, such as smart locks, Wi-Fi routers, and electrical sensors, to gain additional insights into space occupancy, user behaviour, and comfort. Those devices, originally installed to facilitate remote operations, report data through the internet, which the research uses to analyze human real-time use of spaces. Using an in-place Internet of Things (IoT) network enables a faster, more affordable, seamless, and scalable solution for analyzing building interior spaces without installing external data collection systems such as dedicated sensors. The methodology is applied to a real coliving case study: a residential building of 3,000 m², 7 floors, and 80 users in the centre of Madrid. The case study applies the method to classify the IoT devices and to assess, clean, and analyze the collected data within the analysis framework. The information is collected remotely through the devices' different platforms. The first step is to curate the data and understand what insights each device can provide according to the objectives of the study; this generates an analysis framework that can be scaled up for future building assessment, even beyond the residential sector. The method adjusts the parameters to be analyzed to the dataset available in each building's IoT network. The research demonstrates how human-centred data analytics can improve the future spatial design of indoor spaces.
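As an illustration, the curation step described in the abstract (filtering raw device events and aggregating them into per-device usage profiles) can be sketched as follows; the device names and event schema are hypothetical, not taken from the case study:

```python
from datetime import datetime

# Hypothetical event log, standing in for records pulled from the
# smart-lock and sensor platforms: (ISO timestamp, device id, event type).
events = [
    ("2021-03-01T08:05", "lock-2F", "unlock"),
    ("2021-03-01T08:40", "lock-2F", "unlock"),
    ("2021-03-01T21:15", "lock-2F", "unlock"),
    ("2021-03-01T08:10", "lock-5F", "unlock"),
    ("2021-03-01T08:11", "lock-5F", "battery-low"),  # irrelevant, filtered out
]

def hourly_unlocks(events):
    """Curate raw device events into an hourly usage profile per device."""
    profile = {}
    for ts, device, event in events:
        if event != "unlock":  # keep only events relevant to occupancy
            continue
        hour = datetime.fromisoformat(ts).hour
        profile.setdefault(device, {}).setdefault(hour, 0)
        profile[device][hour] += 1
    return profile

profile = hourly_unlocks(events)
```

A profile like this (unlocks per device per hour) is the kind of occupancy signal the abstract describes feeding into the spatial-design analysis.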

Keywords: in-place devices, IoT, human-centred data-analytics, spatial design

Procedia PDF Downloads 197
965 Soccer, a Major Social Changing Factor: Kosovo Case

Authors: Armend Kelmendi, Adnan Ahmeti

Abstract:

The purpose of our study was to assess the impact of soccer on the overall welfare (education, health, and economic prosperity) of youth in Kosovo (age: 7-18). The research measured a number of parameters (training methodologies, conditions, community leadership impact) in a sample consisting of 6 different football clubs' academies across the country. Fifty (50) male and female football youngsters volunteered in this study. To generate more reliable results, the analysis was conducted with the help of a set of established project management tools and techniques (Gantt chart, Logic Network, PERT chart, Work Breakdown Structure, and Budgeting Analysis). The participants were interviewed under a specific lens of categories (impact on education, health, and economic prosperity). A set of questions was asked, e.g.: What has football provided to you and the community you live in? Did football increase your confidence and shape your life for the better? What was the main reason you started training in football? The results explain how a single sport, namely football in Kosovo, can make a huge social change, improving key social factors in a society. There was a considerable difference between the youth clubs as far as training conditions are concerned. The study found that, despite financial constraints, two out of six clubs managed to produce twice as many talented players who were introduced to professional primary league teams in Kosovo and Albania, as well as other soccer teams in the region, Europe, and Asia. The study indicates that a better sports policy must be formulated and paired with substantial financial investment in soccer for it to be considered fruitful and beneficial for players over 18 years of age, namely professionals.

Keywords: youth, prosperity, conditions, investments, growth, free movement

Procedia PDF Downloads 242
964 Adjusting Electricity Demand Data to Account for the Impact of Loadshedding in Forecasting Models

Authors: Migael van Zyl, Stefanie Visser, Awelani Phaswana

Abstract:

The electricity landscape in South Africa is characterized by frequent occurrences of loadshedding, a measure implemented by Eskom to manage electricity generation shortages by curtailing demand. Loadshedding, classified into stages ranging from 1 to 8 based on severity, involves the systematic rotation of power cuts across municipalities according to predefined schedules. However, this practice introduces distortions in recorded electricity demand, posing challenges to the accurate forecasting essential for budgeting, network planning, and generation scheduling. Addressing this challenge requires a methodology to quantify the impact of loadshedding and integrate it back into metered electricity demand data. Fortunately, comprehensive records of loadshedding impacts are maintained in a database, enabling the alignment of loadshedding effects with hourly demand data. This adjustment ensures that forecasts accurately reflect true demand patterns, independent of loadshedding's influence, thereby enhancing the reliability of electricity supply management in South Africa. This paper presents a methodology for determining the hourly impact of loadshedding and subsequently adjusting historical demand data to account for it. Furthermore, two forecasting models are developed: one utilizing the original dataset and the other the adjusted data. A comparative analysis is conducted to evaluate the forecast accuracy improvements resulting from the adjustment process. By implementing this methodology, stakeholders can make more informed decisions regarding electricity infrastructure investments, resource allocation, and operational planning, contributing to the overall stability and efficiency of South Africa's electricity supply system.
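The adjustment step described above can be sketched minimally, assuming the curtailment database can be joined to metered demand on the hour; the figures below are illustrative, not Eskom data:

```python
# Hourly metered demand (MW) and the loadshedding impact recorded for the
# same hours in the curtailment database (illustrative figures).
metered = {"2023-06-01T18:00": 28_500, "2023-06-01T19:00": 27_900}
shed    = {"2023-06-01T18:00":  2_000, "2023-06-01T19:00":  3_000}

def adjust_for_loadshedding(metered, shed):
    """Reconstruct true demand by adding curtailed load back to metered demand.

    Hours with no recorded loadshedding are passed through unchanged.
    """
    return {hour: mw + shed.get(hour, 0) for hour, mw in metered.items()}

true_demand = adjust_for_loadshedding(metered, shed)
```

The forecasting models would then be trained once on `metered` and once on `true_demand` to quantify the accuracy gain from the adjustment.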

Keywords: electricity demand forecasting, load shedding, demand side management, data science

Procedia PDF Downloads 61
963 Upgrading of Bio-Oil by Bio-Pd Catalyst

Authors: Sam Derakhshan Deilami, Iain N. Kings, Lynne E. Macaskie, Brajendra K. Sharma, Anthony V. Bridgwater, Joseph Wood

Abstract:

This paper reports the application of a bacteria-supported palladium catalyst to the hydrodeoxygenation (HDO) of pyrolysis bio-oil, towards producing an upgraded transport fuel. Biofuels are key to the timely replacement of fossil fuels in order to mitigate greenhouse gas emissions and the depletion of non-renewable resources. The process is an essential step in the upgrading of bio-oils derived from industrial by-products such as agricultural and forestry wastes, since the crude oil from pyrolysis contains a large amount of oxygen that must be removed in order to create a fuel resembling fossil-derived hydrocarbons. Manufacturing the bacteria-supported catalyst is a means of utilizing recycled metals and second-life bacteria, and the metal can also be easily recovered from the spent catalyst after use. Comparisons are made between bio-Pd and a conventional activated-carbon-supported Pd/C catalyst. Bio-oil was produced by fast pyrolysis of beechwood at 500 °C at a residence time below 2 seconds, provided by Aston University. 5 wt% bio-Pd/C was prepared under reducing conditions by exposing cells of E. coli MC4100 to a solution of sodium tetrachloropalladate (Na2PdCl4), followed by rinsing, drying, and grinding to form a powder. Pd/C was procured from Sigma-Aldrich. The HDO experiments were carried out in a 100 mL Parr batch autoclave using ~20 g bio-crude oil and 0.6 g bio-Pd/C catalyst. Experimental variables investigated for optimization included temperature (160-350 °C) and reaction time (up to 5 h) at a hydrogen pressure of 100 bar. Most of the experiments resulted in an aqueous phase (~40%) and an organic phase (~50-60%), as well as a gas phase (<5%) and coke (<2%). Study of the effect of temperature and time showed that the degree of deoxygenation increased (from ~20% up to 60%) at higher temperatures in the region of 350 °C and longer residence times of up to 5 h. However, the minimum viscosity (~0.035 Pa·s) occurred at 250 °C and 3 h residence time, indicating that some polymerization of the oil product occurs at the higher temperatures. Bio-Pd showed a similar degree of deoxygenation (~20%) to Pd/C at a lower temperature of 160 °C, but its activity did not rise as steeply with temperature. More coke was formed over bio-Pd/C than Pd/C at temperatures above 250 °C, suggesting that bio-Pd/C may be more susceptible to coke formation than Pd/C. Reactions occurring during bio-oil upgrading include catalytic cracking, decarbonylation, decarboxylation, hydrocracking, hydrodeoxygenation, and hydrogenation. In conclusion, it was shown that bio-Pd/C displays an acceptable rate of HDO, which increases with residence time and temperature. However, some undesirable reactions also occur, leading to a deleterious increase in viscosity at higher temperatures. Comparisons are also drawn with earlier work on the HDO of Chlorella-derived bio-oil manufactured from micro-algae via hydrothermal liquefaction. Future work will analyze the kinetics of the reaction and investigate the effect of bimetallic catalysts.
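The degree of deoxygenation quoted above is conventionally defined as the fraction of the feed's oxygen removed, computed from the oxygen contents of the feed and upgraded oil; a minimal sketch with illustrative oxygen contents (not the paper's measured values):

```python
def degree_of_deoxygenation(o_feed_wt, o_product_wt):
    """Conventional definition: percentage of the feed's oxygen removed.

    Arguments are oxygen mass fractions (wt%) of the feed bio-oil and the
    upgraded organic phase.
    """
    return (1.0 - o_product_wt / o_feed_wt) * 100.0

# Illustrative values: 40 wt% O in the pyrolysis oil, 16 wt% O after HDO,
# giving the ~60% deoxygenation reached at the harshest conditions.
dod = degree_of_deoxygenation(40.0, 16.0)
```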

Keywords: bio-oil, catalyst, palladium, upgrading

Procedia PDF Downloads 175
962 Ground Short Circuit Contributions of a MV Distribution Line Equipped with PWMSC

Authors: Mohamed Zellagui, Heba Ahmed Hassan

Abstract:

This paper proposes a new approach for the calculation of short-circuit parameters in the presence of a Pulse Width Modulated based Series Compensator (PWMSC). The PWMSC is a recently introduced Flexible Alternating Current Transmission System (FACTS) device that can modulate the impedance of a transmission line by varying the duty cycle (D) of a train of pulses with fixed frequency. This improves system performance, as the device provides virtual compensation of the distribution line impedance by injecting a controllable apparent reactance in series with the line. This controllable reactance can operate in both capacitive and inductive modes, which makes the PWMSC highly effective in controlling power flow and increasing stability in the system. The purpose of this work is to study the impact of fault resistance (RF), varied from 0 to 30 Ω, on the fault current calculations for a ground fault at a fixed fault location. The case study is a medium voltage (MV) Algerian distribution line compensated by a PWMSC in the 30 kV Algerian distribution power network. The analysis is based on the symmetrical components method, which involves the calculation of the symmetrical components of currents and voltages, without and with the PWMSC, in both cases of maximum and minimum duty cycle value for the capacitive and inductive modes. The paper presents simulation results which are verified by the theoretical analysis.
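For a single phase-to-ground fault, the symmetrical components method reduces to the well-known expression If = 3E / (Z1 + Z2 + Z0 + 3·RF), which shows directly how the fault resistance attenuates the fault current. A sketch with illustrative sequence impedances (not the Algerian line data):

```python
# Single line-to-ground fault current from the symmetrical-components method:
#   If = 3*E / (Z1 + Z2 + Z0 + 3*Rf)
# Impedance values below are illustrative, not the studied 30 kV line's data.
E  = 30_000 / 3**0.5            # phase voltage of a 30 kV network (V)
Z1 = complex(1.2, 4.8)          # positive-sequence impedance (ohm)
Z2 = complex(1.2, 4.8)          # negative-sequence impedance (ohm)
Z0 = complex(3.0, 12.0)         # zero-sequence impedance (ohm)

def ground_fault_current(E, Z1, Z2, Z0, Rf):
    """Magnitude (A) of a phase-to-ground fault current through resistance Rf."""
    return abs(3 * E / (Z1 + Z2 + Z0 + 3 * Rf))

no_resistance = ground_fault_current(E, Z1, Z2, Z0, 0.0)    # bolted fault
with_30_ohm   = ground_fault_current(E, Z1, Z2, Z0, 30.0)   # RF upper bound
```

The PWMSC enters this picture by modifying the effective sequence impedances of the line as a function of the duty cycle D, which is what the paper's with/without comparison evaluates.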

Keywords: pulse width modulated series compensator (PWMSC), duty cycle, distribution line, short-circuit calculations, ground fault, symmetrical components method

Procedia PDF Downloads 500
961 Linking Adaptation to Climate Change and Sustainable Development: The Case of ClimAdaPT.Local in Portugal

Authors: A. F. Alves, L. Schmidt, J. Ferrao

Abstract:

Portugal is one of the European countries most vulnerable to the impacts of climate change. These include temperature increase; coastal sea level rise; desertification and drought in the countryside; and frequent, intense extreme weather events. Hence, adaptation strategies to climate change are of great importance. This is what was addressed by ClimAdaPT.Local. This policy-oriented project had the main goal of developing 26 Municipal Adaptation Strategies for Climate Change through the identification of specific present and future local vulnerabilities, the training of municipal officials, and the engagement of local communities. It is intended to be replicated throughout the whole territory and to stimulate the creation of a national network of local adaptation in Portugal. Supported by methodologies and tools specifically developed for this project, our paper is based on the surveys, training, and stakeholder engagement workshops implemented at the municipal level. In an 'adaptation-as-learning' process, these tools functioned as a social-learning platform and an exercise in knowledge and policy co-production. The results allowed us to explore the nature of local vulnerabilities and exposed gaps, prompting a reappraisal of both future climate change adaptation opportunities and possible dysfunctionalities in the governance arrangements of municipal Portugal. Development issues are highlighted when we address the sectors and social groups that are both more sensitive and more vulnerable to the impacts of climate change. We argue that a pluralistic dialogue and a common framing can be established between them, with great potential for transformational adaptation. Observed climate change, present-day climate variability, and future expectations of change are great societal challenges which should be understood in the context of the sustainable development agenda.

Keywords: adaptation, ClimAdaPT.Local, climate change, Portugal, sustainable development

Procedia PDF Downloads 196
960 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans

Authors: Jian Wu

Abstract:

For several years, some French companies have integrated CSR (corporate social responsibility) criteria into their variable remuneration plans to 'restore a good working atmosphere' and 'preserve the natural environment'. These CSR criteria are based on concerns about environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was published jointly by ORSE (the French Observatory on CSR) and PricewaterhouseCoopers. Facing this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach for our study. First, we examine the debate between the 'orthodox' point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain 'invisible hand' which helps to resolve all problems. When companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility that firms should have is profit-seeking while respecting the minimum legal requirements. However, the CSR school considers that, as long as the economic system is not perfect, there is no 'invisible hand' which can arrange everything in good order. This means that we cannot count on any 'divine force' to make corporations responsible towards society. Something more needs to be done in addition to firms' economic and legal obligations. Then, we rely on financial theories and empirical evidence to examine the soundness of the foundations of CSR. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, the environment, and society. Social contract theory tells us that there are tacit 'social contracts' between a company and society itself. A firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or a bad reputation. Legitimacy theory tells us that corporations have to 'legitimize' their actions toward society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between the CSR performance and the financial performance of a firm. We note that, due to difficulties in defining these performances, this relationship remains ambiguous despite the numerous research works realized in the field. Finally, we are curious to know whether the integration of CSR criteria in variable remuneration plans (so far practiced only in big companies) should be extended to others. After investigation, we note that two groups of firms have the greatest need. The first involves industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies. The second involves companies which are under pressure in terms of returns in the face of international competition.

Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory

Procedia PDF Downloads 186
959 Comparative Analysis of Data Gathering Protocols with Multiple Mobile Elements for Wireless Sensor Network

Authors: Bhat Geetalaxmi Jairam, D. V. Ashoka

Abstract:

Wireless sensor networks are used in many applications to collect sensed data from different sources. Sensed data has to be delivered towards the sink through the sensors' wireless interfaces using multi-hop communication. Data collection in wireless sensor networks consumes energy, and energy consumption is the major constraint in WSNs. Reducing energy consumption while increasing the amount of generated data is a great challenge. In this paper, we have implemented two data gathering protocols with multiple mobile sinks/elements to collect data from sensor nodes. The first is Energy-Efficient Data Gathering with Tour Length-Constrained Mobile Elements in Wireless Sensor Networks (EEDG), in which the mobile sinks use a vehicle routing protocol to collect data. The second is An Intelligent Agent-based Routing Structure for Mobile Sinks in WSNs (IAR), in which the mobile sinks use Prim's algorithm to collect data. The authors have implemented the concepts common to both protocols, such as deployment of mobile sinks, generation of the visiting schedule, and collection of data from the cluster members. The authors have compared the performance of both protocols using statistics based on performance parameters such as delay, packet drop, packet delivery ratio, available energy, and control overhead. The authors conclude this paper by showing that EEDG is more efficient than the IAR protocol, but with a few limitations, which include unaddressed issues like redundancy removal, idle listening, and the mobile sink's pause/wait state at the node. In future work, we plan to concentrate on these limitations to arrive at a new energy-efficient protocol which will help in improving the lifetime of the WSN.
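The Prim's algorithm step used by IAR builds a minimum spanning tree over the nodes, which the mobile sink can then traverse as its routing structure; a minimal sketch over a toy four-node topology (node names and link costs are illustrative):

```python
import heapq

def prim_mst(nodes, dist):
    """Prim's algorithm: grow a minimum spanning tree from the first node.

    Returns the tree as (parent, child, cost) edges; in the IAR setting the
    tree rooted at the sink would serve as the data-gathering structure.
    """
    start = nodes[0]
    visited = {start}
    edges = [(dist[start][n], start, n) for n in nodes if n != start]
    heapq.heapify(edges)
    tree = []
    while edges and len(visited) < len(nodes):
        w, u, v = heapq.heappop(edges)
        if v in visited:
            continue  # stale edge into an already-covered node
        visited.add(v)
        tree.append((u, v, w))
        for n in nodes:
            if n not in visited:
                heapq.heappush(edges, (dist[v][n], v, n))
    return tree

# Illustrative topology with symmetric link costs
nodes = ["sink", "a", "b", "c"]
dist = {
    "sink": {"a": 1, "b": 4, "c": 3},
    "a":    {"sink": 1, "b": 2, "c": 5},
    "b":    {"sink": 4, "a": 2, "c": 1},
    "c":    {"sink": 3, "a": 5, "b": 1},
}
tree = prim_mst(nodes, dist)
```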

Keywords: aggregation, consumption, data gathering, efficiency

Procedia PDF Downloads 497
958 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which makes broad-coverage risk detection difficult and time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project developed a hybrid model that utilizes several deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other inputs. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values to be identified for the model. This approach proved effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.
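The Grid Search CV step amounts to exhaustively scoring every hyperparameter combination and keeping the best; a minimal sketch with a mock scoring function (the parameter names and grid values are illustrative, not the paper's):

```python
from itertools import product

# Hypothetical hyperparameter grid for the MLP stage
grid = {
    "hidden_units": [32, 64, 128],
    "learning_rate": [0.01, 0.001],
}

def grid_search(grid, score_fn):
    """Exhaustively score every combination, as Grid Search CV does,
    and return the best-scoring configuration."""
    keys = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

def mock_cv_score(params):
    """Stand-in for the cross-validated accuracy of the trained MLP."""
    return 0.95 + 0.0001 * params["hidden_units"] - 10 * params["learning_rate"]

best, score = grid_search(grid, mock_cv_score)
```

In practice the mock scorer would be replaced by k-fold cross-validated training of the MLP on the handwriting-model outputs, which is what makes the search expensive.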

Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 115
957 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent by the trucks in their tours, and on maximizing the number of visited customers. They assume that the upstream real data used to optimize a transporter's tours is free from errors, such as the customers' real constraints, the customers' addresses, and their GPS coordinates. However, in real transporter situations, upstream data is often of bad quality because of address geocoding errors and the irrelevance of addresses received via EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates. Also, even with a good geocoder, an inaccurate address can lead to bad geocoding. For instance, when the geocoder has trouble geocoding an address, it returns the coordinates of the city centre. Another obvious geocoding issue is that the maps used by the geocoders are not regularly updated, so new buildings may not exist on the maps until the next update. Trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is not really useful and will lead to a bad, incoherent solution, because the customer locations used for the optimization are very different from their real positions. Our work is supported by a logistics software editor, Tedies, and a transport company, Upsilon. We work with Upsilon's truck route data to carry out our experiments. These trucks are equipped with TomTom GPS units that continuously save their tour data (positions, speeds, tachograph information, etc.). We then retrieve these data to extract the real truck routes to work with. The aim of this work is to use the driver's experience and the feedback from real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct badly geocoded ones. Thereby, when a vehicle makes its tour, it might have trouble finding a given customer's address at most once; in other words, the vehicle would be wrong at most once per customer's address. Our method significantly improves the quality of the geocoding: we are able to automatically correct an average of 70% of the GPS coordinates of a tour's addresses. The remaining GPS coordinates are corrected manually, with the user given indications to help correct them. This study shows the importance of taking into account the feedback of the trucks to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization, and address writing errors are very frequent. This feedback is naturally and usually exploited by transporters (by asking drivers, calling customers, etc.) to learn about their tours and bring corrections to upcoming tours. Hence, we developed a method to do a large part of this automatically.
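One plausible core of such an automatic correction, assuming each customer's recorded truck stop can be matched to the geocoded address: if the observed stop lies farther from the geocoded point than some threshold, trust the driver's position. The coordinates and threshold below are illustrative, not from the study:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def correct_geocoding(geocoded, observed_stop, threshold_km=0.3):
    """If the truck's recorded stop is far from the geocoded point, trust
    the driver: replace the coordinates with the observed stop position."""
    if haversine_km(*geocoded, *observed_stop) > threshold_km:
        return observed_stop, "corrected"
    return geocoded, "validated"

# Illustrative customer: the geocoder returned a city-centre fallback,
# but the truck actually stopped about 2 km away.
geocode = (47.322, 5.041)   # coordinates returned by the geocoder
stop    = (47.305, 5.060)   # position recorded by the on-board GPS
coords, status = correct_geocoding(geocode, stop)
```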

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 674
956 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) has been considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients situate AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work aimed to resolve this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, the MIT-BIH AF Database, the Normal Sinus Rhythm RR Interval Database, and the MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR-interval windows, and four specific features were then calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the features important for discriminating between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
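The abstract does not name its four RR-interval features, but a sketch with four standard RR-variability descriptors shows the kind of per-window feature vector such a pipeline produces before PCA and classification:

```python
from math import sqrt

def rr_features(rr_ms):
    """Standard RR-variability descriptors over one 1-min window of RR
    intervals (ms): mean RR, SDNN, RMSSD, pNN50. These are common choices
    for AF screening, not necessarily the paper's four features.
    """
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = sqrt(sum((x - mean) ** 2 for x in rr_ms) / n)      # overall variability
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = sqrt(sum(d * d for d in diffs) / len(diffs))      # beat-to-beat variability
    pnn50 = sum(1 for d in diffs if abs(d) > 50) / len(diffs) # share of jumps > 50 ms
    return mean, sdnn, rmssd, pnn50

# Illustrative window: irregular intervals of the kind seen in AF
mean, sdnn, rmssd, pnn50 = rr_features([810, 620, 950, 700, 880])
```

Feature vectors like this, one per window, would then be projected by PCA and classified by the LVQ network.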

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 268
955 Cognitive Science Based Scheduling in Grid Environment

Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya

Abstract:

A grid is an infrastructure that allows distributed data of large size to be deployed from multiple locations to reach a common goal. Scheduling data-intensive applications becomes challenging as the data sets involved are very large. Only two solutions exist to tackle this challenging issue. First, the computation which requires huge data sets can be transferred to the data site. Second, the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred, since the servers are storage/data servers with little or no computational capability; hence, the second scenario is considered for further exploration. During scheduling, transferring huge data sets from one site to another requires considerable network bandwidth. In order to mitigate this issue, this work focuses on incorporating cognitive science in scheduling. Cognitive science is the study of the human brain and its related activities, and current research mainly focuses on incorporating it into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. A cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents present in the CE help in analyzing each request and creating the knowledge base. Depending upon the link capacity, a decision is taken on whether to transfer the data sets or to partition them. The agents predict the next request so as to serve the requesting site with data sets in advance, which reduces the data availability time and data transfer time. The replica catalog and metadata catalog created by the agents assist in the decision-making process.

Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence

Procedia PDF Downloads 394
954 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India

Authors: Vinu Elias Jacob, Manoj Kumar Kini

Abstract:

Disasters of all types are occurring more frequently and are becoming more costly than ever due to various man-made factors, including climate change. Better use of the concepts of governance and management within disaster risk reduction is inevitable and of utmost importance. There is a need to explore the role of pre- and post-disaster public policies, as well as the role of urban planning/design in shaping the opportunities of households, individuals, and, collectively, settlements for achieving recovery. Governance strategies that can better support the integration of disaster risk reduction and management have to be examined. The main aim is thereby to build the resilience of individuals and communities and, thus, of the states too. Resilience is a term usually linked to the fields of disaster management and mitigation, but today it has become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts through improved mitigation, preparedness, response, and recovery. The growing population of the world has increased the inflow and use of resources, creating pressure on various natural systems and inequity in the distribution of resources. This makes cities vulnerable to multiple attacks by both natural and man-made disasters. Each urban area needs elaborate studies and study-based strategies to proceed in the discussed direction. Cochin, in Kerala, is the state's largest and fastest-growing city, with a population of more than 26 lakh (2.6 million). The main concern addressed in this paper is making cities resilient by designing a framework of strategies based on urban design principles for an immediate response system, focussing especially on the city of Cochin, Kerala, India. The paper discusses understanding the spatial transformations due to disasters and the role of spatial planning in the context of significant disasters. The paper also aims to develop a model, taking into consideration various factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density, and ecology, that can be implemented in any city in any context. Guidelines are made for the smooth evacuation of people through hassle-free transport networks, protecting vulnerable areas in the city, providing adequate open spaces for shelters and gatherings, making basic amenities available to the affected population within reachable distance, etc., by using the tool of urban design. Strategies at the city level and neighbourhood level have been developed with inferences from vulnerability analysis and case studies.

Keywords: disaster management, resilience, spatial planning, spatial transformations

Procedia PDF Downloads 296
953 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest

Authors: Peter Baji

Abstract:

In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although the public sector has different responses for decreasing traffic congestion in urban regions, the most effective public intervention is a congestion charge. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in the transport sciences, but until recently there were not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities in which congestion charges have already been introduced (e.g., London, Stockholm, Milan) designated a particular charging zone in their downtown, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown that is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely fixed coordinate pairs. From the difference between free-flow and congested travel time data, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas which lie outside the downtown, and their inhabitants also need protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that forces car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
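The congestion-delay measure described above can be sketched as follows, assuming the Distance Matrix API response layout (rows of elements carrying `duration` and `duration_in_traffic` in seconds); the sample response values are hypothetical, not measurements from Budapest.

```python
# Sketch: per origin-destination pair, congestion delay is the gap between
# the time in traffic and the free-flow time reported by the API.

def congestion_delay(element):
    """Extra travel time (s) attributable to congestion for one OD pair."""
    free_flow = element["duration"]["value"]             # free-flow travel time
    congested = element["duration_in_traffic"]["value"]  # time with live traffic
    return max(congested - free_flow, 0)

# Hypothetical API response for one coordinate pair.
sample = {"rows": [{"elements": [
    {"duration": {"value": 540}, "duration_in_traffic": {"value": 780}}]}]}

element = sample["rows"][0]["elements"][0]
print(congestion_delay(element))  # 240 seconds of delay
```

Repeating such queries over the day for every monitored road segment yields the daily congestion pattern the paper analyzes.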

Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study

Procedia PDF Downloads 198
952 Security Report Profiling for Mobile Banking Applications in Indonesia Based on OWASP Mobile Top 10-2016

Authors: Bambang Novianto, Rizal Aditya Herdianto, Raphael Bianco Huwae, Afifah, Alfonso Brolin Sihite, Rudi Lumanto

Abstract:

The mobile banking application is a type of mobile application that is growing rapidly, driven by the ease of service and the time saved in making transactions. On the other hand, this certainly poses a challenge in terms of security. The use of mobile banking cannot be separated from the cyberattacks that may occur, which can result in the theft of sensitive information or financial loss. Financial loss and the theft of sensitive information are the outcomes most to be avoided because, besides harming the user, they can also cause a loss of customer trust in a bank. Cyberattacks that are often carried out against mobile applications include phishing, hacking, theft, misuse of data, etc. A cyberattack can occur when a vulnerability is successfully exploited. The OWASP Mobile Top 10 records the ten vulnerabilities most commonly found in mobile applications. In addition, Android permissions also have the potential to cause vulnerabilities. Therefore, an overview of the profiles of mobile banking applications is urgently needed, so that the parties involved can take it into consideration when improving security. In this study, an experiment was conducted to capture the profile of mobile banking applications in Indonesia based on Android permissions and the OWASP Mobile Top 10 2016. The results show that there are six basic vulnerabilities based on the OWASP Mobile Top 10 that are most commonly found in mobile banking applications in Indonesia, i.e., M1: Improper Platform Usage, M2: Insecure Data Storage, M3: Insecure Communication, M5: Insufficient Cryptography, M7: Client Code Quality, and M9: Reverse Engineering. The most frequently requested Android permissions are internet access, network-state access, and read-phone-state.
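The Android-permission part of such profiling amounts to listing the `<uses-permission>` entries declared in an app's AndroidManifest.xml. A minimal sketch, using a made-up manifest snippet rather than any real banking app:

```python
# Sketch: extract requested permissions from an AndroidManifest.xml.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

MANIFEST = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
  <uses-permission android:name="android.permission.READ_PHONE_STATE"/>
</manifest>"""

def requested_permissions(manifest_xml):
    """Return the android:name of every uses-permission element."""
    root = ET.fromstring(manifest_xml)
    return [p.attrib[ANDROID_NS + "name"] for p in root.iter("uses-permission")]

print(requested_permissions(MANIFEST))
```

Aggregating these lists over many apps gives the permission frequencies reported in the abstract.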

Keywords: mobile banking application, OWASP mobile top 10 2016, android permission, sensitive information, financial loss

Procedia PDF Downloads 141
951 Transition Dynamic Analysis of the Urban Disparity in Iran “Case Study: Iran Provinces Center”

Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani

Abstract:

The usual methods of measuring regional inequalities cannot reflect the internal changes of the country in terms of displacement between different development groups, and conventional inequality indicators are not effective in demonstrating the dynamics of the distribution of inequality. For this purpose, this paper examines the transition dynamics of urban disparity in the country during the period 2006-2016 using the CIRD multidimensional index and the stochastic kernel density method. It first selects 25 indicators in five dimensions, including macroeconomic conditions, science and innovation, environmental sustainability, human capital, and public facilities, and develops a two-stage Principal Component Analysis methodology to create a composite index of inequality. Then, in the second stage, using a nonparametric analytical approach to internal distribution dynamics and the stochastic kernel density method, the convergence hypothesis of the CIRD index for the Iranian province centers is tested, and the long-run equilibrium is shown based on the ergodic density. Also, at this stage, for the purpose of adopting accurate regional policies, the distribution dynamics and the process of convergence or divergence of the Iranian provinces are examined for each of the five dimensions. According to the results of the first stage, in both 2006 and 2016 the highest level of development is found in Tehran, while Zahedan is at the lowest level. The results show that the central cities of the country are at the highest level of development, owing to the effects of Tehran's knowledge spillover, while the country's peripheral cities are at the lowest level. The main reason for this may be the lack of access to markets in the border provinces.
Based on the results of the second stage, which examines the dynamics of regional inequality transition in the country during 2006-2016, the distribution in the initial year (2006) is not multimodal: according to the kernel density plot, the CIRD index of about 70% of the cities lies between -1.1 and -0.1, and the rest of the distribution on the right lies above -0.1. The kernel distribution shows a convergence process, and the graph exhibits a single main peak at about -0.6, with a small secondary peak at about 3. In the final year (2016), the multidimensional pattern remains and there is no mobility in the lower-level groups, but at the higher level the CIRD index of about 45% of the provinces lies at about -0.4. This year clearly exhibits a twin-peak density pattern, which indicates that cities tend to cluster into groups in terms of development. Also, according to the distribution dynamics results, the provinces of Iran follow a single-peak density pattern in 2006 and a double-peak density pattern in 2016 at low and moderate inequality index levels, and in terms of the development index the country diverges during the years 2006 to 2016.
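The kernel density estimation at the heart of this analysis can be sketched in a few lines; the CIRD index values and bandwidth below are invented for illustration, not the study's data.

```python
# Sketch: Gaussian kernel density estimate of a composite inequality index,
# used to detect single-peak vs. twin-peak distributions across provinces.
import math

def kde(points, x, bandwidth):
    """Gaussian kernel density estimate at x."""
    n = len(points)
    return sum(
        math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
        / (bandwidth * math.sqrt(2 * math.pi))
        for p in points
    ) / n

# Hypothetical CIRD values: most provinces near -0.6, one outlier near 3.
index_2006 = [-1.0, -0.8, -0.7, -0.6, -0.5, -0.3, -0.2, 0.4, 3.0]
density_at_peak = kde(index_2006, -0.6, bandwidth=0.3)
density_in_tail = kde(index_2006, 2.0, bandwidth=0.3)
print(density_at_peak > density_in_tail)  # True: main peak near -0.6
```

Evaluating the estimate on a grid of x-values and counting local maxima distinguishes the unimodal 2006 pattern from a bimodal 2016-style pattern.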

Keywords: urban disparity, CIRD index, convergence, distribution dynamics, stochastic kernel density

Procedia PDF Downloads 124
950 Urine Neutrophil Gelatinase-Associated Lipocalin as an Early Marker of Acute Kidney Injury in Hematopoietic Stem Cell Transplantation Patients

Authors: Sara Ataei, Maryam Taghizadeh-Ghehi, Amir Sarayani, Asieh Ashouri, Amirhossein Moslehi, Molouk Hadjibabaie, Kheirollah Gholami

Abstract:

Background: Acute kidney injury (AKI) is common in hematopoietic stem cell transplantation (HSCT) patients, with an incidence of 21-73%. Prevention and early diagnosis reduce the frequency and severity of this complication. Predictive biomarkers are of major importance for timely diagnosis. Neutrophil gelatinase-associated lipocalin (NGAL) is a widely investigated novel biomarker for early diagnosis of AKI. However, no study has assessed NGAL for AKI diagnosis in HSCT patients. Methods: We performed further analyses on data gathered in our recent trial to evaluate the performance of urine NGAL (uNGAL) as an indicator of AKI in 72 allogeneic HSCT patients. AKI diagnosis and severity were assessed using the Risk-Injury-Failure-Loss-End-stage renal disease and AKI Network criteria. We assessed uNGAL on days -6, -3, +3, +9 and +15. Results: Time-dependent Cox regression analysis revealed a statistically significant relationship between uNGAL and AKI occurrence (HR = 1.04 (1.008-1.07), P = 0.01). There was also a relationship between the day +9 to baseline uNGAL ratio and the incidence of AKI (unadjusted HR = 1.047 (1.012-1.083), P < 0.01). The area under the receiver-operating characteristic curve for the day +9 to baseline ratio was 0.86 (0.74-0.99, P < 0.01), and a cut-off value of 2.62 was 85% sensitive and 83% specific in predicting AKI. Conclusions: Our results indicated that an increase in uNGAL augmented the risk of AKI and that the change of day +9 uNGAL concentrations from baseline could be of value for predicting AKI in HSCT patients. Additionally, uNGAL changes preceded serum creatinine rises by nearly 2 days.
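The way a threshold such as 2.62 is screened can be sketched as a sensitivity/specificity computation; the ratio values and AKI labels below are synthetic, not the study cohort.

```python
# Sketch: sensitivity and specificity of a biomarker-ratio cut-off for
# predicting AKI (label 1 = AKI, 0 = no AKI).

def sens_spec(values, labels, cutoff):
    """Sensitivity/specificity of the rule 'value >= cutoff predicts AKI'."""
    tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cutoff)
    fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < cutoff)
    tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cutoff)
    fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

ratios = [1.1, 1.6, 2.0, 2.7, 3.1, 3.9, 0.9, 1.4, 2.8, 4.2]  # synthetic day +9/baseline
aki    = [0,   0,   0,   1,   1,   1,   0,   0,   0,   1]
sens, spec = sens_spec(ratios, aki, cutoff=2.62)
print(sens, spec)
```

Sweeping the cutoff over all observed values and plotting sensitivity against 1 - specificity traces the ROC curve whose area the abstract reports.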

Keywords: acute kidney injury, hematopoietic stem cell transplantation, neutrophil gelatinase-associated lipocalin, receiver-operating characteristic curve

Procedia PDF Downloads 409
949 Traumatic Brain Injury Induced Lipid Profiling of Lipids in Mice Serum Using UHPLC-Q-TOF-MS

Authors: Seema Dhariwal, Kiran Maan, Ruchi Baghel, Apoorva Sharma, Poonam Rana

Abstract:

Introduction: Traumatic brain injury (TBI) is defined as the temporary or permanent alteration in brain function and pathology caused by an external mechanical force. It represents a leading cause of mortality and morbidity among children and young adults. Various rodent models of TBI have been developed in the laboratory to mimic injury scenarios. Blast overpressure injury, following accidents or explosive devices, is common among civilians and military personnel. In addition, the lateral controlled cortical impact (CCI) model mimics blunt, penetrating injury. Method: In the present study, we developed two different mild TBI models using blast and CCI injury. In the blast model, helium gas was used to create an overpressure of 130 kPa (±5) via a shock tube, and CCI injury was induced with an impact depth of 1.5 mm, to create diffusive and focal injury, respectively. C57BL/6J male mice (10-12 weeks) were divided into three groups, (1) control, (2) blast-treated, and (3) CCI-treated, and were exposed to the corresponding injury models. Serum was collected on day 1 and day 7, followed by biphasic extraction using MTBE/methanol/water. Prepared samples were separated on a Charged Surface Hybrid (CSH) C18 column and acquired on a UHPLC-Q-TOF-MS using an ESI probe with in-house optimized parameters and methods. The MS peak list was generated using MarkerView™. Data were normalized, Pareto-scaled, and log-transformed, followed by multivariate and univariate analysis in MetaboAnalyst. Result and discussion: Untargeted profiling of lipids generated extensive data features, which were annotated through LIPID MAPS® based on their m/z and further confirmed from their fragment patterns by LipidBlast. In total, 269 features were annotated in the positive and 182 features in the negative mode of ionization. PCA and PLS-DA score plots showed clear segregation of injury groups from controls.
Among the various lipids in mild blast and CCI, five lipids (the glycerophospholipids PC 30:2, PE O-33:3, PG 28:3;O3, and PS 36:1, and the fatty acyl FA 21:3;O2) were significantly altered in both injury groups at day 1 and day 7, and also had VIP scores >1. Pathway analysis by BioPAN also showed hampered synthesis of glycerolipids and glycerophospholipids, which coincides with earlier reports. This could be a direct result of alteration in the acetylcholine signaling pathway in response to TBI. Understanding the role of specific classes of lipid metabolism, regulation, and transport could benefit TBI research, since it could provide new targets and help determine the best therapeutic intervention. This study demonstrates potential lipid biomarkers which can be used for injury severity diagnosis and identification irrespective of injury type (diffusive or focal).
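The preprocessing chain mentioned above (log-transform and Pareto scaling before multivariate analysis) can be sketched for a single feature vector; the peak intensities are invented, and the ordering (log first, then Pareto) is one common convention.

```python
# Sketch: log-transform then Pareto-scale one lipid feature across samples.
import math

def preprocess(feature):
    """Log10-transform, mean-centre, then divide by sqrt(SD) (Pareto scaling)."""
    logged = [math.log10(x) for x in feature]
    mean = sum(logged) / len(logged)
    sd = math.sqrt(sum((v - mean) ** 2 for v in logged) / (len(logged) - 1))
    return [(v - mean) / math.sqrt(sd) for v in logged]

intensities = [1.2e5, 3.4e5, 9.8e4, 5.6e5, 2.1e5]  # hypothetical peak areas
scaled = preprocess(intensities)
print(round(sum(scaled), 6))  # centred, so the sum is ~0
```

Pareto scaling (dividing by the square root of the standard deviation rather than the standard deviation itself) keeps large fold-changes influential while damping noise, which is why it is popular in metabolomics and lipidomics.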

Keywords: LipidBlast, lipidomic biomarker, LIPID MAPS®, TBI

Procedia PDF Downloads 113
948 Study of Structural Behavior and Proton Conductivity of Inorganic Gel Paste Electrolyte at Various Phosphorous to Silicon Ratio by Multiscale Modelling

Authors: P. Haldar, P. Ghosh, S. Ghoshdastidar, K. Kargupta

Abstract:

In polymer electrolyte membrane fuel cells (PEMFC), the membrane electrode assembly (MEA) consists of two platinum-coated carbon electrodes sandwiching one proton-conducting, phosphoric acid-doped polymeric membrane. Due to low mechanical stability, flooding, and fuel crossover, the application of phosphoric acid in a polymeric membrane is very critical. Phosphorus- and silica-based 3D inorganic gels have gained attention in the fields of supercapacitors, fuel cells, and metal hydride batteries due to their thermally stable, highly proton-conductive behavior. Also, because a large number of water molecules and phosphoric acid can easily be trapped in the Si-O-Si network cavities, leaching out is prevented. In this study, we have performed molecular dynamics (MD) simulations and first-principles calculations to understand the structural, electronic, electrochemical, and morphological behavior of this inorganic gel at various P to Si ratios. We have used dipole-dipole interactions, H bonding, and van der Waals forces to study the main interactions between the molecules. A 'structure-property-performance' mapping is initiated to determine the optimum P to Si ratio for the best proton conductivity. We have performed the MD simulations at various temperatures to understand the temperature dependence of proton conductivity. The observed results are used to propose a model which fits well with experimental data and other literature values. We have also studied the mechanism behind the proton conductivity. Finally, we have proposed a structure for the gel paste with the optimum P to Si ratio.
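The temperature dependence of proton conductivity is commonly summarized by an Arrhenius fit, σ(T) = A·exp(-Ea/(kB·T)); a sketch of extracting an activation energy from conductivity-temperature pairs follows, with all numbers purely illustrative rather than results of the study.

```python
# Sketch: recover the activation energy Ea from sigma(T) data by a
# least-squares fit of ln(sigma) against 1/T (slope = -Ea/kB).
import math

KB_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def activation_energy(temps_K, sigmas):
    xs = [1.0 / t for t in temps_K]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope * KB_EV  # Ea in eV

# Synthetic data generated with Ea = 0.30 eV and prefactor A = 10 S/cm.
temps = [300.0, 320.0, 340.0, 360.0]
sigmas = [10.0 * math.exp(-0.30 / (KB_EV * t)) for t in temps]
print(round(activation_energy(temps, sigmas), 3))  # 0.3
```

Comparing fitted activation energies across P to Si ratios is one way a "structure-property-performance" map like the one described can be quantified.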

Keywords: first principle calculation, molecular dynamics simulation, phosphorous and silica based 3D inorganic gel, polymer electrolyte membrane fuel cells, proton conductivity

Procedia PDF Downloads 129
947 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study

Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin

Abstract:

Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of live video streaming. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we have conducted subjective evaluation tests on a set of video sequences containing a temporal impairment known as frame freezing. Frame freezing is considered a transmission error as well as a hardware error, which can result in the loss of video frames on the reception side of a transmission system. In our subjective tests, we have evaluated videos that contain a single freezing event as well as videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to compare the available No Reference (NR) objective algorithms. Finally, we have shown the performance of the no-reference algorithms used for objective evaluation of the videos and identified the algorithm that performs best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve the purpose of validating objective algorithms.
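A minimal no-reference detector for the frame-freezing impairment discussed above flags an event whenever consecutive frames are (near-)identical; the tiny grayscale "frames" and zero threshold below are illustrative stand-ins for real decoded video.

```python
# Sketch: flag frame indices where the mean absolute difference to the
# previous frame falls at or below a threshold (a freeze event).

def freeze_events(frames, threshold=0.0):
    events = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for ra, rb in zip(frames[i], frames[i - 1])
                   for a, b in zip(ra, rb))
        npix = len(frames[i]) * len(frames[i][0])
        if diff / npix <= threshold:
            events.append(i)
    return events

f0 = [[10, 20], [30, 40]]
f1 = [[10, 20], [30, 40]]   # frozen repeat of f0
f2 = [[11, 25], [33, 47]]   # motion resumes
print(freeze_events([f0, f1, f2]))  # [1]
```

Counting events and their durations gives the kind of temporal-impairment statistics an NR metric can correlate against subjective scores.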

Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)

Procedia PDF Downloads 602
946 Reinforcement-Learning Based Handover Optimization for Cellular Unmanned Aerial Vehicles Connectivity

Authors: Mahmoud Almasri, Xavier Marjou, Fanny Parzysz

Abstract:

The demand for services provided by Unmanned Aerial Vehicles (UAVs) is increasing pervasively across several sectors, including public safety, economic, and delivery services. As the number of applications using UAVs grows rapidly, ever more powerful, power-efficient computing units offering adequate quality of service are necessary. Recently, cellular technology has drawn attention for connectivity that can ensure reliable and flexible communication services for UAVs. In cellular networks, flying at high speed and altitude is subject to several key challenges, such as frequent handovers (HOs), high interference levels, connectivity coverage holes, etc. Additional HOs may lead to "ping-pong" between the UAVs and the serving cells, resulting in decreased quality of service and increased energy consumption. In order to optimize the number of HOs, we develop in this paper a Q-learning-based algorithm. While existing works focus on adjusting the number of HOs in a static network topology, we take into account the impact of cell deployment for three different simulation scenarios (rural, semi-rural, and urban areas). We also consider the impact of the decision distance at which the drone makes its switching decision. Our results show that a Q-learning-based algorithm significantly reduces the average number of HOs compared to a baseline case in which the drone always selects the cell with the highest received signal. Moreover, we also identify which hyper-parameters have the largest impact on the number of HOs in the three tested environments, i.e., rural, semi-rural, and urban.
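The Q-learning update at the core of such an algorithm can be sketched with a toy state-action table; the states, rewards, and hyper-parameters below are simplified stand-ins, since the paper's exact state space and reward shaping are not given here.

```python
# Sketch: one-step Q-learning for a stay/handover decision, penalizing
# ping-pong handovers and rewarding stable connectivity.

ALPHA, GAMMA = 0.5, 0.9          # learning rate, discount factor (hypothetical)
ACTIONS = ["stay", "handover"]

Q = {}                            # Q[(state, action)] -> value

def update(state, action, reward, next_state):
    """Q <- Q + alpha * (r + gamma * max_a' Q(next, a') - Q)."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Penalize a ping-pong handover (-1), reward staying connected (+1).
update("cell_A_strong", "handover", -1.0, "cell_B_weak")
update("cell_A_strong", "stay", 1.0, "cell_A_strong")
print(Q[("cell_A_strong", "stay")] > Q[("cell_A_strong", "handover")])  # True
```

After enough flights through the simulated cell deployment, the greedy policy over Q avoids the signal-chasing behavior of the highest-received-signal baseline.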

Keywords: drone connectivity, reinforcement learning, handover optimization, decision distance

Procedia PDF Downloads 108
945 Technical Sustainable Management: An Instrument to Increase Energy Efficiency in Wastewater Treatment Plants, a Case Study in Jordan

Authors: Dirk Winkler, Leon Koevener, Lamees AlHayary

Abstract:

This paper contributes to the improvement of the municipal wastewater systems in Jordan. An important goal is increased energy efficiency in wastewater treatment plants and therefore lower expenses due to reduced electricity consumption. The chosen way to achieve this goal is the implementation of Technical Sustainable Management adapted to the Jordanian context. Three wastewater treatment plants in Jordan were chosen as a case study for the investigation. These choices were supported by the fact that the three treatment plants are of average performance and size. Beyond that, an energy assessment had recently been conducted in those facilities. The project succeeded in proving the following hypothesis: energy efficiency in wastewater treatment plants can be improved by implementing principles of Technical Sustainable Management adapted to the Jordanian context. In this case study, a significant increase in energy efficiency was achieved by optimization of operational performance, identification and elimination of shortcomings, and appropriate plant management. Implementing Technical Sustainable Management as a low-cost tool with a comparatively small workload provides several additional benefits beyond increased energy efficiency, including compliance with all legal and technical requirements and process optimization, but also increased work safety and more convenient working conditions. The research in the chosen field continues because there are indications that the adapted tool could be integrated into other regions and sectors. The concept of Technical Sustainable Management adapted to the Jordanian context could be extended to other wastewater treatment plants in all regions of Jordan, but also into other sectors including water treatment, water distribution, wastewater networks, desalination, or the chemical industry.

Keywords: energy efficiency, quality management system, technical sustainable management, wastewater treatment

Procedia PDF Downloads 162
944 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids

Authors: Ayalew Yimam Ali

Abstract:

The Y-shaped microchannel is used to mix both miscible and immiscible fluids with different viscosities. However, mixing at the entrance of the Y-junction microchannel can be difficult due to micro-scale laminar flow with two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce a rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The 3D trapezoidal triangular spine structure developed in this study was created using precision CNC cutting tools to produce a microchannel mold with the spine along the longitudinal mixing region of the Y-junction. The molds for the 3D trapezoidal structure, with sharp-edge tip angles of 30° and a trapezoidal triangular sharp-edge tip depth of 0.3 mm, were machined from PMMA (polymethyl methacrylate), and the channel was manufactured in PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the Y-junction microchannel using soft-lithography nanofabrication strategies. Micro-particle image velocimetry (μPIV) was used to visualize the 3D rolling steady acoustic streaming and to study mixing enhancement of the high-viscosity miscible fluids for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes.
The streaming velocity fields and vorticity fields show vorticity up to 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies using the grayscale value of pixel intensity with MATLAB software. Mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side of the channel and a de-ionized water-glycerol mixture on the other inlet side of the Y-channel; the degree of mixing was found to have greatly improved, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing of the two miscible high-viscosity fluids around the entrance junction zone, otherwise governed by laminar flow transport phenomena, was enhanced by the formation of a new, three-dimensional, intense steady streaming rolling motion at high volume flow rates.
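An intensity-based mixing index of the kind described (computed from grayscale pixel values) can be sketched as follows; the pixel values and the particular index definition (1 minus the normalized standard deviation) are illustrative assumptions, not the paper's exact MATLAB procedure.

```python
# Sketch: mixing improves as the spread of gray values across a channel
# cross-section shrinks relative to the unmixed state.
import math

def mixing_degree(pixels, unmixed_sd):
    """1 - sd/sd_unmixed, clipped to [0, 1]; 1 means fully mixed."""
    mean = sum(pixels) / len(pixels)
    sd = math.sqrt(sum((p - mean) ** 2 for p in pixels) / len(pixels))
    return max(0.0, min(1.0, 1.0 - sd / unmixed_sd))

unmixed = [0, 0, 0, 255, 255, 255]        # dye confined to one side
mixed   = [120, 130, 125, 128, 122, 131]  # acoustic streaming on
sd0 = math.sqrt(sum((p - 127.5) ** 2 for p in unmixed) / len(unmixed))
print(mixing_degree(unmixed, sd0))  # 0.0
print(mixing_degree(mixed, sd0))    # close to 1
```

Applied to μPIV or fluorescence images before and after switching the transducer on, such an index yields the 67.42% to 96.83% comparison reported above.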

Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement

Procedia PDF Downloads 21
943 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case when an external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject ID, from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
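The latent split [zₚ, zₙ] rests on an invertible transform between y and z. A stripped-down illustration follows, using a fixed 2D rotation in place of a learned NF bijection (real flows are nonlinear neural maps; the matrix and split sizes here are illustrative only):

```python
# Sketch: an invertible linear map y -> z, with z split into a "supervised"
# part zp and a "residual" part zn; invertibility lets y be reconstructed.
import math

theta = math.pi / 6
W = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]   # orthogonal, hence invertible

def forward(y):
    z = [W[0][0] * y[0] + W[0][1] * y[1],
         W[1][0] * y[0] + W[1][1] * y[1]]
    return z[:1], z[1:]                      # split into (zp, zn)

def inverse(zp, zn):
    z = zp + zn
    # the inverse of an orthogonal matrix is its transpose
    return [W[0][0] * z[0] + W[1][0] * z[1],
            W[0][1] * z[0] + W[1][1] * z[1]]

y = [0.7, -1.3]
zp, zn = forward(y)
y_back = inverse(zp, zn)
print(all(abs(a - b) < 1e-12 for a, b in zip(y, y_back)))  # exact round trip
```

In AP-CDE the analogous transform is trained so that zₚ tracks the predictive posterior for x while zₙ absorbs the remaining, x-unrelated variation.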

Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction

Procedia PDF Downloads 96
942 Comparison of Blockchain Ecosystem for Identity Management

Authors: K. S. Suganya, R. Nedunchezhian

Abstract:

In recent years, blockchain technology has been regarded as the most significant discovery of the digital era after the Internet and cloud computing. A blockchain is a simple, distributed public ledger that records users' transaction details in blocks. The global copy of a block is shared among all users of the peer-to-peer network after validation by the blockchain miners. Once a block is validated and accepted, it cannot be altered by any user, making transactions trust-free. Blockchain also resolves the problem of double-spending by using traditional cryptographic methods. Since the advent of Bitcoin, blockchain has been the backbone of all its transactions, but in recent years it has found uses in many other fields, such as smart contracts, smart city management, and healthcare. Identity management against digital identity theft has become a major concern among financial and other organizations. To counter digital identity theft, blockchain technology can be employed alongside existing identity management systems: a distributed public ledger holds an individual's identity information, such as digital birth certificates, citizenship numbers, bank details, voter details, and driving licenses, in the form of blocks; once verified on the blockchain, these records become time-stamped, unforgeable, and publicly visible to legitimate users. The main challenge in using blockchain technology to prevent digital identity theft is ensuring the pseudo-anonymity and privacy of the users. This survey paper studies blockchain concepts, consensus protocols, and various blockchain-based digital identity management systems along with their research scope. The paper also discusses the role of blockchain in COVID-19 pandemic management through self-sovereign identity and supply chain management.
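The tamper-evidence property described above (a validated block cannot be altered unnoticed) comes from each block committing to the hash of its predecessor. A didactic sketch, not any production blockchain; the identity records are hypothetical:

```python
# Sketch: a toy hash-chained ledger; altering any block breaks the chain.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"identity": "digital birth certificate #123"})  # hypothetical record
append_block(chain, {"identity": "driving license #456"})
print(verify(chain))                          # True
chain[0]["data"]["identity"] = "tampered"
print(verify(chain))                          # False: the hash link breaks
```

Consensus protocols then determine which honestly extended chain the peer-to-peer network accepts as authoritative.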

Keywords: blockchain, consensus protocols, bitcoin, identity theft, digital identity management, pandemic, COVID-19, self-sovereign identity

Procedia PDF Downloads 130
941 Research on Intercity Travel Mode Choice Behavior Considering Traveler’s Heterogeneity and Psychological Latent Variables

Authors: Yue Huang, Hongcheng Gan

Abstract:

The new urbanization pattern has led to rapid growth in demand for short-distance intercity travel, and the emergence of new travel modes has also increased the variety of intercity travel options. In previous studies of intercity travel mode choice behavior, the impact of the functional amenities of a travel mode and of travelers' long-term personality characteristics has rarely been considered, and empirical results have typically been calibrated using revealed preference (RP) or stated preference (SP) data alone. This study designed a questionnaire that combines RP and SP experiments from the perspective of a trip chain combining inner-city and intercity mobility, with consideration for the actual conditions of the Huainan-Hefei traffic corridor. On the basis of the RP/SP fusion data, a hybrid choice model considering both random taste heterogeneity and psychological characteristics was established to investigate travelers' mode choice behavior for traditional train, high-speed rail, intercity bus, private car, and intercity online car-hailing. The findings show that intercity time and cost exert the greatest influence on mode choice, with significant heterogeneity across the population. Although inner-city cost does not demonstrate a significant influence, inner-city time plays an important role. Service attributes of a travel mode, such as catering and hygiene services, as well as free wireless network provision, play only a minor role in mode selection. Finally, our study demonstrates that safety-seeking tendency, hedonism, and introversion all have differential and significant effects on intercity travel mode choice.
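The multinomial-logit core underlying mode-choice models of this kind can be sketched in a few lines: utilities decrease in time and cost, and choice probabilities follow a softmax. The coefficients and mode attributes below are invented for illustration, not the paper's estimates.

```python
# Sketch: choice probabilities for intercity modes under a simple
# multinomial logit with (hypothetical) time and cost coefficients.
import math

BETA_TIME, BETA_COST = -0.05, -0.01   # hypothetical taste parameters

def choice_probabilities(modes):
    """modes: {name: (intercity_time_min, cost)} -> {name: probability}."""
    utils = {m: BETA_TIME * t + BETA_COST * c for m, (t, c) in modes.items()}
    mx = max(utils.values())
    exps = {m: math.exp(u - mx) for m, u in utils.items()}   # stable softmax
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

modes = {"high_speed_rail": (40, 60), "traditional_train": (90, 25),
         "intercity_bus": (80, 30), "private_car": (70, 80)}
probs = choice_probabilities(modes)
print(max(probs, key=probs.get))  # high_speed_rail has the highest utility here
```

A hybrid choice model extends this core by letting the β coefficients vary randomly across travelers and by feeding latent psychological variables into the utilities.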

Keywords: intercity travel mode choice, stated preference survey, hybrid choice model, RP/SP fusion data, psychological latent variable, heterogeneity

Procedia PDF Downloads 111
940 New Platform of Biobased Aromatic Building Blocks for Polymers

Authors: Sylvain Caillol, Maxence Fache, Bernard Boutevin

Abstract:

Recent years have witnessed an increasing demand for renewable resource-derived polymers owing to increasing environmental concern and the restricted availability of petrochemical resources. Thus, a great deal of attention has been paid to polymers derived from renewable resources, and especially to thermosetting materials, since they are crosslinked polymers and thus cannot be recycled. Also, most thermosetting materials contain aromatic monomers, which confer high mechanical and thermal properties on the network. Therefore, access to biobased, non-harmful, and available aromatic monomers is one of the main challenges of the years to come. Starting from phenols available in large volumes from renewable resources, our team designed platforms of chemicals usable for the synthesis of various polymers. One of these phenols, vanillin, which is readily available from lignin, was studied more specifically. Various aromatic building blocks bearing polymerizable functions were synthesized: epoxy, amine, acid, carbonate, alcohol, etc. These vanillin-based monomers can potentially lead to numerous polymers. The example of epoxy thermosets was chosen, as there is also the issue of bisphenol A substitution for these polymers. Materials were prepared from the biobased epoxy monomers obtained from vanillin. Their thermo-mechanical properties were investigated, and the effect of the monomer structure is discussed. The properties of the prepared materials were found to be comparable to the current industrial reference, indicating a potential replacement of petrosourced, bisphenol A-based epoxy thermosets by biosourced, vanillin-based ones. The tunability of the final properties was achieved through the choice of monomer and through a well-controlled oligomerization reaction of these monomers. This follows the same strategy as the one currently used in industry, which supports the potential of these vanillin-derived epoxy thermosets as substitutes for their petro-based counterparts.

Keywords: lignin, vanillin, epoxy, amine, carbonate

Procedia PDF Downloads 232
939 An Exploratory Study on 'Sub-Region Life Circle' in Chinese Big Cities Based on Human High-Probability Daily Activity: Characteristic and Formation Mechanism as a Case of Wuhan

Authors: Zhuoran Shan, Li Wan, Xianchun Zhang

Abstract:

With the increasing trend of regionalization and polycentricity in contemporary Chinese big cities, the "sub-region life circle" has proved to be an effective method for the rational organization of urban function and spatial structure. Using questionnaires, network big data, route inversion on internet maps, GIS spatial analysis, and logistic regression, this article investigates the characteristics and formation mechanism of the "sub-region life circle" based on human high-probability daily activity in Chinese big cities. Firstly, it shows that the "sub-region life circle" has become a new general spatial sphere of residents' high-probability daily activity and mobility in China. Unlike earlier analyses of the whole metropolis or the micro-community, the "sub-region life circle" has its own characteristics in geographical sphere, functional elements, spatial morphology, and land distribution. Secondly, according to the results of a binary logistic regression model, the research shows that seven factors, including land-use mix degree and bus station density, have the greatest impact on the formation of a "sub-region life circle", and it then determines the critical index value of each factor. Finally, to establish a smarter "sub-region life circle", this paper indicates that several strategies, including jobs-housing fit, service cohesion, and space reconstruction, are the keys to optimizing its spatial organization. This study expands the understanding of cities' inner sub-region spatial structure based on human daily activity and contributes to the theory of the "life circle" at the urban meso-scale.
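The "critical index value" of a factor in a binary logistic regression can be read off as the point where, other factors held fixed, the predicted formation probability crosses 0.5. A sketch with invented coefficients (not the paper's estimates):

```python
# Sketch: binary logistic model of life-circle formation; the critical
# land-use-mix value solves B0 + B_MIX*x + B_BUS*d = 0, i.e. p = 0.5.
import math

B0 = -2.0                      # intercept (hypothetical)
B_MIX, B_BUS = 3.0, 0.5        # land-use mix degree, bus-station density

def prob_formation(mix_degree, bus_density):
    z = B0 + B_MIX * mix_degree + B_BUS * bus_density
    return 1.0 / (1.0 + math.exp(-z))

def critical_mix(bus_density):
    """Mix degree at which the formation probability equals 0.5."""
    return -(B0 + B_BUS * bus_density) / B_MIX

d = 1.2
x_crit = critical_mix(d)
print(round(prob_formation(x_crit, d), 6))  # 0.5 at the critical value
```

Repeating this for each of the seven significant factors yields the per-factor thresholds the abstract refers to.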

Keywords: sub-region life circle, characteristic, formation mechanism, human activity, spatial structure

938 Highly Conducting Ultra Nanocrystalline Diamond Nanowires Decorated ZnO Nanorods for Long Life Electronic Display and Photo-Detectors Applications

Authors: A. Saravanan, B. R. Huang, C. J. Yeh, K. C. Leou, I. N. Lin

Abstract:

A new class of ultra-nanocrystalline diamond-graphite nano-hybrid (DGH) composite materials containing nano-sized diamond needles was developed through a low-temperature process. Such diamond-graphite nano-hybrid composite nanowires exhibit high electrical conductivity and excellent electron field emission (EFE) properties. Earlier reports indicate that adding N2 gas to the growth plasma requires a high growth temperature (800°C) to activate the dopants and render the films conductive, and such a high growth temperature is not compatible with Si-based device fabrication. We instead used a novel bias-enhanced-grown (BEG) MPECVD process to grow diamond films at a low substrate temperature (450°C). The beg-N/UNCD films thus obtained possess a high conductivity of σ = 987 S/cm, the highest reported for diamond films, together with excellent EFE properties. TEM investigation indicated that these films contain needle-like diamond grains about 5 nm in diameter and hundreds of nanometers in length, each encased in graphitic layers tens of nanometers thick. These properties make the materials suitable for specific applications: high conductivity for electron field emitters, high robustness for microplasma cathodes, and high electrochemical activity for electrochemical sensing. Furthermore, when the highly conducting DGH films were coated on vertically aligned ZnO nanorods (ZNRs), no prior nucleation or seeding process was needed owing to the BEG method. Such a composite structure provides a significant enhancement in the field emission characteristics of the cold cathode, with an ultralow turn-on field of 1.78 V/μm and a high EFE current density of 3.68 mA/cm² (at 4.06 V/μm) due to the decoration of the ZnO nanorods with DGH material.
The DGH/ZNRs-based device sustains stable emission for a longer duration (562 min) than bare ZNRs (104 min) without any current degradation, because the diamond coating protects the ZNRs from ion bombardment when they are used as the cathode of microplasma devices. The potential application of these materials is demonstrated by plasma illumination measurements, in which the plasma was ignited at a minimum voltage of 290 V. The DGH/ZNRs-based photodetectors exhibit a much higher photoresponse (Iphoto/Idark = 1202) than bare ZNRs (229). Electron transport from the ZNRs to the DGH through the graphitic layers is facile, and the EFE properties of these materials are comparable to those of other widely used field emitters such as carbon nanotubes and graphene. The DGH/ZNRs composite also offers the possibility of use in flat-panel, microplasma, and vacuum microelectronic devices.

Keywords: bias-enhanced nucleation and growth, ZnO nanorods, electrical conductivity, electron field emission, photo-detectors

937 Effect of a GABA/5-HTP Mixture on Behavioral Changes and Biomodulation in an Invertebrate Model

Authors: Kyungae Jo, Eun Young Kim, Byungsoo Shin, Kwang Soon Shin, Hyung Joo Suh

Abstract:

Gamma-aminobutyric acid (GABA) and 5-hydroxytryptophan (5-HTP) are amino acids derived from digested nutrients or food ingredients, and they can potentially be utilized as a non-pharmacologic treatment for sleep disorders. We previously reported that a GABA/5-HTP mixture is the principal candidate for sleep-promoting and activity-repressing management in the nervous system of D. melanogaster. Two experiments in this study were designed to evaluate the sleep-promoting effect of the GABA/5-HTP mixture and to clarify the possible ratio for sleep-promoting action in the Drosophila invertebrate model system. Behavioral assays were applied to investigate the distance traveled, velocity, movement, mobility, turn angle, angular velocity, and meander of caffeine-treated flies given each amino acid alone or the GABA/5-HTP mixture. In addition, differentially expressed gene (DEG) analyses from next-generation sequencing (NGS) were applied to investigate the signaling pathways and functional interaction network affected by GABA/5-HTP administration. The GABA/5-HTP mixture resulted in significant behavioral differences between groups (p < 0.01) and significantly affected locomotor activity in the awake model (p < 0.05). The sequencing showed that the molecular functions of various genes are related to motor activity and biological regulation. These results indicate that GABA/5-HTP administration is significantly involved in the inhibition of motor behavior. In this regard, we successfully demonstrated that the GABA/5-HTP mixture modulates locomotor activity to a greater extent than single administration of either amino acid, and that this modulation occurs via the neuronal system, the neurotransmitter release cycle, and transmission across chemical synapses.

Keywords: sleep, γ-aminobutyric acid, 5-hydroxytryptophan, Drosophila melanogaster
