Search results for: inter-datacenters transport network

936 New Platform of Biobased Aromatic Building Blocks for Polymers

Authors: Sylvain Caillol, Maxence Fache, Bernard Boutevin

Abstract:

Recent years have witnessed an increasing demand for renewable resource-derived polymers, owing to growing environmental concern and the restricted availability of petrochemical resources. A great deal of attention has therefore been paid to renewable resource-derived polymers, and to thermosetting materials especially, since they are crosslinked polymers and thus cannot be recycled. Also, most thermosetting materials contain aromatic monomers, which confer high mechanical and thermal properties on the network. Therefore, access to biobased, non-harmful, and readily available aromatic monomers is one of the main challenges of the years to come. Starting from phenols available in large volumes from renewable resources, our team designed platforms of chemicals usable for the synthesis of various polymers. One of these phenols, vanillin, which is readily available from lignin, was studied more specifically. Various aromatic building blocks bearing polymerizable functions were synthesized: epoxy, amine, acid, carbonate, alcohol, etc. These vanillin-based monomers can potentially lead to numerous polymers. The example of epoxy thermosets was taken, as these polymers also face the problem of bisphenol A substitution. Materials were prepared from the biobased epoxy monomers obtained from vanillin. Their thermo-mechanical properties were investigated, and the effect of the monomer structure was discussed. The properties of the materials prepared were found to be comparable to the current industrial reference, indicating a potential replacement of petrosourced, bisphenol A-based epoxy thermosets by biosourced, vanillin-based ones. The tunability of the final properties was achieved through the choice of monomer and through a well-controlled oligomerization reaction of these monomers. This follows the same strategy as the one currently used in industry, which supports the potential of these vanillin-derived epoxy thermosets as substitutes for their petro-based counterparts.

Keywords: lignin, vanillin, epoxy, amine, carbonate

Procedia PDF Downloads 230
935 An Exploratory Study on 'Sub-Region Life Circle' in Chinese Big Cities Based on Human High-Probability Daily Activity: Characteristic and Formation Mechanism as a Case of Wuhan

Authors: Zhuoran Shan, Li Wan, Xianchun Zhang

Abstract:

With an increasing trend of regionalization and polycentricity in contemporary Chinese big cities, the “sub-region life circle” is proving to be an effective concept for the rational organization of urban functions and spatial structure. Using questionnaires, network big data, route inversion on internet maps, GIS spatial analysis, and logistic regression, this article investigates the characteristics and formation mechanism of the “sub-region life circle” based on human high-probability daily activity in Chinese big cities. Firstly, it shows that the “sub-region life circle” has become a new general spatial sphere of residents' high-probability daily activity and mobility in China. Unlike earlier analyses of the whole metropolitan area or the micro community, the “sub-region life circle” has its own characteristics in terms of geographical sphere, functional elements, spatial morphology, and land distribution. Secondly, based on a binary logistic regression model, the research shows that seven factors, including land-use mix and bus station density, influence the formation of the “sub-region life circle” most, and it then analyzes the critical value of each factor. Finally, to establish a smarter “sub-region life circle”, the paper indicates that strategies including jobs-housing fit, service cohesion, and space reconstruction are key to optimizing its spatial organization. This study expands the understanding of cities' inner sub-region spatial structure based on human daily activity and contributes to the theory of the “life circle” at the urban meso-scale.

Keywords: sub-region life circle, characteristic, formation mechanism, human activity, spatial structure

Procedia PDF Downloads 295
934 Effect of a GABA/5-HTP Mixture on Behavioral Changes and Biomodulation in an Invertebrate Model

Authors: Kyungae Jo, Eun Young Kim, Byungsoo Shin, Kwang Soon Shin, Hyung Joo Suh

Abstract:

Gamma-aminobutyric acid (GABA) and 5-hydroxytryptophan (5-HTP) are amino acids derived from digested nutrients or food ingredients and can potentially be used as a non-pharmacologic treatment for sleep disorders. We previously showed that a GABA/5-HTP mixture is a principal sleep-promoting and activity-repressing agent acting on the nervous system of D. melanogaster. The two experiments in this study were designed to evaluate the sleep-promoting effect of the GABA/5-HTP mixture and to clarify the ratio responsible for the sleep-promoting action in the Drosophila invertebrate model system. Behavioral assays were applied to investigate distance traveled, velocity, movement, mobility, turn angle, angular velocity, and meander in caffeine-treated flies given each of the two amino acids or the GABA/5-HTP mixture. In addition, differentially expressed gene (DEG) analyses from next-generation sequencing (NGS) were applied to investigate the signaling pathways and functional interaction networks affected by administration of the GABA/5-HTP mixture. The GABA/5-HTP mixture resulted in significant behavioral differences between groups (p < 0.01) and significantly induced locomotor activity in the awake model (p < 0.05). The sequencing showed that the molecular functions of various genes are related to motor activity and biological regulation. These results showed that administration of the GABA/5-HTP mixture was significantly involved in the inhibition of motor behavior. In this regard, we successfully demonstrated that a GABA/5-HTP mixture modulates locomotor activity to a greater extent than single administration of each amino acid, and that this modulation occurs via the neuronal system, the neurotransmitter release cycle, and transmission across chemical synapses.

Keywords: sleep, γ-aminobutyric acid, 5-hydroxytryptophan, Drosophila melanogaster

Procedia PDF Downloads 305
933 Generalized Additive Model for Estimating Propensity Score

Authors: Tahmidul Islam

Abstract:

The Propensity Score Matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms of the covariates is the technique most used in many studies; logistic regression is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model remains an open question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for more accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and to compare its performance with the usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. Various simple and complex models have been considered for treatment under several situations (small/large sample, low/high number of treatment units), examining which method leads to more covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
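
As an illustration of the comparison described above, the sketch below estimates propensity scores with a plain linear-logit model and with a spline-based (GAM-like) fit, then checks covariate balance after a simple 1:1 nearest-neighbour match. It uses simulated data and patsy B-spline terms through statsmodels as a stand-in for a full GAM; the variable names, sample size, and balance check are illustrative assumptions, not the study's actual setup.

```python
# Sketch: propensity scores from plain logistic regression vs. a spline-based
# (GAM-like) fit, followed by a simple covariate-balance check on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
# Non-linear treatment assignment: the linear-logit PS model is misspecified here.
logit_t = 0.5 * x1 - 1.0 + np.sin(2 * x2) + 0.5 * x2 ** 2
treat = rng.binomial(1, 1 / (1 + np.exp(-logit_t)))
df = pd.DataFrame({"treat": treat, "x1": x1, "x2": x2})

linear = smf.glm("treat ~ x1 + x2", df, family=sm.families.Binomial()).fit()
# Cubic B-spline terms (patsy's bs) approximate a GAM's smooth functions.
spline = smf.glm("treat ~ bs(x1, df=4) + bs(x2, df=4)", df,
                 family=sm.families.Binomial()).fit()

def smd_after_matching(ps, df):
    """1:1 nearest-neighbour match on PS (with replacement), then SMD of x2."""
    t, c = df[df.treat == 1], df[df.treat == 0]
    idx = [c.index[np.argmin(np.abs(ps[c.index] - ps[i]))] for i in t.index]
    matched_c = df.loc[idx]
    pooled_sd = np.sqrt((t.x2.var() + matched_c.x2.var()) / 2)
    return abs(t.x2.mean() - matched_c.x2.mean()) / pooled_sd

for name, fit in [("linear logit", linear), ("spline logit (GAM-like)", spline)]:
    ps = fit.predict(df)
    print(f"{name:25s} SMD(x2) after matching: {smd_after_matching(ps, df):.3f}")
```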

Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching

Procedia PDF Downloads 362
932 Mixing Enhancement with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Micromixer Using Different Mixing Fluids

Authors: Ayalew Yimam Ali

Abstract:

The T-shaped microchannel is used to mix both miscible and immiscible fluids with different viscosities. However, mixing at the entrance of the T-junction microchannel is difficult because of the micro-scale laminar flow of the two miscible, high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce a rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The newly developed 3D trapezoidal triangular structure spine used in this study was created with sophisticated CNC cutting tools, which were used to machine a microchannel mold carrying the 3D trapezoidal triangular structure spine along the longitudinal mixing region of the T-junction. The molds, with 3D sharp-edge tip angles of 30° and a trapezoidal triangular sharp-edge tip depth of 0.3 mm, were made from PMMA (polymethylmethacrylate) glass with an advanced CNC machine, and the channel was manufactured in PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the Y-junction microchannel using soft-lithography nanofabrication strategies. Micro-particle image velocimetry (μPIV) was used to visualize the 3D rolling steady acoustic streaming and the mixing enhancement of the high-viscosity miscible fluids for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The streaming velocity fields and vorticity fields show vorticity maps up to 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies from the grayscale values of pixel intensity using MATLAB. Mixing experiments were performed with a fluorescent green dye solution in de-ionized water on one inlet side of the channel and a de-ionized water-glycerol mixture on the other inlet side of the T-channel; the degree of mixing was found to improve greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing of the two miscible, high-viscosity fluids governed by laminar flow transport phenomena is enhanced by the formation of a new, three-dimensional, intense steady streaming rolling motion at high volume flow rates around the junction entrance mixing zone.
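
The degree of mixing quoted above is computed from grayscale pixel intensities; as a rough illustration (not the study's MATLAB routine), the sketch below uses the normalized intensity standard deviation over a region of interest as a mixing index, with synthetic arrays standing in for µPIV frames.

```python
# Sketch: degree of mixing from grayscale pixel intensities, using the
# normalized standard deviation (intensity-of-segregation style index).
import numpy as np

def mixing_index(gray: np.ndarray, unmixed_std: float) -> float:
    """1 = perfectly mixed (uniform intensity), 0 = fully segregated."""
    return 1.0 - gray.std() / unmixed_std

# Illustrative stand-ins for camera frames: a half-dark/half-bright (segregated)
# field and a partially blended, noisy version of it.
segregated = np.zeros((200, 200))
segregated[:, 100:] = 255.0
rng = np.random.default_rng(1)
partially_mixed = (0.4 * segregated + 0.6 * segregated.mean()
                   + rng.normal(0, 5, segregated.shape))

sigma0 = segregated.std()
print("segregated      :", round(mixing_index(segregated, sigma0), 3))   # ~0
print("partially mixed :", round(mixing_index(partially_mixed, sigma0), 3))
```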

Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement

Procedia PDF Downloads 10
931 An Assessment into the Drift in Direction of International Migration of Labor: Changing Aspirations for Religiosity and Cultural Assimilation

Authors: Syed Toqueer Akhter, Rabia Zulfiqar

Abstract:

This paper attempts to trace the determining factors, as far as individual preferences and expectations are concerned, behind the drift in the direction of international migration owing to factors such as religiosity and cultural assimilation. The narrative on migration has graduated from the age-old ‘push/pull’ debate to one of complex factors that may vary across individuals. We explore the longstanding factor of religiosity, widely acknowledged in the literature as a key variable in the assessment of migration, and analyze its impact in the form of a drift in the intent of migration. A more conventional factor, cultural assimilation, is used in a contemporary way to estimate how it affects this drift in direction. In particular, our research aims to isolate the effect that our key variables, cultural assimilation and religiosity, have on the direction of migration, to explore how they interplay as a composite unit, and to justify the change in behavior displayed by these key variables. In order to establish a true sense of what drives individual choices, we employ survey research and use a questionnaire for primary data collection. The questionnaire was divided into six sections covering household characteristics and the perceptions and inclinations of the respondents relevant to our study. Religiosity was quantified using a migration-network proxy that drew on secondary data to estimate religious hubs in recipient countries. To estimate the relationship between the intent of migration and its variants, three competing econometric models were employed: the Ordered Probit Model, the Ordered Logit Model, and the Tobit Model. For every model that included our key variables, a highly significant relationship with the intent of migration was estimated.
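
As a hedged illustration of the ordered-response modelling mentioned above (not the authors' estimation code), the sketch below fits an ordered probit to a simulated Likert-style intent-to-migrate variable using statsmodels; the column names, response scale, and simulated data are assumptions.

```python
# Sketch: ordered probit for an "intent to migrate" response, with religiosity
# and cultural-assimilation scores as regressors (simulated, illustrative data).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(42)
n = 400
religiosity = rng.normal(size=n)    # e.g. migration-network proxy of religiosity
assimilation = rng.normal(size=n)   # e.g. cultural assimilation index
latent = 0.8 * religiosity + 0.5 * assimilation + rng.normal(size=n)
# Map the latent intent onto an ordered scale (none/low/medium/high).
intent = pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf],
                labels=["none", "low", "medium", "high"])

X = pd.DataFrame({"religiosity": religiosity, "assimilation": assimilation})
res = OrderedModel(pd.Series(intent), X, distr="probit").fit(method="bfgs",
                                                             disp=False)
print(res.summary())
# distr="logit" gives the ordered logit variant also mentioned in the abstract.
```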

Keywords: international migration, drift in direction, cultural assimilation, religiosity, ordered probit model

Procedia PDF Downloads 304
930 An Analysis of Humanitarian Data Management of Polish Non-Governmental Organizations in Ukraine Since February 2022 and Its Relevance for Ukrainian Humanitarian Data Ecosystem

Authors: Renata Kurpiewska-Korbut

Abstract:

Assuming that the use and sharing of data generated in humanitarian action constitute a core function of humanitarian organizations, the paper analyzes the position of the largest Polish humanitarian non-governmental organizations in the humanitarian data ecosystem in Ukraine and their approach to non-personal and personal data management since February 2022. Expert interviews and document analysis of non-profit organizations providing a direct response in the Ukrainian crisis context, i.e., Polish Humanitarian Action, Caritas, the Polish Medical Mission, the Polish Red Cross, and the Polish Center for International Aid, together with the theoretical perspective of contingency theory, whose central point is that the context or a specific set of conditions determines the way of behavior and the choice of methods of action, help to examine the significance of data complexity and of an adaptive approach to data management by relief organizations in the humanitarian supply chain network. The purpose of this study is to determine how well-established and accurate internal procedures and good practices for using and sharing data (including safeguards for sensitive data) by the surveyed organizations, which have comparable human and technological capabilities, are implemented and adjusted to Ukrainian humanitarian settings and data infrastructure. The study also poses the fundamental question of whether this crisis experience will have a determining effect on their future performance. The findings indicate that Polish humanitarian organizations in Ukraine, which have their own unique codes of conduct and effective managerial data practices determined by contingencies, have limited influence on improving the situational awareness of other assistance providers in the data ecosystem, despite their attempts to undertake interagency work in the area of data sharing.

Keywords: humanitarian data ecosystem, humanitarian data management, Polish NGOs, Ukraine

Procedia PDF Downloads 90
929 Factors Affecting Entrepreneurial Behavior and Performance of Youth Entrepreneurs in Malaysia

Authors: Mohd Najib Mansor, Nur Syamilah Md. Noor, Abdul Rahim Anuar, Shazida Jan Mohd Khan, Ahmad Zubir Ibrahim, Badariah Hj Din, Abu Sufian Abu Bakar, Kalsom Kayat, Wan Nurmahfuzah Jannah Wan Mansor

Abstract:

This study focused on the behavior of youth entrepreneurs, especially their entrepreneurial self-efficacy, and on performance in micro SMEs in Malaysia. Entrepreneurship development calls for support from various quarters, and above all there is a need to initiate a youth entrepreneurship culture and drive amongst the youth in society. Although backed by government and non-government organizations, micro-entrepreneurs still face challenges that greatly delay their progress and growth and, consequently, their contribution to economic advancement. Micro-entrepreneurs are confronted with unique difficulties such as uncertainty, innovation, and evolution. Entrepreneurial characteristics such as the need for achievement, internal locus of control, risk-taking, and innovativeness have been recognized as highly associated with entrepreneurial behavior. The data in this study were obtained from the Department of Statistics, Malaysia. A random sample of 830 micro-entrepreneur respondents was drawn across 14 states. The study adopted a quantitative approach whereby a set of questionnaires was used to gather data, and multiple regression analysis was chosen as the method of analysis. The results of this study are expected to provide insight into the factors affecting the entrepreneurial behavior and performance of youth entrepreneurs in micro SMEs. The findings showed that Malaysian youth entrepreneurs do not have the entrepreneurial self-efficacy needed to accomplish greater success in their business ventures. The study therefore recommends establishing entrepreneurial schools that expose youth to entrepreneurship from an early age and developing special training focused on the creation of business networks, so that a continuous entrepreneurial culture is crafted.

Keywords: youth entrepreneurs, micro entrepreneurs, entrepreneurial self-efficacy, entrepreneurial performance

Procedia PDF Downloads 296
928 Derivation of a Risk-Based Level of Service Index for Surface Street Network Using Reliability Analysis

Authors: Chang-Jen Lan

Abstract:

The current Level of Service (LOS) index adopted in the Highway Capacity Manual (HCM) for signalized intersections on surface streets is based on the intersection average delay. The delay thresholds defining the LOS grades are subjective and unrelated to critical traffic conditions. For example, an intersection delay of 80 sec per vehicle for the failing LOS grade F does not necessarily correspond to the intersection capacity. Also, a specific value of average delay may result from delay minimization, delay equality, or other meaningful optimization criteria. To that end, a reliability version of the intersection critical degree of saturation (v/c) is introduced as the LOS index. Traditionally, the degree of saturation at a signalized intersection is defined as the ratio of the critical volume sum (per lane) to the average saturation flow (per lane) during all available effective green time within a cycle. The critical sum is the sum of the maximal conflicting movement-pair volumes in the northbound-southbound and eastbound-westbound rights of way. In this study, both movement volume and saturation flow are assumed to follow log-normal distributions. Because, when the conditions of the central limit theorem hold, the product of independent, positive random variables tends toward a log-normal distribution in the limit, the critical degree of saturation is expected to be log-normally distributed as well. Derivation of the risk index predictive limits is complex due to the maximum and absolute value operators, as well as the ratio of random variables. A fairly accurate functional form for the predictive limit at a user-specified significance level is obtained. The predictive limit is then compared with the designated LOS thresholds for the intersection critical degree of saturation (denoted as X
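
As an illustration of the ratio-of-log-normals step implied by these assumptions (deliberately ignoring the maximum and absolute-value operators handled in the full derivation), if the critical volume sum V and the saturation flow S are independent log-normal variables, then the critical degree of saturation X = V/S is also log-normal, and an upper predictive limit at a user-specified significance level α follows directly. The notation below is illustrative, not the paper's:

```latex
V \sim \mathrm{LogN}(\mu_V,\sigma_V^2),\quad
S \sim \mathrm{LogN}(\mu_S,\sigma_S^2),\quad V \text{ independent of } S
\;\Longrightarrow\;
X=\frac{V}{S}\sim \mathrm{LogN}\!\left(\mu_V-\mu_S,\ \sigma_V^2+\sigma_S^2\right),
\qquad
X_{1-\alpha}=\exp\!\left[(\mu_V-\mu_S)+z_{1-\alpha}\sqrt{\sigma_V^2+\sigma_S^2}\right]
```

where z_{1-α} denotes the standard normal quantile.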

Keywords: reliability analysis, level of service, intersection critical degree of saturation, risk based index

Procedia PDF Downloads 128
927 The Role of Social Networks in Promoting Ethics in Iranian Sports

Authors: Tayebeh Jameh-Bozorgi, M. Soleymani

Abstract:

In this research, the role of social networks in promoting ethics in Iranian sports was investigated. The research adopted a descriptive-analytic method, and the survey population consisted of all the athletes invited to the national football, volleyball, wrestling, and taekwondo teams. Given the limited population, the entire population was taken as the sample. After the distribution of the questionnaires, 167 respondents answered them correctly. The data collection tool was Hamid Ghasemi's standard questionnaire for social networking and mass media, which has 28 questions. The reliability of the questionnaire was calculated using Cronbach's alpha coefficient (94%), and its content validity was approved by the professors. Descriptive statistics and inferential statistical methods were used to analyze the data with statistical software; the tests used included the binomial test, the Friedman test, the Spearman correlation coefficient, Cramér's V, goodness-of-fit tests, and comparative tests. The results showed that athletes believe social networks have a significant role in promoting sport ethics in the community, with Telegram known to play a bigger role than other social networks. Moreover, the respondents' views on the role of social networks in promoting sport ethics differed significantly between men and women: women had a more positive attitude towards this role than men. The respondents' views also differed significantly across the study groups. Additionally, there was a significant inverse relationship between sports experience and the attitude of national athletes regarding the role of social networks in promoting ethics in sports.

Keywords: ethics, social networks, mass media, Iranian sports, internet

Procedia PDF Downloads 285
926 Design and Optimization of a Small Hydraulic Propeller Turbine

Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink

Abstract:

A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency, compact hydraulic propeller turbine for low heads. For the preliminary design of the machine, classic design criteria are used, based on statistical correlations for the definition of the fundamental geometric parameters and the blade shapes. These relationships are based on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps, to the correct conformation of the meridional channel and the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as the starting point for the hydrodynamic optimization procedure, carried out using CFD calculation software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed using a commercial solver for the turbulent Navier-Stokes equations (RANS) that exploits the axial-symmetric geometry of the machine. The geometries generated within the database are then simulated in order to determine the corresponding overall performance. In order to speed up the optimization, an artificial neural network (ANN) approximation of the objective function is employed. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, intended for applications characterized by very low heads, and was tested in order to verify its validity and its ability to automatically reach the targeted net head and the maximum total-to-total internal efficiency.

Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design

Procedia PDF Downloads 146
925 Auto Calibration and Optimization of Large-Scale Water Resources Systems

Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari

Abstract:

Water resource systems modelling has long been a challenge. As methodological development evolves alongside computer science, researchers are likely to confront ever larger and more complex water resources systems, due to new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme was applied to Gilan's large-scale water resource model using mathematical programming. The calibration of the water resource model is developed in order to tune the unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure is validated by comparing several gauged river outflows from the system in the past with model results. The calibration results are reasonable and provide a rational insight into the system. Subsequently, the optimized, formerly unknown parameters were used in a basin-scale linear optimization model able to evaluate the system's performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to minimum water shortage in the reduced-inflow scenario.

Keywords: auto-calibration, Gilan, large-scale water resources, simulation

Procedia PDF Downloads 332
924 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections

Authors: Anthony D. Rhodes, Manan Goel

Abstract:

We provide a high-fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks, using a dense convolutional network with context-aware skip connections and compressed 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without using downsampling or pooling procedures. We maintain this consistently high fidelity efficiently chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high-resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high-resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves the state of the art in segmentation fidelity with high-resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
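
The abstract does not specify the boundary loss; as a hedged sketch, one common formulation weights predicted foreground probabilities by a signed distance map of the ground-truth mask (negative inside the object, positive outside). The NumPy/SciPy stand-in below is illustrative and not necessarily the paper's exact loss.

```python
# Sketch of a boundary-style loss: penalize predicted foreground probability by
# the signed distance map of the ground-truth mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """mask: binary ground-truth segmentation (H, W)."""
    inside = distance_transform_edt(mask)        # distance to background, inside
    outside = distance_transform_edt(1 - mask)   # distance to foreground, outside
    return outside - inside

def boundary_loss(probs: np.ndarray, mask: np.ndarray) -> float:
    """probs: predicted foreground probabilities in [0, 1], same shape as mask."""
    return float(np.mean(probs * signed_distance_map(mask)))

# Illustrative check: a prediction aligned with the mask scores lower (better)
# than a horizontally shifted one.
gt = np.zeros((64, 64))
gt[20:44, 20:44] = 1
aligned = gt.astype(float)
shifted = np.roll(aligned, 8, axis=1)
print("aligned prediction :", boundary_loss(aligned, gt))
print("shifted prediction :", boundary_loss(shifted, gt))
```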

Keywords: computer vision, object segmentation, interactive segmentation, model compression

Procedia PDF Downloads 117
923 Upgrading of Bio-Oil by Bio-Pd Catalyst

Authors: Sam Derakhshan Deilami, Iain N. Kings, Lynne E. Macaskie, Brajendra K. Sharma, Anthony V. Bridgwater, Joseph Wood

Abstract:

This paper reports the application of a bacteria-supported palladium catalyst to the hydrodeoxygenation (HDO) of pyrolysis bio-oil, towards producing an upgraded transport fuel. Biofuels are key to the timely replacement of fossil fuels in order to mitigate greenhouse gas emissions and the depletion of non-renewable resources. HDO is an essential step in the upgrading of bio-oils derived from industrial by-products such as agricultural and forestry wastes, since the crude pyrolysis oil contains a large amount of oxygen that must be removed in order to create a fuel resembling fossil-derived hydrocarbons. Manufacturing the bacteria-supported catalyst is a means of utilizing recycled metals and second-life bacteria, and the metal can also be easily recovered from the spent catalyst after use. Comparisons are made between bio-Pd and a conventional activated-carbon-supported Pd/C catalyst. Bio-oil was produced by fast pyrolysis of beechwood at 500 °C at a residence time below 2 seconds, provided by Aston University. 5 wt% bio-Pd/C was prepared under reducing conditions by exposing cells of E. coli MC4100 to a solution of sodium tetrachloropalladate (Na2PdCl4), followed by rinsing, drying, and grinding to form a powder. Pd/C was procured from Sigma-Aldrich. The HDO experiments were carried out in a 100 mL Parr batch autoclave using ~20 g bio-crude oil and 0.6 g bio-Pd/C catalyst. Experimental variables investigated for optimization included temperature (160-350 °C) and reaction time (up to 5 h) at a hydrogen pressure of 100 bar. Most of the experiments resulted in an aqueous phase (~40%), an organic phase (~50-60%), a gas phase (<5%), and coke (<2%). Study of the effects of temperature and time showed that the degree of deoxygenation increased (from ~20% up to 60%) at higher temperatures in the region of 350 °C and longer residence times of up to 5 h. However, the minimum viscosity (~0.035 Pa.s) occurred at 250 °C and 3 h residence time, indicating that some polymerization of the oil product occurs at the higher temperatures. Bio-Pd showed a similar degree of deoxygenation (~20%) to Pd/C at the lower temperature of 160 °C, but deoxygenation did not rise as steeply with temperature. More coke was formed over bio-Pd/C than Pd/C at temperatures above 250 °C, suggesting that bio-Pd/C may be more susceptible to coke formation than Pd/C. Reactions occurring during bio-oil upgrading include catalytic cracking, decarbonylation, decarboxylation, hydrocracking, hydrodeoxygenation, and hydrogenation. In conclusion, it was shown that bio-Pd/C displays an acceptable rate of HDO, which increases with residence time and temperature, although some undesirable reactions also occur, leading to a deleterious increase in viscosity at higher temperatures. Comparisons are also drawn with earlier work on the HDO of Chlorella-derived bio-oil manufactured from micro-algae via hydrothermal liquefaction. Future work will analyze the kinetics of the reaction and investigate the effect of bimetallic catalysts.

Keywords: bio-oil, catalyst, palladium, upgrading

Procedia PDF Downloads 172
922 Co-Operation in Hungarian Agriculture

Authors: Eszter Hamza

Abstract:

The competitiveness of economic operators is based on their ability to co-operate, which is relatively low in Hungary. The development of co-operation is a high priority in the Common Agricultural Policy 2014-2020. The aim of the paper is to assess co-operation in Hungarian agriculture and to estimate the economic outputs and benefits of co-operation, based on statistical data processing and the literature. A further objective is to explore the potential of agricultural co-operation with the help of interviews and a questionnaire survey. The research seeks to answer what fundamental factors play a role in the development of co-operation, what the motivations of the actors are, and what the key success factors and pitfalls are. The results were analysed using econometric methods. Several forms of co-operation can be found in Hungarian agriculture: cooperatives, producer groups (PG) and producer organizations (PO), machinery cooperatives, integrator companies, product boards, and interbranch organisations. Despite these many forms of agricultural co-operation, their economic weight is significantly lower in Hungary than in western European countries. In terms of agricultural importance, the integrator companies carry the most weight among the co-operation forms. Hungarian farmers are linked to co-operations or organizations mostly in relation to procurement and sales. Less than 30 percent of the surveyed farmers are members of a producer organization or cooperative, and the level of trust among farmers is low. The main obstacles to the development of formalized co-operation are producers' risk aversion and the black economy in agriculture; producers often prefer informal co-operation to long-term contractual relationships. Hungarian agricultural co-operation is characterized not by dynamic development but by slow qualitative change. For the future, one breakout point could be the association of producer groups and organizations, which, in addition to the benefits of market concentration, can act more effectively in the dissemination of knowledge, the operation of advisory networks, and innovation.

Keywords: agriculture, co-operation, producer organisation, trust level

Procedia PDF Downloads 393
921 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans

Authors: Jian Wu

Abstract:

For some years now, some French companies have integrated CSR (corporate social responsibility) criteria into their variable remuneration plans to 'restore a good working atmosphere' and 'preserve the natural environment'. These CSR criteria are based on concerns about environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was published jointly by ORSE (the French acronym for the Observatory on CSR) and PricewaterhouseCoopers. Facing this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach for our study. First, we examine the debate between the 'orthodox' point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain 'invisible hand' which helps to resolve all problems: when companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility that firms should have is profit-seeking while respecting the minimum legal requirements. However, the CSR school considers that, as long as the economic system is not perfect, there is no 'invisible hand' which can arrange everything in good order. This means that we cannot count on any 'divine force' to make corporations responsible towards society; something more needs to be done in addition to firms' economic and legal obligations. Then, we rely on financial theories and empirical evidence to examine the soundness of the foundations of CSR. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, the environment, and society. Social contract theory tells us that there are tacit 'social contracts' between a company and society itself; a firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or a bad reputation. Legitimacy theory tells us that corporations have to 'legitimize' their actions towards society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between the CSR performance and the financial performance of a firm. We note that, due to difficulties in defining these performances, this relationship remains ambiguous despite the numerous research works carried out in the field. Finally, we ask whether the integration of CSR criteria in variable remuneration plans, which is so far practiced mainly in big companies, should be extended to others. After investigation, we note that two groups of firms have the greatest need: first, industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies; and second, companies which are under pressure in terms of returns owing to international competition.

Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory

Procedia PDF Downloads 182
920 Functional Connectivity Signatures of Polygenic Depression Risk in Youth

Authors: Louise Moles, Steve Riley, Sarah D. Lichenstein, Marzieh Babaeianjelodar, Robert Kohler, Annie Cheng, Corey Horien, Abigail Greene, Wenjing Luo, Jonathan Ahern, Bohan Xu, Yize Zhao, Chun Chieh Fan, R. Todd Constable, Sarah W. Yip

Abstract:

Background: Risks for depression are myriad and include both genetic and brain-based factors. However, relationships between these systems are poorly understood, limiting understanding of disease etiology, particularly at the developmental level. Methods: We use a data-driven machine learning approach, connectome-based predictive modeling (CPM), to identify functional connectivity signatures associated with polygenic risk scores for depression (DEP-PRS) among youth from the Adolescent Brain and Cognitive Development (ABCD) study across diverse brain states, i.e., during resting state, affective working memory, response inhibition, and reward processing. Results: Using 10-fold cross-validation with 100 iterations and permutation testing, CPM identified connectivity signatures of DEP-PRS across all examined brain states (rho's = 0.20-0.27, p's < .001). Across brain states, DEP-PRS was positively predicted by increased connectivity between frontoparietal and salience networks, increased motor-sensory network connectivity, decreased salience-to-subcortical connectivity, and decreased subcortical-to-motor-sensory connectivity. Subsampling analyses demonstrated that model accuracies were robust across random subsamples of N = 1,000, N = 500, and N = 250, but became unstable at N = 100. Conclusions: These data, for the first time, identify neural networks of polygenic depression risk in a large sample of youth before the onset of significant clinical impairment. Identified networks may be considered potential treatment targets or vulnerability markers for depression risk.
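
As a hedged sketch of the core CPM loop described above (edge selection by correlation with the phenotype in training folds, then a linear model on summed positive- and negative-network strengths, evaluated by cross-validated Spearman rho), the code below uses simulated connectomes; the edge count, p-value threshold, and fold settings are illustrative, not the ABCD analysis.

```python
# Sketch of connectome-based predictive modeling (CPM) with simulated data.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subj, n_edges = 300, 500
edges = rng.normal(size=(n_subj, n_edges))                        # vectorized connectomes
prs = edges[:, :20].sum(axis=1) * 0.1 + rng.normal(size=n_subj)   # phenotype (e.g. DEP-PRS)

def cpm_features(train_edges, train_y, test_edges, p_thresh=0.01):
    """Select edges correlated with the phenotype; return summed network strengths."""
    n = train_edges.shape[1]
    rs, ps = np.empty(n), np.empty(n)
    for j in range(n):
        rs[j], ps[j] = pearsonr(train_edges[:, j], train_y)
    pos = (rs > 0) & (ps < p_thresh)
    neg = (rs < 0) & (ps < p_thresh)
    def strengths(e):
        return np.column_stack([e[:, pos].sum(axis=1), e[:, neg].sum(axis=1)])
    return strengths(train_edges), strengths(test_edges)

preds = np.empty(n_subj)
for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(edges):
    Xtr, Xte = cpm_features(edges[tr], prs[tr], edges[te])
    preds[te] = LinearRegression().fit(Xtr, prs[tr]).predict(Xte)

rho, p = spearmanr(preds, prs)
print(f"CPM cross-validated rho = {rho:.2f} (p = {p:.3g})")
```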

Keywords: genetics, functional connectivity, pre-adolescents, depression

Procedia PDF Downloads 54
917 Improvement of Resistance Features of Anti-MIC Polyaspartic Coating (DTM) Using Nanosilver Particles by Preventing Biofilm Formation

Authors: Arezoo Assarian, Reza Javaherdashti

Abstract:

Microbiologically influenced corrosion (MIC) is an electrochemical process that can affect both metals and non-metals, and its cost can amount to 40% of the total cost of corrosion. MIC is enhanced by factors such as, but not limited to, the presence of certain bacteria and archaea, as well as mechanisms such as external electron transfer. There are five methods by which electrochemical corrosion, including MIC, can be prevented; coatings are an effective one because they isolate the anode, cathode, and electrolyte from each other. Conventional coatings may themselves become nutrient sources for the bacteria and therefore show low efficiency in dealing with MIC. Recently, our work on polyaspartic coating (DTM) has shown promising results, nominating DTM as the most appropriate coating material to manage both MIC and general electrochemical corrosion very efficiently. Nanosilver particles are known for antimicrobial properties that give them a desirable destructive impact on microbes. The coating will be formulated based on nanosilver phosphate and copper(II) oxide in the resin network and co-reactant. Because the nanoparticles are light- and heat-sensitive agents, the active ingredients are encapsulated to keep them in the film coating; this also prevents incompatibility between different particles. The microcapsules will be produced by the interfacial cross-linking method, achieved by adding the active ingredient to an aqueous solution of the cross-linkable polymer. In this paper, we first explain the role of coating materials in controlling and preventing electrochemical corrosion. We then explain MIC and some of its fundamental principles, such as bacterial establishment (biofilm) and the role bacteria play in enhancing corrosion via mechanisms such as the establishment of differential aeration cells. Finally, we explain the features of DTM coatings that strongly contribute to preventing biofilm formation and thus microbial corrosion.

Keywords: biofilm, corrosion, microbiologically influenced corrosion(MIC), nanosilver particles, polyaspartic coating (DTM)

Procedia PDF Downloads 163
918 The Role of Social Capital and Dynamic Capabilities in a Circular Economy: Evidence from German Small and Medium-Sized Enterprises

Authors: Antonia Hoffmann, Andrea Stübner

Abstract:

Resource scarcity and rising material prices are forcing companies to rethink their business models. The conventional linear system of economic growth and rising social needs further exacerbates the problem of resource scarcity. Therefore, it is necessary to separate economic growth from resource consumption. This can be achieved through the circular economy (CE), which focuses on sustainable product life cycles. However, companies face challenges in implementing CE into their businesses. Small and medium-sized enterprises are particularly affected by these problems, as they have a limited resource base. Collaboration and social interaction between different actors can help to overcome these obstacles. Based on a self-generated sample of 1,023 German small and medium-sized enterprises, we use a questionnaire to investigate the influence of social capital and its three dimensions - structural, relational, and cognitive capital - on the implementation of CE and the mediating effect of dynamic capabilities in explaining these relationships. Using regression analyses and structural equation modeling, we find that social capital is positively associated with CE implementation and dynamic capabilities partially mediate this relationship. Interestingly, our findings suggest that not all social capital dimensions are equally important for CE implementation. We theoretically and empirically explore the network forms of social capital and extend the CE literature by suggesting that dynamic capabilities help organizations leverage social capital to drive the implementation of CE practices. The findings of this study allow us to suggest several implications for managers and institutions. From a practical perspective, our study contributes to building circular production and service capabilities in small and medium-sized enterprises. Various CE activities can transform products and services to contribute to a better and more responsible world.

Keywords: circular economy, dynamic capabilities, SMEs, social capital

Procedia PDF Downloads 79
917 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent on tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream real data used for optimizing a transporter's tours are free from errors regarding customers' real constraints, customers' addresses, and their GPS coordinates. However, in real transport operations, upstream data are often of poor quality because of address geocoding errors and irrelevant addresses received from EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates, and even with a good geocoder, an inaccurate address can lead to a bad geocoding. For instance, when the geocoder has trouble geocoding an address, it returns the coordinates of the city center. Another obvious geocoding issue is that the maps used by the geocoders are not regularly updated, so new buildings may not exist on the maps until the next update. Trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is therefore not really useful and leads to a bad and incoherent tour solution, because the customer locations used for the optimization are very different from their real positions. Our work is supported by a logistics software vendor, Tedies, and a transport company, Upsilon, and we work with Upsilon's truck route data to carry out our experiments. These trucks are equipped with TOMTOM GPS units that continuously save their tour data (positions, speeds, tachograph information, etc.). We then retrieve these data to extract the real truck routes to work with. The aim of this work is to use the experience of the driver and the feedback from real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct the badly geocoded ones. Thereby, when a vehicle makes its tour, it should have trouble finding each visited customer's address at most once; in other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: we manage to automatically correct an average of 70% of the GPS coordinates of a tour's addresses, and the remaining coordinates are corrected manually by giving the user indications that help correct them. This study shows the importance of taking into account the feedback of the trucks to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization, and unfortunately address-writing errors are very frequent. This feedback is naturally and usually taken into account by transporters (by asking drivers, calling customers, etc.) to learn about their tours and bring corrections to upcoming tours. Hence, we developed a method to do a large part of that automatically.
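
As a minimal sketch of the feedback idea (not Tedies' or Upsilon's actual pipeline), the code below compares each customer's geocoded position with the truck's recorded stop for that delivery and replaces the geocode when the gap exceeds a confidence radius; the data structures and the 150 m threshold are illustrative assumptions.

```python
# Sketch: validate or correct geocoded customer coordinates using the truck's
# recorded stop positions from previous tours.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def correct_geocodes(customers, stops, max_error_m=150.0):
    """customers: {id: (lat, lon)} from the geocoder.
    stops: {id: (lat, lon)} where the GPS shows the truck actually stopped.
    Returns corrected coordinates plus the ids still needing manual review."""
    corrected, to_review = {}, []
    for cid, geo in customers.items():
        stop = stops.get(cid)
        if stop is None:
            to_review.append(cid)      # no feedback yet: keep for manual check
            corrected[cid] = geo
        elif haversine_m(*geo, *stop) > max_error_m:
            corrected[cid] = stop      # trust the driver's actual stop
        else:
            corrected[cid] = geo       # geocode validated by the tour
    return corrected, to_review

# Illustrative run with made-up coordinates
customers = {"C1": (47.3220, 5.0410), "C2": (47.3000, 5.1000)}
stops = {"C1": (47.3221, 5.0412), "C2": (47.3105, 5.0895)}
print(correct_geocodes(customers, stops))
```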

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 671
916 Hub Traveler Guidance Signage Evaluation via Panoramic Visualization Using Entropy Weight Method and TOPSIS

Authors: Si-yang Zhang, Chi Zhao

Abstract:

Comprehensive transportation hubs are important nodes of the transportation network, and their internal signage provides guidance and distribution assistance, which directly affects the operational efficiency of traffic in and around the hubs. Reasonably installed signage effectively attracts the visual focus of travelers and improves wayfinding efficiency. Among the elements of signage, the visual guidance effect is the key factor affecting information conveyance, and it should be evaluated during the design and optimization process. However, existing evaluation methods mostly focus on the layout and cannot fully establish whether signage caters to travelers' needs. This study conducted field investigations and developed panoramic videos for multiple transportation hubs in China, and designed surveys accordingly. Human subjects were recruited to watch the panoramic videos via virtual reality (VR) and respond to the surveys. In this paper, Pudong Airport and Xi'an North Railway Station are studied and compared as examples because of their high traveler volumes and relatively well-developed traveler service systems. Visual attention was captured by an eye tracker, and subjective satisfaction ratings were collected through the surveys. The Entropy Weight Method (EWM) was used to evaluate the effectiveness of signage elements, and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was used to rank the importance of the elements. The results show that the degree of visual attention of travelers significantly affects the evaluation of guidance signage. Key factors affecting visual attention include accurate legibility, obstruction and defacement rates, informativeness, and whether signage is set up in a hierarchical manner.
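
As a hedged sketch of the EWM + TOPSIS pipeline named above, the code below derives entropy weights from a small decision matrix of signage criteria and ranks the alternatives by TOPSIS closeness; the criteria, numbers, and the simplified handling of the cost-type criterion are illustrative, not the study's data.

```python
# Sketch: Entropy Weight Method (EWM) + TOPSIS on a toy decision matrix
# (rows = signage schemes/hubs, columns = evaluation criteria).
import numpy as np

def entropy_weights(X):
    """X: (m alternatives, n criteria), all values > 0."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])   # entropy per criterion
    d = 1 - E                                               # degree of divergence
    return d / d.sum()

def topsis(X, w, benefit):
    """benefit: boolean mask, True where larger values are better."""
    V = w * X / np.linalg.norm(X, axis=0)                   # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                          # closeness coefficient

# criteria: legibility, informativeness, obstruction rate (cost), fixation share
X = np.array([[0.82, 0.70, 0.10, 0.35],    # e.g. hub/scheme A
              [0.74, 0.78, 0.18, 0.41],    # e.g. hub/scheme B
              [0.60, 0.55, 0.30, 0.22]])   # e.g. a baseline layout
benefit = np.array([True, True, False, True])
w = entropy_weights(X)
scores = topsis(X, w, benefit)
print("entropy weights:", np.round(w, 3))
print("TOPSIS ranking (best first):", np.argsort(-scores))
```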

Keywords: traveler guidance signage, panoramic video, visual attention, entropy weight method, TOPSIS

Procedia PDF Downloads 63
915 Impact of Tryptic Limited Hydrolysis on Bambara Protein-Gum Arabic Soluble Complexes Formation

Authors: Abiola A. Ojesanmi, Eric O. Amonsou

Abstract:

The formation of soluble protein-polysaccharide complexes usually occurs within a narrow pH range and is characterized by weak interactions. Moreover, the rigid conformation of globular proteins restricts the number of charged groups capable of interacting with polysaccharides, thereby limiting food applications. Hence, this study investigated the impact of tryptic limited hydrolysis on the formation of Bambara protein-gum arabic soluble complexes. The electrostatic interactions were monitored through turbidimetric analysis. Bambara protein hydrolysates at specified degrees of hydrolysis (DH 2, 5, and 7.5) were characterized using size exclusion chromatography, zeta potential, surface hydrophobicity, and intrinsic fluorescence. The stability of the complexes was investigated using differential scanning calorimetry and rheometry. The limited tryptic hydrolysis significantly widened the pH range of soluble complex formation, with DH 5 having a wider range (pH 7.0-4.3) than DH 2 and DH 7.5, while there was no notable difference in the optimum complexation pH of the insoluble complexes. Larger peptides (140 and 118 kDa) were detected in DH 2, compared with 144, 70, and 61 kDa in DH 5, which in turn were larger than the 140, 118, 48, and 32 kDa peptides in DH 7.5. An increase in net negative charge (-30 mV for DH 7.5) and a slight shift in the net neutrality (from pH 4.9 to 4.3) of the hydrolysates were observed, which consequently affected the electrostatic interaction with gum arabic. Following hydrolysis, there was up to a 4-fold exposure of hydrophobic amino acids compared with the isolate and a DH-dependent red shift in the maximum fluorescence wavelength. The denaturation temperature of the soluble complexes from the hydrolysates shifted to higher values, with DH 5 showing the maximum temperature (94.24 °C). A highly interconnected, gel-like soluble complex network was formed, with DH 5 giving a better structure than DH 2 and DH 7.5. The study showed that limited tryptic hydrolysis at DH 5 is an effective approach to modify Bambara protein, providing a more stable soluble complex with a wider pH range of formation and thereby enhancing food applications.

Keywords: Bambara groundnut, gum arabic, interaction, soluble complex

Procedia PDF Downloads 28
914 Development, Testing, and Application of a Low-Cost Technology Sulphur Dioxide Monitor as a Tool for use in a Volcanic Emissions Monitoring Network

Authors: Viveka Jackson, Erouscilla Joseph, Denise Beckles, Thomas Christopher

Abstract:

Sulphur dioxide (SO2) is a non-flammable, non-explosive, colourless gas with a pungent, irritating odour, and is one of the main gases emitted from volcanoes. Sulphur dioxide has been recorded downwind of many volcanoes in concentrations hazardous to humans (0.25-0.5 ppm, ~650-1300 μg/m3) and hence warrants constant air-quality monitoring around these sites. It has been linked to an increase in chronic respiratory disease attributed to long-term exposure and to alteration of lung and other physiological functions attributed to short-term exposure. Sulphur Springs in Saint Lucia is a highly active geothermal area located within the Soufrière Volcanic Centre and is a park widely visited by tourists and locals. It is also a source of continuous volcanic emissions via its many fumaroles and bubbling pools, warranting concern among residents of and visitors to the park regarding the effects of exposure to these gases. In this study, we introduce a novel SO2 measurement system for monitoring and quantifying ambient levels of airborne volcanic SO2 using low-cost technology. This work involves the extensive production of low-cost SO2 monitors/samplers as well as field examination in tandem with standard commercial samplers (SO2 diffusion tubes). It also incorporates community involvement in the volcanic monitoring process through non-professional users of the instrument. We intend to present the preliminary monitoring results obtained from the low-cost samplers, to identify the areas of the park exposed to high concentrations of ambient SO2, and to assess the feasibility of the instrument for non-professional use and application in volcanic settings.

Keywords: ambient SO2, community-based monitoring, risk-reduction, sulphur springs, low-cost

Procedia PDF Downloads 462
913 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India

Authors: Vinu Elias Jacob, Manoj Kumar Kini

Abstract:

Disasters of all types are occurring more frequently and are becoming more costly than ever, due to various man-made factors including climate change. Better use of the concepts of governance and management within disaster risk reduction is therefore of utmost importance. There is a need to explore the role of pre- and post-disaster public policies, the role of urban planning and design in shaping the opportunities of households, individuals, and, collectively, settlements to achieve recovery, and governance strategies that can better support the integration of disaster risk reduction and management. The main aim is thereby to build the resilience of individuals and communities, and thus of the states as well. Resilience is a term usually linked to the fields of disaster management and mitigation, but today it has become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts through improved mitigation, preparedness, response, and recovery. The growing population of the world has increased the inflow and use of resources, creating pressure on natural systems and inequity in the distribution of resources; this makes cities vulnerable to multiple attacks by both natural and man-made disasters. Each urban area needs elaborate studies and study-based strategies to proceed in this direction. Cochin in Kerala is the fastest-growing and largest city, with a population of more than 26 lakhs. The main concern of this paper is making cities resilient by designing a framework of strategies, based on urban design principles, for an immediate response system, focusing especially on the city of Cochin, Kerala, India. The paper discusses the spatial transformations caused by disasters and the role of spatial planning in the context of significant disasters. It also aims to develop a model, taking into consideration factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density, and ecology, that can be implemented in any city and any context. Guidelines are made, using the tools of urban design, for the smooth evacuation of people through hassle-free transport networks, protecting vulnerable areas in the city, providing adequate open spaces for shelters and gatherings, making basic amenities available to the affected population within reachable distance, etc. Strategies at the city level and the neighbourhood level have been developed, drawing on vulnerability analysis and case studies.

Keywords: disaster management, resilience, spatial planning, spatial transformations

Procedia PDF Downloads 292
912 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest

Authors: Peter Baji

Abstract:

In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although the public sector has different responses for decreasing traffic congestion in urban regions, the most effective public intervention is the congestion charge. Because travel is an economic good, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in the transport sciences, but until recently there were not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities in which congestion charges have already been introduced (e.g., London, Stockholm, Milan) designated a particular downtown zone for charging, but this protects only the users and inhabitants of the CBD (Central Business District) area. By using Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three neighbouring districts of Budapest that are linked by one main road. The first district (5th) is the original downtown affected by the city's congestion charge plans. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data in the form of travel times between freely chosen coordinate pairs. From the difference between free-flow and congested travel times, daily congestion patterns and hot spots can be detected on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas outside the downtown whose inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that forces car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting only a very limited downtown area.
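A minimal sketch of how such travel-time data could be pulled and turned into a congestion measure is shown below. It relies on the publicly documented Distance Matrix API response fields (duration and duration_in_traffic); the coordinates and API key are placeholders, not the study's actual measurement points.

# Sketch only: query Google's Distance Matrix API for one origin-destination pair and
# derive a simple congestion index from the ratio of traffic-aware to free-flow travel time.
import time
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

def congestion_index(origin: str, destination: str) -> float:
    params = {
        "origins": origin,
        "destinations": destination,
        "departure_time": int(time.time()),  # "now" -> response includes duration_in_traffic
        "traffic_model": "best_guess",
        "key": API_KEY,
    }
    element = requests.get(URL, params=params, timeout=10).json()["rows"][0]["elements"][0]
    free_flow = element["duration"]["value"]              # seconds, typical conditions
    congested = element["duration_in_traffic"]["value"]   # seconds, traffic-aware estimate
    return congested / free_flow  # values well above 1.0 flag a congested road segment

# Example (hypothetical coordinate pair along the studied main road):
# print(congestion_index("47.5316,19.0697", "47.5120,19.0560"))

Querying a grid of such pairs at regular intervals over a day yields the free-flow versus congested travel-time differences from which the hot-spot maps described above can be built.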

Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study

Procedia PDF Downloads 192
911 Transition Dynamics Analysis of Urban Disparity in Iran (Case Study: Iran's Province Centers)

Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani

Abstract:

The usual methods of measuring regional inequalities cannot reflect the internal changes of a country in terms of the movement of its regions between different development groups, and standard inequality indicators are not effective in demonstrating the dynamics of the distribution of inequality. For this purpose, this paper examines the transition dynamics of urban disparity in the country during the period 2006-2016 using the CIRD multidimensional index and a stochastic kernel density method. In the first stage, 25 indicators are selected in five dimensions, including macroeconomic conditions, science and innovation, environmental sustainability, human capital, and public facilities, and a two-stage Principal Component Analysis methodology is applied to create a composite index of inequality. In the second stage, using a nonparametric approach to internal distribution dynamics and a stochastic kernel density method, the convergence hypothesis of the CIRD index for the Iranian province centers is tested, and the long-run equilibrium is shown based on the ergodic density. At this stage, for the purpose of adopting accurate regional policies, the distribution dynamics and the process of convergence or divergence of the Iranian provinces are also examined for each of the five dimensions. According to the results of the first stage, in both 2006 and 2016, Tehran has the highest level of development and Zahedan the lowest. The results show that the central cities of the country are at the highest level of development, owing to the effects of Tehran's knowledge spillover, while the peripheral cities are at the lowest level; the main reason for this may be the lack of access to markets in the border provinces. Based on the results of the second stage, which examines the dynamics of regional inequality transmission during 2006-2016, the distribution in the first year (2006) is not clearly multimodal: according to the kernel density graph, the CIRD index of about 70% of the cities lies between -1.1 and -0.1, with the remainder of the distribution lying above -0.1. A convergence process is observed in the kernel distribution, and the graph shows a single main peak at about -0.6 with a small secondary peak at about 3. In the final year (2016), there is no mobility in the lower-level groups, but at the higher level the CIRD index of about 45% of the provinces is around -0.4; this year clearly shows a twin-peak density pattern, indicating that the cities tend to cluster into two groups in terms of development. According to the distribution dynamics results, the provinces of Iran thus follow a single-peak density pattern in 2006 and a double-peak density pattern in 2016, at low and moderate levels of the inequality index as well as in the development index, and the country diverges over the years 2006 to 2016.
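A minimal sketch of this kind of workflow, assuming a hypothetical province-by-indicator table and simplifying the paper's two-stage PCA to a single first principal component, is shown below; the file and column names are invented for illustration.

# Sketch only: composite development index via PCA, then a kernel density estimate of its
# cross-sectional distribution in two years. Data file and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import gaussian_kde
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def composite_index(indicators: pd.DataFrame) -> pd.Series:
    """Collapse standardized indicators into a one-dimensional index (first PC)."""
    z = StandardScaler().fit_transform(indicators)
    pc1 = PCA(n_components=1).fit_transform(z).ravel()
    return pd.Series(pc1, index=indicators.index, name="index")

# df has one row per province center per year, with indicator values in the other columns
df = pd.read_csv("province_indicators.csv")  # hypothetical file
for year in (2006, 2016):
    sub = df[df["year"] == year].set_index("province").drop(columns="year")
    idx = composite_index(sub)
    kde = gaussian_kde(idx.values)                     # smooth estimate of the distribution
    grid = np.linspace(idx.min(), idx.max(), 200)
    n_peaks = int(np.sum(np.diff(np.sign(np.diff(kde(grid)))) < 0))  # rough count of modes
    print(year, "peaks in the kernel density:", n_peaks)

Comparing the number and position of the density peaks across the two years is the informal analogue of the single-peak versus twin-peak finding reported above.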

Keywords: urban disparity, CIRD index, convergence, distribution dynamics, stochastic kernel density

Procedia PDF Downloads 121
910 Thick Data Techniques for Identifying Abnormality in Video Frames for Wireless Capsule Endoscopy

Authors: Jinan Fiaidhi, Sabah Mohammed, Petros Zezos

Abstract:

Capsule endoscopy (CE) is an established noninvasive diagnostic modality for investigating small bowel disease. CE has a pivotal role in assessing patients with suspected bleeding or in identifying evidence of active Crohn's disease in the small bowel. However, CE produces lengthy videos of at least eighty thousand frames, captured at a rate of 2 frames per second, and gastroenterologists cannot dedicate 8 to 15 hours to reading the CE video frames to arrive at a diagnosis. This is why analyzing CE videos with modern artificial intelligence techniques becomes a necessity. However, machine learning, including deep learning, has failed to deliver robust results because of the lack of large samples to train its neural networks. In this paper, we describe a thick data approach that learns from a few anchor images. We use well-established datasets such as KVASIR and CrohnIPI to filter candidate frames that include interesting anomalies in any CE video. Candidate frames are identified through feature extraction that provides representative measures of the anomaly, such as its size and its color contrast against the image background, and these features are then fed to a decision tree that classifies the candidate frames as showing a condition such as Crohn's disease. The accuracy of our thick data approach in detecting Crohn's disease, based on the presence of ulcer areas in the candidate frames, was 89.9% for KVASIR and 83.3% for CrohnIPI. We are continuing our research to fine-tune the approach by adding more thick data methods to enhance diagnostic accuracy.
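As a rough illustration of this pipeline (not the authors' implementation), the sketch below computes two per-frame features of the kind mentioned above, anomaly size and colour contrast against the background, and feeds them to a small decision tree; the HSV segmentation threshold, feature definitions, and labels are assumptions.

# Sketch, not the authors' code: two handcrafted features per frame fed to a decision tree.
# The HSV threshold used to approximate the "anomaly" region is a placeholder assumption.
import numpy as np
import cv2
from sklearn.tree import DecisionTreeClassifier

def frame_features(bgr_frame: np.ndarray) -> list:
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # crude reddish-region mask as a stand-in for a real anomaly segmentation
    mask = cv2.inRange(hsv, np.array((0, 80, 60)), np.array((15, 255, 255))) > 0
    area_ratio = float(mask.mean())                         # anomaly size relative to frame
    if mask.any() and (~mask).any():
        contrast = float(abs(bgr_frame[mask].mean() - bgr_frame[~mask].mean()))
    else:
        contrast = 0.0                                      # no candidate region found
    return [area_ratio, contrast]

def train_classifier(frames, labels) -> DecisionTreeClassifier:
    X = np.array([frame_features(f) for f in frames])
    return DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)

# usage (hypothetical): clf = train_classifier(kvasir_frames, kvasir_labels)
#                       prediction = clf.predict([frame_features(new_frame)])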

Keywords: thick data analytics, capsule endoscopy, Crohn’s disease, siamese neural network, decision tree

Procedia PDF Downloads 152
909 The Comparison between Modelled and Measured Nitrogen Dioxide Concentrations in Cold and Warm Seasons in Kaunas

Authors: A. Miškinytė, A. Dėdelė

Abstract:

Road traffic is one of the main sources of air pollution in urban areas and is associated with adverse effects on human health and the environment. Nitrogen dioxide (NO2) is considered a traffic-related air pollutant whose concentrations tend to be higher near highways, along busy roads, and in city centres, and exceedances are mainly observed at air quality monitoring stations located close to traffic. Atmospheric dispersion models can be used to examine emissions from many different sources and to predict the concentrations of pollutants emitted from these sources into the atmosphere. The aim of the study was to compare nitrogen dioxide concentrations modelled with the ADMS-Urban dispersion model against air quality monitoring network data in the cold and warm seasons in Kaunas city. Modelled average seasonal concentrations of nitrogen dioxide for the year 2011 were verified against automatic air quality monitoring data from two stations in the city: a traffic station located near a high-traffic street in an industrial district and a background station far away from the main sources of nitrogen dioxide pollution. The results showed that the highest nitrogen dioxide concentration was both modelled and measured at the station located near the intensive traffic street, in the cold as well as the warm season. The modelled and measured nitrogen dioxide concentrations there were, respectively, 25.7 and 25.2 µg/m3 in the cold season and 15.5 and 17.7 µg/m3 in the warm season. The lowest modelled and measured NO2 concentrations were determined at the background monitoring station, respectively 12.2 and 13.3 µg/m3 in the cold season and 6.1 and 7.6 µg/m3 in the warm season. The comparison between the station located near the high-traffic street and the background monitoring station showed that better agreement between modelled and measured NO2 concentrations was observed at the traffic monitoring station.
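For a quick quantitative view of that agreement, the relative differences between the modelled and measured seasonal means quoted above can be computed as in the sketch below; the choice of metric is illustrative and not necessarily the study's own validation statistic.

# Sketch: relative difference between modelled and measured seasonal mean NO2
# concentrations (ug/m3), using the values quoted in the abstract.
pairs = {
    ("traffic", "cold"):    (25.7, 25.2),
    ("traffic", "warm"):    (15.5, 17.7),
    ("background", "cold"): (12.2, 13.3),
    ("background", "warm"): (6.1, 7.6),
}

for (station, season), (modelled, measured) in pairs.items():
    rel_diff = 100.0 * (modelled - measured) / measured
    print(f"{station:10s} {season:4s}: modelled {modelled:5.1f}, "
          f"measured {measured:5.1f}, relative difference {rel_diff:+.1f}%")
# The traffic/cold pair differs by about +2% and background/warm by about -20%,
# consistent with the better model-measurement agreement reported at the traffic station.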

Keywords: air pollution, nitrogen dioxide, modelling, ADMS-Urban model

Procedia PDF Downloads 403
908 Traumatic Brain Injury-Induced Lipid Profiling of Mice Serum Using UHPLC-Q-TOF-MS

Authors: Seema Dhariwal, Kiran Maan, Ruchi Baghel, Apoorva Sharma, Poonam Rana

Abstract:

Introduction: Traumatic brain injury (TBI) is defined as a temporary or permanent alteration in brain function and pathology caused by an external mechanical force. It represents a leading cause of mortality and morbidity among children and young adults. Various rodent models of TBI have been developed in the laboratory to mimic injury scenarios. Blast overpressure injury is common among civilians and military personnel exposed to accidents or explosive devices, while the lateral controlled cortical impact (CCI) model mimics blunt, penetrating injury. Method: In the present study, we developed two different mild TBI models using blast and CCI injury. In the blast model, helium gas was used to create an overpressure of 130 kPa (±5) via a shock tube, and CCI injury was induced with an impact depth of 1.5 mm, producing diffuse and focal injury, respectively. C57BL/6J male mice (10-12 weeks) were divided into three groups, (1) control, (2) blast-treated, and (3) CCI-treated, and exposed to the corresponding injury models. Serum was collected on day 1 and day 7, followed by biphasic extraction using MTBE/methanol/water. Prepared samples were separated on a Charged Surface Hybrid (CSH) C18 column and acquired on a UHPLC-Q-TOF-MS using an ESI probe with in-house optimized parameters and methods. The MS peak list was generated using MarkerView™. Data were normalized, Pareto-scaled, and log-transformed, followed by multivariate and univariate analysis in MetaboAnalyst. Result and discussion: Untargeted profiling of lipids generated extensive data features, which were annotated through LIPID MAPS® based on their m/z and further confirmed from their fragmentation patterns by LipidBlast. In total, 269 features were annotated in the positive and 182 in the negative mode of ionization. PCA and PLS-DA score plots showed clear segregation of the injury groups from the controls. Among the various lipids in mild blast and CCI, five lipids (the glycerophospholipids PC 30:2, PE O-33:3, PG 28:3;O3, and PS 36:1 and the fatty acyl FA 21:3;O2) were significantly altered in both injury groups at day 1 and day 7 and also had VIP scores >1. Pathway analysis by BioPAN also showed hampered synthesis of glycerolipids and glycerophospholipids, which coincides with earlier reports and could be a direct result of alteration in the acetylcholine signalling pathway in response to TBI. Understanding the role of specific classes of lipid metabolism, regulation, and transport could benefit TBI research, since it could provide new targets and help determine the best therapeutic intervention. This study demonstrates potential lipid biomarkers that can be used for injury severity diagnosis and identification irrespective of injury type (diffuse or focal).
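A minimal sketch of the normalization-and-scaling step described above, assuming a hypothetical peak-intensity table with samples in rows and annotated lipid features in columns (the actual analysis was performed in MetaboAnalyst), is:

# Sketch of the preprocessing described above (not the authors' exact pipeline):
# log-transform, Pareto-scale, then PCA on a samples-by-features intensity table.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def pareto_scale(x: np.ndarray) -> np.ndarray:
    """Mean-center each feature and divide by the square root of its standard deviation."""
    centered = x - x.mean(axis=0)
    return centered / (np.sqrt(x.std(axis=0, ddof=1)) + 1e-12)  # epsilon guards constant columns

# rows = serum samples, columns = annotated lipid features (peak intensities)
intensities = pd.read_csv("lipid_peak_table.csv", index_col=0)  # hypothetical file
logged = np.log10(intensities.values + 1.0)   # log-transform; offset avoids log of zero
scaled = pareto_scale(logged)

scores = PCA(n_components=2).fit_transform(scaled)
print("PC1/PC2 scores for the first samples:\n", scores[:5])  # inspect group separation, as in the score plots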

Keywords: LipidBlast, lipidomic biomarker, LIPID MAPS®, TBI

Procedia PDF Downloads 109
907 Modeling and Temperature Control of Water-cooled PEMFC System Using Intelligent Algorithm

Authors: Chen Jun-Hong, He Pu, Tao Wen-Quan

Abstract:

The proton exchange membrane fuel cell (PEMFC) is one of the most promising future energy sources owing to its low operating temperature, high energy efficiency, high power density, and environmental friendliness. In this paper, a comprehensive control-oriented model of a PEMFC system is developed in the Matlab/Simulink environment, comprising the hydrogen supply subsystem, the air supply subsystem, and the thermal management subsystem. In addition, an Improved Artificial Bee Colony (IABC) algorithm is used for parameter identification of the PEMFC semi-empirical equations, making the maximum relative error between simulation and experimental data less than 0.4%. Operating temperature is essential for a PEMFC; both excessively high and excessively low temperatures are disadvantageous. In the thermal management subsystem, the water pump and the fan are both controlled with PID controllers to maintain an appropriate operating temperature of the PEMFC and meet the requirements of safe and efficient operation. To further improve the control performance, fuzzy control is introduced to optimize the PID controller of the pump, and a Radial Basis Function (RBF) neural network is introduced to optimize the PID controller of the fan. The results demonstrate that Fuzzy-PID and RBF-PID achieve better control performance, with a 22.66% decrease in the Integral Absolute Error (IAE) criterion of T_st (the PEMFC stack temperature) and a 77.56% decrease in the IAE of T_in (the inlet cooling water temperature) compared with traditional PID. Finally, a novel thermal management structure is proposed, in which the cooling air passing through the main radiator continues on to cool the secondary radiator. With this structure, the parasitic power dissipation can be reduced by 69.94%, and the control performance improves further, with a 52.88% decrease in the IAE of T_in under the same controller.
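As a minimal illustration of the baseline controller and the IAE criterion referenced above (the fuzzy and RBF tuning layers are omitted), the sketch below runs a discrete PID loop on a toy first-order thermal plant and accumulates the integral absolute error of the stack temperature; the plant model and gains are invented for the example and do not correspond to the paper's Simulink model.

# Minimal sketch: discrete PID control of stack temperature via fan duty, with IAE accumulation.
# The first-order thermal "plant" and the gains are invented for illustration only.
def simulate_pid(kp: float, ki: float, kd: float,
                 setpoint: float = 70.0, t_end: float = 600.0, dt: float = 0.1) -> float:
    temp = 25.0                 # initial stack temperature, degC
    integral = prev_err = 0.0
    iae = 0.0                   # Integral Absolute Error of the stack temperature
    for _ in range(int(t_end / dt)):
        err = temp - setpoint   # positive when the stack is too hot -> more cooling needed
        integral = min(max(integral + err * dt, -50.0), 50.0)  # naive anti-windup clamp
        derivative = (err - prev_err) / dt
        prev_err = err
        fan_duty = min(max(kp * err + ki * integral + kd * derivative, 0.0), 1.0)
        heating = 0.5                                           # degC/s from stack losses (toy value)
        cooling = 0.6 * fan_duty * (temp - 25.0) / 45.0         # fan-dependent cooling (toy model)
        temp += (heating - cooling) * dt
        iae += abs(err) * dt
    return iae

print("IAE of T_st with baseline PID gains:", simulate_pid(kp=0.4, ki=0.02, kd=0.1))

Re-running the loop with alternative gain schedules is the simplest way to see how the IAE figures quoted above serve as the comparison metric between PID, Fuzzy-PID, and RBF-PID.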

Keywords: PEMFC system, parameter identification, temperature control, Fuzzy-PID, RBF-PID, parasitic power

Procedia PDF Downloads 80