257 The Relationship between Elderly People with Depression and Built Environment Factors
Authors: Hung-Chun Lin, Tzu-Yuan Chao
Abstract:
As population ageing has become an inevitable global trend, improving the well-being of elderly people in urban areas has been a challenging task for urban planners. Recent studies of the ageing trend have also expanded to explore the relationship between the built environment and the mental condition of elderly people. These studies have shown that even though the built environment may not necessarily play the decisive role in affecting mental health, it can have positive impacts on individual mental health by promoting social linkages and social networks among older adults. A great amount of research has examined the impact of built environment attributes on depression in the elderly; however, most of it was conducted in Western countries. Little attention has been paid to Asian cities such as those of Taiwan, which by contrast have high-density, mixed-use urban contexts, regarding how built environment attributes relate to depression in elderly people. Hence, more empirical cross-disciplinary studies are needed to explore the possible impacts of Asian urban characteristics on older residents' mental condition. This paper focuses on Tainan City, the fourth biggest metropolis in Taiwan. We first analyse data from the National Health Insurance Research Database to pinpoint the empirical study area where most elderly patients, aged over 65, with depressive disorders reside. Secondly, we explore the relationship between specific attributes of the built environment collected from previous studies and elderly individuals who suffer from depression, under different socio-cultural and networking circumstances. The research methods adopted in this study include a questionnaire and database analysis, and the results are processed by correlation analysis.
In addition, through a literature review that generalizes the built environment factors used in Western research to evaluate the relationship between the built environment and older individuals with depressive disorders, a set of local evaluative indicators of the built environment for future studies will be proposed as well. In order to move closer to developing age-friendly cities and improving the well-being of the elderly in Taiwan, the findings of this paper provide empirical results that draw planners' attention to how the built environment makes the elderly feel and prompt a reconsideration of the relationship between the two. Furthermore, as an interdisciplinary topic, the research results are expected to yield suggestions for amending the procedures of drawing up an urban plan or a city plan from a different point of view.
Keywords: built environment, depression, elderly, Tainan
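The correlation analysis step mentioned above might be sketched as follows. This is an illustrative computation only: the district names and values are fabricated, not the study's National Health Insurance Research Database data, and the built-environment attribute (green-space share) is just one plausible indicator from the literature.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated district-level values: share of green space vs. prevalence of
# depressive disorders among residents aged over 65.
green_space = [0.05, 0.12, 0.18, 0.25, 0.31]
depression_rate = [0.092, 0.080, 0.074, 0.061, 0.055]
r = pearson_r(green_space, depression_rate)  # negative r would suggest an inverse association
```

A negative coefficient in such a setup would point to an inverse association between the attribute and depression prevalence, which the questionnaire data could then probe further.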
Procedia PDF Downloads 124
256 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be located is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually scale or crop the image directly in some common way. This results in the loss of information important to the geolocalization task and thus degrades performance. For example, excessive down-sampling can blur building contours, and inappropriate cropping can discard key semantic elements, leading to incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. Firstly, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel network) is used to approximate the best receptive field, thus keeping the geometric shapes consistent with the original image, and SENet (squeeze-and-excitation network) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task.
Moreover, the triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements extracted by that layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably to recent mainstream geolocalization methods.
Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
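The first stage of the resizer, bilinear interpolation, can be sketched in plain NumPy. This is only the resampling step; the SKNet/SENet enhancement and fusion stages of the proposed resizer are learned modules and are not reproduced here.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinearly resample a 2-D array (single-channel image) to (out_h, out_w)."""
    in_h, in_w = img.shape
    # Target sample coordinates in the source grid.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]  # fractional row weights
    wx = (xs - x0)[None, :]  # fractional column weights
    # Blend the four neighbouring pixels.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

In the paper's pipeline, the output of this step (applied to the image and its feature maps) would be fed into the geometric-enhancement modules before fusion.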
Procedia PDF Downloads 216
255 The Coexistence of Creativity and Information in Convergence Journalism: Pakistan's Evolving Media Landscape
Authors: Misha Mirza
Abstract:
In recent years, the definition of journalism in Pakistan has changed, and so have the mindset of people and their approach towards a news story. For the audience, news has become more interesting than a drama or a film. This research thus provides an insight into Pakistan's evolving media landscape. It not only brings forth the outcomes of cross-platform cooperation between print and broadcast journalism but also gives an insight into the interactive data visualization techniques being used. Storytelling in Pakistani journalism has evolved from depicting merely the truth to tweaking, fabricating, and producing docu-dramas. The research aims to look into how news is translated into a visual. Pakistan possesses a diverse cultural heritage, and by engaging the audience through media, this history translates into the storytelling platform of today. The paper explains how journalists are thriving in a converging media environment and provides an analysis of the narratives in today's television talk shows. 'Jack of all, master of none' is being challenged by journalists today: one has to be a quality information gatherer and an effective storyteller at the same time. Are journalists really looking more into what sells rather than what matters? Express Tribune is a very popular news platform among the youth. Not only is their newspaper more attractive than the competitors' but their style of narrative and interactive web stories also lead to well-rounded news. Interviews are used as the basic methodology to gain insight into how data visualization is accomplished. The quest to find the difference between the visualization of information and the visualization of knowledge has led the author to delve into the work of David McCandless in his book 'Knowledge Is Beautiful'. Journalism in Pakistan has evolved from information to combining knowledge, infotainment, and comedy. What is criticized the most by society most often becomes the breaking news.
Circulation in today's world is carried out in cultural and social networks. In recent times, we have come across many examples where people have gained overnight popularity by releasing songs with substandard lyrics or senseless videos, perhaps because creativity has taken over information. This paper thus discusses the various platforms of convergence journalism from Pakistan's perspective. The study concludes by showing how Pakistani truck art pop culture coexists with all the platforms of convergence journalism. The changing media landscape thus challenges the basic rules of journalism. The slapstick humor and 'jhatka' in Pakistani talk shows have evolved from Pakistani truck art poetry. Mobile journalism has overtaken all the other mediums of journalism; however, Pakistani culture coexists with the converging landscape.
Keywords: convergence journalism in Pakistan, data visualization, interactive narrative in Pakistani news, mobile journalism, Pakistan's truck art culture
Procedia PDF Downloads 285
254 Understanding Strategic Engagement on the Conversation Table: Countering Terrorism in Nigeria
Authors: Anisah Ari
Abstract:
The effects of organized crime permeate all facets of life, including public health, socio-economic endeavors, and human security; if any element of these is affected, it impacts large-scale national and global interests. Seeking to address terrorist networks through technical thinking is like trying to kill a weed by just cutting off its branches: it will re-develop and expand in proportions beyond one's imagination, even in horrific ways that threaten human security. The continent of Africa has been bedeviled by this menace, with little or no solution to the problem. Nigeria is dealing with a protracted insurgency perpetrated by a sect opposed to any form of westernization. Reimagining approaches to dealing with pressing issues like terrorism may require engaging the right set of people in the conversation for any sustainable change: people who have lived through the daily effects of the violence that ensues from terrorist activities. Effective leadership is required for an inclusive process, in which spaces are created for diverse voices and multiple perspectives are listened to, not just heard, supporting the determination of a realistic outcome. Efforts to address the insurgency in Nigeria have been marked by disinformation and uncertainty. This may be due in part to poor leadership, or to the repeated application of technical solutions to an adaptive challenge. Peacemaking efforts in Nigeria have focused on the behaviors, attitudes, and practices that contribute to violence. However, it is important to consider the underlying issues that build up, ignite, and fan the flames of violence: looking at conflict as a complex system brings into view issues like climate change, low employment rates, corruption, and the impunity of discrimination based on ethnicity and religion. This article looks at the option of a more relational way of addressing the insurgency, through adaptive approaches that embody engagement and solutions with the people rather than for the people.
The construction of a local turn in peacebuilding is informed by the need to create a locally driven and sustained peace process that embodies the culture and practices of the people in enacting an everyday peace, beyond a perennial and universalist outlook. A critical analysis of the socially identified individuals and situations will be made, considering the more adaptive approach to a complex existential challenge rather than a universalist frame. A case study and ethnographic research approach is adopted, both to understand what other scholars have documented on the matter and to gain a first-hand understanding of the experiences and viewpoints of the participants.
Keywords: terrorism, adaptive, peace, culture
Procedia PDF Downloads 105
253 Migrants as Change Agents: A Study of Social Remittances between Finland and Russia
Authors: Ilona Bontenbal
Abstract:
In this research, the potential for societal change is examined through the idea of migrants as change agents. The viewpoint is the potential that migrants have for effecting societal change in their country of origin by transmitting transnational peer-to-peer information. The focus is on the information that Russian migrants living in Finland transmit to their family and friends in their country of origin about their experiences of and attitudes towards the Nordic welfare state, its democratic foundation, and the social rights embedded in it. Welfare provision and the level of democracy are very different in the two neighbouring countries of Finland and Russia. Finland is a Nordic welfare state with strong democratic institutions and a comprehensive actualization of civil and social rights. In Russia, on the other hand, the state of democracy has been declining, and the social and civil rights of its citizens are constantly undermined. Due to improvements in communications and travel technology, migrants can easily and relatively cheaply stay in contact with their family and friends in their country of origin; this is why it is possible for migrants to act as change agents. By telling of their experiences of and attitudes towards living in a democratic welfare state, migrants can affect what people in the country of origin know and think about welfare, democracy, and social rights. This phenomenon is approached through the concept of social remittances. Social remittances broadly stand for the ideas, know-how, world views, attitudes, norms of behavior, and social capital that flow through transnational networks from receiving-country to sending-country communities and the other way around.
The viewpoint is not that historically and culturally formed democratic welfare models can be copied entirely, nor that every country should follow an identical development path, but rather that migrants themselves choose which aspects they see as important to remit to their acquaintances in their country of origin. In this way, the potential for social change and the agency of the migrants are accentuated. The empirical material of this study is based on 30 qualitative interviews with Russian migrants living in Finland. Russians are the largest migrant group in Finland, and Finland is a popular migration destination, especially for individuals living in North-West Russia, including the St. Petersburg region. The interviews were carried out in 2018-2019. The preliminary results indicate that Russian migrants discuss social rights and welfare a great deal with their family members and acquaintances living in Russia. In general, the migrants feel that they have had an effect on the way their friends and family think about Finland, the West, social rights, and welfare provision. Democracy, on the other hand, is seen as a more difficult and less discussed topic. The transformative potential that the transmitted information and attitudes could have, beyond the immediate circle of acquaintances, on larger societal change is seen as ambiguous although not negligible.
Keywords: migrants as change agents, Russian migrants, social remittances, welfare and democracy
Procedia PDF Downloads 193
252 Rain Gauges Network Optimization in Southern Peninsular Malaysia
Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno
Abstract:
Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy in collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and to support flood modelling and prediction. One study showed that, even when lumped models are used for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the level of accuracy preset by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure does not depend only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature, and wind speed data during the monsoon season (November-February) for the period 1975-2008. Three different semivariogram models, Spherical, Gaussian, and Exponential, were used, and their performances were compared. A cross-validation technique was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. The proposed method yielded a satisfactory network of 64 rain gauges with the minimum estimated variance; 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to the network's ability to provide quality data. Therefore, two different cases were considered in this study.
In the first case, the removed stations were optimally relocated to new locations to investigate their influence on the calculated estimated variance; the second case explored the possibility of relocating all 84 existing stations to new locations to determine the optimal positions. The relocations of the stations in both cases showed that the new optimal locations reduced the estimated variance, proving that location plays an important role in determining the optimal network.
Keywords: geostatistics, simulated annealing, semivariogram, optimization
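The combination of an exponential semivariogram with simulated annealing can be sketched as below. This is a simplified illustration, not the study's actual algorithm: the score function here is a crude proxy for the kriging estimation variance (mean semivariogram value to the nearest kept gauge), and the semivariogram parameters (nugget, sill, range) are arbitrary.

```python
import math
import random

def exp_semivariogram(h, nugget=0.0, sill=1.0, rng=50.0):
    """Exponential semivariogram model gamma(h) (parameters are placeholders)."""
    return nugget + (sill - nugget) * (1.0 - math.exp(-h / rng))

def network_score(kept, stations):
    """Proxy for estimation variance: mean semivariogram distance from each
    candidate site to its nearest kept gauge (lower is better)."""
    total = 0.0
    for sx, sy in stations:
        d = min(math.hypot(sx - kx, sy - ky) for kx, ky in kept)
        total += exp_semivariogram(d)
    return total / len(stations)

def anneal(stations, n_keep, steps=2000, t0=1.0, seed=0):
    """Simulated annealing: swap one kept gauge at a time, accept worse
    configurations with a probability that falls as the temperature cools."""
    rnd = random.Random(seed)
    kept = rnd.sample(stations, n_keep)
    best, best_score = list(kept), network_score(kept, stations)
    score = best_score
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9  # linear cooling schedule
        cand = list(kept)
        pool = [s for s in stations if s not in cand]
        if not pool:
            break
        cand[rnd.randrange(n_keep)] = rnd.choice(pool)
        c_score = network_score(cand, stations)
        if c_score < score or rnd.random() < math.exp((score - c_score) / t):
            kept, score = cand, c_score
            if score < best_score:
                best, best_score = list(kept), score
    return best, best_score
```

In the study itself the objective would be the kriging variance computed from the fitted exponential semivariogram, and the move set would include relocating gauges to new coordinates, not just swapping among existing sites.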
Procedia PDF Downloads 304
251 Strengthening by Assessment: A Case Study of Rail Bridges
Authors: Evangelos G. Ilias, Panagiotis G. Ilias, Vasileios T. Popotas
Abstract:
The United Kingdom has one of the oldest railway networks in the world, dating back to 1825 when the world's first passenger railway was opened. The network has some 40,000 bridges of various construction types, using a wide range of materials including masonry, steel, cast iron, wrought iron, concrete, and timber. It is commonly accepted that the successful operation of the network is vital for the economy of the United Kingdom; consequently, the cost-effective maintenance of the existing infrastructure is a high priority to maintain the operability of the network, prevent deterioration, and extend the life of the assets. Every bridge on the railway network is required to be assessed every eighteen years, and a structured approach to assessment is adopted, with three main types of progressively more detailed assessment used: Level 0 (standardized spreadsheet assessment tools), Level 1 (analytical hand calculations), and Level 2 (generally finite element analyses). There is a degree of conservatism in the first two types of assessment, dictated to some extent by the relevant standards, which can lead to some structures not achieving the required load rating. In these situations, a Level 2 assessment is often carried out using finite element analysis to uncover 'latent strength' and improve the load rating. If successful, the more sophisticated analysis can save on costly strengthening or replacement works and avoid disruption to the operational railway. This paper presents the 'strengthening by assessment' achieved by Level 2 analyses. The use of more accurate analysis assumptions and the implementation of non-linear modelling and functions (material, geometric, and support) to better understand buckling modes and the structural behaviour of historic construction details not specifically covered by assessment codes are outlined.
Metallic bridges, which are susceptible to loss of section through corrosion, have the largest scope for improvement under the Level 2 assessment methodology. Three case studies are presented, demonstrating the effectiveness of the sophisticated Level 2 methodology using finite element analysis against the conservative approaches employed for Level 0 and Level 1 assessments. One rail overbridge and two rail underbridges that did not achieve the required load rating in a Level 1 assessment, owing to the inadequate restraint provided by U-frame action, are examined, and the increase in assessed capacity given by the Level 2 assessment is outlined.
Keywords: assessment, bridges, buckling, finite element analysis, non-linear modelling, strengthening
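The sensitivity of buckling capacity to end restraint, which is what U-frame action provides and what the Level 2 finite element models capture more accurately, can be illustrated with the classic Euler column formula. This is a textbook idealization for intuition only, not the paper's non-linear finite element method.

```python
import math

def euler_buckling_load(E, I, L, K=1.0):
    """Elastic critical buckling load P_cr = pi^2 * E * I / (K * L)^2.
    K is the effective-length factor set by the end restraint: the weaker
    the restraint (e.g. inadequate U-frame action), the larger K and the
    lower the capacity."""
    return math.pi ** 2 * E * I / (K * L) ** 2
```

Doubling the effective-length factor (from fully restrained ends towards a weakly restrained flange) quarters the idealized capacity, which is why demonstrating even partial restraint in a Level 2 model can recover substantial latent strength.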
Procedia PDF Downloads 311
250 Contribution at Dimensioning of the Energy Dissipation Basin
Authors: M. Aouimeur
Abstract:
The environmental risks of a dam, and particularly security in the valley downstream of it, constitute a very complex problem. Integrated management and risk-sharing become more and more indispensable. Defining the concept of 'vulnerability' can assist in controlling the efficiency of protective measures and in characterizing each valley with respect to flood risk. Security can be enhanced through integrated land management, and the social sciences may be associated with the operational systems of civil protection, in particular warning networks. The passage of extreme floods at the site of a dam can cause the rupture of the structure and important damage downstream. The river bed may be damaged by erosion if it is not well protected, and scouring and flooding problems may be encountered in the area downstream of the dam. Therefore, the protection of the dam is crucial: it must have an energy dissipator in a specific place. The dissipation basin plays a very important role in the security of the dam and the protection of the environment against floods downstream. It dissipates the potential energy created by the dam as the extreme flood passes over the weir, regulates in a natural and safer manner the discharge, or the elevation of the water level, over the crest of the weir, and reduces the flow velocity downstream of the dam to a speed identical to that of the river bed. The problem of dimensioning a classic dissipation basin lies in determining the parameters necessary for the design of this structure. This communication presents a simple, fast, and complete graphical method, together with a methodology that determines the main features of the hydraulic jump, the parameters necessary for sizing the classic dissipation basin.
This graphical method takes into account the constraints imposed by the reality of the terrain or by practice, such as those related to the topography of the site, the preservation of environmental equilibrium, and the technical and economic aspects. The methodology is to impose the head loss DH dissipated by the hydraulic jump as a hypothesis (free design) in order to determine all the other parameters of the classical dissipation basin. The imposed head loss DH can be set equal to a selected value or to a certain percentage of the total upstream head created by the dam. With the dimensionless parameter DH+ = DH/k (k: critical depth), the elaborated graphical representation allows the other parameters to be found; multiplying these parameters by k gives the main characteristics of the hydraulic jump, the parameters necessary for the dimensioning of the classic dissipation basin. This solution is often preferred for sizing the dissipation basins of small concrete dams. Verification of the results and their comparison with practical data confirm the validity and reliability of the elaborated graphical method.
Keywords: dimensioning, energy dissipation basin, hydraulic jump, protection of the environment
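The hydraulic jump relations underlying such a design can be written down directly. The standard Belanger equation for the sequent depth and the classical expression for the head loss DH are sketched below; the graphical method described above inverts this relationship (imposing DH and reading off the other parameters), which these closed forms only illustrate, not replace.

```python
import math

def conjugate_depth(y1, fr1):
    """Sequent (downstream) depth y2 of a hydraulic jump from the upstream
    depth y1 and upstream Froude number Fr1 (Belanger equation):
    y2 = y1/2 * (sqrt(1 + 8*Fr1^2) - 1)."""
    return y1 / 2.0 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

def jump_head_loss(y1, y2):
    """Energy dissipated by the jump: DH = (y2 - y1)^3 / (4 * y1 * y2)."""
    return (y2 - y1) ** 3 / (4.0 * y1 * y2)
```

For Fr1 = 1 (critical flow) the jump vanishes (y2 = y1, DH = 0); for a supercritical approach flow, say Fr1 = 3, the sequent depth and dissipated head grow rapidly, which is what makes the basin effective at matching the downstream velocity to the river bed.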
Procedia PDF Downloads 584
249 Subjective Temporal Resources: On the Relationship Between Time Perspective and Chronic Time Pressure to Burnout
Authors: Diamant Irene, Dar Tamar
Abstract:
Burnout, conceptualized within the framework of stress research, is to a large extent a result of a threat to time resources or a feeling of time shortage. In reaction to numerous tasks, deadlines, high output, and the management of different duties encompassing work-home conflicts, many individuals experience 'time pressure'. Time pressure is characterized as the perception of a lack of available time relative to the workload. It can result from local objective constraints, but it can also be a chronic attribute of coping with life. As such, time pressure is associated in the literature with the general experience of stress and can therefore be a direct contributory factor in burnout. The present study examines the relation of chronic time pressure, the feeling of time shortage and of being rushed, with another central aspect of subjective temporal experience: time perspective. Time perspective is a stable personal disposition capturing the extent to which people subjectively remember the past, live the present, and/or anticipate the future. Based on Hobfoll's Conservation of Resources theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat to their time resources, resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo's typology of five dimensions (Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future), would be related to different magnitudes of chronic time pressure and of burnout. We expected that individuals with 'Past Negative' or 'Present Fatalistic' time perspectives would experience more burnout, with chronic time pressure being a moderator variable. Conversely, individuals with a 'Present Hedonistic' perspective, showing little concern for the future consequences of actions, would experience less chronic time pressure and less burnout.
Another angle of temporal experience examined in this study is the difference between the actual distribution of time (as in a typical day) and the desired distribution of time (as it would optimally be distributed during a day). It was hypothesized that the gap between these two distributions would correlate positively with chronic time pressure and burnout. Data were collected through an online self-report survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling from various organizational sectors. The results support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic, stable temporal experiences, such as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to developing practical methods of burnout prevention.
Keywords: conservation of resources, burnout, time pressure, time perspective
Procedia PDF Downloads 177
248 Generalized Synchronization in Systems with a Complex Topology of Attractor
Authors: Olga I. Moskalenko, Vladislav A. Khanadeev, Anastasya D. Koloskova, Alexey A. Koronovskii, Anatoly A. Pivovarov
Abstract:
Generalized synchronization is one of the most intricate phenomena in nonlinear science. It can be observed in systems with both unidirectional and mutual types of coupling, including complex networks. The phenomenon has a number of practical applications, for example, secure information transmission through a communication channel with a high level of noise. Known methods for secure information transmission require increasing the privacy of data transmission, which raises the question of observing this phenomenon in systems with a complex topology of the chaotic attractor, possessing two or more positive Lyapunov exponents. The present report is devoted to the study of this phenomenon in two unidirectionally and mutually coupled dynamical systems in chaotic (one positive Lyapunov exponent) and hyperchaotic (two or more positive Lyapunov exponents) regimes, respectively. As the systems under study, we have used two mutually coupled modified Lorenz oscillators and two unidirectionally coupled time-delayed generators. We have shown that in both cases the generalized synchronization regime can be detected by means of the calculation of Lyapunov exponents and the phase tube approach, whereas due to the complex topology of the attractor the nearest-neighbor method is misleading. Moreover, the auxiliary system approach, which is the standard method for detecting the synchronous regime, gives incorrect results for the mutual type of coupling. To calculate the Lyapunov exponents in time-delayed systems, we have proposed an approach based on a modification of the Gram-Schmidt orthogonalization procedure in the context of the time-delayed system. We have studied in detail the mechanisms resulting in the onset of the generalized synchronization regime, paying particular attention to the region where one positive Lyapunov exponent has already become negative whereas the second one is still positive.
We have found intermittency here and studied its characteristics. To detect the laminar phase lengths, a method based on the calculation of local Lyapunov exponents has been proposed. The efficiency of the method has been verified using the example of two unidirectionally coupled Rössler systems in the band chaos regime. We have revealed the main characteristics of the intermittency, i.e., the distribution of the laminar phase lengths and the dependence of the mean laminar phase length on the criticality parameter, for all systems studied in the report. This work has been supported by the Russian President's Council grant for the state support of young Russian scientists (project MK-531.2018.2).
Keywords: complex topology of attractor, generalized synchronization, hyperchaos, Lyapunov exponents
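The auxiliary system approach mentioned above can be demonstrated on a toy drive-response pair of logistic maps. This is only a minimal sketch of the criterion, not the report's Lorenz oscillators or time-delayed generators: a response and an identical auxiliary copy are started from different initial conditions, and their convergence under the common drive signals generalized synchronization.

```python
def logistic(x, r=4.0):
    """Chaotic logistic map at r = 4."""
    return r * x * (1.0 - x)

def auxiliary_system_test(eps=0.9, n=2000, tol=1e-10):
    """Auxiliary system criterion for unidirectional coupling: the drive x
    forces a response y and an identical auxiliary copy z that starts from
    a different initial condition.  If |y - z| collapses below tol, the
    response is in generalized synchronization with the drive."""
    x, y, z = 0.3, 0.55, 0.81
    for _ in range(n):
        x_next = logistic(x)
        # Strong dissipative coupling: the response map is a contraction in y.
        y = (1.0 - eps) * logistic(y) + eps * x_next
        z = (1.0 - eps) * logistic(z) + eps * x_next
        x = x_next
    return abs(y - z) < tol
```

With eps = 0.9 the conditional dynamics of the response are contracting, so y and z converge and the test reports synchronization. As the report notes, this criterion is only valid for unidirectional coupling; for mutual coupling it can give incorrect results, which is why Lyapunov exponents and the phase tube approach are used there instead.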
Procedia PDF Downloads 278
247 Finite Element Analysis of the Drive Shaft and Jacking Frame Interaction in Micro-Tunneling Method: Case Study of Tehran Sewerage
Authors: B. Mohammadi, A. Riazati, P. Soltan Sanjari, S. Azimbeik
Abstract:
The ever-increasing growth of civic demands on the one hand, and the urban constraints on establishing new infrastructure on the other, force engineering committees to apply non-disruptive methods in order to optimize the results. One of these optimized procedures for establishing main sewerage networks is the pipe jacking and micro-tunneling method. The raw data and research are based on the slurry micro-tunneling project of the Tehran main sewerage network executed by KAYSON Co. The 4985-meter route of the project, located near Azadi Square and the most vital arteries of Tehran, has currently reached 45% physical progress. The boring machine is made by Herrenknecht, and the diameters of the concrete-polymer pipes used are 1600 and 1800 millimeters. Placing and excavating several shafts in the ground and boring the tunnel directly between the axes of these shafts is one of the requirements of micro-tunneling. Locating the shafts must take into account the hydraulic circumstances, civic conditions, site geography, traffic constraints, etc. The route profile has to be divided into many shorter segment lines so that the angles generated between the segments are located at the manhole centers. Each segment line between consecutive drive and reception shafts determines the jack location, driving angle, and path alignment; thus, the diversity of angles produces a variety of jack positions in the shaft. The fixing conditions of the jacking frame and the direction of its associated dynamic load produce various patterns of stress and strain distribution, creating fatigue in the shaft wall and in the soil surrounding the shaft. This diversity of patterns deforms the shaft wall and causes unbalanced subsidence and alteration of the pipe jacking stress contour.
This research is based on the experiments of Tehran's west sewerage plan and on numerical analysis of the interaction between the soil around the shaft, the shaft walls, and the jacking frame direction; finally, the suitability of each pipe jacking shaft location is determined.Keywords: underground structure, micro-tunneling, fatigue analysis, dynamic soil-structure interaction, underground water, finite element analysis
Procedia PDF Downloads 320246 Data Analysis Tool for Predicting Water Scarcity in Industry
Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse
Abstract:
Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, ocean, rivers, aquifers, etc. Once used, water is discharged into the environment, or reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts can concern the quantity of water available, the quality of the water used, or effects that are less direct and more complex to measure, such as the health of the population downstream of the watercourse. Based on the analysis of data (meteorological data, river characteristics, physicochemical substances), we aim to predict water stress episodes and anticipate prefectoral decrees that can impact the performance of plants; to propose improvement solutions; to help industrialists choose the location of a new plant; to visualize possible interactions between companies in order to optimize exchanges and encourage the pooling of water treatment solutions; and to set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires that the functional constraints specific to such data be made explicit. Thus, the system must be able to store a large amount of data from sensors (the main type of data in plants and their environment). In addition, manufacturers need near-real-time processing of information in order to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions.
In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), a set of loosely coupled but tightly integrated open-source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the cross-industry standard process for data mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of learning the level of a river with a 7-day horizon. The management of water and of the activities within the plants that depend on this resource should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress and, on the other hand, to the information system that is able to warn decision-makers with alerts created from the formalization of prefectoral decrees.Keywords: data mining, industry, machine learning, shortage, water resources
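As a rough illustration of two pieces of the stack described above, the sketch below (assumed measurement names and values, not the authors' code) formats one sensor reading in InfluxDB's line protocol, the write format that Telegraf and InfluxDB share, and applies a Kapacitor-style threshold rule to a series of river-level readings:

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    # InfluxDB line protocol: measurement,tag_set field_set timestamp
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

def stress_alert(levels, threshold):
    # Flag readings below a low-water threshold, mimicking a
    # Kapacitor-style threshold alert on the river level.
    return [i for i, lvl in enumerate(levels) if lvl < threshold]

line = to_line_protocol("river_level", {"station": "S1"}, {"level_m": 1.82},
                        1700000000000000000)
alerts = stress_alert([2.1, 1.9, 1.4, 1.2, 1.8], threshold=1.5)
print(line)    # river_level,station=S1 level_m=1.82 1700000000000000000
print(alerts)  # readings 2 and 3 trigger the alert: [2, 3]
```

In the deployed system this role is played by Kapacitor tasks over InfluxDB series rather than ad hoc Python, but the threshold logic is the same shape.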
Procedia PDF Downloads 122245 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research answers which transit network structure is most suitable to serve specific demand requirements in an ongoing process of urban dispersion. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transfers are essential to complete most trips. To answer which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct-trip-based network; and a transfer-based one, the latter two representing the two alternative transit network designs. The model optimizes the network configuration with regard to the total cost of each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way by assuming that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers.
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to determine the best network design approach for a city by comparing the theoretical results with the real dispersion degree.Keywords: analytical network design model, network structure, public transport, urban dispersion
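The first analytical step, picking the structure with minimum total cost at a given dispersion degree, can be sketched as follows; the cost curves here are purely illustrative stand-ins, not the paper's model:

```python
def best_structure(dispersion, costs):
    # costs: structure name -> total-cost function of the dispersion degree (0..1).
    # The best structure at a given dispersion degree is the cheapest one.
    evaluated = {name: fn(dispersion) for name, fn in costs.items()}
    return min(evaluated, key=evaluated.get)

# Hypothetical total-cost curves: radial is cheap when demand is concentrated,
# a direct-trip network pays for many lines as demand scatters, and a
# transfer-based network carries a fixed transfer penalty but scales gently.
costs = {
    "radial":   lambda d: 1.0 + 8.0 * d,
    "direct":   lambda d: 2.0 + 4.0 * d,
    "transfer": lambda d: 3.5 + 1.0 * d,
}

print(best_structure(0.05, costs))  # radial
print(best_structure(0.3, costs))   # direct
print(best_structure(0.8, costs))   # transfer
```

Sweeping the dispersion parameter over [0, 1] with curves like these yields the "area of applicability" of each structure, which is the object the second step maps onto a real city.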
Procedia PDF Downloads 231244 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0
Authors: Harris Niavis, Dimitra Politaki
Abstract:
The manufacturing industry has been collecting vast amounts of data for monitoring product quality thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. On the other hand, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the incoming data. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense autoencoder models, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques ensuring that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors.
The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs that expose the functionality to end-users. The results of this work demonstrate that such a system can increase the quality of the end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify quality records across the entire production chain and to take advantage of the multitude of monitoring records in their databases.Keywords: blockchain, data quality, Industry 4.0, product quality
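A minimal stand-in for the ingestion-side filtering described above, using a plain z-score rule in place of the ARIMA/LSTM/autoencoder/GAN ensemble (the threshold and readings are hypothetical):

```python
from statistics import mean, stdev

def flag_point_anomalies(readings, z_thresh=3.0):
    # Simple z-score filter standing in for the paper's ensemble of
    # anomaly detectors: readings whose deviation from the batch mean
    # exceeds z_thresh sample standard deviations are flagged and kept
    # out of the ledger-bound stream.
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, x in enumerate(readings) if abs(x - mu) > z_thresh * sigma]

batch = [20.1, 20.3, 19.9, 20.0, 35.0, 20.2]
print(flag_point_anomalies(batch, z_thresh=2.0))  # [4]
```

In the proposed system the flagged indices would be withheld from the smart contract, while clean measurements are anonymized and anchored on-chain as pointers.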
Procedia PDF Downloads 191243 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed on different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to address resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface, without the need for a load balancer between microservices A and B.
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology makes it possible to run several processes on the same machine in an isolated manner, solving both the incompatibility of runtime dependencies and the aforementioned security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.Keywords: aggregation, deployment, embedding, resource allocation
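The a1-b1 / a2-b2 example above can be sketched as a trivial embedding assignment; the machine and instance names follow the abstract's example, and the pairing rule (i-th instance of A with i-th instance of B) is the simplest possible policy, not the paper's formal method:

```python
def embed_pairs(service_a, service_b):
    # Pair the i-th instance of A with the i-th instance of B on one
    # machine, so each pair communicates over localhost and no load
    # balancer sits between services A and B.
    return {f"m{i + 1}": [a, b]
            for i, (a, b) in enumerate(zip(service_a, service_b))}

print(embed_pairs(["a1", "a2"], ["b1", "b2"]))
# {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```

A real optimizer would choose among many such mappings (and among aggregation groupings) by scoring each candidate configuration on performance, resource usage, cost and failure tolerance.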
Procedia PDF Downloads 204242 Preparation of Allyl BODIPY for the Click Reaction with Thioglycolic Acid
Authors: Chrislaura Carmo, Luca Deiana, Mafalda Laranjo, Abilio Sobral, Armando Cordova
Abstract:
Photodynamic therapy (PDT) is currently used for the treatment of malignancies and premalignant tumors. It is based on the uptake of a photosensitizing molecule (PS) which, when excited by light at a certain wavelength, reacts with oxygen and generates oxidizing species (radicals, singlet oxygen, triplet species) in target tissues, leading to cell death. BODIPY (4,4-difluoro-4-bora-3a,4a-diaza-s-indacene) derivatives are emerging as important candidate photosensitizers for photodynamic therapy of cancer cells due to their high triplet quantum yield. Today these dyes are relevant molecules in photovoltaic materials and fluorescent sensors. In this study, we demonstrate that BODIPY can be covalently linked to thioglycolic acid through a click reaction. Thiol-ene click chemistry has become a powerful synthesis method in materials science and surface modification. The design of biobased allyl-terminated precursors with high renewable carbon content for the construction of thiol-ene polymer networks is essential for sustainable development and green chemistry. This work aims to synthesize the BODIPY (10-(4-(allyloxy) phenyl)-2,8-diethyl-5,5-difluoro-1,3,7,9-tetramethyl-5H-dipyrrolo[1,2-c:2',1'-f] [1,3,2] diazaborinin-4-ium-5-uide) and to perform its click reaction with thioglycolic acid. The BODIPY was synthesized by the condensation reaction between an aldehyde and a pyrrole in dichloromethane, followed by in situ complexation with BF3·OEt2 in the presence of a base. It was then functionalized with allyl bromide to introduce the double bond needed for the click reaction. The thiol-ene click was performed using DMPA (2,2-dimethoxy-2-phenylacetophenone) as a photo-initiator in the presence of UV light (320–500 nm) in DMF at room temperature for 24 hours. The compounds were characterized by standard analytical techniques, including UV-Vis spectroscopy, 1H, 13C and 19F NMR, and mass spectrometry.
The results of this study will be important for linking BODIPY to polymers through the thiol group, offering a diversity of applications and functionalizations. This new molecule can be tested among third-generation photosensitizers, in which the dye is targeted to cells, mainly cancer cells, by antibodies or nanocarriers, for PDT and photodynamic antimicrobial chemotherapy (PACT). According to our studies, the click reaction between allyl BODIPY and thioglycolic acid was confirmed. Our team will also test the reaction with other thiol groups for comparison. Further, we will perform the click reaction of BODIPY with a natural polymer bearing a thiol group. The resulting compounds will be tested in PDT assays on various lung cancer cell lines.Keywords: bodipy, click reaction, thioglycolic acid, allyl, thiol-ene click
Procedia PDF Downloads 133241 Envy and Schadenfreude Domains in a Model of Neurodegeneration
Authors: Hernando Santamaría-García, Sandra Báez, Pablo Reyes, José Santamaría-García, Diana Matallana, Adolfo García, Agustín Ibañez
Abstract:
The study of moral emotions (i.e., Schadenfreude and envy) is critical to understand the ecological complexity of everyday interactions between cognitive, affective, and social cognition processes. Most previous studies in this area have used correlational imaging techniques and framed Schadenfreude and envy as monolithic domains. Here, we profit from a relevant neurodegeneration model to disentangle the brain regions engaged in three dimensions of Schadenfreude and envy: deservingness, morality, and legality. We tested 20 patients with behavioral variant frontotemporal dementia (bvFTD), 24 patients with Alzheimer’s disease (AD), as a contrastive neurodegeneration model, and 20 healthy controls on a novel task highlighting each of these dimensions in scenarios eliciting Schadenfreude and envy. Compared with the AD and control groups, bvFTD patients obtained significantly higher scores on all dimensions for both emotions. Interestingly, the legal dimension for both envy and Schadenfreude elicited higher emotional scores than the deservingness and moral dimensions. Furthermore, correlational analyses in bvFTD showed that higher envy and Schadenfreude scores were associated with greater deficits in social cognition, inhibitory control, and behavior. Brain anatomy findings (restricted to bvFTD and controls) confirmed differences in how these groups process each dimension. Schadenfreude was associated with the ventral striatum in all subjects. Also, in bvFTD patients, increased Schadenfreude across dimensions was negatively correlated with regions supporting social-value rewards, mentalizing, and social cognition (frontal pole, temporal pole, angular gyrus and precuneus). In all subjects, all dimensions of envy positively correlated with the volume of the anterior cingulate cortex, a region involved in processing unfair social comparisons. 
By contrast, in bvFTD patients, the intensified experience of envy across all dimensions was negatively correlated with a set of areas subserving social cognition, including the prefrontal cortex, the parahippocampus, and the amygdala. Together, the present results provide the first lesion-based evidence for the multidimensional nature of the emotional experiences of envy and Schadenfreude. Moreover, this is the first demonstration of a selective exacerbation of envy and Schadenfreude in bvFTD patients, probably triggered by atrophy of social cognition networks. Our results offer new insights into the mechanisms subserving complex emotions and moral cognition in neurodegeneration, paving the way for groundbreaking research on their interaction with other cognitive, social, and emotional processes.Keywords: social cognition, moral emotions, neuroimaging, frontotemporal dementia
Procedia PDF Downloads 293240 To Live on the Margins: A Closer Look at the Social and Economic Situation of Illegal Afghan Migrants in Iran
Authors: Abdullah Mohammadi
Abstract:
Years of prolonged war in Afghanistan have led to one of the largest refugee and migrant populations in the contemporary world. During this continuous unrest, which began in the 1970s (with a military coup, a Marxist revolution and the subsequent invasion by the USSR), over one-third of the population migrated to neighboring countries, especially Pakistan and Iran. After the Soviet Army's withdrawal in 1989, a new wave of conflicts emerged between rival Afghan groups, creating new refugees. The Taliban period also created its own refugees. During all these years, the I.R. of Iran has been one of the main destinations of Afghan refugees and migrants. At first, due to the political situation after the Islamic Revolution, the Iranian government did not restrict the entry of Afghan refugees. Those who came to Iran first received ID cards and had access to education and healthcare services. But in the 1990s, due to economic and social concerns, Iran's policy towards Afghan refugees and migrants changed. The government has tried to identify and register Afghans in Iran and to limit their access to some services and jobs. Unfortunately, there are few studies on the situation of Afghan refugees and migrants in Iran, and we have only a dim and vague picture of them. Of the few studies done on this group, none focuses on the situation of illegal Afghan migrants in Iran. Here, we study the social and economic aspects of the lives of illegal Afghan migrants in Iran. In doing so, we interviewed 24 illegal Afghan migrants in Iran. The method applied for analyzing the data is thematic analysis. For the interviews, we chose family heads (17 men and 7 women). According to the findings, the socio-economic situation of illegal Afghan migrants in Iran is very undesirable. Its main cause is the marginalization of this group, which results from government policies towards Afghan migrants. Most of the illegal Afghan migrants work in unskilled, inferior jobs and live in rented houses on the margins of cities and villages.
None of them could buy a house or a vehicle, as the law does not allow it. Based on their income, they form one of the lowest, most unprivileged groups in the society. Socially, they face many problems in their everyday life: social insecurity, harassment and violence, abuse of their situation by police and people, lack of educational opportunity, etc. In general, we may conclude that illegal Afghan migrants have adapted little to Iran's society. They face severe limitations compared to legal migrants and refugees and have no opportunity for upward social mobility. However, they have developed some strategies to cope with these difficulties, including seeking financial and emotional help from family and friendship networks, sending one family member to a third country (mostly European countries), and establishing self-administered schools for children (schools which are illegal and run by educated Afghan youth).Keywords: illegal Afghan migrants, marginalization, social insecurity, upward social mobility
Procedia PDF Downloads 318239 Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image-classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate the archiving of historical and novel shark observations. The base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species.
All data-generation methods were processed without manual interaction. As media-based remote monitoring becomes a dominant method for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. The prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.Keywords: classification, data mining, Instagram, remote monitoring, sharks
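The genus- and species-level correctness figures above reduce to top-1 accuracy over a labelled evaluation set; a minimal sketch (the genus names are illustrative, not the study's data):

```python
def top1_accuracy(predictions, labels):
    # Fraction of classifications whose predicted class matches the
    # ground-truth label, as used to score genus- and species-level
    # predictions of a classifier.
    hits = sum(p == y for p, y in zip(predictions, labels))
    return hits / len(labels)

preds = ["Carcharodon", "Sphyrna", "Sphyrna", "Galeocerdo"]
truth = ["Carcharodon", "Sphyrna", "Carcharhinus", "Galeocerdo"]
print(top1_accuracy(preds, truth))  # 0.75
```

Per-genus species accuracy is the same computation restricted to images whose true label falls in that genus, which is how the 85% within-genus average above would be assembled.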
Procedia PDF Downloads 122238 Preparation of β-Polyvinylidene Fluoride Film for Self-Charging Lithium-Ion Battery
Authors: Nursultan Turdakyn, Alisher Medeubayev, Didar Meiramov, Zhibek Bekezhankyzy, Desmond Adair, Gulnur Kalimuldina
Abstract:
In recent years, the development of sustainable energy sources has attracted extensive research interest due to the ever-growing demand for energy. As an alternative energy source to power small electronic devices, ambient energy harvesting from vibration or human body motion is considered a potential candidate. Despite the enormous progress in battery research over roughly three decades in terms of safety, life cycle and energy density, batteries have not reached the level needed to conveniently power wearable electronic devices such as smartwatches, bands, hearing aids, etc. For this reason, the development of self-charging power units with excellent flexibility, integrating energy harvesting and storage, is crucial. Self-powering is a key idea that makes it possible for a system to operate sustainably, and it is now gaining acceptance in many fields, such as sensor networks, the Internet of Things (IoT) and implantable in-vivo medical devices. To solve this energy harvesting issue, self-powering nanogenerators (NGs) were proposed and have proved highly effective. Usually, sustainable power is delivered through energy harvesting and storage devices by connecting them to a power management circuit; for energy storage, the Li-ion battery (LIB) is one of the most effective technologies. Through the movement of Li ions driven by an externally applied voltage source, electrochemical reactions at the anode and cathode store the electrical energy as chemical energy. In this paper, we present a simultaneous process of converting mechanical energy into chemical energy, in which the NG and the LIB are combined as an all-in-one power system. The electrospinning method was used as the initial step for the development of such a system with a β-PVDF separator. The obtained film showed promising voltage output at different stress frequencies.
X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) analyses showed a high percentage of the β phase in the PVDF polymer material. Moreover, it was found that the addition of 1 wt.% of BTO (barium titanate) results in higher-quality fibers. When comparing the pure PVDF solution with 20 wt.% polymer content to the one with BTO added, the latter was more viscous; hence, that sample was electrospun uniformly without any beads. Lastly, to test the sensor application of the film, a dedicated testing device was developed. With this device, the force of a finger tap can be applied at different frequencies so that electrical signal generation is validated.Keywords: electrospinning, nanogenerators, piezoelectric PVDF, self-charging Li-ion batteries
Procedia PDF Downloads 163237 Immigrant Women's Voices and Integrating Feminism into Migration Theory
Authors: Florence Nyemba, Rufaro Chitiyo
Abstract:
This work features the voices of women as they describe their experiences living in the diaspora, either with their families or alone. The contributing authors pursued this project to understand how the women's personal lives (and those of their families back home) changed, both positively and negatively. The work addresses the following important questions: What is female migration? What are the factors causing women to migrate? What types of migration do women engage in? What is the influence of family relationships on migration? What are the challenges of migration? How do migrant women maintain ties with their home countries? What is the role of social networks in migration? How can feminist theories and methodologies be incorporated into migration studies? Women continue to contribute significantly to mass movements of people across the globe, yet their voices remain silent in the literature on migration. History shows that women have always been on the move trying to make a living, just like their male counterparts. Whether they migrate as spouses, daughters, or alone, women make up a sizeable portion of migration statistics around the world. Increasingly, these women are migrating independently, without the accompaniment of male relatives. This calls for the need to expand research on women as independent migrants without generalizing their experiences, as was the case with early studies on international migration. The goal of this work is to offer a rich and detailed description of the lives of immigrant women across the globe using theoretical frameworks that advance gender and migration research. Methodology: This work invited scholars and researchers from across the globe whose research interests lie in gender and migration. The work incorporates a variety of methodologies for data collection and analysis, including oral narratives, interviews, and systematic literature reviews.
Conclusion: There is a considerable amount of interest in various topics on gender, violence, and equality throughout the social science disciplines in higher education. Therefore, the three major topics covered in this work (Women's Immigration: Theories and Methodologies; Women as Migrant Workers; and Women as Refugees, Asylees, and Permanent Migrants) can be of interest across the social science disciplines. Feminist theories can expand the curriculum on identity and on gendered roles and norms in societies. The findings of this work advance knowledge of population movements across the globe. This work will also appeal to students and scholars wanting to expand their knowledge of women and migration, migration theories, gender violence, and women's empowerment. The topics and issues presented in this work will also assist the international community and lawyers concerned with global migration.Keywords: gender, feminism, identity formation, international migration
Procedia PDF Downloads 141236 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue
Abstract:
Technological advances have lately made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which digital precoders alone are sufficient to accomplish precoding, MIMO at mmWave is different because of digital precoding limitations: fully digital precoding requires one radio frequency (RF) chain per antenna, with the corresponding signal mixers and analog-to-digital converters. As RF chains are costly and power-hungry, another alternative is needed. Although the hybrid precoding architecture, a combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategies from the RF chains to the different antenna elements, there are two main categories of hybrid precoding architecture. The partially-connected structure, a hybrid precoding sub-array architecture, reduces hardware complexity by using fewer phase shifters, at the price of some beamforming gain. In this paper, we treat hybrid precoder design in mmWave MIMO systems as a matrix factorization problem. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method.
Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms and significantly reduces the computational complexity. Furthermore, valuable design insights are provided when the proposed algorithm is used to compare, in simulation, the partially-connected and fully-connected hybrid precoding structures.
Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure
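The matrix-factorization view of hybrid precoding described above can be sketched in a few lines. The following is a minimal, illustrative alternating-minimization loop for the partially-connected structure, not the paper's iterative-hard-thresholding algorithm: because each antenna sub-array connects to exactly one RF chain, the analog precoder is block-diagonal with unit-modulus entries, its columns have disjoint support (so they are orthogonal), and the analog step reduces to a closed-form phase match. All dimensions and names are assumptions for illustration.

```python
import numpy as np

def altmin_partially_connected(F_opt, n_rf, n_iter=50, seed=0):
    """Alternating-minimization sketch for the partially-connected
    hybrid precoder factorization  F_opt ~= F_RF @ F_BB.

    F_opt : (n_t, n_s) target fully digital precoder
    F_RF  : (n_t, n_rf) block-diagonal analog precoder, one sub-array
            of n_t // n_rf antennas per RF chain, unit-modulus entries
    F_BB  : (n_rf, n_s) digital baseband precoder
    """
    n_t, n_s = F_opt.shape
    m = n_t // n_rf                      # antennas per sub-array
    rng = np.random.default_rng(seed)
    # Block-diagonal support: antenna i connects only to RF chain i // m.
    mask = np.zeros((n_t, n_rf), dtype=bool)
    for k in range(n_rf):
        mask[k * m:(k + 1) * m, k] = True
    # Random unit-modulus initialization on the allowed support.
    F_RF = np.where(mask, np.exp(1j * rng.uniform(0, 2 * np.pi, (n_t, n_rf))), 0)
    for _ in range(n_iter):
        # Digital step: least-squares fit of F_BB given F_RF.
        F_BB = np.linalg.pinv(F_RF) @ F_opt
        # Analog step: since F_RF^H F_RF = m*I for any phases (disjoint
        # column supports), the optimal unit-modulus entries simply match
        # the phase of F_opt @ F_BB^H on the allowed support.
        phase = np.angle(F_opt @ F_BB.conj().T)
        F_RF = np.where(mask, np.exp(1j * phase), 0)
    return F_RF, F_BB
```

Because the digital step is a least-squares projection, the residual norm of the factorization never exceeds the norm of the target precoder, and each alternation is non-increasing in the Frobenius objective.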
Procedia PDF Downloads 323
235 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis
Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain
Abstract:
Online groceries have transformed the way supply chains are managed. They face numerous challenges, including product wastage, low margins, long breakeven periods, and low market penetration, and e-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modelling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers have been identified from the literature and through semi-structured interviews conducted with managers having relevant experience in e-grocery supply chains. The experts have been contacted through professional/social networks using a purposive snowball sampling technique. The interviews have been transcribed, and manual coding has been carried out using the open and axial coding method. The key enablers are categorized into themes, and the contextual relationships between these and the performance measures are sought from industry veterans. Using ISM, a hierarchical model of the enablers is developed, and MICMAC analysis identifies their driving and dependence powers. Based on driving-dependence power, the enablers are categorized into four clusters, namely independent, autonomous, dependent, and linkage. The analysis found that information technology (IT) and manpower training act as key enablers towards reducing lead time and enhancing online service quality. Many of the enablers fall under the linkage cluster, viz., frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness, and customized applications for different stakeholders, marking these as critical in online food/grocery supply chains. Considering the perishable nature of the product being handled, the impact of the enablers on product quality is also identified.
Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among the supply chain enablers for fresh food in e-groceries and linking them to performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and hence will be applicable in the context of developing economies, where supply chains are evolving.
Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management
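The driving and dependence powers used in the MICMAC step above come straight from the final (transitively closed) reachability matrix: an enabler's driving power is its row sum, its dependence power is its column sum, and the four clusters follow from thresholding both. A minimal sketch, where the enabler names, links, and the n/2 threshold are illustrative assumptions rather than the study's data:

```python
def transitive_closure(adj):
    """Warshall closure of a binary adjacency matrix (with reflexive
    entries added): the ISM 'final reachability matrix'."""
    n = len(adj)
    r = [[1 if (i == j or adj[i][j]) else 0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if r[i][k] and r[k][j]:
                    r[i][j] = 1
    return r

def micmac(reachability, names, threshold=None):
    """MICMAC classification from a final reachability matrix.
    Clusters follow the usual convention: independent (high driving /
    low dependence), linkage (high/high), dependent (low/high),
    autonomous (low/low)."""
    n = len(reachability)
    if threshold is None:
        threshold = n / 2                  # a common, assumed cut-off
    out = {}
    for i, name in enumerate(names):
        driving = sum(reachability[i])                     # row sum
        dependence = sum(row[i] for row in reachability)   # column sum
        hi_drv, hi_dep = driving > threshold, dependence > threshold
        cluster = ("linkage" if hi_drv and hi_dep else
                   "independent" if hi_drv else
                   "dependent" if hi_dep else
                   "autonomous")
        out[name] = (driving, dependence, cluster)
    return out
```

With a toy graph in which IT drives training and order processing, and order processing drives product freshness, IT lands in the independent (driver) cluster and product freshness in the dependent cluster, mirroring the kind of result the abstract reports.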
Procedia PDF Downloads 205
234 Impact of Farm Settlements' Facilities on Farm Patronage in Oyo State
Authors: Simon Ayorinde Okanlawon
Abstract:
The youths’ prevalent negative attitude to farming is partly due to the amenities and facilities found in the urban centers at the expense of the rural areas. Hence, there is a need to create a befitting and conducive farm environment to retain farm employees and attract youth to farming. This can be achieved through the provision of services and amenities that ensure a standard of living higher than that obtained by a person of equal status in other forms of employment in urban centers, thereby eliminating the psychological feeling of lowered self-esteem associated with farming. This study assessed farm settlements’ facilities and patronage in Oyo State with a view to using the information to encourage sustainable agriculture in Nigeria. The study becomes necessary because of the dearth of information on the state of facilities in farm settlements as it affects patronage of farm settlements for sustainable agriculture in developing countries like Nigeria. The study utilized three purposively selected farm settlements - Ogbomoso, Fasola and Ilora - out of the seven existing ones in Oyo State. All (100%) of the 262 residential buildings in the three settlements were sampled, and a household head from each building was randomly chosen. This translates to 262 household heads served with questionnaires, of which 47.7% were recovered. Information obtained included respondents’ residency categories, resident status, years of residency, housing types, types of holding, and number of acres per holding. Other information includes socio-economic attributes such as age, gender, income, and educational status of respondents; assessment of existing facilities in the selected sites; and the level of patronage of the farm settlements, including perceived pull factors that can enhance farm settlement patronage.
The study revealed that the residents were not satisfied with the adequacy and quality of the facilities available in their settlements. Residents’ satisfaction with infrastructural facilities could not be statistically linked with location across the study area. Findings suggested that residents of the Ogbomoso farm settlement did not enjoy as adequate a provision of water supply and roads as those from Ilora and Fasola. Patronage of the farm settlements was largely driven by farming activities and the sale of farm produce. The respondents agreed that the provision of farm resort centers, standard recreational and tourism facilities, vacation employment opportunities for youths, and functional internet and communication networks, among others, is likely to boost the level of patronage of the farm settlements. The study concluded that improving the facilities in both quality and quantity will encourage youths to go back to farming. It then recommends that existing facilities be maintained and that more facilities, such as resort centers, be provided.
Keywords: encourage, farm settlements' facilities, Oyo state, patronage
Procedia PDF Downloads 232
233 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, where optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that the hyperparameter controlling the length of the input sequence contributes most significantly to prediction performance. The findings suggest that input sequence length has a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter to each catchment’s characteristics. This aligns with the well-known “uniqueness of the place” paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in the field of streamflow prediction: initially it was fixed to 365 days to capture a full annual water cycle, and later, limited systematic tuning using grid search suggested reducing it to 270 days.
However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it to 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
Keywords: LSTMs, streamflow, hyperparameters, hydrology
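The NSE and KGE scores quoted above are the standard goodness-of-fit metrics for hydrographs; both equal 1 for a perfect simulation, and NSE equals 0 for a model no better than predicting the mean observed flow. A minimal sketch of the two metrics (the 2009 form of KGE is assumed):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 means the simulation
    explains no more variance than the mean observed flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency: combines linear correlation r, the
    variability ratio alpha, and the bias ratio beta; 1 is perfect."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

In a sequence-length grid search of the kind the abstract describes, each candidate length would be used to retrain the model, and these metrics computed on a validation hydrograph would drive the selection.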
Procedia PDF Downloads 72
232 A Conceptual Framework of the Individual and Organizational Antecedents to Knowledge Sharing
Authors: Muhammad Abdul Basit Memon
Abstract:
The importance of organizational knowledge sharing and knowledge management has been documented in numerous research studies, since knowledge sharing has been recognized as a founding pillar of superior organizational performance and a source of competitive advantage. Building on this, most successful organizations perceive knowledge management and knowledge sharing as matters of high strategic importance and spend large amounts on the effective management and sharing of organizational knowledge. However, despite some very serious endeavors, many firms fail to capitalize on the benefits of knowledge sharing because they are unaware of the individual characteristics and the interpersonal, organizational, and contextual factors that influence knowledge sharing - in short, the antecedents to knowledge sharing. The extant literature offers a range of antecedents mentioned across numerous research articles and studies. Some previous studies examined antecedents to knowledge sharing in the context of inter-organizational knowledge transfer; others focused on inter- and intra-organizational knowledge sharing; and still others investigated organizational factors. Some of the organizational antecedents relate to the characteristics of the knowledge being shared, e.g., the specificity and complexity of the underlying knowledge to be transferred; others relate to specific organizational characteristics, e.g., the age and size of the organization, decentralization, and the absorptive capacity of the firm; and still others relate to the social relations and networks of organizations, such as social ties, trusting relationships, and value systems.
In the same way, some researchers have highlighted only one aspect, such as organizational commitment, transformational leadership, knowledge-centred culture, learning and performance orientation, or social network-based relationships in organizations. The bulk of the existing research articles on antecedents to knowledge sharing has mainly discussed organizational or environmental factors. Later, however, the focus shifted towards the analysis of individual or personal determinants of an individual’s engagement in knowledge sharing activities, such as personality traits, attitude, and self-efficacy. For example, employees’ goal orientation (i.e., learning orientation or performance orientation) is an important individual antecedent of knowledge sharing behaviour. Consistent with the existing literature, therefore, the antecedents to knowledge sharing can be classified as individual and organizational. This paper is an endeavor to discuss a conceptual framework of the individual and organizational antecedents to knowledge sharing in the light of the available literature and empirical evidence. This model can not only aid familiarity and comprehension of the subject matter by presenting a holistic view of the antecedents to knowledge sharing as discussed in the literature, but can also help business managers, and especially human resource managers, find insights about the salient features of organizational knowledge sharing. Moreover, this paper can provide a ground for research students and academicians to conduct both qualitative and quantitative research and to design an instrument for a survey on the individual and organizational antecedents to knowledge sharing.
Keywords: antecedents to knowledge sharing, knowledge management, individual and organizational, organizational knowledge sharing
Procedia PDF Downloads 326
231 Knowledge Transfer through Entrepreneurship: From Research at the University to the Consolidation of a Spin-off Company
Authors: Milica Lilic, Marina Rosales Martínez
Abstract:
Academic research cannot be oblivious to social problems and needs, so projects that have the capacity for transformation and impact should have the opportunity to go beyond University circles and bring benefit to society. Apart from patents and R&D research contracts, this opportunity can be realized through entrepreneurship, one of the most direct tools for turning knowledge into a tangible product. Thus, as an example of good practice, this paper analyzes the case of an institutional entrepreneurship program carried out at the University of Seville, aimed at researchers interested in assessing the business opportunity of their research and expanding their knowledge of procedures for commercializing the technologies used in academic projects. The program is based on three pillars: training, teamwork sessions, and networking. The training covers aspects such as product-client fit, the technical-scientific and economic-financial feasibility of a spin-off, institutional organization and decision making, public and private fundraising, and making the spin-off visible in the business world (social networks, key contacts, corporate image, and ethical principles). The teamwork sessions, in turn, are guided by a mentor and aimed at identifying research results with potential, clarifying financial needs, and establishing procedures to obtain the resources necessary for the consolidation of the spin-off. This part of the program is considered crucial for participants to convert their academic findings into a business model. Finally, the networking part is oriented to workshops on the digital transformation of a project, the accurate communication of the product or service a spin-off offers to society, and the development of the transferable skills necessary for managing a business.
This blended program culminates in a final stage where each team, in an elevator pitch format, presents their research turned into a business model to an experienced jury. The awarded teams receive starting capital for their enterprise and the opportunity to formally consolidate their spin-off company at the University. Studying the results of the program has shown that many researchers have little or no knowledge of entrepreneurship skills or of the different ways to turn their research results into a business model with a direct impact on society. Therefore, the described program has been used as an example to highlight the importance of knowledge transfer at the University and the role this institution should have in providing the tools to promote entrepreneurship within it. Keeping in mind that the University is defined by three main activities (teaching, research, and knowledge transfer), it is safe to conclude that the latter, with entrepreneurship as an expression of it, is crucial for the other two to fulfill their purpose.
Keywords: good practice, knowledge transfer, a spin-off company, university
Procedia PDF Downloads 148
230 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs study the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions - major disasters, changes in policies, exogenous shocks, etc. - on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted.
Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy; as an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
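The balanced partitioning along the employer-employee graph described above can be illustrated with a simple greedy heuristic: each firm, together with its employees, is assigned as one unit to the currently least-loaded process, largest firms first (longest-processing-time scheduling). This is a hedged sketch of the general idea, not the authors' actual partitioner, and the input format is an assumption:

```python
import heapq

def partition_firms(firm_sizes, n_procs):
    """Greedy LPT partitioning sketch: each firm and its employees go to
    one MPI process, so employer-employee interactions stay local.
    `firm_sizes` maps a firm id to its number of employees
    (hypothetical input format)."""
    heap = [(0, p) for p in range(n_procs)]     # (current load, process id)
    heapq.heapify(heap)
    assignment = {}
    for firm, size in sorted(firm_sizes.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)           # least-loaded process
        assignment[firm] = p
        heapq.heappush(heap, (load + size, p))
    return assignment
```

A production partitioner would also weigh the cut edges of the remaining interaction graphs (credit networks, consumption markets), which this sketch ignores.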
Procedia PDF Downloads 130
229 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading
Authors: Jerome Joshi
Abstract:
The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. The Stochastic Pi Calculus model is mainly used for biological applications; however, its features promote its use in financial markets, most prominently in high frequency trading. The trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the action of the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), poses a difficulty for modelling. It involves participants seeking the advantage of complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, a trader must be at the top of the order book quite frequently, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is also not best for the trader, since it was the reason for the outbreak of the 'Hot-Potato Effect,' which in turn demands a better and more efficient model. The model should be flexible and have diverse applications; therefore, a model with applications in a similar field characterized by such difficulty should be chosen. It should also be flexible in its simulation, so that it can be extended and adapted for future research, and equipped with tools that suit the field of finance. In this case, the Stochastic Pi Calculus model seems an ideal fit for financial applications, owing to its track record in the field of biology.
It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithm, provided its application is further extended. This model focuses on solving the problem which led to the 'Flash Crash,' namely the 'Hot-Potato Effect.' The model consists of small sub-systems, which can be integrated to form a large system, and is designed in such a way that the behavior of 'noise traders' is treated as a random process, or noise, in the system. While modelling, to get a better understanding of the problem, a broader picture is taken into consideration with the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the 'Flash Crash,' the 'Hot-Potato Effect,' evaluation of orders, and time delay in further detail. In the future, there is a need to focus on the calibration of the modules so that they interact perfectly with one another. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.
Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus
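Stochastic pi calculus attaches rates to channels and resolves each interaction by an exponential race, which is exactly what a Gillespie-style simulation implements. As a purely illustrative toy (none of the names, rates, or numbers below come from the paper), the 'Hot-Potato Effect' can be sketched as one unit of unwanted inventory raced between a pass channel among market makers and an absorb channel to a fundamental trader:

```python
import random

def gillespie_hot_potato(n_makers, pass_rate, absorb_rate, t_max, seed=0):
    """Toy stochastic simulation in the spirit of stochastic pi calculus:
    processes (market makers) interact over rate-labelled channels, and
    the next interaction is drawn from the exponential race between them.
    One unit of unwanted inventory (the 'hot potato') is either passed to
    another maker at `pass_rate` per counterparty, or absorbed by a
    fundamental trader at `absorb_rate`. Returns how many times the
    inventory changed hands before absorption or the time horizon."""
    rng = random.Random(seed)
    t, hops, holder = 0.0, 0, 0
    while t < t_max:
        total = pass_rate * (n_makers - 1) + absorb_rate
        t += rng.expovariate(total)          # waiting time to next event
        if t >= t_max:
            break
        if rng.random() < absorb_rate / total:
            return hops                      # fundamental trader absorbs
        # inventory passed to a uniformly chosen other market maker
        holder = rng.choice([m for m in range(n_makers) if m != holder])
        hops += 1
    return hops
```

When `pass_rate` dwarfs `absorb_rate`, the inventory circulates among intermediaries many times before a fundamental trader takes it, which is the qualitative signature of the effect the paper targets.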
Procedia PDF Downloads 79
228 Molecular Characterization of Arginine Sensing Response in Unravelling Host-Pathogen Interactions in Leishmania
Authors: Evanka Madan, Madhu Puri, Dan Zilberstein, Rohini Muthuswami, Rentala Madhubala
Abstract:
The extensive interaction between the host and pathogen metabolic networks decidedly shapes the outcome of infection, and the utilization of arginine by the host and the pathogen is critical for determining that outcome. Infection with L. donovani, an intracellular parasite, leads to extensive competition for arginine between the host and the parasite. One of the major amino acid (AA) sensing signaling pathways in mammalian cells is the mammalian target of rapamycin complex I (mTORC1) pathway. mTORC1, as a nutrient sensor, controls numerous metabolic pathways, and arginine is critical for its activation. SLC38A9 is the arginine sensor for mTORC1, activated during arginine sufficiency. L. donovani transports arginine via a high-affinity transporter (LdAAP3) that is rapidly up-regulated by the arginine deficiency response (ADR) in intracellular amastigotes. This study, to the authors’ best knowledge, is the first to investigate the interaction between two arginine sensing systems that act in the same compartment, the lysosome: one important for macrophage defense, the other essential for pathogen virulence. We hypothesize that the latter modulates lysosomal arginine to prevent the host defense response. The work presented here identifies an upstream regulatory role of LdAAP3 in regulating the expression of the SLC38A9-mTORC1 pathway, and consequently its function, in L. donovani-infected THP-1 cells cultured in 0.1 mM and 1.5 mM arginine. It was found that at physiological levels of arginine (0.1 mM), infecting THP-1 cells with Leishmania leads to increased levels of SLC38A9 and mTORC1 via an increase in the expression of RagA. However, the reverse was observed with LdAAP3 mutants, reflecting the positive regulatory role of LdAAP3 on host SLC38A9. At the molecular level, upon infection, mTORC1 and RagA were found to be activated at the surface of phagolysosomes, where they form a complex with phagolysosome-localized SLC38A9.
To reveal the relevance of SLC38A9 at physiological levels of arginine, endogenous SLC38A9 was depleted, and a substantial reduction was observed in the expression of host mTORC1, of its downstream active substrate p-P70S6K1, and of parasite LdAAP3, showing that silencing SLC38A9 suppresses the ADR. In brief, to the authors’ best knowledge, these results reveal an upstream regulatory role of LdAAP3 in manipulating SLC38A9 arginine sensing in host macrophages. Our study indicates that the intra-macrophage survival of L. donovani depends on the availability and transport of extracellular arginine. An understanding of the sensing pathways of both parasite and host will open a new perspective on the molecular mechanism of host-parasite interaction and, consequently, on treatments for Leishmaniasis.
Keywords: arginine sensing, LdAAP3, L. donovani, mTORC1, SLC38A9, THP-1
Procedia PDF Downloads 127