Search results for: time delay neural network
18848 Social Network Roles in Organizations: Influencers, Bridges, and Soloists
Authors: Sofia Dokuka, Liz Lockhart, Alex Furman
Abstract:
Organizational hierarchy, traditionally composed of individual contributors, middle management, and executives, is enhanced by the understanding of informal social roles. These roles, identified with organizational network analysis (ONA), might have an important effect on organizational functioning. In this paper, we identify three social roles – influencers, bridges, and soloists – and provide empirical analysis based on real-world organizational networks. Influencers are employees with broad networks whose contacts also have rich networks. Influence is calculated using PageRank, initially proposed for measuring website importance, but now applied in various network settings, including social networks. Influencers, having high PageRank, become key players in shaping opinions and behaviors within an organization. Bridges serve as links between loosely connected groups within the organization. Bridges are identified using betweenness and Burt's constraint. Betweenness quantifies a node's control over information flows by evaluating its role in the shortest paths within the network. Burt's constraint measures the extent of interconnection among an individual's contacts. A high constraint value suggests fewer structural holes and lesser control over information flows, whereas a low value suggests the contrary. Soloists are individuals with fewer than 5 stable social contacts, potentially facing challenges due to reduced social interaction and a hypothetical lack of feedback and communication. We considered social roles in the analysis of real-world organizations (N=1,060). Based on data from digital traces (Slack, corporate email, and calendar) we reconstructed an organizational communication network and identified influencers, bridges, and soloists. We also collected employee engagement data through an online survey. Among the top-5% of influencers, 10% are members of the Executive Team. 56% of the Executive Team members are part of the top influencers group. The same proportion of top influencers (10%) consists of individual contributors, accounting for just 0.6% of all individual contributors in the company. The majority of influencers (80%) are at the middle management level. Out of all middle managers, 19% hold the role of influencers. However, individual contributors represent a small proportion of influencers, and having information about these individuals who hold influential roles can be crucial for management in identifying high-potential talents. Among the bridges, 4% are members of the Executive Team, 16% are individual contributors, and 80% are middle management. Predominantly, middle management acts as a bridge. Bridge positions of some members of the executive team might indicate potential micromanagement on the leader's part. Recognizing the individuals serving as bridges in an organization uncovers potential communication problems. The majority of soloists are individual contributors (96%), and 4% of soloists are from middle management. These managers might face communication difficulties. We found an association between being an influencer and attitude toward the company's direction. There is a statistically significant 20% higher perception that the company is headed in the right direction among influencers compared to non-influencers (p < 0.05, Mann-Whitney test).
Taken together, we demonstrate that considering social roles in the company can reveal both positive and negative aspects of organizational functioning that should be taken into account in data-driven decision-making.
Keywords: organizational network analysis, social roles, influencer, bridge, soloist
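For readers who want to reproduce these role definitions on their own communication graph, the sketch below shows one plausible way to compute the three metrics with networkx; the toy graph, the top-5% cut-off, the bridge thresholds, and the use of raw degree for the five-contact rule are illustrative assumptions rather than the authors' exact pipeline.

```python
# Hypothetical sketch of the three ONA role metrics; thresholds are illustrative.
import networkx as nx

# toy undirected communication network; in practice, edges would be rebuilt
# from Slack, e-mail and calendar digital traces
G = nx.karate_club_graph()

pagerank = nx.pagerank(G)                    # influencer score
betweenness = nx.betweenness_centrality(G)   # bridge score (control over shortest paths)
constraint = nx.constraint(G)                # Burt's constraint (low value = many structural holes)

n_top = max(1, int(0.05 * G.number_of_nodes()))
influencers = sorted(pagerank, key=pagerank.get, reverse=True)[:n_top]
bridges = [n for n in G if betweenness[n] > 0.05 and constraint[n] < 0.5]
soloists = [n for n in G if G.degree(n) < 5]  # fewer than 5 stable contacts

print("influencers:", influencers)
print("bridges:", bridges)
print("soloists:", soloists)
```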
Procedia PDF Downloads 105
18847 A Bio-Inspired Approach for Self-Managing Wireless Sensor and Actor Networks
Authors: Lyamine Guezouli, Kamel Barka, Zineb Seghir
Abstract:
Wireless sensor and actor networks (WSANs) present a research challenge for different practice areas. Researchers are trying to optimize the use of such networks. This optimization is based on certain criteria, such as improving energy efficiency, exploiting node heterogeneity, self-adaptability, and self-configuration. In this article, we present our proposal for BIFSA (Biologically-Inspired Framework for Wireless Sensor and Actor networks). Indeed, BIFSA is a middleware that addresses the key issues of wireless sensor and actor networks. BIFSA consists of two types of agents: sensor agents (SA), which operate at the sensor level to collect and transport data to actors, and actor agents (AA), which operate at the actor level to transport data to base stations. Once a sensor agent arrives at an actor, it becomes an actor agent, which can exploit the resources of the actors, and vice versa. BIFSA allows agents to evolve their genetic structures and adapt to the current network conditions. The simulation results show that BIFSA allows the agents to make better use of all the resources available in each type of node, which improves the performance of the network.
Keywords: wireless sensor and actor networks, self-management, genetic algorithm, agent
Procedia PDF Downloads 89
18846 Asymmetric of the Segregation-Enhanced Brazil Nut Effect
Authors: Panupat Chaiworn, Soraya lama
Abstract:
We study the motion of particles in cylinders that are subjected to a sinusoidal vertical vibration. We measure the rising time of a large intruder from the bottom of the container to the free surface of the bed particles and find that the rising time as a function of intruder density increases to a maximum and then decreases monotonically. This result is qualitatively in accord with previous experimental findings on the relative humidity of the bed particles, in which the convection of the bed particles in the container was found to move slowly and a minimal, instead of maximal, rising time of the intruder was found in the small-density region. Our experimental results suggest that the topology of the container plays an important role in the Brazil nut effect.
Keywords: granular particles, Brazil nut effect, cylinder container, vertical vibration, convection
Procedia PDF Downloads 528
18845 Path Planning for Multiple Unmanned Aerial Vehicles Based on Adaptive Probabilistic Sampling Algorithm
Authors: Long Cheng, Tong He, Iraj Mantegh, Wen-Fang Xie
Abstract:
Path planning is essential for UAVs (Unmanned Aerial Vehicles) with autonomous navigation in unknown environments. In this paper, an adaptive probabilistic sampling algorithm is proposed for the GPS-denied environment, which can be utilized for the autonomous navigation system of multiple UAVs in a dynamically changing structured environment. This method can be used for Unmanned Aircraft Systems Traffic Management (UTM) solutions and in autonomous urban aerial mobility, where a number of platforms are expected to share the airspace. A path network is initially built offline based on the available environment map, and on-board sensor systems on the flying UAVs are used for continuous situational awareness and to inform changes in the path network. Simulation results based on MATLAB and Gazebo in different scenarios, together with algorithm performance measurements, show the high efficiency and accuracy of the proposed technique in unknown environments.
Keywords: path planning, adaptive probabilistic sampling, obstacle avoidance, multiple unmanned aerial vehicles, unknown environments
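As background for the sampling-based idea, the sketch below builds a generic probabilistic roadmap in 2D and queries a shortest path over it; the circular obstacles, the sample count, the connection radius, and the omission of edge collision checks are assumptions made for illustration and do not reproduce the authors' adaptive sampling or its online updates.

```python
# Generic probabilistic-roadmap (PRM) sketch; parameters are illustrative.
import math
import random
import networkx as nx

def collision_free(p, obstacles, radius=1.0):
    # a point is valid if it lies outside every circular obstacle
    return all(math.dist(p, c) > radius for c in obstacles)

def build_roadmap(n_samples, obstacles, connect_radius=2.0, area=10.0, seed=0):
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:
        p = (rng.uniform(0, area), rng.uniform(0, area))
        if collision_free(p, obstacles):
            nodes.append(p)
    G = nx.Graph()
    G.add_nodes_from(nodes)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            d = math.dist(a, b)
            if d <= connect_radius:          # edge collision check omitted for brevity
                G.add_edge(a, b, weight=d)
    return G

obstacles = [(5.0, 5.0), (3.0, 7.0)]
roadmap = build_roadmap(300, obstacles)
start, goal = (0.5, 0.5), (9.5, 9.5)
for q in (start, goal):
    roadmap.add_node(q)
    for p in list(roadmap.nodes):
        if p != q and math.dist(p, q) <= 2.0:
            roadmap.add_edge(p, q, weight=math.dist(p, q))
try:
    path = nx.shortest_path(roadmap, start, goal, weight="weight")
    print(len(path), "waypoints")
except nx.NetworkXNoPath:
    print("roadmap too sparse; increase n_samples")
```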
Procedia PDF Downloads 156
18844 Electroencephalogram Signals Controlling a Parallax Boe-Bot Robot
Authors: Nema M. Salem, Hanan A. Altukhaifi, Amal Mukhtar, Reemaz K. Hetaimish
Abstract:
Recently, the BCI field of research has gained a lot of interest. Apart from motor neuroprosthetics, many studies showed the possibility of controlling the virtual environment of a videogame using electroencephalogram (EEG) signals acquired from the gamer. In addition, another study successfully moved a farm tractor using human EEG signals. This article utilizes EEG signals as a source of control technology for a Parallax Boe-Bot robot. The commercial Emotiv Epoc headset has been used to acquire the EEG signals from rested subjects. Because the human visual cortex can successfully differentiate between colors, red and green are used as visual stimuli for generating EEG signals with the Epoc. Arduino and LabVIEW are used to translate the virtually pressed keys into instructions controlling the motion and rotation of the robot. Promising results have been achieved, except for minor delays and accuracy issues in the robot's response.
Keywords: BCI, Emotiv Epoc headset, EEG, Labview, Arduino applications, robot
Procedia PDF Downloads 522
18843 Highway Capacity and Level of Service
Authors: Kidist Mesfin Nguse
Abstract:
Ethiopia is the second most populous nation in Africa, with about 121 million people as recorded in the 2022 Ethiopia live population report. In recent years, the Ethiopian government (GOE) has been gradually growing its road network. With 138,127 kilometers (85,825 miles) of all-weather roads as of the end of 2018–19, Ethiopia possessed just 39% of the nation's necessary road network and lacked a well-organized system. The Ethiopian urban population report recorded that about 21% of the population lives in urban areas, and the high population, coupled with growth in various infrastructures, has led to the migration of the workforce from rural areas to cities across the country. On main roads, the heterogeneous traffic flow with various operational features makes conditions more unfavorable, causing frequent congestion along stretches of road. The Level of Service (LOS), a qualitative measure of traffic, is categorized based on the operating conditions in the traffic stream. Determining the capacity and LOS for this city is crucial, as it affects the planning, design, and operation of traffic systems, as well as route selection for infrastructure building projects, in order to provide a considerably good level of service.
Keywords: capacity, level of service, traffic volume, free flow speed
Procedia PDF Downloads 51
18842 Agegraphic Dark Energy with GUP
Authors: H. R. Fazlollahi
Abstract:
The origin of dark energy is unknown, and so describing this mysterious component of large-scale structure requires manipulating our theories of general relativity. Although in most models dark energy arises from extra terms introduced by modifying the Einstein-Hilbert action, its origin may trace back to fundamental aspects of the ground energy of space-time given in quantum mechanics. Hence, endowing space-time in general relativity with quantum-mechanical properties leads to the Karolyhazy relation and the corresponding energy density of quantum fluctuations of space-time. Through the generalized uncertainty principle, and with an eye to the Karolyhazy approach, in this study we extend the energy density of quantum fluctuations of space-time. The application of this idea to late-time evolution is also considered, and we show how the extra term in the generalized uncertainty principle plays the role of a plausible interaction term in the suggested model.
Keywords: generalized uncertainty principle, Karolyhazy approach, agegraphic dark energy, cosmology
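For context, the standard relations the abstract builds on can be written as follows, in conventional notation with t_p the Planck time, m_p the reduced Planck mass, T the age of the universe, and β, n numerical parameters; the quadratic GUP correction shown is the generic textbook form and may differ from the specific extension derived in the paper.

```latex
% Karolyhazy relation and the energy density of space-time quantum fluctuations
\delta t = \beta\, t_p^{2/3}\, t^{1/3}, \qquad \rho_q \sim \frac{1}{t_p^{2}\, t^{2}},
% agegraphic dark energy density written with the age of the universe T
\rho_q = \frac{3 n^{2} m_p^{2}}{T^{2}},
% generalized uncertainty principle with a quadratic correction term
\Delta x\, \Delta p \;\geq\; \frac{\hbar}{2}\left[ 1 + \beta_{\mathrm{GUP}} (\Delta p)^{2} \right].
```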
Procedia PDF Downloads 73
18841 Politics in Academia: How the Diffusion of Innovation Relates to Professional Capital
Authors: Autumn Rooms Cypres, Barbara Driver
Abstract:
The purpose of this study is to extend discussions about innovations and career politics. The research questions that grounded this effort were: How does an academic learn the unspoken rules of the academy? What happens politically to an academic's career when their research speaks against the grain of society? Do professors perceive signals that it is time to move on to another institution or even to another career? Epistemology and Methods: This qualitative investigation was focused on examining the perceptions of academics. Therefore, an open-ended field study, based on Grounded Theory, was used. This naturalistic paradigm (Lincoln & Guba, 1985) was selected because it tends to understand information in terms of wholes, of patterns, and in relation to the context of the environment. The technique for gathering data was semi-structured, in-depth interviewing. Twenty-five academics across the United States were interviewed about their career trajectories and the politics and opportunities they have encountered in relation to their research efforts. Findings: The analysis of interviews revealed four themes. Academics are beholden to two specific networks of power that influence their sense of job security: the local network based on their employing university and the national network of scholars who share the same field of research. The fights over what counts as research can and do drift from the intellectual to the political and the personal. Academics were able to identify specific instances of shunning and/or punishment from their colleagues related directly to the dissemination of research that spoke against the grain of the local or national networks. Academics also identified specific signals from both of these networks indicating that their career was flourishing or withering. Implications: This research examined insights from those who persevered when the fights over what and who counts drifted from the intellectual to the political and the personal. Considerations of why such drifts happen were offered in the form of a socio-political construct called Fit, which included thoughts on hegemony, discourse, and identity. This effort reveals the importance of understanding what professional capital is relative to job security. It also reveals that fear is an enmeshed and often unspoken part of the culture of academia. Further research to triangulate these findings would be helpful within international contexts.
Keywords: politics, academia, job security, context
Procedia PDF Downloads 321
18840 LiDAR Based Real Time Multiple Vehicle Detection and Tracking
Authors: Zhongzhen Luo, Saeid Habibi, Martin v. Mohrenschildt
Abstract:
Self-driving vehicles require a high level of situational awareness in order to maneuver safely when driving in real-world conditions. This paper presents a LiDAR-based real-time perception system that is able to process raw sensor data for multiple-target detection and tracking in a dynamic environment. The proposed algorithm is nonparametric and deterministic; that is, no assumptions or a priori knowledge about the input data are needed, and no initializations are required. Additionally, the proposed method works directly on the three-dimensional data generated by LiDAR, without sacrificing the rich information contained in the 3D domain. Moreover, a fast and efficient real-time clustering algorithm based on the radially bounded nearest neighbor (RBNN) is applied. A Hungarian algorithm procedure and adaptive Kalman filtering are used for data association and tracking. The proposed algorithm is able to run in real time with an average run time of 70 ms per frame.
Keywords: lidar, segmentation, clustering, tracking
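To make the clustering step concrete, here is a minimal radially bounded nearest-neighbor (RBNN) sketch on a 2D point set; the radius, the toy point cloud, and the KD-tree neighbor queries are illustrative choices, and the Hungarian association and adaptive Kalman tracking stages are not shown.

```python
# Minimal RBNN clustering sketch: points are grouped by flood fill over
# neighbours that lie within a bounding radius. Parameters are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def rbnn_cluster(points, radius):
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = cluster_id
        stack = [i]
        while stack:                               # flood fill the current cluster
            j = stack.pop()
            for k in tree.query_ball_point(points[j], radius):
                if labels[k] == -1:
                    labels[k] = cluster_id
                    stack.append(k)
        cluster_id += 1
    return labels

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 8.0])
labels = rbnn_cluster(points, radius=1.0)
print("clusters found:", labels.max() + 1)
```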
Procedia PDF Downloads 423
18839 Development of One-Axis Didactic Solar Tracker for Photovoltaic Panels
Authors: L. J. de Bessa Neto, M. R. B. Guerra Vale, F. K. O. M. Varella Guerra
Abstract:
In recent years, solar energy has established itself as one of the main sources of renewable energy, gaining a large share of electricity generation around the world. However, due to the low performance of photovoltaic panels, technologies need to be sought to maximize the production of electricity. In this regard, the present study aims to develop a prototype solar tracker for didactic applications, controlled with the Arduino® platform, that moves the photovoltaic plates to follow the sun's position throughout the day through an electromechanical system, thus optimizing the efficiency of solar photovoltaic generation and the exploitation of the photovoltaic effect. The solar tracking technology developed in this work was presented in oral and practical form at two middle schools in the municipality of Mossoró/RN, one from the public network and the other from the private network, with the average age of the students around 16 years and an average of 60 students at each of the visits. Thus, it is concluded that the present study contributed substantially to the dissemination of knowledge concerning photovoltaic solar generation, as well as the study of solar trackers, arousing the interest and curiosity of the students regarding the theme addressed.
Keywords: alternative energy, solar tracker, energy efficiency, photovoltaic panels
Procedia PDF Downloads 147
18838 Effect of Common Yoga Protocol on Reaction Time of Football Players
Authors: Vikram Singh
Abstract:
The objective of the study was to examine the effectiveness of the common yoga protocol on the reaction time (simple visual reaction time, SVRT, measured in milliseconds/seconds) of male football players in the age group of 15 to 21 years. The 40 boys were randomly assigned to two groups, i.e., control and experimental. SVRT for both groups was measured on day 1, and the post-intervention measurement was taken after 45 days of training with the common yoga protocol, which was given to the experimental group only. One-way ANOVA (univariate analysis) and an independent t-test using the SPSS 23 statistical package were applied to obtain and analyze the results. There was a significant difference in the simple visual reaction time of the experimental group after 45 days of the yoga protocol (p = .032), t(33.05) = 3.881, p = .000 (two-tailed). The null hypothesis (that there would be no post-measurement differences in the reaction times of the control and experimental groups) was rejected at p < .05; therefore, the alternate hypothesis was accepted.
Keywords: footballers, t-test, yoga protocol, reaction time
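For readers unfamiliar with the test used, a minimal independent-samples t-test on made-up reaction-time values is shown below; the numbers are illustrative, not the study's SVRT data, and Welch's correction is assumed because the reported degrees of freedom (33.05) are fractional.

```python
# Illustrative Welch's t-test on hypothetical SVRT values (in seconds).
from scipy import stats

control      = [0.312, 0.298, 0.305, 0.321, 0.317, 0.309, 0.300, 0.315]
experimental = [0.281, 0.274, 0.290, 0.268, 0.277, 0.285, 0.272, 0.280]

t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```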
Procedia PDF Downloads 253
18837 Bandwidth Efficient Cluster Based Collision Avoidance Multicasting Protocol in VANETs
Authors: Navneet Kaur, Amarpreet Singh
Abstract:
In Vehicular Adhoc Networks (VANETs), data dissemination is a challenging task. There are a number of techniques, types, and protocols available for disseminating the data, but the need to preserve limited bandwidth while disseminating the maximum amount of data over the network makes the task more challenging. There are broadcasting-, multicasting-, and geocasting-based protocols; multicasting-based protocols are found to be best for conserving bandwidth. One such protocol, named BEAM, improves the performance of VANETs by reducing the number of in-network message transactions and thereby efficiently utilizing the bandwidth during an emergency situation. However, this protocol may result in multi-car chain collisions, as there is no V2V communication. Therefore, this paper proposes a new protocol, named Enhanced Bandwidth Efficient Cluster Based Multicasting Protocol (EBECM), that overcomes the limitations of the existing BEAM protocol. Simulation results show the improved performance of EBECM in terms of routing overhead, throughput, and packet delivery ratio (PDR) when compared with the BEAM protocol.
Keywords: BEAM, data dissemination, emergency situation, vehicular adhoc network
Procedia PDF Downloads 348
18836 Bidirectional Dynamic Time Warping Algorithm for the Recognition of Isolated Words Impacted by Transient Noise Pulses
Authors: G. Tamulevičius, A. Serackis, T. Sledevič, D. Navakauskas
Abstract:
We consider the biggest challenge in speech recognition – noise reduction. Traditionally, detected transient noise pulses are removed together with the corrupted speech using pulse models. In this paper, we propose to cope with the problem directly in the Dynamic Time Warping domain. A Bidirectional Dynamic Time Warping algorithm for the recognition of isolated words impacted by transient noise pulses is proposed. It uses a simple transient noise pulse detector, employs bidirectional computation of dynamic time warping, and directly manipulates the warping results. Experimental investigation with several alternative solutions confirms the effectiveness of the proposed algorithm in reducing the impact of noise on the recognition process – a 3.9% increase in noisy speech recognition accuracy is achieved.
Keywords: transient noise pulses, noise reduction, dynamic time warping, speech recognition
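As a reference point for the warping step, here is the textbook single-direction DTW distance between two 1D feature sequences; the sinusoidal "words" are placeholders, and the bidirectional computation and transient-pulse handling of the proposed algorithm are not reproduced.

```python
# Classic dynamic time warping distance (single direction, no pulse handling).
import numpy as np

def dtw_distance(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, 3, 80))
query = np.sin(np.linspace(0, 3, 100))     # same "word", different speaking rate
print("DTW distance:", round(dtw_distance(query, template), 3))
```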
Procedia PDF Downloads 559
18835 Smart Surveillance with 5G: A Performance Study in Adama City
Authors: Shenko Chura Aredo, Hailu Belay, Kevin T. Kornegay
Abstract:
In light of Adama City's smart city development vision, this study thoroughly investigates the performance of smart security systems with Fifth Generation (5G) network capabilities. It can be logistically difficult to install a lot of cabling, particularly in big or dynamic settings. Moreover, latency issues might affect linked systems, making it difficult for them to monitor in real time. Through a focused analysis that employs Adama City as a case study, the performance has been evaluated in terms of spectral and energy efficiency using empirical data and basic signal processing formulations at different frequency resources. The findings also demonstrate that cameras working at higher 5G frequencies have more capacity than those operating at sub-6 GHz, notwithstanding frequency-related issues. It has also been noted that when the beams of such cameras are adaptively focused based on the distance of the last cell-edge user rather than the maximum cell radius, less energy is required than with conventional fixed power ramping.
Keywords: 5G, energy efficiency, safety, smart security, spectral efficiency
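The kind of basic signal-processing formulation referred to can be illustrated with the Shannon bound; the bandwidths, SNR, and power figures below are assumptions for illustration, not measurements from the Adama City deployment.

```python
# Back-of-the-envelope spectral and energy efficiency from the Shannon bound.
import math

def spectral_efficiency(snr_db):
    return math.log2(1 + 10 ** (snr_db / 10))          # bit/s/Hz

def energy_efficiency(bandwidth_hz, snr_db, power_w):
    throughput = bandwidth_hz * spectral_efficiency(snr_db)
    return throughput / power_w                         # bit/J

for label, bw in [("sub-6 GHz, 100 MHz", 100e6), ("mmWave, 400 MHz", 400e6)]:
    se = spectral_efficiency(20)
    ee = energy_efficiency(bw, 20, 5.0)
    print(f"{label}: SE = {se:.2f} bit/s/Hz, EE = {ee:.2e} bit/J")
```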
Procedia PDF Downloads 19
18834 An Improved Approach to Solve Two-Level Hierarchical Time Minimization Transportation Problem
Authors: Kalpana Dahiya
Abstract:
This paper discusses a two-level hierarchical time minimization transportation problem, which is an important class of transportation problems arising in industries. This problem has been studied by various researchers, and a number of polynomial time iterative algorithms are available to find its solution. All the existing algorithms, though efficient, have some shortcomings. The current study proposes an alternate solution algorithm for the problem that is more efficient in terms of computational time than the existing algorithms. The results justifying the underlying theory of the proposed algorithm are given. Further, a detailed comparison of the computational behaviour of all the algorithms for randomly generated instances of this problem of different sizes validates the efficiency of the proposed algorithm.
Keywords: global optimization, hierarchical optimization, transportation problem, concave minimization
Procedia PDF Downloads 162
18833 Location Management in Wireless Sensor Networks with Mobility
Authors: Amrita Anil Agashe, Sumant Tapas, Ajay Verma Yogesh Sonavane, Sourabh Yeravar
Abstract:
Due to advancements in MEMS technology, wireless sensor networks have gained a lot of importance today. Their wide range of applications includes environmental and habitat monitoring, object localization, target tracking, security surveillance, etc. Wireless sensor networks consist of tiny sensor devices called motes. The constrained computation power, battery power, storage capacity, and communication bandwidth of the tiny motes pose challenging problems in the design and deployment of such systems. In this paper, we propose a ubiquitous framework for a Real-Time Tracking, Sensing and Management System using IITH motes. We also explain the algorithm that we have developed for location management in wireless sensor networks with mobility taken into account. Our developed framework and algorithm can be used to detect emergency events and safety threats and provide warning signals to handle the emergency.
Keywords: mobility management, motes, multihop, wireless sensor networks
Procedia PDF Downloads 418
18832 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data
Authors: Huinan Zhang, Wenjie Jiang
Abstract:
Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and the research and utilization of the inner thermal core structure characteristics of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. Firstly, the thermal core information in the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a coarse-grained wind speed range estimate in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage network wind speed estimation. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and an ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selection of tropical cyclones spans from 2012 to 2021, distributed in the North Atlantic (NA) region. The training set includes 2012 to 2017, the validation set includes 2018 to 2019, and the test set includes 2020 to 2021. Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclone levels into three major categories: pre-hurricane, minor hurricane, and major hurricane, with a classification accuracy rate of 86.18% and an intensity estimation error of 4.01 m/s for NA. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with this data.
Keywords: artificial intelligence, deep learning, data mining, remote sensing
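To illustrate the loss used for the long-tailed first stage, here is a common multi-class form of the focal loss in PyTorch; the three-class setup and the gamma and alpha values are illustrative and may not match the paper's exact configuration.

```python
# Multi-class focal loss sketch: down-weights well-classified examples so the
# rare (major hurricane) classes contribute more to the gradient.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=1.0):
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t
    p_t = torch.exp(-ce)
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()

logits = torch.randn(8, 3)                  # pre-hurricane / minor / major
targets = torch.randint(0, 3, (8,))
print(focal_loss(logits, targets))
```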
Procedia PDF Downloads 63
18831 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms
Authors: Man-Yun Liu, Emily Chia-Yu Su
Abstract:
Alzheimer's disease (AD) is the public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly disease for the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments for AD so far can only alleviate symptoms rather than cure or stop the progression of the disease. Currently, there are several ways to diagnose AD; medical imaging can be used to distinguish between AD, other dementias, and early-onset AD, as can cerebrospinal fluid (CSF) analysis. Compared with other diagnostic tools, a blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive, and also cost-effective. In our study, we used the blood biomarker dataset of the Alzheimer's Disease Neuroimaging Initiative (ADNI), which was funded by the National Institutes of Health (NIH), to perform data analysis and develop a prediction model. We used independent analysis of datasets to identify plasma protein biomarkers predicting early-onset AD. Firstly, to compare the basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Secondly, we used logistic regression, neural networks, and decision trees to validate biomarkers in SAS Enterprise Miner. This study used data from ADNI containing 146 blood biomarkers from 566 participants. Participants include cognitively normal (healthy) subjects, subjects with mild cognitive impairment (MCI), and patients suffering from Alzheimer's disease (AD). Participants' samples were separated into two group pairings: healthy and MCI, and healthy and AD. We used the two group pairings to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two groups (healthy and AD, healthy and MCI) before using machine learning algorithms. Then we built models with four machine learning methods; the best AUCs of the two group pairings are 0.991 and 0.709, respectively. We want to stress that the simple, less invasive, common blood (plasma) test may also enable early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study on the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning
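The screening pipeline described (univariate filtering followed by a classifier evaluated by AUC) can be sketched as follows; synthetic data stands in for the ADNI plasma biomarkers, the scikit-learn ANOVA F-test replaces the two-sample t-test (equivalent for two groups), and k=41 mirrors the number of filtered features reported.

```python
# Sketch of a filter-then-classify screening pipeline on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=566, n_features=146, n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=41),      # keep the 41 filtered biomarkers
                      LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print("AUC =", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```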
Procedia PDF Downloads 322
18830 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports
Authors: Jonardan Koner, Avinash Purandare
Abstract:
In today's globalized world, international business is a key area for a country's growth. Some of the strategic areas that support the growth of a country's international business are connecting ports, the road network, and the rail network. India's international business is booming in both exports and imports. Ports play a central part in the growth of international trade, and ensuring competitive ports is of critical importance. India has a long coastline, which is a big asset for the country, as it has given the opportunity for the development of a large number of major and minor ports that will contribute to the development of maritime trade. The national economic development of India requires a well-functioning seaport system. To assess the comparative strength of Indian ports against similar South-East Asian ports, the study considers the objectives of (I) identifying the key parameters of an international mega container port, (II) comparing the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to the users of the ports, and (III) measuring and comparing the growth of the selected five container ports' throughput over time. The study is based on both primary and secondary databases. A linear time trend analysis is done to show the trend in the quantum of exports, imports, and total goods/services handled by individual ports over the years. A comparative trend analysis is done for the cargo traffic handled by the selected five ports in terms of tonnage (weight) and number of containers (TEUs). A comparative trend analysis is also done between containerized and non-containerized cargo traffic in the five selected ports. The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference of factor ratings for the selected five ports, consolidated comparative line charts and bar charts of factor ratings for the selected five ports, and the distribution of ratings (in frequency terms). A linear regression model is used to forecast the container capacities required for JNPT Port and Chennai Port by the year 2030. Multiple regression analysis is carried out to measure the impact of 34 selected explanatory variables on the 'Overall Performance of the Port' for each of the selected five ports. The research outcome is of high significance to the stakeholders of Indian container handling ports. The Indian container ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, which are the competing ports in the neighbouring region. The study has analysed the feedback ratings for the selected 35 factors regarding physical infrastructure and services rendered to the port users. This feedback would provide valuable data for carrying out improvements in the facilities provided to the port users. These improvements would help the ports' users to carry out their work in a more efficient manner.
Keywords: throughput, twenty-foot equivalent units, TEUs, cargo traffic, shipping lines, freight forwarders
Procedia PDF Downloads 131
18829 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model
Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini
Abstract:
The correct allocation of improvement programs has been of growing interest in recent years. Due to their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be most effective and to survive in the face of strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is a research gap which is studied in depth in this work. The purpose of this work is to identify the best strategy to allocate improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units on average per month. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair. The lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement strategy towards the two capacity constrained resources, (x) time between failures improvement strategy towards the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed, and hybrid. Several comparisons of the effects of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to the academic literature is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature. Another important contribution is the problem of allocation with two CCRs and the possibility of having floating capacity constrained resources. The results provided the best improvement strategies considering the different strategies for the allocation of improvement programs and different positions of the capacity constrained resources. Finally, it is possible to state that both strategies, hybrid time to repair improvement and hybrid time between failures improvement, delivered better results compared to the respective distributed strategies. The main limitations of this study concern the flow shop analyzed.
Future work can further investigate different flow shop configurations, such as a varying number of workstations, different numbers of products, or even different positions of the two capacity constrained resources.
Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures
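To give a feel for why the MTBF/MTTR allocation matters, the rough sketch below uses the standard availability relation A = MTBF / (MTBF + MTTR) and treats the worst capacity constrained station as the pacer of the line; the numbers, the way the improvement budget is split, and this simplified line model are assumptions, not the study's System Dynamics-Factory Physics simulation.

```python
# Crude comparison of focused vs. distributed MTTR improvement at two CCRs.
def availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

stations = {"CCR-1": (40.0, 4.0), "CCR-2": (35.0, 5.0)}   # (MTBF, MTTR) in hours

def line_availability(stations):
    # proxy: the line is paced by its least-available constrained station
    return min(availability(mtbf, mttr) for mtbf, mttr in stations.values())

focused = dict(stations)
focused["CCR-2"] = (35.0, 5.0 * 0.5)                       # whole budget on the worst CCR
distributed = {k: (mtbf, mttr * 0.75) for k, (mtbf, mttr) in stations.items()}

print(f"baseline    {line_availability(stations):.3f}")
print(f"focused     {line_availability(focused):.3f}")
print(f"distributed {line_availability(distributed):.3f}")
```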
Procedia PDF Downloads 124
18828 From Name-Calling to Insidious Rhetoric: Construction and Evolution of the Transgender Imagery in News Discourse, 1953-2016
Authors: Hsiao-Yung Wang
Abstract:
This essay aims to examine how transgender imagery has been constructed in the Taiwanese news media and how it evolved from 1953 to 2016. It also explores the discourse patterns and rhetorical strategies in transgender-related issues that contributed to the levels of evaluation involved in forming 'social deviance.' Samples for analysis were selected from mainstream newspapers, including China Times, United Daily and Apple Daily. The time frame for sample selection is from August 1953 (when the first transgender case was reported in Taiwan) to June 2016. To enhance understanding of media representation as nominalistic-based, the author draws on the representative critical rhetoric scholar Raymie McKerrow and his study of remembrance and forgetfulness in public discourse (especially his model of the 'critique of domination'), thereby categorizing the 64 years of transgender discourse into five periods: (1) transgender as 'intersex' subject to surgical-reparative medical treatment; (2) transgender as 'freak gender-bender' with criminal behaviors; (3) transgender as 'ladyboy' ('katoey' in Thai) associated with bar girls or sex workers; (4) transgender as 'cross dresser' of transvestite performance; and (5) transgender as 'lifestyle or human right' of spontaneous gender identification. Based on the research findings, this essay argues that transgender reporting has been characterized as a site for the production of compulsory sexism and gender stereotypes through specific forms of name-calling. Besides, the evolution of the word-imagery addressing transgender issues also pinpoints the media as a reflection of the fashion of the day. While transgender imagery might be crystallized as 'still social problems' or 'gender transgression' in insidious rhetoric, and while the so-called 'phobia' is persistently embodied in media discourse to exercise name-calling in an ambiguous (rather than a bullying) way or under the cover of humanist-liberalist rationales, these emergent rhetorical dilemmas should be resolved without delay.
Keywords: critical rhetoric, media representation, McKerrow, nominalistic, social deviance, transgender
Procedia PDF Downloads 312
18827 Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media
Authors: Jinghui Peng, Shanyu Tang, Jia Li
Abstract:
Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is an imminent demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on the Fast Fourier Transform (FFT) can be used to detect the existence of secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data is then transferred from the time domain to the frequency domain through the FFT. The distributions of power spectra in the frequency domain of original VoIP streams and stego VoIP streams are compared in turn using a t-test, achieving a p-value of 7.5686E-176, which is below the threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting the secret data embedded in VoIP streaming media.
Keywords: steganalysis, security, Fast Fourier Transform, streaming media
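The detection idea can be sketched in a few lines: transform both streams with the FFT and compare the power-spectrum distributions with a t-test; the synthetic signals below merely stand in for real VoIP payloads.

```python
# FFT power spectra of a clean and a suspect stream compared with a t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
clean = rng.normal(size=4096)                          # stand-in for an original VoIP stream
stego = clean + 0.05 * rng.integers(0, 2, size=4096)   # low-amplitude embedded bits

def power_spectrum(x):
    return np.abs(np.fft.rfft(x)) ** 2

t_stat, p_value = stats.ttest_ind(power_spectrum(clean), power_spectrum(stego))
print(f"t = {t_stat:.3f}, p = {p_value:.3e}")          # a very small p suggests hidden data
```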
Procedia PDF Downloads 147
18826 Rescaled Range Analysis of Seismic Time-Series: Example of the Recent Seismic Crisis of Alhoceima
Authors: Marina Benito-Parejo, Raul Perez-Lopez, Miguel Herraiz, Carolina Guardiola-Albert, Cesar Martinez
Abstract:
Persistency, long-term memory, and randomness are intrinsic properties of time series of earthquakes. The Rescaled Range Analysis (RS-Analysis) was introduced by Hurst in 1956 and modified by Mandelbrot and Wallis in 1964. This method represents a simple and elegant analysis which determines the range of variation of one natural property (the seismic energy released, in this case) in a time interval. Despite the simplicity, there is complexity inherent in the property measured. The cumulative curve of the energy released in time has the well-known fractal geometry of a devil's staircase. This geometry is used for determining the maximum and minimum values of the range, which is normalized by the standard deviation. The rescaled range obtained obeys a power law with time, and the exponent is the Hurst exponent. Depending on this value, time series can be classified as having long-term or short-term memory. Hence, an algorithm has been developed for compiling the RS-Analysis for time series of earthquakes by days. Completeness of the time distribution and local stationarity of the time series are required. The interest of this analysis lies in its application to a complex seismic crisis where different earthquakes take place in clusters in a short period. Therefore, the Hurst exponent has been obtained for the seismic crisis of Alhoceima (Mediterranean Sea) of January-March 2016, where at least five medium-sized earthquakes were triggered. According to the values of the Hurst exponent obtained for each cluster, a different mechanical origin can be detected, corroborated by the focal mechanisms calculated by the official institutions. Therefore, this type of analysis not only allows an approach to a greater understanding of a seismic series but also makes it possible to discern different types of seismic origins.
Keywords: Alhoceima crisis, earthquake time series, Hurst exponent, rescaled range analysis
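A minimal rescaled-range estimate of the Hurst exponent for a daily energy-release series can be written as below; the random placeholder series and the chunk sizes are illustrative, not the Alhoceima catalogue.

```python
# Minimal R/S estimate of the Hurst exponent (H ~ 0.5 random, H > 0.5 persistent).
import numpy as np

def hurst_rs(series, min_chunk=8):
    series = np.asarray(series, dtype=float)
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()          # range of cumulative deviations
            s = chunk.std()
            if s > 0:
                rs.append(r / s)               # rescaled range of this chunk
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope                               # Hurst exponent from the log-log fit

daily_energy = np.random.default_rng(1).lognormal(size=512)
print(f"H = {hurst_rs(daily_energy):.2f}")
```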
Procedia PDF Downloads 321
18825 Calycosin Ameliorates Osteoarthritis by Regulating the Imbalance Between Chondrocyte Synthesis and Catabolism
Authors: Hong Su, Qiuju Yan, Wei Du, En Hu, Zhaoyu Yang, Wei Zhang, Yusheng Li, Tao Tang, Wang yang, Shushan Zhao
Abstract:
Osteoarthritis (OA) is a severe chronic inflammatory disease. As the main active component of Astragalus mongholicus Bunge, a classic traditional ethnic herb, calycosin exhibits anti-inflammatory action, but its mechanism and exact targets in OA have yet to be determined. In this study, we established an anterior cruciate ligament transection (ACLT) mouse model. Mice were randomized to sham, OA, and calycosin groups. The cartilage synthesis markers type II collagen (Col-2) and SRY-Box Transcription Factor 9 (Sox-9) increased significantly after calycosin gavage, while the expression of the cartilage matrix degradation indices cyclooxygenase-2 (COX-2), phospho-epidermal growth factor receptor (p-EGFR), and matrix metalloproteinase-9 (MMP9) decreased. With the help of network pharmacology and molecular docking, these results were confirmed in chondrocyte ATDC5 cells. Our results indicated that calycosin treatment significantly improved cartilage damage, which was probably attributable to reversing the imbalance between chondrocyte synthesis and catabolism.
Keywords: calycosin, osteoarthritis, network pharmacology, molecular docking, inflammatory, cyclooxygenase 2
Procedia PDF Downloads 102
18824 A Genetic Algorithm Approach for Multi Constraint Team Orienteering Problem with Time Windows
Authors: Uyanga Sukhbaatar, Ahmed Lbath, Mendamar Majig
Abstract:
The Orienteering Problem (OP) is the best-known starting point for modeling the tourist trip design problem. In order to meet tourists' interests and constraints, the OP is becoming more and more complicated to solve. The Multi Constraint Team Orienteering Problem with Time Windows (MCTOPTW) is the latest extension of the OP, which differs from other extensions by including extra associated constraints. The goal of the MCTOPTW is to maximize the tourist's satisfaction score while not violating any of these constraints. This paper presents a genetic algorithmic approach to tackle the MCTOPTW. Benchmark data from the literature are used to test our algorithm, and the performance results are compared.
Keywords: multi constraint team orienteering problem with time windows, genetic algorithm, tour planning system
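A bare-bones genetic-algorithm skeleton for this kind of route-selection problem is sketched below; the scoring stub, the population sizes, and the operators are placeholders, and a real MCTOPTW fitness would also have to check time windows and the extra constraints.

```python
# Skeleton GA over candidate visit orders; fitness is a stub, constraints omitted.
import random

rng = random.Random(42)
POIS = list(range(20))                              # candidate points of interest
SCORES = {p: rng.randint(1, 10) for p in POIS}

def fitness(route):
    # stub: satisfaction of the first 8 visits assumed to fit the time budget
    return sum(SCORES[p] for p in route[:8])

def crossover(a, b):
    cut = rng.randint(1, len(a) - 2)
    return a[:cut] + [p for p in b if p not in a[:cut]]

def mutate(route, rate=0.1):
    route = route[:]
    if rng.random() < rate:
        i, j = rng.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]     # swap two visits
    return route

population = [rng.sample(POIS, len(POIS)) for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [mutate(crossover(*rng.sample(parents, 2))) for _ in range(20)]
    population = parents + children
best = max(population, key=fitness)
print("best satisfaction score:", fitness(best))
```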
Procedia PDF Downloads 626
18823 The Safety Transfer in Acute Critical Patient by Telemedicine (START) Program at Udonthani General Hospital
Authors: Wisit Wichitkosoom
Abstract:
Objective: The majority of high-risk patients (ST-elevation myocardial infarction (STEMI), acute cerebrovascular accident, sepsis, acute trauma) are admitted to district or local hospitals (on average 1-1.30 hr from Udonthani General Hospital, Northeastern province, Thailand) without proper facilities. The referral system was designed to support early care and early management at the pre-hospital stage and to prepare the patient data for the higher-level hospital. This study assessed the reduction in treatment delay achieved by pre-hospital diagnosis and referral directly to Udonthani General Hospital. Methods and results: Four district or local hospitals without proper facilities for treating very high-risk patients were serving the study region. Pre-hospital diagnoses were established with simple technologies such as LINE, SMS, telephone, and fax, following the LEAN process concept, and then telemedicine, by ambulance monitoring (ECG, SpO2, BT, BP) in both real-time and snapshot mode, was administered during the transfer period under the safe-transfer concept (inter-hospital stage). The standard treatments for patients with STEMI, intracranial injury, and acute cerebrovascular accident were given. From 1 October 2012 to 30 September 2013, 892 high-risk patients transported by ambulance and transferred to Udonthani General Hospital were registered. Patients with STEMI were diagnosed pre-hospitally and referred directly to Udonthani General Hospital with closed telemedicine monitoring (n=248). The mortality rate decreased from 11.69% in 2011 to 6.92% in 2012. Thirty-four patients went into cardiac arrest on the way, and CPR during transfer with telemedicine consultation was successful in 79.41% of them. Conclusion: The proper innovation can be applied to the healthcare system. Very high-risk patients must have closed monitoring with two-way communication during the 'safety transfer period'. This approach could be modified for other high-risk groups too.
Keywords: safety transfer, telemedicine, critical patients, medical and health sciences
Procedia PDF Downloads 306
18822 The Environmental Benefits of the Adoption of Emission Control for Locomotives in Brazil
Authors: Rui de Abrantes, André Luiz Silva Forcetto
Abstract:
Air pollution is a big problem in many cities around the world. Big Brazilian cities also have this problem, where millions of people are exposed daily to pollutant levels above those recommended by the WHO. Brazil has taken several actions to reduce air pollution, among them controlling the atmospheric emissions from vehicles, non-road mobile machinery, and motorcycles; on the other hand, there are no emission controls for locomotives, which expose the population to tons of pollutants annually. The rail network is not homogeneously distributed in the national territory; it is denser near the big cities, and in this way, the population is more exposed to pollutants. Apart from that, the government intends to expand the rail network as one of the strategies for greenhouse gas mitigation, complying with the international agreements against climate change. This paper initially presents the estimated emissions from locomotive fleets with no emission control and with emission control equivalent to US Tier 3 from 2028 onward, for the next 20 years. However, we realized that a program equivalent to the Tier 3 phase would not be effective, so we proposed a program in two steps that will avoid the release of more than 2.4 million tons of CO, 531,000 tons of hydrocarbons, 3.7 million tons of nitrogen oxides, and 102,000 tons of particulate matter in 20 years.
Keywords: locomotives, emission control, air pollution, pollutants emission
Procedia PDF Downloads 49
18821 A Fast Algorithm for Electromagnetic Compatibility Estimation for Radio Communication Network Equipment in a Complex Electromagnetic Environment
Authors: C. Temaneh-Nyah
Abstract:
Electromagnetic compatibility (EMC) is the ability of Radio Communication Equipment (RCE) to operate with a desired quality of service in a given Electromagnetic Environment (EME) without creating harmful interference with other RCE. This paper presents an algorithm which improves the simulation speed of estimating the EMC of RCE in a complex EME, based on a stage-by-stage frequency-energy filtering criterion. The algorithm considers different interference types, including blocking and intermodulation. It consists of the following steps: a simplified energy criterion, where filtration is based on comparing the free-space interference level to the industrial noise; a frequency criterion, which checks whether the interfering emission characteristics overlap with the receiver's channel characteristics; and lastly a detailed energy criterion, where the real channel interference level is compared to the noise level. In each of these stages, some interference cases are filtered out by the relevant criteria. This reduces the total number of dual and different combinations of RCE involved in the tedious detailed energy analysis and thus provides an improved simulation speed.
Keywords: electromagnetic compatibility, electromagnetic environment, simulation of communication network
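The stage-by-stage filtering idea can be sketched as a cascade of cheap tests that prune interferer/receiver pairs before the expensive detailed analysis; the pair model, the field names, and the threshold logic below are illustrative assumptions, not the paper's criteria in detail.

```python
# Cascade of filtering stages over hypothetical interferer/receiver pairs.
def simplified_energy(pair):
    # stage 1: free-space interference level vs. industrial noise
    return pair["free_space_level_dbm"] > pair["industrial_noise_dbm"]

def frequency_overlap(pair):
    # stage 2: do the emission and receiver channel bands overlap?
    tx_lo, tx_hi = pair["tx_band_mhz"]
    rx_lo, rx_hi = pair["rx_band_mhz"]
    return tx_lo <= rx_hi and rx_lo <= tx_hi

def detailed_energy(pair):
    # stage 3: real channel interference level vs. noise level (the costly check)
    return pair["channel_level_dbm"] > pair["noise_floor_dbm"]

def emc_conflicts(pairs):
    survivors = [p for p in pairs if simplified_energy(p)]
    survivors = [p for p in survivors if frequency_overlap(p)]
    return [p for p in survivors if detailed_energy(p)]

pairs = [
    {"free_space_level_dbm": -70, "industrial_noise_dbm": -95,
     "tx_band_mhz": (430, 440), "rx_band_mhz": (435, 445),
     "channel_level_dbm": -80, "noise_floor_dbm": -100},
    {"free_space_level_dbm": -110, "industrial_noise_dbm": -95,
     "tx_band_mhz": (430, 440), "rx_band_mhz": (900, 910),
     "channel_level_dbm": -120, "noise_floor_dbm": -100},
]
print(len(emc_conflicts(pairs)), "interference case(s) confirmed")
```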
Procedia PDF Downloads 218
18820 Microwave Assisted Extraction (MAE) of Castor Oil from Castor Bean
Authors: Ghazi Faisal Najmuldeen, Rosli Mohd Yunus, Nurfarahin Bt Harun, Mardhiana Binti Ismail
Abstract:
Microwave extraction has attracted great interest among researchers. The main virtues of the microwave technique are its cost-effectiveness, time saving, and simple handling procedure. Castor beans were chosen because of their high fatty acid content, especially ricinoleic acid. The purpose of this research is to extract castor oil by microwave assisted extraction (MAE) using ethanol as the solvent, to investigate the influence of extraction time on castor oil yield, and to characterize the main composition of the produced castor oil using GC-MS. It was found that there is a direct dependence between the oil yield and the extraction time, as the yield increases from 45% to 58% when the time increases from 10 min to 60 min. The major components of castor oil detected by GC-MS were ricinoleic acid, linoleic acid, and oleic acid.
Keywords: microwave assisted extraction (MAE), castor oil, ricinoleic acid, linoleic acid
Procedia PDF Downloads 505
18819 Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation
Authors: Hamed Alqahtani, Manolya Kavakli-Thorne
Abstract:
The large pose discrepancy is one of the critical challenges in face recognition during video surveillance. Due to the entanglement of pose attributes with identity information, conventional approaches for pose-independent representation fall short of providing quality results in recognizing largely posed faces. In this paper, we propose a practical approach to disentangle the pose attribute from the identity information, followed by synthesis of a face using a classifier network in the latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in the manifold space for carrying out factorization of the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with the state of the art in the field show that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses.
Keywords: disentanglement, face detection, generative adversarial networks, video surveillance
Procedia PDF Downloads 129