Search results for: oriented network
335 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination
Authors: Gilberto Goracci, Fabio Curti
Abstract:
This work presents a robust, lightweight, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real-time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice to perform real-time orbit determination without the need for additional sensors on the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models, in fact, can grasp nonlinear relations between the inputs, in this case the magnetometer data and the EKF state estimations, and the targets, namely the true position and velocity of the spacecraft. The model has been pre-trained on Sun-Synchronous Orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden. Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit onboard in real-time and work autonomously during GPS outages. In this way, the provided module shows versatility, as it can be applied to any mission operating in SSO, but at the same time the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results provided by this study show an increase of one order of magnitude in the precision of the state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.
Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field
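A minimal sketch of the EKF-plus-RNN refinement idea described above, assuming an LSTM that maps a window of magnetometer readings and EKF state estimates to the residual between the EKF estimate and the true state; the layer sizes, tensor shapes, and names are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class EKFRefiner(nn.Module):
    """Predicts a correction to the last EKF state estimate in a window."""
    def __init__(self, n_meas=3, n_state=6, hidden=64):
        super().__init__()
        # per time step: 3 magnetometer axes + 6 EKF state components
        self.lstm = nn.LSTM(n_meas + n_state, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_state)  # predicted state residual

    def forward(self, mag_seq, ekf_seq):
        x = torch.cat([mag_seq, ekf_seq], dim=-1)  # (batch, T, 9)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])               # residual at last step

model = EKFRefiner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(mag_seq, ekf_seq, true_state):
    # pre-training on simulated SSO trajectories happens offline, before launch;
    # the same loop could fine-tune onboard against GPS fixes when available
    opt.zero_grad()
    corrected = ekf_seq[:, -1] + model(mag_seq, ekf_seq)
    loss = loss_fn(corrected, true_state)
    loss.backward()
    opt.step()
    return loss.item()
```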
Procedia PDF Downloads 103
334 The Idea of Building a Reservoir Under the Ground in the Mekong Delta in Vietnam
Authors: Huu Hue Van
Abstract:
The Mekong Delta is the region in southwestern Vietnam where the Mekong River approaches and flows into the sea through a network of distributaries. The Climate Change Research Institute at the University of Can Tho, in studying the possible consequences of climate change, has predicted that many provinces in the Mekong Delta will be flooded by the year 2030. The Mekong Delta lacks fresh water in the dry season. The water serving daily life, industry and agriculture in the dry season is mainly taken from the water-bearing soil layers under the ground (aquifers); this water has been depleted, and the water level in the aquifers has decreased. The Mekong Delta therefore faces two bad scenarios in the future: 1) The Mekong Delta will be submerged into the sea again, due to subsidence of the ground (over-exploitation of groundwater); subsidence of constructions because of the low groundwater level (10 years ago, some constructions were built on foundations of Melaleuca poles planted in the Mekong Delta; Melaleuca poles have to stay fully within the saturated soil layer, otherwise they decay easily, so when the tops of the poles sit above the groundwater level, they decay and cause subsidence); erosion of the river banks (the hydroelectric dams in the upstream of the Mekong River block the flow and reduce the concentration of suspended substances in it, causing erosion of the river banks); and flooding of the delta because of sea level rise (climate change). 2) The Mekong Delta will be deserted: people will migrate to other places to make a living because cultivation fails due to capillary rise of alum (in the Mekong Delta there is a layer of alum soil under the ground; when the groundwater level falls below this layer, alum rises by capillary action into the arable soil layer), and because there is no fresh water for cultivation and daily life (saline intrusion and groundwater depletion in the aquifers below). The Mekong Delta currently has about seven aquifers below, with a total depth of about 500 m. Water has mainly been exploited from the middle-upper Pleistocene aquifer (qp2-3). The major cause of both bad scenarios is over-exploitation of water in the aquifers. Therefore, studying and building water reservoirs in the seven aquifers would solve many pressing problems, such as preventing subsidence; providing water for the whole delta, especially the coastal provinces; being favorable to nature; saving land (building a water lake on the surface of the delta would require a lot of land); and limiting pollution (hydraulic structures built to prevent salt intrusion and to store water in a surface lake cause pollution in that lake). It is necessary to build a reservoir under the ground in the aquifers of the Mekong Delta. This super-sized reservoir would contribute to the existence and development of the Mekong Delta.
Keywords: aquifers, aquifer storage, groundwater, land subsidence, underground reservoir
Procedia PDF Downloads 83
333 The Potential of Key Diabetes-related Social Media Influencers in Health Communication
Authors: Zhaozhang Sun
Abstract:
Health communication is essential in promoting healthy lifestyles, preventing unhealthy behaviours, managing disease conditions, and eventually reducing health disparities. Nowadays, social media provides unprecedented opportunities for enhancing health communication for both healthcare providers and people with health conditions, including self-management of chronic conditions such as diabetes. Meanwhile, a special group of active social media users have started playing a pivotal role in providing health 'solutions'. Such individuals are often referred to as 'influencers' because of their 'central' position in the online communication system and the persuasive effect their actions and advice may have on audiences' health-related knowledge, attitudes, confidence and behaviours. Work on social media influencers (SMIs) has gained much attention in the specific research field of 'influencer marketing', which mainly focuses on the use of SMIs to promote or endorse brands' products and services. Yet to date, little well-designed empirical research has been conducted to guide the exploration of health-related social media influencers. The failure to investigate health-related SMIs can significantly limit the effectiveness of communicating health on social media. Therefore, this article presents a study to identify key diabetes-related SMIs in the UK and the potential implications of information provided by the identified social media influencers on their audiences' diabetes-related knowledge, attitudes and behaviours, to bridge the research gap that exists in linking work on influencers in marketing to health communication. Multidisciplinary theories and methods in social media, communication, marketing and diabetes have been adopted, seeking to provide a more practical and promising approach to investigate the potential of social media influencers in health communication. Twitter was chosen as the social media platform to initially identify health influencers, and the academic Twitter API was used to extract all the qualitative data. A health-related influencer identification model was developed based on social network analysis, the analytic hierarchy process and other screening criteria. Meanwhile, a two-section English-version online questionnaire has been developed to explore the potential implications of social media influencers' (SMIs') diabetes-related narratives on the health-related knowledge, attitudes and behaviours (KAB) of their audience. The paper is organised as follows: first, the theoretical and research background of health communication and social media influencers is discussed. Second, the methodology is described by illustrating the model for the identification of health-related SMIs and the development process of the SMIKAB instrument, followed by the results and discussions. The limitations and contributions of this study are highlighted in the summary.
Keywords: health communication, interdisciplinary research, social media influencers, diabetes management
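One plausible first step of the identification model sketched above is ranking candidate accounts by their position in a diabetes-related mention/retweet graph; a minimal sketch with networkx, where the graph construction, the follower threshold, and in-degree centrality as the proxy for a 'central' position are all assumptions:

```python
import networkx as nx

def rank_candidate_influencers(edges, profiles, min_followers=1000, top_k=20):
    """edges: iterable of (source_user, target_user) mention/retweet pairs;
    profiles: dict mapping user -> follower count."""
    g = nx.DiGraph()
    g.add_edges_from(edges)
    # users who receive many mentions/retweets sit 'centrally' in the network
    centrality = nx.in_degree_centrality(g)
    candidates = [(u, c) for u, c in centrality.items()
                  if profiles.get(u, 0) >= min_followers]
    return sorted(candidates, key=lambda uc: uc[1], reverse=True)[:top_k]
```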
Procedia PDF Downloads 115
332 Improved Traveling Wave Method Based Fault Location Algorithm for Multi-Terminal Transmission System of Wind Farm with Grounding Transformer
Authors: Ke Zhang, Yongli Zhu
Abstract:
Due to rapid load growth in today's highly electrified societies and the requirement for green energy sources, large-scale wind farm power transmission systems are constantly developing. Such a system is a typical multi-terminal power supply system whose transmission line network topology is complex. Moreover, it is located in the complex terrain of mountains and grasslands, which increases the possibility of transmission line faults, makes finding the fault location after a fault difficult, and results in an extremely serious phenomenon of wind abandonment (curtailment). In order to solve these problems, a fault location method for multi-terminal transmission lines based on wind farm characteristics and an improved single-ended traveling wave positioning method is proposed. By studying the zero sequence current characteristics through the grounding transformer (GT) in existing large-scale wind farms, a criterion for judging the fault interval of the multi-terminal transmission line is obtained. When a ground short-circuit fault occurs, there is zero sequence current only on the path between the GT and the fault point. Therefore, the interval where the fault point exists is obtained by determining the path of the zero sequence current. After determining the fault interval, the location of the short-circuit fault point is calculated by the traveling wave method. However, this article uses an improved traveling wave method: it makes the positioning more accurate by combining the single-ended traveling wave method with double-ended electrical data. Furthermore, a method of calculating the traveling wave velocity is deduced from the above improvements (in theory, it yields the actual wave velocity). The improved traveling wave velocity calculation further increases the positioning accuracy. Compared with the traditional positioning method, the average positioning error of this method is reduced by 30%. This method overcomes the shortcomings of the traditional method in the fault location of wind farm transmission lines. In addition, it is more accurate than the traditional fixed wave velocity method, as it can calculate the wave velocity in real time according to the scene, solving the problem that a fixed traveling wave velocity cannot be updated with the environment in real time. The method is verified in PSCAD/EMTDC.
Keywords: grounding transformer, multi-terminal transmission line, short circuit fault location, traveling wave velocity, wind farm
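An illustrative reconstruction (not necessarily the authors' exact equations) of how single-ended reflection times and double-ended arrival times can jointly yield both the fault distance and the actual wave velocity, which is the gist of the improvement described above. With tA and tB the first arrivals at terminals A and B of a line of length L, and tA2 the arrival at A of the wave reflected back from the fault, tB - tA = (L - 2x)/v and tA2 - tA = 2x/v; adding the two equations eliminates x:

```python
def locate_fault(L, tA, tB, tA2):
    """Return fault distance x from terminal A and the measured wave velocity v."""
    v = L / (tB + tA2 - 2.0 * tA)  # velocity from measured times, not a fixed constant
    x = 0.5 * v * (tA2 - tA)       # single-ended relation with the measured velocity
    return x, v

# synthetic check: 100 km line, fault at 30 km, true velocity 2.95e5 km/s
v_true, x_true, L = 2.95e5, 30.0, 100.0
tA, tB, tA2 = x_true / v_true, (L - x_true) / v_true, 3 * x_true / v_true
print(locate_fault(L, tA, tB, tA2))  # ~(30.0, 295000.0)
```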
Procedia PDF Downloads 262
331 Sources of Precipitation and Hydrograph Components of the Sutri Dhaka Glacier, Western Himalaya
Authors: Ajit Singh, Waliur Rahaman, Parmanand Sharma, Laluraj C. M., Lavkush Patel, Bhanu Pratap, Vinay Kumar Gaddam, Meloth Thamban
Abstract:
The Himalayan glaciers are the potential source of perennial water supply to Asia's major river systems like the Ganga, Brahmaputra and the Indus. In order to improve our understanding of the sources of precipitation and hydrograph components in the interior Himalayan glaciers, it is important to decipher the sources of moisture and their contribution to the glaciers in this river system. In doing so, we conducted an extensive pilot study on the Sutri Dhaka glacier, western Himalaya, during 2014-15. To determine the moisture sources, rain, surface snow, ice, and stream meltwater samples were collected and analyzed for stable oxygen (δ¹⁸O) and hydrogen (δD) isotopes. A two-component hydrograph separation was performed for the glacier stream using these isotopes, assuming that the contribution of rain, groundwater and spring water is negligible based on field studies and the available literature. To validate the results obtained from hydrograph separation using the above method, snow and ice melt ablation were measured using a network of bamboo stakes and snow pits. The δ¹⁸O and δD in rain samples range from -5.3‰ to -20.8‰ and -31.7‰ to -148.4‰, respectively. It is noteworthy that the rain samples showed enriched values in the early season (July-August) and progressively got depleted at the end of the season (September). This could be due to the 'amount effect'. Similarly, old snow samples showed enriched isotopic values compared to fresh snow. This could be because of sublimation processes operating on the old surface snow. The δ¹⁸O and δD values in glacier ice samples range from -11.6‰ to -15.7‰ and -31.7‰ to -148.4‰, whereas in the Sutri Dhaka meltwater stream, they range from -12.7‰ to -16.2‰ and -82.9‰ to -112.7‰, respectively. The mean deuterium excess (d-excess) value in all collected samples exceeds 16‰, which suggests that the predominant moisture source of precipitation is the Western Disturbances. Our detailed estimates of the hydrograph separation of Sutri Dhaka meltwater using isotope hydrograph separation and glaciological field methods agree within their uncertainty; the stream meltwater budget is dominated by glacier ice melt over snowmelt. The present study provides insights into the sources of moisture and the controlling mechanism of the isotopic characteristics of Sutri Dhaka glacier water, and helps in understanding the snow and ice melt components in the Chandra basin, Western Himalaya.
Keywords: d-excess, hydrograph separation, Sutri Dhaka, stable water isotopes, western Himalaya
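A minimal sketch of the two-component isotope hydrograph separation used above: the stream is treated as a mixture of ice melt and snowmelt, and the ice-melt fraction follows from a δ¹⁸O mass balance (values in ‰; the numbers below are illustrative, not the paper's data):

```python
def ice_melt_fraction(delta_stream, delta_ice, delta_snow):
    # mass balance: f * delta_ice + (1 - f) * delta_snow = delta_stream
    return (delta_stream - delta_snow) / (delta_ice - delta_snow)

f = ice_melt_fraction(delta_stream=-14.5, delta_ice=-13.5, delta_snow=-18.0)
print(f"ice-melt fraction of streamflow: {f:.2f}")  # ~0.78, ice-melt dominated
```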
Procedia PDF Downloads 150
330 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection
Authors: Mahshid Arabi
Abstract:
With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but simultaneously, vulnerabilities and security threats have significantly increased. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forest and Neural Networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, T-Test and ANOVA statistical tests were employed, and results were measured using accuracy, sensitivity, and specificity indicators of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the comprehensive proposed model reduced cyber-attacks by an average of 85%. Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and continuous training of human resources.
Keywords: data protection, digital technologies, information security, modern management
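A hedged sketch of two building blocks named in the proposed model, symmetric encryption for stored data and a Random Forest intrusion detector, using the Python cryptography and scikit-learn packages; the feature choices and parameters are illustrative assumptions, not the study's exact configuration:

```python
import numpy as np
from cryptography.fernet import Fernet  # AES-128-CBC with HMAC authentication
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# data protection: encrypt and recover a record with a symmetric key
key = Fernet.generate_key()
cipher = Fernet(key)
token = cipher.encrypt(b"customer record 42")
assert cipher.decrypt(token) == b"customer record 42"

# intrusion detection: fit a Random Forest on labeled traffic features
def fit_ids(X: np.ndarray, y: np.ndarray):
    """X: numeric features (e.g., packet counts, durations); y: 0 benign, 1 attack."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)  # holdout accuracy
```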
Procedia PDF Downloads 28
329 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence
Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang
Abstract:
Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, the implicit FNO (IFNO), and the U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models. In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics
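A minimal 1D sketch of the core mechanism, assuming the standard FNO recipe: a Fourier layer mixes a truncated set of spectral modes plus a pointwise skip path, and the 'implicit' variant applies the same layer recurrently to deepen the network without adding parameters. The paper works in 3D; the widths and mode counts here are illustrative:

```python
import torch
import torch.nn as nn

class FourierLayer1d(nn.Module):
    def __init__(self, width=32, modes=12):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (width * width)
        self.weights = nn.Parameter(
            scale * torch.randn(width, width, modes, dtype=torch.cfloat))
        self.skip = nn.Conv1d(width, width, 1)  # local (pointwise) path

    def forward(self, x):                        # x: (batch, width, n)
        x_ft = torch.fft.rfft(x)                 # to spectral space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1)) + self.skip(x)

class ImplicitFNO1d(nn.Module):
    """One shared Fourier layer applied n_iter times: the implicit recurrence."""
    def __init__(self, width=32, modes=12, n_iter=4):
        super().__init__()
        self.layer = FourierLayer1d(width, modes)
        self.n_iter = n_iter
        self.act = nn.GELU()

    def forward(self, x):
        for _ in range(self.n_iter):
            x = self.act(self.layer(x))          # same weights on every pass
        return x
```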
Procedia PDF Downloads 73
328 How Virtualization, Decentralization, and Network-Building Change the Manufacturing Landscape: An Industry 4.0 Perspective
Authors: Malte Brettel, Niklas Friederichsen, Michael Keller, Marius Rosenberg
Abstract:
The German manufacturing industry has to withstand an increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely under the relocation of production facilities towards aspiring countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies from the German manufacturing industry adjust their production, focusing on customized products and fast time to market. Leveraging the advantages of novel production strategies such as Agile Manufacturing and Mass Customization, manufacturing companies transform into integrated networks in which companies unite their core competencies. Hereby, virtualization of the process and supply chain ensures smooth inter-company operations, providing real-time access to relevant product and production information for all participating entities. Boundaries of companies deteriorate as autonomous systems exchange data gained by embedded systems throughout the entire value chain. By including Cyber-Physical Systems, advanced communication between machines is tantamount to their dialogue with humans. The increasing utilization of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the developments of Industry 4.0 within the literature and reviews the associated research streams. Hereby, we analyze eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks. We employ cluster analysis to assign sub-topics into the respective research field. To assess the practical implications, we conducted face-to-face interviews with managers from the industry as well as from the consulting business, using a structured interview guideline. The results reveal reasons for the adoption and refusal of Industry 4.0 practices from a managerial point of view. Our findings contribute to the upcoming research stream of Industry 4.0 and support decision-makers in assessing their need for transformation towards Industry 4.0 practices.
Keywords: Industry 4.0, mass customization, production networks, virtual process-chain
Procedia PDF Downloads 275327 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbon from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. The objective of this research is to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including the reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover the valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance the knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operation, which leads to better designs, higher oil recovery and economic return of future wells in the unconventional oil reserves.
Keywords: big data, artificial intelligence, enhanced oil recovery, unconventional oil reserves
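A hedged sketch of the first analysis steps described above, standardizing the well database, compressing it with principal component analysis, and partitioning wells with K-means; the column handling is a simplifying assumption, since the real database mixes geological, geophysical, completion, and production attributes:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_wells(df: pd.DataFrame, n_clusters=5, n_components=3):
    features = df.select_dtypes("number").dropna(axis=1)  # numeric, complete columns
    X = StandardScaler().fit_transform(features)          # unit variance per feature
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)                         # strongest patterns first
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(scores)
    return labels, pca.explained_variance_ratio_
```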
Procedia PDF Downloads 283
326 Finite Element Analysis of the Drive Shaft and Jacking Frame Interaction in Micro-Tunneling Method: Case Study of Tehran Sewerage
Authors: B. Mohammadi, A. Riazati, P. Soltan Sanjari, S. Azimbeik
Abstract:
The ever-increasing development of civic demands, on the one hand, and the urban constraints on newly established infrastructure, on the other hand, force engineering committees to apply non-conflicting methods in order to optimize the results. One of these optimized procedures to establish main sewerage networks is the pipe jacking and micro-tunneling method. The raw information and research are based on the slurry micro-tunneling project of the Tehran main sewerage network, which is being executed by the KAYSON Co. The 4985-meter route of the mentioned project, located near Azadi Square and the most vital arteries of Tehran, has reached 45% physical progress to date. The boring machine is made by Herrenknecht, and the diameters of the concrete-polymer pipes used are 1600 and 1800 millimeters. Placing and excavating several shafts on the ground and boring the tunnel directly between the axes of the issued shafts is one of the requirements of micro-tunneling. Locating the shafts on the ground must take into account the hydraulic circumstances, civic conditions, site geography, traffic cautions, etc. The profile length has to be converted into many shortened segment lines so that the generated angles between the segments are based at the manhole centers. Each segment line between two consecutive drive and receive shafts defines the jack location, the driving angle, and the straightness of the path; thus, the diversity of the issued angles causes a variety of jack positions in the shaft. The fixing conditions of the jacking frame and the direction of its associated dynamic load produce various patterns of stress and strain distribution and create fatigue in the shaft wall and the soil surrounding the shaft. This pattern diversification deforms the shaft wall and causes unbalanced subsidence and alteration in the pipe jacking stress contour. This research is based on the experiments of Tehran's west sewerage plan and on numerical analysis of the interaction of the soil around the shaft, the shaft walls, and the jacking frame direction; finally, the suitable or unsuitable location of the pipe jacking shaft will be determined.
Keywords: underground structure, micro-tunneling, fatigue analysis, dynamic soil-structure interaction, underground water, finite element analysis
Procedia PDF Downloads 318
325 Analysis of Fuel Adulteration Consequences in Bangladesh
Authors: Mahadehe Hassan
Abstract:
In most countries, manufacturing, trading and distribution of gasoline and diesel fuels belong to the most important sectors of the national economy. For Bangladesh, a robust, well-functioning, secure and smartly managed national fuel distribution chain is an essential precondition for achieving the Government's top priorities in the development and modernization of transportation infrastructure, protection of the national environment and population health as well as, very importantly, securing due tax revenue for the State Budget. Bangladesh is a developing country with a complex fuel supply network, high fuel tax incidence and, till now, limited possibilities in the application of modern, automated technologies for Government national fuel market control. Such an environment allows dishonest physical and legal persons and organized criminals to build and profit from illegal fuel distribution schemes and fuel illicit trade. As a result, market transparency and the country's attractiveness for foreign investments, law-abiding economic operators, national consumers, the State Budget and the Government's ability to finance development projects, and the country at large suffer significantly. Research shows that over 50% of retail petrol stations in major agglomerations of Bangladesh sell adulterated fuels and/or cheat customers on the real volume of the fuel pumped into their vehicles. Other forms of detected fuel illicit trade practices include misdeclaration of fuel quantitative and qualitative parameters during internal transit and selling of non-declared and smuggled fuels. The aim of the study is to recommend the implementation of a National Fuel Distribution Integrity Program (FDIP) in Bangladesh to address and resolve fuel adulteration and illicit trade problems. The program should be customized according to the specific needs of the country and implemented in partnership with providers of advanced technologies. FDIP should enable and further enhance the capacity of the respective Bangladesh Government authorities in the identification and elimination of all forms of fuel illicit trade swiftly and resolutely. FDIP high-technology, IT and automation systems and secure infrastructures should be aimed at the following areas: (1) fuel adulteration, misdeclaration and non-declaration; (2) fuel quality; and (3) fuel volume manipulation at the retail level. Furthermore, the overall concept of FDIP delivery and its interaction with the reporting and management systems used by the Government shall be aligned with and support the objectives of the Vision 2041 and Smart Bangladesh Government programs.
Keywords: fuel adulteration, octane, kerosene, diesel, petrol, pollution, carbon emissions
Procedia PDF Downloads 72
324 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and gets instantiated and deployed in one or more machines (we assume that different microservices are deployed into different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues:
- Resource fragmentation due to the virtual machine boundary.
- Poor communication performance between microservices.
Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern since the attack surface between microservices can be larger. Luckily, container technology allows running several processes on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by just deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
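A minimal sketch of the embedding example above: instances of two communicating microservices are paired onto the same machine so that each pair (a1-b1, a2-b2) talks over localhost instead of the vendor network or a load balancer. The data model is an assumption for illustration; the paper's prototype is built on i2kit:

```python
from itertools import zip_longest

def embed(service_a, service_b):
    """service_a, service_b: lists of instance names, e.g. ['a1', 'a2'].
    Returns a machine -> co-located instances mapping."""
    deployment = {}
    for i, (a, b) in enumerate(zip_longest(service_a, service_b), start=1):
        deployment[f"m{i}"] = [x for x in (a, b) if x is not None]
    return deployment

print(embed(["a1", "a2"], ["b1", "b2"]))
# {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}: each pair communicates via localhost
```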
Procedia PDF Downloads 202
323 A Call for Justice and a New Economic Paradigm: Analyzing Counterhegemonic Discourses for Indigenous Peoples' Rights and Environmental Protection in Philippine Alternative Media
Authors: B. F. Espiritu
Abstract:
This paper examines the resistance of the Lumad people, the indigenous peoples in Mindanao, Southern Philippines, and of environmental and human rights activists to the Philippine government's neoliberal policies, and their call for justice and a new economic paradigm that will uphold peoples' rights and environmental protection, in two alternative media online sites. The study contributes to the body of knowledge on indigenous resistance to neoliberal globalization and the quest for a new economic paradigm that upholds social justice for the marginalized in society, empathy and compassion for those who depend on the land for their survival, and environmental sustainability. The study analyzes the discourses in selected news articles from Davao Today and Kalikasan (translated to English as 'Nature') People's Network for the Environment's statements and advocacy articles for the Lumad and the environment from 2018 to February 2020. The study reveals that the alternative media news articles and the advocacy articles contain statements that expose the oppression and violation of the human rights of the Lumad people, farmers, government environmental workers, and environmental activists, as shown in their killings, illegal arrest and detention, the displacement of the indigenous peoples, the destruction of their schools by the military and paramilitary groups, and environmental plunder and destruction with the government's permit for the entry and operation of extractive and agribusiness industries in the Lumad ancestral lands. Anchored on Christian Fuchs' theory of alternative media as critical media and Bart Cammaerts' theorization of alternative media as counterhegemonic media that are part of civil society and form a third voice between state media and commercial media, the study reveals the counterhegemonic discourses of the news and advocacy articles that oppose the dominant economic system of neoliberalism, which oppresses the people who depend on the land for their survival. Furthermore, the news and advocacy articles seek to advance social struggles that transform society towards the realization of cooperative potentials or a new economic paradigm that upholds economic democracy, where the local people, including the indigenous people, are economically empowered and their environment protected, towards the realization of self-sustaining communities. The study highlights the call for justice, empathy, and compassion for both the people and the environment, and the need for a new economic paradigm wherein indigenous peoples and local communities are empowered towards becoming self-sustaining communities in a sustainable environment.
Keywords: alternative media, environmental sustainability, human rights, indigenous resistance
Procedia PDF Downloads 143
322 Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches of sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a result of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
Keywords: classification, data mining, Instagram, remote monitoring, sharks
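A hedged sketch of the transfer-learning recipe behind the classification stage: a CNN backbone pretrained on ImageNet with its final layer replaced by a shark-species head. The backbone choice (ResNet-50) and training details are assumptions, not the authors' published configuration; the 45-class head matches the species count reported above:

```python
import torch.nn as nn
from torchvision import models

def build_shark_classifier(n_classes=45, freeze_backbone=True):
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    if freeze_backbone:            # keep pretrained features fixed
        for p in model.parameters():
            p.requires_grad = False
    # only the new classification head is trained on shark images
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    return model
```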
Procedia PDF Downloads 120
321 Assessing the Spatial Distribution of Urban Parks Using Remote Sensing and Geographic Information Systems Techniques
Authors: Hira Jabbar, Tanzeel-Ur Rehman
Abstract:
Urban parks and open spaces play a significant role in improving the physical and mental health of citizens, strengthening societies, and making cities more attractive places to live and work. As the world's cities continue to grow, continuing to value green space in cities is vital but is also a challenge, particularly in developing countries where there is pressure for space, resources, and development. Offering an equal opportunity of accessibility to parks is one of the important issues of park distribution. The distribution of parks should allow all inhabitants to have a park in close proximity to their residence. Remote sensing (RS) and geographic information systems (GIS) can provide decision makers with enormous opportunities to improve the planning and management of park facilities. This study exhibits the capability of GIS and RS techniques to provide baseline knowledge about the distribution of parks and the level of accessibility, and to help in the identification of potential areas for such facilities. For this purpose, Landsat OLI imagery for the year 2016 was acquired from the USGS Earth Explorer. Preprocessing models were applied using ERDAS Imagine 2014 for the atmospheric correction, and an NDVI model was developed and applied to quantify the land use/land cover classes, including built-up, barren land, water, and vegetation. The parks among total public green spaces were selected based on their signature in the remote sensing image and their distribution. Percentages of total green space and park green space were calculated for each town of Lahore City, and the results were then compared against the recommended standards. The ANGSt model was applied to calculate the accessibility from parks. Service area analysis was performed using the Network Analyst tool. The serviceability of these parks was evaluated by employing statistical indices like service area, service population and park area per capita. Findings of the study may contribute to helping town planners understand the distribution of parks, the demand for new parks, and potential areas which are deprived of parks. The purpose of the present study is to provide the necessary information to planners, policy makers and scientific researchers in the process of decision making for the management and improvement of urban parks.
Keywords: accessible natural green space standards (ANGSt), geographic information systems (GIS), remote sensing (RS), United States Geological Survey (USGS)
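A minimal sketch of the NDVI step described above, assuming Landsat 8 OLI bands already read as floating-point reflectance arrays (band 4 red, band 5 near-infrared); the class thresholds are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

def classify(ndvi_img: np.ndarray) -> np.ndarray:
    classes = np.full(ndvi_img.shape, "barren", dtype=object)
    classes[ndvi_img < 0.0] = "water"
    classes[(ndvi_img >= 0.05) & (ndvi_img < 0.2)] = "built-up"
    classes[ndvi_img >= 0.2] = "vegetation"
    return classes
```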
Procedia PDF Downloads 337
320 Pre- and Post-Brexit Experiences of the Bulgarian Working Class Migrants: Qualitative and Quantitative Approaches
Authors: Mariyan Tomov
Abstract:
Bulgarian working class immigrants are increasingly concerned with the UK's recent immigration policies in the context of Brexit. The new ID system would exclude many people currently working in Britain and would break the usual immigrant travel patterns. Post-Brexit Britain would aim to turn away seasonal immigrants. Measures for keeping long-term and life-long immigrants have been implemented, and migrants who aim to remain in Britain and establish a household there would be more privileged than temporary or seasonal workers. The results of such regulating mechanisms come at the expense of migrants' longings for a 'normal' existence, especially for those coming from Central and Eastern Europe. Based on in-depth interviews with Bulgarian working class immigrants, the study found that their major concerns following the decision of the UK to leave the EU are related to the freedom to travel, reside and work in the UK. Furthermore, many of the interviewed women are concerned that they could lose some of the EU's fundamental rights, such as maternity protection and protection of pregnant women from unlawful dismissal. The soaring commodity prices and university fees and the limited access to public services, healthcare and social benefits in the UK are also subject to discussion in the paper. The most serious problem, according to the interviews, is that the attitude towards Bulgarians and other immigrants in the UK is deteriorating. Both traditional and social media in the UK often portray the migrants negatively by claiming that they take British job positions while simultaneously abusing the welfare system. As a result, the Bulgarian migrants often face social exclusion, which might have a negative influence on their health and welfare. In this sense, some of the interviewees stress the fact that the most important changes after Brexit must take place in British society itself. The aim of the proposed study is to provide a better understanding of the Bulgarian migrants' economic, health and sociocultural experiences in the context of Brexit. Methodologically, the proposed paper leans on: 1. analysing ethnographic materials dedicated to the pre- and post-migratory experiences of Bulgarian working class migrants, using SPSS; 2. semi-structured interviews conducted with more than 50 Bulgarian working class migrants [N > 50] in the UK, between 18 and 65 years of age, with communication via Viber/Skype or face-to-face interaction; 3. analysis guided by theoretical frameworks. The paper has been developed within the framework of the research projects of the National Scientific Fund of Bulgaria: DCOST 01/25-20.02.2017 supporting COST Action CA16111 'International Ethnic and Immigrant Minorities Survey Data Network'.
Keywords: Bulgarian migrants in UK, economic experiences, sociocultural experiences, Brexit
Procedia PDF Downloads 127
319 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia
Authors: Jun Won Kim
Abstract:
Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing the TGC at rest between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for spectral analyses: delta (1-4 Hz), theta (4-8 Hz), slow alpha (8-10 Hz), fast alpha (10-13.5 Hz), beta (13.5-30 Hz), and gamma (30-80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the signal processing toolbox in MATLAB. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, which were adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in the resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions from the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility
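A hedged sketch of one standard way to compute TGC from a single EEG channel: band-pass both bands, extract theta phase and gamma amplitude with the Hilbert transform, and quantify how strongly gamma amplitude is modulated by theta phase (a Tort-style modulation index). The filter design and bin count are illustrative assumptions, not necessarily the study's pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tgc_modulation_index(eeg, fs, n_bins=18):
    phase = np.angle(hilbert(bandpass(eeg, 4, 8, fs)))    # theta (4-8 Hz) phase
    amp = np.abs(hilbert(bandpass(eeg, 30, 80, fs)))      # gamma (30-80 Hz) amplitude
    bins = np.digitize(phase, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
    bins = np.clip(bins, 0, n_bins - 1)
    mean_amp = np.array([amp[bins == k].mean() for k in range(n_bins)])
    p = mean_amp / mean_amp.sum()          # gamma amplitude distribution over theta phase
    # Kullback-Leibler distance from the uniform distribution, normalized to [0, 1]
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)
```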
Procedia PDF Downloads 140
318 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still in a very early stage. Black and white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than colored CIFAR-10. This research will evaluate the performance of the QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models have not been developed to compare colored images with a Quantum Convolutional Neural Network (QCNN) to determine how much better it is than classical approaches. Only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into greyscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach for a comprehensive dataset of color images. After pre-processing 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, we note that the QCNN approach is ~12% more effective than traditional classical CNN approaches, and it is possible that applying data augmentation may increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of their processing speed and accuracy when used to perform classification on colored classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
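A hedged sketch of the hybrid recipe described above using PennyLane: classical pixel values are angle-encoded into qubit rotations, a short trainable entangling circuit acts as the 'quantum convolution' over a patch, and measured expectation values become features for a classical head. The qubit count, gate choices, and encoding are illustrative assumptions, not the authors' exact circuit:

```python
import pennylane as qml
import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_patch_features(patch, weights):
    for i in range(n_qubits):                 # angle-encode a 4-pixel patch
        qml.RY(np.pi * patch[i], wires=i)
    for i in range(n_qubits):                 # trainable single-qubit rotations
        qml.Rot(*weights[i], wires=i)
    for i in range(n_qubits - 1):             # entangle neighboring qubits
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weights = np.random.uniform(0, np.pi, (n_qubits, 3))
patch = np.array([0.1, 0.5, 0.9, 0.3])        # normalized pixel intensities
print(quantum_patch_features(patch, weights)) # 4 features for a classical classifier
```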
Procedia PDF Downloads 128
317 Smart Contracts: Bridging the Divide Between Code and Law
Authors: Abeeb Abiodun Bakare
Abstract:
The advent of blockchain technology has birthed a revolutionary innovation: smart contracts. These self-executing contracts, encoded within the immutable ledger of a blockchain, hold the potential to transform the landscape of traditional contractual agreements. This research paper embarks on a comprehensive exploration of the legal implications surrounding smart contracts, delving into their enforceability and their profound impact on traditional contract law. The first section of this paper delves into the foundational principles of smart contracts, elucidating their underlying mechanisms and technological intricacies. By harnessing the power of blockchain technology, smart contracts automate the execution of contractual terms, eliminating the need for intermediaries and enhancing efficiency in commercial transactions. However, this technological marvel raises fundamental questions regarding legal enforceability and compliance with traditional legal frameworks. Moving beyond the realm of technology, the paper proceeds to analyze the legal validity of smart contracts within the context of traditional contract law. Drawing upon established legal principles, such as offer, acceptance, and consideration, we examine the extent to which smart contracts satisfy the requirements for forming a legally binding agreement. Furthermore, we explore the challenges posed by jurisdictional issues as smart contracts transcend physical boundaries and operate within a decentralized network. Central to this analysis is the examination of the role of arbitration and dispute resolution mechanisms in the context of smart contracts. While smart contracts offer unparalleled efficiency and transparency in executing contractual terms, disputes inevitably arise, necessitating mechanisms for resolution. We investigate the feasibility of integrating arbitration clauses within smart contracts, exploring the potential for decentralized arbitration platforms to streamline dispute resolution processes. Moreover, this paper explores the implications of smart contracts for traditional legal intermediaries, such as lawyers and judges. As smart contracts automate the execution of contractual terms, the role of legal professionals in contract drafting and interpretation may undergo significant transformation. We assess the implications of this paradigm shift for legal practice and the broader legal profession. In conclusion, this research paper provides a comprehensive analysis of the legal implications surrounding smart contracts, illuminating the intricate interplay between code and law. While smart contracts offer unprecedented efficiency and transparency in commercial transactions, their legal validity remains subject to scrutiny within traditional legal frameworks. By navigating the complex landscape of smart contract law, we aim to provide insights into the transformative potential of this groundbreaking technology.
Keywords: smart contracts, law, blockchain, legal, technology
Procedia PDF Downloads 43
316 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling. They describe crop growth in interaction with the environment as dynamical systems. But the calibration process of the dynamical system comes with much difficulty, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data, final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach for yield prediction is free of the complex biophysical process, but it has some strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression or partial least squares regression) and machine learning methods (Random Forest, k-Nearest Neighbor, artificial neural network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method to calibrate the mechanistic model from easy-to-access datasets offers several side perspectives. The mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
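A minimal sketch of the best-performing data-driven baseline reported above: a Random Forest regressor evaluated with 5-fold cross-validation and the paper's two error metrics, RMSEP and MAEP. The feature matrix stands in for the county-scale climate records; the hyperparameters are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def cross_validate_yield(X: np.ndarray, y: np.ndarray, n_splits=5):
    rmsep, maep = [], []
    for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = RandomForestRegressor(n_estimators=300, random_state=0)
        model.fit(X[tr], y[tr])
        err = model.predict(X[te]) - y[te]
        rmsep.append(np.sqrt(np.mean(err ** 2)))  # root mean square error of prediction
        maep.append(np.mean(np.abs(err)))         # mean absolute error of prediction
    return np.mean(rmsep), np.mean(maep)
```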
Procedia PDF Downloads 229
315 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior from the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry, and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be remarkable, especially considering that the applied multi-scale modelling approach involves no parameter fitting to the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analysis for systematically varying process conditions is easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply on the basis of process conditions such as feed streams and inlet temperatures and pressures.
Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
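The stochastic step of such an approach can be illustrated with a toy Monte Carlo chain-growth model. The sketch below is a minimal, hypothetical version: the branching and termination probabilities are arbitrary assumptions chosen for exposition, not the fitted kinetic constants of the actual LDPE model.

```python
import random

def grow_chain(p_branch: float = 0.01, p_term: float = 0.005) -> tuple[int, int]:
    """Toy kinetic Monte Carlo for a single chain: each step either adds a
    monomer, creates a branch point, or terminates the chain (probabilities
    are illustrative assumptions, not fitted rate constants)."""
    length, branches = 1, 0
    while True:
        r = random.random()
        if r < p_term:                    # termination ends the chain
            return length, branches
        if r < p_term + p_branch:         # branching event
            branches += 1
        length += 1                       # propagation adds one monomer unit

random.seed(42)
chains = [grow_chain() for _ in range(2000)]
avg_len = sum(length for length, _ in chains) / len(chains)
avg_br = sum(branches for _, branches in chains) / len(chains)
print(f"number-average chain length: {avg_len:.0f} units, "
      f"average branch points per chain: {avg_br:.2f}")
```

Aggregating many such sampled chains yields the chain-length and branching distributions that the subsequent rheological step consumes.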
Procedia PDF Downloads 123
314 Inertial Spreading of Drop on Porous Surfaces
Authors: Shilpa Sahoo, Michel Louge, Anthony Reeves, Olivier Desjardins, Susan Daniel, Sadik Omowunmi
Abstract:
The microgravity on the International Space Station (ISS) was exploited to study the imbibition of water into a network of hydrophilic cylindrical capillaries on time and length scales long enough to observe details hitherto inaccessible under Earth gravity. When a drop touches a porous medium, it spreads as if laid on a composite surface. The surface first behaves as a hydrophobic material, as the liquid must penetrate pores filled with air. When contact is established, some of the liquid is drawn into the pores by a capillarity that is resisted by viscous forces growing with the length of the imbibed region. This process always begins with an inertial regime that is complicated by possible contact pinning. To study imbibition on Earth, time and distance must be shrunk to mitigate gravity-induced distortion. These small scales make it impossible to observe the inertial and pinning processes in detail. Instead, in the International Space Station (ISS), astronaut Luca Parmitano slowly extruded water spheres until they touched any of nine capillary plates. The 12 mm diameter droplets were large enough for high-speed GX1050C video cameras on top and side to visualize details near individual capillaries, and the recordings were long enough to observe the dynamics of the entire imbibition process. To investigate the role of contact pinning, a test matrix was produced, consisting of nine kinds of porous capillary plates made of gold-coated brass treated with Self-Assembled Monolayers (SAM) that fixed advancing and receding contact angles to known values. In the ISS, long-term microgravity allowed unambiguous observations of the role of contact line pinning during the inertial phase of imbibition. The high-speed videos of spreading and imbibition on the porous plates were analyzed using computer vision software to calculate the radius of the droplet contact patch with the plate and the height of the droplet versus time. These observations are compared with numerical simulations and with data that we obtained at the ESA ZARM free-fall tower in Bremen using a unique mechanism producing relatively large water spheres; the results were found to be similar. The data obtained from the ISS can be used as a benchmark for further numerical simulations in the field.
Keywords: droplet imbibition, hydrophilic surface, inertial phase, porous medium
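The video analysis step reduces each recording to a contact-patch radius versus time, to which an early-time power law can be fitted. The sketch below is a minimal, hypothetical version of that fit: the synthetic radii stand in for values extracted by the computer vision software, and the assumed exponent near 0.5 reflects the scaling commonly reported for the inertial spreading regime.

```python
import numpy as np

# Synthetic stand-in for contact-patch radii r(t) extracted from the high-speed
# video (assumed to follow r = C * t**n in the inertial regime, with n near 0.5).
rng = np.random.default_rng(0)
t = np.linspace(1e-4, 5e-3, 50)                                      # time since contact, s
r = 2.0e-2 * np.sqrt(t) * (1 + 0.02 * rng.standard_normal(t.size))   # radius, m

# Fit the spreading exponent on log-log axes: log r = log C + n * log t
n, logC = np.polyfit(np.log(t), np.log(r), 1)
print(f"fitted spreading exponent n = {n:.2f} (inertial scaling predicts ~0.5)")
```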
Procedia PDF Downloads 137
313 Analyzing the Untenable Corruption Intricate Patterns in Africa and Combating Strategies for the Efficiency of Public Sector Supply Chains
Authors: Charles Mazhazhate
Abstract:
This study interrogates and analyses the intricate kith-and-kin network patterns of corruption and mismanagement of resources prevalent in public sector supply chains bedeviling the developing economies of Sub-Saharan Africa, with particular reference to Zimbabwe. This forces governments to resort to harsh fiscal policies that see their citizens paying high taxes against a backdrop of incomes below the poverty datum line, which negatively affects their quality of life. The corporate world is also affected by the various tax regimes instituted. Mismanagement of resources and corrupt practices are rampant in state-owned enterprises, to the extent that institutional policies, procedures, and practices are often flouted for the benefit of a clique of individuals. This is interwoven with kith-and-kin blood relations in organizations where appointments to critical positions are based on ascribed status. People no longer place value in making their systems work, thereby violating corporate governance principles. Greed and 'unholy friendship connections' are instrumental in fueling the employment of people who already know each other from their respective backgrounds. Such employments, or socio-metric unions, are meant to protect those at the top by supplying them with intelligence gathered by spying on what other subordinates are doing inside and outside the organization. This practice has led to the underperformance of organizations, as employees with connections and their favorites in the upper echelons connive to abuse resources for their own benefit. Even when culprits are known, no firm deterrent measures are employed. Public value along public sector supply chains is lost. The study used a descriptive case study research design on fifty organizations in Zimbabwe, mainly state-owned enterprises. Both qualitative and quantitative instruments were used, and both snowball and random sampling techniques were applied. The study found that in all fifty SOEs there were employees in key positions related to top management, with tentacles feeding into the law enforcement agents, judiciary, security systems, and the executive. In public, such employees seem not to know each other, yet they are involved in dirty scams and then share the proceeds with top people behind the scenes. The study also established that the same employees do not have the necessary competencies, qualifications, abilities, and capabilities to be in those positions. This culture is now so entrenched that it is difficult to bust. The study recommends recruiting all employees through an independent employment bureau to ensure strategic fit.
Keywords: corruption, state owned enterprises, strategic fit, public sector supply chains, efficiency
Procedia PDF Downloads 160
312 The Igbo People's Dual Religion Identity on Rite of Marriage in Imo State
Authors: Henry Okechukwu Onyeiwu, Arfah Ab. Majid
Abstract:
To fully understand the critical role of marriage in society, it is important to view it as a social institution that provides for some basic social needs of society. A 'social institution' is the network of shared meanings, norms, definitions, expectations, and understandings held by the members of a society. It is what guides and governs how the members of the society are expected to act and interact, what is socially desirable and legitimate, what they should be striving for, and so on. One of the major social institutions is marriage. Marriage is and has often been focused on children and what is best for them, because the rising generation literally is the future of every society. However, while some definitions note that marriage may also be a union between two persons of the same sex with legal support, this study stands with the definition of marriage as a union between a man and a woman, which is the most appropriate in Igbo land, and not the other way round. The issue to be evaluated concerns marriage as it relates to Igbo Catholic Christians in Nigeria. Parts of Igbo culture should be better integrated into the Christian faith. Igbo Christians actually carry over a significant number of their customary beliefs, customs, and social values, particularly regarding marriage, in the aftermath of converting to Christianity. The researcher agrees that marriage among Igbo Christians warrants adequate evaluation. This study, therefore, concentrates on the Igbo community's interpretation of the concepts of culture and religion and the religious implications of traditional marriage and Christian marriage ceremonies in Igbo land. The research design of this study is a qualitative design that provides in-depth information on the dual religious identity of the Igbo people regarding the rite of marriage in Imo State. The study population was composed of both male and female members from each selected local government area in Imo State. Thematic analysis was used to elaborate on the responses. This survey found that reputation is a major concern for Igbo people. Parental discomfort can lead to the use of coping strategies such as displacement, in which parents pass on their own vulnerable sentiments to their children. Those who participate in marriage negotiations feel the pain of their parents because they are unable to communicate their own feelings. As a result, participants experience increased stress and a range of negative emotions related to their marriage, including worry, dissatisfaction, and ambivalence. It was concluded that, in Igbo culture, marriage is seen as a necessity for the continuation of the family's lineage of descent. The task at hand was to discover how locals preparing to get married define the impending transition. Imo State is home to the practice of Igba-nkwu, where the woman is either inherited or taken in the place of another.
Keywords: Igbo, culture, Christianity, traditional marriage, Christian wedding
Procedia PDF Downloads 161
311 Management of Caverno-Venous Leakage: A Series of 133 Patients with Symptoms, Hemodynamic Workup, and Results of Surgery
Authors: Allaire Eric, Hauet Pascal, Floresco Jean, Beley Sebastien, Sussman Helene, Virag Ronald
Abstract:
Background: Caverno-venous leakage (CVL) is a devastating, although barely known, disease: the first cause of major physical impairment in men under 25, and responsible for 50% of cases of resistance to phosphodiesterase-5 inhibitors (PDE5-I), which affects 30 to 40% of users of this medication class. In this condition, premature blood drainage from the corpora cavernosa prevents penile rigidity and penetration during sexual intercourse. The role of conservative surgery in this disease remains controversial. Aim: To assess the complications and results of combined open surgery and embolization for CVL. Method: Between June 2016 and September 2021, 133 consecutive patients underwent surgery in our institution for CVL causing severe erectile dysfunction (ED) resistant to oral medical treatment. The procedures combined vein embolization and ligation with microsurgical techniques. We performed pre- and post-operative clinical (Erection Hardness Score: EHS) and hemodynamic evaluation by duplex sonography in all patients. Before surgery, the CVL network was visualized by computed tomography cavernography. Penile EMG was performed in cases of diabetes or other suspected neurological conditions. All patients were optimized for hormonal status. Data were prospectively recorded. Results: Clinical signs suggesting CVL were ED since before age 25, loss of erection when changing position, and penile rigidity varying according to position. The main complications were minor pulmonary embolism in two patients (one after airline travel, one with a heterozygous Factor V Leiden mutation), one infection, three hematomas requiring reoperation, and one case of decreased glans sensitivity lasting more than one year. Mean pre-operative pharmacologic EHS was 2.37+/-0.64; mean post-operative pharmacologic EHS was 3.21+/-0.60, p<0.0001 (paired t-test). The mean EHS variation was 0.87+/-0.74. After surgery, 81.5% of patients had a pharmacologic EHS equal to or over 3, allowing for intercourse with penetration. Three patients (2.2%) experienced a lower post-operative EHS. The main cause of failure was leakage from the deep dorsal aspect of the corpora cavernosa. At 14 months of follow-up, 83.2% of patients had a clinical EHS equal to or over 3, allowing for sexual intercourse with penetration, one-third of them without any medication. Five patients received a penile implant after unsuccessful conservative surgery. Conclusion: Open surgery combined with embolization is an efficient approach to CVL causing severe erectile dysfunction.
Keywords: erectile dysfunction, cavernovenous leakage, surgery, embolization, treatment, result, complications, penile duplex sonography
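The pre/post comparison above relies on a paired t-test, since each patient serves as his own control. The following minimal sketch, using hypothetical per-patient scores generated to match the reported means and standard deviations, shows how such a comparison is computed; it is an illustration, not the authors' actual analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 133  # cohort size reported in the abstract

# Hypothetical per-patient scores matching the reported summary statistics.
pre = np.clip(rng.normal(2.37, 0.64, n), 0, 4)           # pre-operative EHS
post = np.clip(pre + rng.normal(0.87, 0.74, n), 0, 4)    # post-operative EHS

# Paired t-test: each patient is compared with himself before and after surgery.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f}, t = {t_stat:.2f}, p = {p_value:.1e}")
```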
Procedia PDF Downloads 148
310 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied in the context of regional hydrological predictions at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, in the north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that the predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) efficiency values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment’s characteristics. This aligns with the well-known “uniqueness of place” paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle; later, limited systematic hyperparameter tuning using grid search suggested a modification to 270 days. However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
Keywords: LSTMs, streamflow, hyperparameters, hydrology
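The sketch below shows, under stated assumptions, how the input sequence length enters an LSTM pipeline as a tunable hyperparameter: synthetic series stand in for the hourly forcing and streamflow data, the architecture is a generic single-layer LSTM rather than the authors' MTS-LSTM, and the candidate lengths are illustrative.

```python
import torch
import torch.nn as nn

def make_windows(x: torch.Tensor, y: torch.Tensor, seq_len: int):
    """Slice a feature series x of shape (T, C) into LSTM inputs of length seq_len."""
    xs = torch.stack([x[i:i + seq_len] for i in range(len(x) - seq_len)])
    ys = y[seq_len:]
    return xs, ys

class StreamflowLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict flow at the final time step

def nse(sim: torch.Tensor, obs: torch.Tensor) -> torch.Tensor:
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    return 1 - torch.sum((sim - obs) ** 2) / torch.sum((obs - obs.mean()) ** 2)

torch.manual_seed(0)
T, C = 2000, 5                             # synthetic hourly series: T steps, C forcings
x, y = torch.randn(T, C), torch.randn(T, 1)

for seq_len in (24, 72, 270):              # candidate input sequence lengths (hours)
    xs, ys = make_windows(x, y, seq_len)
    model = StreamflowLSTM(C)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):                     # a few illustrative training epochs
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(xs), ys)
        loss.backward()
        opt.step()
    print(f"seq_len={seq_len}: NSE={nse(model(xs), ys).item():.3f}")
```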
Procedia PDF Downloads 69
309 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments, where the behavior of the AI system was previously determined by formal rules. Knowledge was presented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy industries such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law, and for the purposes of this paper by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recurrent neural networks and deep learning, but this approach can be more easily described by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world’s most significant patent law regimes, such as China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes. Most important, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. The inventor of a patent application must be a natural person or a group of persons according to the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as contributing to a part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
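To ground the legal discussion, here is a minimal sketch of a genetic breeding (evolutionary) algorithm: a population of candidate solutions is repeatedly selected, crossed over, and mutated until a fit "design" emerges without a human specifying it. The objective function and all parameters are arbitrary assumptions chosen only to show the selection-crossover-mutation loop on which the inventorship question turns.

```python
import random

TARGET = [1.0, -2.0, 0.5, 3.0]  # stand-in for an engineering design goal

def fitness(genome: list[float]) -> float:
    # Toy objective: the closer the genome is to TARGET, the fitter it is.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def breed(parent_a: list[float], parent_b: list[float]) -> list[float]:
    # Uniform crossover, then Gaussian mutation on a fraction of the genes.
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    return [g + random.gauss(0, 0.3) if random.random() < 0.1 else g for g in child]

random.seed(0)
population = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(50)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]  # selection: the fittest candidates become parents
    population = elite + [breed(random.choice(elite), random.choice(elite))
                          for _ in range(40)]
best = max(population, key=fitness)
print("evolved design:", [round(g, 2) for g in best])
```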
Procedia PDF Downloads 107
308 Efficacy of Corporate Social Responsibility in Corporate Governance Structures of Family Owned Business Groups in India
Authors: Raveena Naz
Abstract:
The concept of ‘Corporate Social Responsibility’ (CSR) has often relied on firms thinking beyond their economic interest, despite the larger debate of shareholder versus stakeholder interest. India gave legal recognition to CSR in the Companies Act, 2013, which promises better corporate governance. CSR in India is believed to be different for two reasons: the dominance of family business and the history of the practice of social responsibility as a form of philanthropy (mainly among family businesses). This paper problematises the actual structure of business houses in India and the role of CSR in India. While the law identifies each company as a separate business entity, the economics of institutions emphasizes the ‘business group’, consisting of a plethora of firms, as the institutional organization of business. The capital owned or controlled by the family group is spread across the firms through interholding (interlocked holding) structures. This creates peculiar implications for CSR legislation in India. The legislation sets criteria for individual firms to undertake mandatory CSR liability if they are above certain thresholds. Within this framework, the largest family firms, which are all part of family-owned business groups, top the CSR expenditure list. The interholding structures, common managers, auditors, and series of related party transactions among these firms help the family to run the business as a ‘family business’ even when shares are issued to the public. This kind of governance structure allows a family-owned business group to show mandatory CSR compliance even when it actually spends much less than what is prescribed by law. This aspect of family firms is not addressed by the CSR legislation in particular, or by corporate governance legislation in general, in India. The paper illustrates this with an empirical study of one of the largest family-owned business groups in India, which is well acclaimed for its CSR activities. The individual companies under the business group are identified; shareholding patterns are explored; related party transactions are investigated; common managing authorities are identified; and assets, liabilities, and profit/loss accounting practices are analysed. The data have been mainly collected from mandatory disclosures in the annual reports and financial statements of the companies within the business group, accessed from the official website of the ultimate controlling authority. The paper demonstrates how the business group, through this series of shareholding networks, reduces its legally mandated CSR liability. The paper thus indicates the inadequacy of CSR legislation in India, because the unit of compliance is the individual firm, and it assumes that each firm is independent and connected to others only through market dealings. The law does not recognize the interconnections of firms in the corporate governance structures of family-owned business groups and hence is inadequate in its design to effect the threshold level of CSR expenditure. This is the central argument of the paper.
Keywords: business group, corporate governance, corporate social responsibility, family firm
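The compliance-unit problem at the heart of this argument can be shown with a small worked example. The sketch below uses hypothetical profit figures and a simplified reading of Section 135 of the Companies Act, 2013 (a firm whose net profit reaches Rs 5 crore must spend 2% of net profit on CSR; the statute actually uses a three-year profit average plus net-worth and turnover triggers, which are omitted here for brevity).

```python
# Illustrative sketch of the compliance-unit problem: hypothetical profit figures
# (in Rs crore) for a family group whose capital is spread across several firms.
PROFIT_THRESHOLD = 5.0   # Rs crore, per individual firm (simplified Section 135 trigger)
CSR_RATE = 0.02          # 2% of net profit (statute uses a three-year average)

group_firms = {          # hypothetical entities; each sits just under the trigger
    "HoldCo": 4.8, "TradeCo": 4.5, "RealtyCo": 4.9, "FinCo": 4.7,
}

firm_level = sum(CSR_RATE * p for p in group_firms.values() if p >= PROFIT_THRESHOLD)
group_level = CSR_RATE * sum(group_firms.values())

print(f"CSR due with the firm as compliance unit : Rs {firm_level:.2f} crore")
print(f"CSR due if the group were the unit       : Rs {group_level:.2f} crore")
# No single firm crosses the threshold, so the group owes nothing at firm level,
# while a group-level rule would require about Rs 0.38 crore.
```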
Procedia PDF Downloads 280
307 Environmental Aspects of Alternative Fuel Use for Transport with Special Focus on Compressed Natural Gas (CNG)
Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej
Abstract:
The history of gaseous fuel use in the motive power of vehicles dates back to the second half of the nineteenth century, and thus to the beginnings of the automotive industry. The engines were powered by coal gas and became the prototype for the internal combustion engines built since; it can thus be considered that this construction gave rise to the automotive industry. As socio-economic development advances, so does the number of motor vehicles. Although, due to technological progress in recent decades, the emissions generated by the internal combustion engines of cars have been reduced, the sharp increase in the number of cars and the rapidly growing traffic are an important source of air pollution and a major cause of acoustic threat, in particular in large urban agglomerations. One of the solutions, in terms of reducing exhaust emissions and improving air quality, is a more extensive use of alternative fuels: CNG, LNG, electricity, and hydrogen. In the case of electricity use for transport, it should be noted that the environmental outcome depends on the structure of electricity generation. The paper shows selected regulations affecting the use of alternative fuels for transport (including Directive 2014/94/EU) and its dynamics between 2000 and 2015 in Poland and selected EU countries. The paper also focuses on the impact of alternative fuels on the environment by comparing the volume of individual emissions with the emissions from the conventional fuels, petrol and diesel oil. Bearing in mind that the extent of alternative fuel use is determined first and foremost by economic conditions, the article describes the price relationships between alternative and conventional fuels in Poland and selected EU countries. It is pointed out that although Poland has a wealth of experience in using methane-based alternative fuels for transport, one of the main barriers to their development in Poland is the extensive use of LPG. In addition, the poorly developed network of CNG stations in Poland, which does not allow easy travel, especially in the northern part of the country, is a serious obstacle to the further development of CNG as a transport fuel. An interesting solution to this problem seems to be the use of home CNG filling stations: the Home Refuelling Appliance (HRA, refuelling time 8-10 hours) and the Home Refuelling Station (HRS, refuelling time 8-10 minutes). The team is working on HRA and HRS technologies. The article also highlights the impact of alternative fuel use on energy security by reducing reliance on imports of crude oil and petroleum products.
Keywords: alternative fuels, CNG (Compressed Natural Gas), CNG stations, LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles), pollutant emissions
Procedia PDF Downloads 226
306 A Conceptual Framework of the Individual and Organizational Antecedents to Knowledge Sharing
Authors: Muhammad Abdul Basit Memon
Abstract:
The importance of organizational knowledge sharing and knowledge management has been documented in numerous research studies in the available literature, since knowledge sharing has been recognized as a founding pillar of superior organizational performance and a source of competitive advantage. Built on this, most successful organizations perceive knowledge management and knowledge sharing as a concern of high strategic importance and spend huge amounts on the effective management and sharing of organizational knowledge. However, despite some very serious endeavors, many firms fail to capitalize on the benefits of knowledge sharing because they are unaware of the individual characteristics and the interpersonal, organizational, and contextual factors that influence knowledge sharing; in short, the antecedents to knowledge sharing. The extant literature offers a range of antecedents to knowledge sharing mentioned in a number of research articles and studies. Some previous studies examined antecedents to knowledge sharing in the context of inter-organizational knowledge transfer; others focused on inter- and intra-organizational knowledge sharing, and still others investigated organizational factors. Some of the organizational antecedents to knowledge sharing relate to the characteristics and underlying aspects of the knowledge being shared, e.g., the specificity and complexity of the underlying knowledge to be transferred; others relate to specific organizational characteristics, e.g., the age and size of the organization, decentralization, and the absorptive capacity of the firm; and still others relate to the social relations and networks of organizations, such as social ties, trusting relationships, and value systems. In the same way, some researchers have highlighted only one aspect, such as organizational commitment, transformational leadership, a knowledge-centred culture, learning and performance orientation, or social network-based relationships in organizations. The bulk of the existing research articles on antecedents to knowledge sharing has mainly discussed organizational or environmental factors. However, the focus later shifted towards the analysis of individual or personal determinants as antecedents of an individual’s engagement in knowledge sharing activities, such as personality traits, attitude, and self-efficacy. For example, employees’ goal orientations (i.e., learning orientation or performance orientation) are an important individual antecedent of knowledge sharing behaviour. Consistent with the existing literature, therefore, the antecedents to knowledge sharing can be classified as individual and organizational. This paper is an endeavor to discuss a conceptual framework of the individual and organizational antecedents to knowledge sharing in the light of the available literature and empirical evidence. This model can not only help readers gain familiarity with and comprehension of the subject matter by presenting a holistic view of the antecedents to knowledge sharing as discussed in the literature, but can also help business managers, and especially human resource managers, find insights into the salient features of organizational knowledge sharing.
Moreover, this paper can help provide a ground for research students and academicians to conduct both qualitative and quantitative research and to design an instrument for conducting a survey on the topic of individual and organizational antecedents to knowledge sharing.
Keywords: antecedents to knowledge sharing, knowledge management, individual and organizational, organizational knowledge sharing
Procedia PDF Downloads 324