Search results for: discontinued operations

59 Impact of Urban Migration on Caste: Rohinton Mistry’s A Fine Balance and Rural-to-Urban Caste Migration in India

Authors: Mohua Dutta

Abstract:

The primary aim of this research paper is to investigate the forced urban migration of Dalits in India who are fleeing caste persecution in rural areas. This paper examines the relationship between caste and rural-to-urban internal migration in India using a literary text, Rohinton Mistry’s A Fine Balance, highlighting the challenges faced by Dalits in rural areas that force them to migrate to urban areas. Despite the prevalence of such discussions in Dalit autobiographies written in vernacular languages, there is a lack of discussion regarding caste migration in Indian English Literature, including this present text, as evidenced by the existing critical interpretations of the novel, which this paper seeks to rectify. The primary research question is how urban migration affects caste system in India and why rural-to-urban caste migration occurs. The purpose of this paper is to better understand the reasons for Dalit migration, the challenges they face in rural and urban areas, and the lingering influence of caste in both rural and urban areas. The study reveals that the promise of mobility and emancipation provided by class operations drives rural-to-urban caste migration in India, but it also reveals that caste marginalization in rural areas is closely linked to class marginalization and other forms of subalternity in urban areas. Moreover, the caste system persists in urban areas as well, making Dalit migrants more vulnerable to social, political, and economic discrimination. The reason for this is that, despite changes in profession and urban migration, the trapped structure of caste capital and family networks exposes migrants to caste and class oppressions. To reach its conclusion, this study employs a variety of methodologies. Discourse analysis is used to investigate the current debates and narratives surrounding caste migration. Critical race theory, specifically intersectional theory and social constructivism, aids in comprehending the complexities of caste, class, and migration. Mistry's novel is subjected to textual analysis in order to identify and interpret references to caste migration. Secondary data, such as theoretical understanding of the caste system in operation and scholarly works on caste migration, are also used to support and strengthen the findings and arguments presented in the paper. The study concludes that rural-to-urban caste migration in India is primarily motivated by the promise of socioeconomic mobility and emancipation offered by urban spaces. However, the caste system persists in urban areas, resulting in the continued marginalisation and discrimination of Dalit migrants. The study also highlights the limitations of urban migration in providing true emancipation for Dalit migrants, as they remain trapped within caste and family network structures. Overall, the study raises awareness of the complexities surrounding caste migration and its impact on the lives of India's marginalised communities. This study contributes to the field of Migration Studies by shedding light on an often-overlooked issue: Dalit migration. It challenges existing literary critical interpretations by emphasising the significance of caste migration in Indian English Literature. The study also emphasises the interconnectedness of caste and class, broadening understanding of how these systems function in both rural and urban areas.

Keywords: rural-to-urban caste migration in India, internal migration in India, caste system in India, Dalit movement in India, rooster coop of caste and class, urban poor as subalterns

Procedia PDF Downloads 72
58 Stakeholder Mapping and Requirements Identification for Improving Traceability in the Halal Food Supply Chain

Authors: Laila A. H. F. Dashti, Tom Jackson, Andrew West, Lisa Jackson

Abstract:

Traceability systems are being developed and tested in the agri-food and halal food sectors because of their ability to monitor ingredient movements, track sources, and detect potential issues related to food integrity. Designing a traceability system for the halal food supply chain, however, poses significant challenges due to diverse stakeholder requirements and the complexity of their needs (including varying food ingredients, different sources, destinations, supplier processes, certifications, etc.). Achieving a halal food traceability solution tailored to stakeholders' requirements within the supply chain necessitates prior knowledge of these needs. Although attempts have been made to address design-related issues in traceability systems, literature on stakeholder mapping and identification of requirements specific to halal food supply chains is scarce. To address this gap, this pilot study aims to identify the objectives, requirements, and recommendations of stakeholders in the Kuwaiti halal food industry. The paper presents insights gained from the pilot study, which used semi-structured interviews to collect data from a Kuwait-based international halal food manufacturer. The objective was to gain an in-depth understanding of stakeholders' objectives, requirements, processes, and concerns pertaining to the design of a traceability system in Kuwait's halal food sector. The stakeholder mapping results revealed that government entities, food manufacturers, retailers, and suppliers are key stakeholders in Kuwait's halal food supply chain. Lessons learned from this pilot study regarding requirements capture for traceability systems include the need to streamline communication, focus on communication at each level of the supply chain, and leverage innovative technologies to enhance process structuring and operations and reduce halal certification costs. The findings also emphasized the limitations of existing traceability solutions, such as limited cooperation and collaboration among stakeholders, the high cost of implementing traceability systems without government support, lack of clarity regarding product routes, and disrupted communication channels between stakeholders. These findings contribute to a broader research program aimed at developing a stakeholder requirements framework that utilizes "business process modelling" to establish a unified model for traceable stakeholder requirements.

Keywords: supply chain, traceability system, halal food, stakeholders’ requirements

Procedia PDF Downloads 113
57 Fully Autonomous Vertical Farm to Increase Crop Production

Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek

Abstract:

New technologies in agriculture are opening new challenges and new opportunities. Among these, certainly, robotics, vision, and artificial intelligence are the ones that will make a significant leap, compared to traditional agricultural techniques, possible. In particular, the indoor farming sector will be the one that will benefit the most from these solutions. Vertical farming is a new field of research where mechanical engineering can bring knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and perfectly integrated platform for crop production in indoor vertical farming. Activities will be based both on hardware development such as automatic tools to perform different activities on soil and plants, as well as research to introduce an extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project of a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized and automated, (iii) develop a coordinated control and management environment for autonomous multiplatform or tele-operated robots in environments with the aim of carrying out complex tasks in the presence of environmental and cultivation constraints, (iv) integrate AI-based algorithms as decision support system to improve quality production. The coordinated management of multiplatform systems still presents innumerable challenges that require a strongly multidisciplinary approach right from the design, development, and implementation phases. The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strength characteristics highlighted by the models, (iii) implementation and experimental tests performed to test the real effectiveness of the systems created, evaluate any weaknesses so as to proceed with a targeted development. To these aims, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, crops, and systems used. The possibility of having specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows for the creation of a robust model of the system as a whole. The automation of the laboratory is completed with the use of robots to carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to the knowledge of detailed models developed based on the information collected, which allows for deepening the knowledge of these types of crops and guarantees the possibility of tracing every action performed on each single plant. To this end, artificial intelligence algorithms have been developed to allow synergistic operation of all systems.

Keywords: automation, vertical farming, robot, artificial intelligence, vision, control

Procedia PDF Downloads 39
56 Heat Transfer Modeling of 'Carabao' Mango (Mangifera indica L.) during Postharvest Hot Water Treatments

Authors: Hazel James P. Agngarayngay, Arnold R. Elepaño

Abstract:

Mango is the third most important export fruit in the Philippines. Despite the expanding mango trade in world market, problems on postharvest losses caused by pests and diseases are still prevalent. Many disease control and pest disinfestation methods have been studied and adopted. Heat treatment is necessary to eliminate pests and diseases to be able to pass the quarantine requirements of importing countries. During heat treatments, temperature and time are critical because fruits can easily be damaged by over-exposure to heat. Modeling the process enables researchers and engineers to study the behaviour of temperature distribution within the fruit over time. Understanding physical processes through modeling and simulation also saves time and resources because of reduced experimentation. This research aimed to simulate the heat transfer mechanism and predict the temperature distribution in ‘Carabao' mangoes during hot water treatment (HWT) and extended hot water treatment (EHWT). The simulation was performed in ANSYS CFD Software, using ANSYS CFX Solver. The simulation process involved model creation, mesh generation, defining the physics of the model, solving the problem, and visualizing the results. Boundary conditions consisted of the convective heat transfer coefficient and a constant free stream temperature. The three-dimensional energy equation for transient conditions was numerically solved to obtain heat flux and transient temperature values. The solver utilized finite volume method of discretization. To validate the simulation, actual data were obtained through experiment. The goodness of fit was evaluated using mean temperature difference (MTD). Also, t-test was used to detect significant differences between the data sets. Results showed that the simulations were able to estimate temperatures accurately with MTD of 0.50 and 0.69 °C for the HWT and EHWT, respectively. This indicates good agreement between the simulated and actual temperature values. The data included in the analysis were taken at different locations of probe punctures within the fruit. Moreover, t-tests showed no significant differences between the two data sets. Maximum heat fluxes obtained at the beginning of the treatments were 394.15 and 262.77 J.s-1 for HWT and EHWT, respectively. These values decreased abruptly at the first 10 seconds and gradual decrease was observed thereafter. Data on heat flux is necessary in the design of heaters. If underestimated, the heating component of a certain machine will not be able to provide enough heat required by certain operations. Otherwise, over-estimation will result in wasting of energy and resources. This study demonstrated that the simulation was able to estimate temperatures accurately. Thus, it can be used to evaluate the influence of various treatment conditions on the temperature-time history in mangoes. When combined with information on insect mortality and quality degradation kinetics, it could predict the efficacy of a particular treatment and guide appropriate selection of treatment conditions. The effect of various parameters on heat transfer rates, such as the boundary and initial conditions as well as the thermal properties of the material, can be systematically studied without performing experiments. Furthermore, the use of ANSYS software in modeling and simulation can be explored in modeling various systems and processes.
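
As a rough companion to the simulation described above, the following Python sketch solves a one-dimensional transient heat-conduction equation for a sphere with a convective boundary condition using an explicit finite-difference scheme, then reports a mean temperature difference (MTD) against hypothetical probe readings. It is a simplified stand-in for the three-dimensional ANSYS CFX model used in the study; the thermal properties, fruit radius, water temperature, and "measured" values are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Illustrative (assumed) properties -- not the values used in the study
k, rho, cp = 0.52, 1050.0, 3700.0        # W/m.K, kg/m3, J/kg.K
alpha = k / (rho * cp)                   # thermal diffusivity, m2/s
h = 600.0                                # convective heat transfer coefficient, W/m2.K
R, T0, Tw = 0.035, 25.0, 48.0            # fruit radius (m), initial and water temperature (degC)

nr = 36
r = np.linspace(0.0, R, nr)
dr = r[1] - r[0]
dt = 0.5                                 # s, satisfies the explicit stability limit dt < dr^2/(2*alpha)
T = np.full(nr, T0)

def step(T):
    """One explicit time step of dT/dt = alpha*(d2T/dr2 + (2/r)*dT/dr)."""
    Tn = T.copy()
    for i in range(1, nr - 1):
        d2 = (T[i + 1] - 2 * T[i] + T[i - 1]) / dr**2
        d1 = (T[i + 1] - T[i - 1]) / (2 * dr)
        Tn[i] = T[i] + alpha * dt * (d2 + 2.0 * d1 / r[i])
    Tn[0] = Tn[1]                                    # symmetry at the centre
    Tn[-1] = (T[-2] + (h * dr / k) * Tw) / (1.0 + h * dr / k)   # -k dT/dr = h (Ts - Tw)
    return Tn

times, centre = [], []
for n in range(int(600 / dt)):           # simulate 10 minutes of hot water treatment
    T = step(T)
    times.append((n + 1) * dt)
    centre.append(T[0])

# Mean temperature difference against hypothetical probe measurements at four times
predicted = np.interp([60, 180, 300, 600], times, centre)
measured = predicted + np.random.normal(0, 0.3, 4)   # synthetic stand-in for probe data
print("centre temperature after 10 min: %.1f degC" % T[0])
print("MTD: %.2f degC" % np.mean(np.abs(predicted - measured)))
```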

Keywords: heat transfer, heat treatment, mango, modeling and simulation

Procedia PDF Downloads 247
55 A Proposal of a Strategic Framework for the Development of Smart Cities: The Argentinian Case

Authors: Luis Castiella, Mariano Rueda, Catalina Palacio

Abstract:

The world’s rapid urbanisation represents an excellent opportunity to implement initiatives that are oriented towards a country’s general development. However, this phenomenon has created considerable pressure on current urban models, pushing them nearer to a crisis. As a result, several factors usually associated with underdevelopment have been steadily rising. Moreover, actions taken by public authorities have not been able to keep up with the speed of urbanisation, which has impeded them from meeting the demands of society, responding with reactionary policies instead of with coordinated, organised efforts. In contrast, the concept of a Smart City, which emerged around two decades ago, in principle represents a city that utilises innovative technologies to remedy the everyday issues of citizens, empowering them with the newest available technology and information. This concept has come to adopt a wider meaning, including human and social capital, as well as productivity, economic growth, quality of life, environment and participative governance. These developments have also disrupted the management of institutions such as academia, which have become key in generating scientific advancements that can solve pressing problems, and in forming a specialised class that is able to follow up on these breakthroughs. In this light, the Ministry of Modernisation of the Argentinian Nation has created a model that is rooted in the concept of a ‘Smart City’. This effort considered all the dimensions at play in an urban environment, with careful monitoring of each sub-dimension in order to establish the government’s priorities and improve the effectiveness of its operations. In an attempt to ameliorate the overall efficiency of the country’s economic and social development, these focused initiatives have also encouraged citizen participation and the cooperation of the private sector, replacing short-sighted policies with ones that are coherent and organised. This process was developed gradually. The first stage consisted of building the model’s structure; the second, of applying the method to specific case studies and verifying that the mechanisms used respected the desired technical and social aspects. Finally, the third stage consists of repeating and then comparing this experiment in order to measure the effects on the ‘treatment group’ over time. The first trial was conducted on 717 municipalities and evaluated the dimension of Governance. Results showed that levels of governmental maturity varied sharply with size: cities with fewer than 150,000 people had a strikingly lower level of governmental maturity than cities with more than 150,000 people. With the help of this analysis, some important trends and target populations were made apparent, which enabled the public administration to focus its efforts and increase its probability of success. It also made it possible to cut costs and time and to create a dynamic framework in tune with the population’s demands, improving quality of life through sustained efforts to develop social and economic conditions within the territorial structure.
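
A minimal sketch of how a composite governance-maturity index of the kind described above could be assembled is shown below; the sub-dimension names, weights, and the 150,000-inhabitant grouping threshold are illustrative assumptions, not the Ministry's actual scheme.

```python
from statistics import mean

# Hypothetical sub-dimension scores (0-100) per municipality; names and weights are assumed
WEIGHTS = {"digital_services": 0.3, "open_data": 0.2,
           "citizen_participation": 0.2, "internal_processes": 0.3}

municipalities = [
    {"name": "A", "population": 420_000,
     "scores": {"digital_services": 78, "open_data": 61, "citizen_participation": 55, "internal_processes": 70}},
    {"name": "B", "population": 95_000,
     "scores": {"digital_services": 34, "open_data": 22, "citizen_participation": 41, "internal_processes": 38}},
]

def governance_index(scores: dict) -> float:
    """Weighted linear aggregation of sub-dimension scores into a single composite index."""
    return sum(WEIGHTS[k] * v for k, v in scores.items())

large = [governance_index(m["scores"]) for m in municipalities if m["population"] >= 150_000]
small = [governance_index(m["scores"]) for m in municipalities if m["population"] < 150_000]

print("mean index, >=150,000 inhabitants:", round(mean(large), 1))
print("mean index, <150,000 inhabitants:", round(mean(small), 1))
```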

Keywords: composite index, comprehensive model, smart cities, strategic framework

Procedia PDF Downloads 176
54 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction

Authors: Yan Zhang

Abstract:

Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning-based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated from field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving to the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related task. It covers a variety of topics, including but not limited to failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices. For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly, whereas the latter usually uses the radio communication inherent in the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction and evaluation, we share the following experiences: (1) which specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build such a data pipeline that digests the data and produces insights, including tools for data ingestion, streaming data processing, machine learning model training, and the coordination/scheduling of different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
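
The labeling and evaluation step described above (training on run-to-failure engine data whose records are inter-dependent) can be sketched roughly as follows. The column names, file name, 30-cycle failure horizon, and choice of classifier are assumptions for illustration, not the study's exact setup; the key point is that cross-validation folds are grouped by engine unit so that records from the same engine never appear in both training and test sets.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

# Assumed schema for a run-to-failure dataset: one row per engine per operating cycle
df = pd.read_csv("engine_run_to_failure.csv")   # columns: unit, cycle, sensor_1..sensor_21

# Label: 1 if the engine fails within the next 30 cycles, else 0
horizon = 30
df["max_cycle"] = df.groupby("unit")["cycle"].transform("max")
df["rul"] = df["max_cycle"] - df["cycle"]                 # remaining useful life in cycles
df["label"] = (df["rul"] <= horizon).astype(int)

# Simple feature engineering: rolling means of each sensor within an engine
sensors = [c for c in df.columns if c.startswith("sensor_")]
for c in sensors:
    df[c + "_roll"] = df.groupby("unit")[c].transform(lambda s: s.rolling(5, min_periods=1).mean())

features = sensors + [c + "_roll" for c in sensors]
X, y, groups = df[features], df["label"], df["unit"]

# Records from one engine are correlated, so split folds by engine, not by row
cv = GroupKFold(n_splits=5)
model = GradientBoostingClassifier()
scores = cross_val_score(model, X, y, groups=groups, cv=cv, scoring="roc_auc")
print("per-fold ROC AUC:", scores.round(3))
```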

Keywords: Internet of Things, machine learning, predictive maintenance, streaming data

Procedia PDF Downloads 386
53 Low-carbon Footprint Diluents in Solvent Extraction for Lithium-ion Battery Recycling

Authors: Abdoulaye Maihatchi Ahamed, Zubin Arora, Benjamin Swobada, Jean-yves Lansot, Alexandre Chagnes

Abstract:

Lithium-ion battery (LiB) is the technology of choice in the development of electric vehicles. But there are still many challenges, including the development of positive electrode materials exhibiting high cyclability, high energy density, and low environmental impact. For the latter, LiBs must be manufactured in a circular approach by developing appropriate strategies to reuse and recycle them. Presently, the recycling of LiBs is carried out by the pyrometallurgical route, but more and more processes implement, or will implement, the hydrometallurgical route or a combination of pyrometallurgical and hydrometallurgical operations. After producing the black mass by mineral processing, the hydrometallurgical process consists of leaching the black mass in order to take up the metals contained in the cathodic material. Then, these metals are extracted selectively by liquid-liquid extraction, solid-liquid extraction, and/or precipitation stages. However, liquid-liquid extraction combined with precipitation/crystallization steps is the most widely implemented operation in LiB recycling processes to selectively extract copper, aluminum, cobalt, nickel, manganese, and lithium from the leaching solution and precipitate these metals as high-grade sulfate or carbonate salts. Liquid-liquid extraction consists of contacting an organic solvent and an aqueous feed solution containing several metals, including the targeted metal(s) to extract. The organic phase is immiscible with the aqueous phase. It is composed of an extractant to extract the target metals and a diluent, which is usually aliphatic kerosene produced by the petroleum industry. Sometimes, a phase modifier is added to the formulation of the extraction solvent to avoid third-phase formation. The extraction properties of the solvent do not depend only on the chemical structure of the extractant; they may also depend on the nature of the diluent. Indeed, diluent-diluent interactions can influence, to a greater or lesser extent, the interactions between extractant molecules, besides the extractant-diluent interactions. Only a few studies in the literature have addressed the influence of the diluent on the extraction properties, while many studies have focused on the effect of the extractants. Recently, new low-carbon footprint aliphatic diluents were produced by catalytic dearomatisation and distillation of bio-based oil. This study aims at investigating the influence of the nature of the diluent on the extraction properties of three extractants towards cobalt, nickel, manganese, copper, aluminum, and lithium: Cyanex®272 for nickel-cobalt separation, DEHPA for manganese extraction, and Acorga M5640 for copper extraction. The diluents used in the formulation of the extraction solvents are (i) low-odor aliphatic kerosenes produced by the petroleum industry (ELIXORE 180, ELIXORE 230, ELIXORE 205, and ISANE IP 175) and (ii) bio-sourced aliphatic diluents (DEV 2138, DEV 2139, DEV 1763, DEV 2160, DEV 2161 and DEV 2063). After discussing the effect of the diluents on the extraction properties, this contribution will address the development of a low-carbon footprint process based on the use of the best bio-sourced diluent for the production of high-grade cobalt sulfate, nickel sulfate, manganese sulfate, and lithium carbonate, as well as copper metal.
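
The figures typically reported when screening diluents and extractants in this kind of liquid-liquid extraction study, the distribution ratio, the extraction efficiency, and the separation factor between two metals, can be computed as in the short sketch below. The concentration values and phase ratio are invented for illustration and do not come from this work.

```python
def distribution_ratio(c_aq_initial, c_aq_eq, vol_aq, vol_org):
    """D = [M]_org / [M]_aq at equilibrium; the organic-phase concentration is
    obtained from a mass balance on the aqueous phase."""
    c_org_eq = (c_aq_initial - c_aq_eq) * vol_aq / vol_org
    return c_org_eq / c_aq_eq

def extraction_efficiency(D, vol_aq, vol_org):
    """Percentage of metal transferred to the organic phase in a single contact."""
    return 100.0 * D / (D + vol_aq / vol_org)

# Invented equilibrium data for a Co/Ni separation test at an aqueous:organic ratio of 1:1
V_aq = V_org = 1.0                                     # L
D_co = distribution_ratio(2.0, 0.10, V_aq, V_org)      # g/L before and after contact
D_ni = distribution_ratio(2.0, 1.85, V_aq, V_org)

print("D(Co) = %.1f, E(Co) = %.1f %%" % (D_co, extraction_efficiency(D_co, V_aq, V_org)))
print("D(Ni) = %.2f, E(Ni) = %.1f %%" % (D_ni, extraction_efficiency(D_ni, V_aq, V_org)))
print("separation factor beta(Co/Ni) = %.0f" % (D_co / D_ni))
```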

Keywords: diluent, hydrometallurgy, lithium-ion battery, recycling

Procedia PDF Downloads 88
52 Ethical Decision-Making in AI and Robotics Research: A Proposed Model

Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet

Abstract:

Researchers in the fields of AI and Robotics frequently encounter ethical dilemmas throughout their research endeavors. Various ethical challenges have been pinpointed in the existing literature, including biases and discriminatory outcomes, diffusion of responsibility, and a deficit in transparency within AI operations. This research aims to pinpoint these ethical quandaries faced by researchers and shed light on the mechanisms behind ethical decision-making in the research process. By synthesizing insights from existing literature and acknowledging prevalent shortcomings, such as overlooking the heterogeneous nature of decision-making, non-accumulative results, and a lack of consensus on numerous factors due to limited empirical research, the objective is to conceptualize and validate a model. This model will incorporate influences from individual perspectives and situational contexts, considering potential moderating factors in the ethical decision-making process. Qualitative analyses were conducted based on direct observation of an AI/Robotics research team focusing on collaborative robotics for several months. Subsequently, semi-structured interviews with 16 team members were conducted. The entire process took place during the first semester of 2023. Observations were analyzed using an analysis grid, and the interviews underwent thematic analysis using Nvivo software. An initial finding involves identifying the ethical challenges that AI/robotics researchers confront, underlining a disparity between practical applications and theoretical considerations regarding ethical dilemmas in the realm of AI. Notably, researchers in AI prioritize the publication and recognition of their work, sparking the genesis of these ethical inquiries. Furthermore, this article illustrated that researchers tend to embrace a consequentialist ethical framework concerning safety (for humans engaging with robots/AI), worker autonomy in relation to robots, and the societal implications of labor (can robots displace jobs?). A second significant contribution entails proposing a model for ethical decision-making within the AI/Robotics research sphere. The model proposed adopts a process-oriented approach, delineating various research stages (topic proposal, hypothesis formulation, experimentation, conclusion, and valorization). Across these stages and the ethical queries, they entail, a comprehensive four-point comprehension of ethical decision-making is presented: recognition of the moral quandary; moral judgment, signifying the decision-maker's aptitude to discern the morally righteous course of action; moral intention, reflecting the ability to prioritize moral values above others; and moral behavior, denoting the application of moral intention to the situation. Variables such as political inclinations ((anti)-capitalism, environmentalism, veganism) seem to wield significant influence. Moreover, age emerges as a noteworthy moderating factor. AI and robotics researchers are continually confronted with ethical dilemmas during their research endeavors, necessitating thoughtful decision-making. The contribution involves introducing a contextually tailored model, derived from meticulous observations and insightful interviews, enabling the identification of factors that shape ethical decision-making at different stages of the research process.

Keywords: ethical decision making, artificial intelligence, robotics, research

Procedia PDF Downloads 79
51 An Evaluation of a Prototype System for Harvesting Energy from Pressurized Pipeline Networks

Authors: Nicholas Aerne, John P. Parmigiani

Abstract:

There is an increasing desire for renewable and sustainable energy sources to replace fossil fuels. This desire is the result of several factors. First is the role of fossil fuels in climate change. Scientific data clearly show that global warming is occurring, and it has been concluded that human activity, specifically the combustion of fossil fuels, is highly likely to be a major cause of this warming. Second, despite the current surplus of petroleum, fossil fuels are a finite resource and will eventually become scarce, and alternatives such as clean or renewable energy will be needed. Third, operations to obtain fossil fuels, such as fracking, off-shore oil drilling, and strip mining, are expensive and harmful to the environment. Given these environmental impacts, there is a need to replace fossil fuels with renewable energy sources as a primary energy source. Various sources of renewable energy exist. Many familiar sources obtain renewable energy from the sun and the natural environments of the earth; common examples include solar, hydropower, geothermal heat, ocean waves and tides, and wind energy. Obtaining significant energy from these sources often requires physically large, sophisticated, and expensive equipment (e.g., wind turbines, dams, solar panels, etc.). Other sources of renewable energy are found in the man-made environment. An example is municipal water distribution systems. The movement of water through the pipelines of these systems typically requires the reduction of hydraulic pressure through the use of pressure reducing valves. These valves are needed to reduce upstream supply-line pressures to levels suitable for downstream users. The energy associated with this reduction of pressure is significant but is currently not harvested and is simply lost. While the integrity of municipal water supplies is of paramount importance, one can certainly envision means by which this lost energy source could be safely accessed. This paper provides a technical description and analysis of one such means, developed by the technology company InPipe Energy, of generating hydroelectricity by harvesting energy from municipal water distribution pressure reducing valve stations. Specifically, InPipe Energy proposes to install hydropower turbines in parallel with existing pressure reducing valves in municipal water distribution systems. InPipe Energy, in partnership with Oregon State University, has evaluated this approach and built a prototype system at the O. H. Hinsdale Wave Research Lab. The Oregon State University evaluation showed that the prototype system rapidly and safely initiates, maintains, and ceases power production as directed. The outgoing water pressure remained constant at the specified set point throughout all testing. The system replicates the functionality of the pressure reducing valve and ensures accurate control of downstream pressure. At a typical water-distribution-system pressure drop of 60 psi the prototype, operating at an efficiency of 64%, produced approximately 5 kW of electricity. Based on the results of this study, this proposed method appears to offer a viable means of producing significant amounts of clean renewable energy from existing pressure reducing valves.
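
For a rough sense of scale of the numbers quoted above: the hydraulic power available across a pressure reducing valve is the flow rate times the pressure drop, and the electrical output is that figure multiplied by the turbine-generator efficiency. The short calculation below backs out the flow rate implied by the reported 5 kW at a 60 psi drop and 64% efficiency; it is an order-of-magnitude check, not data from the study.

```python
PSI_TO_PA = 6894.76          # 1 psi in pascals

delta_p = 60 * PSI_TO_PA     # pressure drop across the valve, Pa
efficiency = 0.64            # turbine + generator efficiency (reported)
p_electric = 5_000.0         # electrical output, W (reported)

# P_electric = efficiency * Q * delta_p  ->  solve for the volumetric flow rate Q
p_hydraulic = p_electric / efficiency
q = p_hydraulic / delta_p                      # m3/s

print("hydraulic power dissipated: %.1f kW" % (p_hydraulic / 1e3))
print("implied flow rate: %.1f L/s (about %.0f US gpm)" % (q * 1e3, q / 3.785e-3 * 60))
```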

Keywords: pressure reducing valve, renewable energy, sustainable energy, water supply

Procedia PDF Downloads 204
50 Lessons Learnt from Industry: Achieving Net Gain Outcomes for Biodiversity

Authors: Julia Baker

Abstract:

Development plays a major role in stopping biodiversity loss. But the ‘silo species’ protection of legislation (where certain species are protected while many are not) means that development can be ‘legally compliant’ and result in biodiversity loss. ‘Net Gain’ (NG) policies can help overcome this by making it an absolute requirement that development causes no overall loss of biodiversity and brings a benefit. However, offsetting biodiversity losses in one location with gains elsewhere is controversial because people suspect ‘offsetting’ to be an easy way for developers to buy their way out of conservation requirements. Yet the good practice principles (GPP) of offsetting provide several advantages over existing legislation for protecting biodiversity from development. This presentation describes the learning from implementing NG approaches based on GPP. It regards major upgrades of the UK’s transport networks, which involved removing vegetation in order to construct and safely operate new infrastructure. While low-lying habitats were retained, trees and other habitats disrupting the running or safety of transport networks could not. Consequently, achieving NG within the transport corridor was not possible and offsetting was required. The first ‘lessons learnt’ were on obtaining a commitment from business leaders to go beyond legislative requirements and deliver NG, and on the institutional change necessary to embed GPP within daily operations. These issues can only be addressed when the challenges that biodiversity poses for business are overcome. These challenges included: biodiversity cannot be measured easily unlike other sustainability factors like carbon and water that have metrics for target-setting and measuring progress; and, the mindset that biodiversity costs money and does not generate cash in return, which is the opposite of carbon or waste for example, where people can see how ‘sustainability’ actions save money. The challenges were overcome by presenting the GPP of NG as a cost-efficient solution to specific, critical risks facing the business that also boost industry recognition, and by using government-issued NG metrics to develop business-specific toolkits charting their NG progress whilst ensuring that NG decision-making was based on rich ecological data. An institutional change was best achieved by supporting, mentoring and training sustainability/environmental managers for these ‘frontline’ staff to embed GPP within the business. The second learning was from implementing the GPP where business partnered with local governments, wildlife groups and land owners to support their priorities for nature conservation, and where these partners had a say in decisions about where and how best to achieve NG. From this inclusive approach, offsetting contributed towards conservation priorities when all collaborated to manage trade-offs between: -Delivering ecologically equivalent offsets or compensating for losses of one type of biodiversity by providing another. -Achieving NG locally to the development whilst contributing towards national conservation priorities through landscape-level planning. -Not just protecting the extent and condition of existing biodiversity but ‘doing more’. -The multi-sector collaborations identified practical, workable solutions to ‘in perpetuity’. But key was strengthening linkages between biodiversity measures implemented for development and conservation work undertaken by local organizations so that developers support NG initiatives that really count.
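
Government-issued net gain metrics of the kind mentioned above generally score a habitat parcel as its area multiplied by factors for distinctiveness and condition, and express net gain as the percentage change between the pre- and post-development totals. The sketch below follows that general pattern with invented factor values and parcels; it is a simplified illustration, not the official metric calculation.

```python
# Simplified biodiversity-unit bookkeeping (illustrative factors, not the official metric)
def units(area_ha: float, distinctiveness: float, condition: float) -> float:
    return area_ha * distinctiveness * condition

baseline = [
    units(2.0, 4, 2),   # species-rich grassland lost to the scheme
    units(0.8, 6, 3),   # woodland lost
]
post_development = [
    units(2.5, 4, 2),   # grassland created within the transport corridor
    units(1.6, 6, 2),   # woodland planted at an offset site (lower starting condition)
]

before, after = sum(baseline), sum(post_development)
net_gain_pct = 100.0 * (after - before) / before
print("baseline units: %.1f, post-development units: %.1f" % (before, after))
print("net change: %+.1f%%" % net_gain_pct)
```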

Keywords: biodiversity offsetting, development, nature conservation planning, net gain

Procedia PDF Downloads 195
49 Identification of Failures Occurring on a System on Chip Exposed to a Neutron Beam for Safety Applications

Authors: S. Thomet, S. De-Paoli, F. Ghaffari, J. M. Daveau, P. Roche, O. Romain

Abstract:

In this paper, we present a hardware module dedicated to understanding the fail reason of a System on Chip (SoC) exposed to a particle beam. The impact of Single-Event Effects (SEE) on processor-based SoCs is a concern that has increased in the past decade, particularly for terrestrial applications with increasing automotive safety requirements, as well as in the consumer and industrial domains. The SEE created by the impact of a particle on an SoC may have consequences that can lead to instability or crashes. Specific hardening techniques for hardware and software have been developed to make such systems more reliable. The SoC is then qualified using cosmic-ray Accelerated Soft-Error Rate (ASER) testing to ensure the Soft-Error Rate (SER) remains within mission profiles. Understanding where errors are occurring is another challenge because of the complexity of the operations performed in an SoC. Common techniques to monitor an SoC running under a beam are based on non-intrusive debug, consisting of recording the program counter and doing some consistency checking on the fly. To detect and understand SEE, we have developed a module embedded within the SoC that provides support for recording probes, hardware watchpoints, and a memory-mapped register bank dedicated to software usage. To identify CPU failure modes and the most important resources to probe, we have carried out a fault injection campaign on the RTL model of the SoC. Probes are placed on generic CPU registers and bus accesses. They highlight the propagation of errors and allow the failure modes to be identified. Typical resulting errors are bit-flips in resources creating bad addresses, illegal instructions, longer-than-expected loops, or incorrect bus accesses. Although our module is processor agnostic, it has been interfaced to a RISC-V by probing some of the processor registers. Probes are then recorded in a ring buffer. Associated hardware watchpoints allow some control, such as starting or stopping event recording or halting the processor. Finally, the module also provides a bank of registers where the firmware running on the SoC can log information; typical usage is operating-system context-switch recording. The module is connected to a dedicated debug bus and is interfaced to a remote controller via a debugger link. Thus, a remote controller can interact with the monitoring module without any intrusiveness on the SoC. Moreover, in case of CPU unresponsiveness or system-bus stall, the recorded information can still be recovered, providing the fail reason. A preliminary version of the module has been integrated into a test chip currently being manufactured at ST in 28-nm FDSOI technology. The module has been triplicated to provide reliable information on the SoC behavior. As the primary application domain is automotive and safety, the efficiency of the module will be evaluated by exposing the test chip to a fast-neutron beam by the end of the year. In the meantime, it will be tested with alpha particles and electromagnetic fault injection (EMFI). We will report in the paper on fault-injection results as well as irradiation results.
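
The fault-injection campaign and probe ring buffer described above are hardware (RTL) mechanisms; purely to illustrate the idea, the sketch below injects single bit-flips into a toy register model, records probe snapshots in a fixed-depth ring buffer, and tallies the resulting failure modes. Everything here (the register set, the classification rules, the campaign size) is invented for the example and is not the authors' design.

```python
import random
from collections import deque

REGS = ["pc", "sp", "a0", "a1"]
RING_DEPTH = 8                      # probe snapshots kept; the oldest is overwritten first

def run(inject_at=None, bit=0, reg="pc", cycles=32):
    """Toy 'CPU': pc advances by 4 each cycle; a probe snapshot is pushed every cycle."""
    state = {"pc": 0x1000, "sp": 0x8000, "a0": 0, "a1": 1}
    ring = deque(maxlen=RING_DEPTH)
    for cyc in range(cycles):
        if cyc == inject_at:
            state[reg] ^= 1 << bit                     # single-event upset: flip one bit
        state["pc"] += 4
        state["a0"] = (state["a0"] + state["a1"]) & 0xFFFFFFFF
        ring.append((cyc, dict(state)))
        if state["pc"] % 4 or state["pc"] > 0x2000:    # crude failure checks on the probes
            return "bad address / illegal fetch", list(ring)
    return "ran to completion", list(ring)

# Small campaign: flip random bits in random registers and tally the failure modes
random.seed(1)
tally = {}
for _ in range(200):
    outcome, trace = run(inject_at=random.randrange(32),
                         bit=random.randrange(16),
                         reg=random.choice(REGS))
    tally[outcome] = tally.get(outcome, 0) + 1
print(tally)
```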

Keywords: fault injection, SoC fail reason, SoC soft error rate, terrestrial application

Procedia PDF Downloads 229
48 Fuzzy Time Series-Markov Chain Method for Corn and Soybean Price Forecasting in North Carolina Markets

Authors: Selin Guney, Andres Riquelme

Abstract:

One of the main purposes of optimal and efficient forecasts of agricultural commodity prices is to guide firms in their economic decision-making, such as planning business operations and marketing decisions. Governments are also beneficiaries and suppliers of agricultural price forecasts. They use this information to establish a proper agricultural policy; hence the forecasts affect social welfare, and systematic errors in forecasts could lead to a misallocation of scarce resources. Various empirical approaches, using different methodologies, have been applied to forecast commodity prices. Most commonly used approaches to forecasting commodity sectors depend on classical time series models that assume the values of the response variables are precise, which is quite often not true in reality. Recently, this literature has mostly evolved towards fuzzy time series models that provide more flexibility with respect to the assumptions of classical time series models, such as stationarity and large sample size requirements. In addition, the fuzzy modeling approach allows decision-making with estimated values under incomplete information or uncertainty. A number of fuzzy time series models have been developed and implemented over the last decades; however, most of them are not appropriate for forecasting repeated and nonconsecutive transitions in the data. The modeling scheme used in this paper eliminates this problem by introducing a Markov modeling approach that takes into account both repeated and nonconsecutive transitions. Also, the determination of the interval length is crucial to forecast accuracy. The problem of determining the interval length arbitrarily is overcome, and a methodology to determine the proper interval length, based on the distribution or mean of the first differences of the series, is proposed to improve forecast accuracy. The specific purpose of this paper is to propose and investigate the potential of a new forecasting model that integrates this methodology for determining the proper interval length with a Fuzzy Time Series-Markov Chain model. Moreover, the forecasting accuracy of the proposed integrated model is compared to that of different univariate time series models, and the superiority of the proposed method over competing methods, in terms of modelling and forecasting on the basis of forecast evaluation criteria, is demonstrated. The application is to daily corn and soybean prices observed at three commercially important North Carolina markets: Candor, Cofield and Roaring River for corn, and Fayetteville, Cofield and Greenville City for soybeans, respectively. One main conclusion from this paper is that using fuzzy logic improves forecast performance and accuracy; the effectiveness and potential benefits of the proposed model are confirmed by small values of evaluation criteria such as MAPE. The paper concludes with a discussion of the implications of integrating fuzzy logic and the non-arbitrary determination of the interval length for the reliability and accuracy of price forecasts. The empirical results represent a significant contribution to our understanding of the applicability of fuzzy modeling in commodity price forecasts.
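
A stripped-down version of the forecasting scheme described above, with the interval length taken from the mean of the absolute first differences, fuzzification into intervals, a Markov transition matrix over the fuzzy states, and one-step forecasts scored with MAPE, might look like the sketch below. The price series is invented and the interval rule is one plausible reading of the "mean of first differences" idea, so treat it as an illustration rather than the authors' exact algorithm.

```python
import numpy as np

prices = np.array([4.10, 4.18, 4.05, 4.22, 4.31, 4.27, 4.40, 4.35, 4.48, 4.52,
                   4.45, 4.60, 4.66, 4.58, 4.71])      # invented daily corn prices, $/bu

# 1. Interval length from the mean of absolute first differences
length = np.mean(np.abs(np.diff(prices)))
lo, hi = prices.min() - length, prices.max() + length
edges = np.arange(lo, hi + length, length)
mids = (edges[:-1] + edges[1:]) / 2
n_states = len(mids)

# 2. Fuzzify: assign each observation to the interval (fuzzy state) it falls in
states = np.clip(np.digitize(prices, edges) - 1, 0, n_states - 1)

# 3. Markov transition matrix over fuzzy states (row-normalised counts)
P = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
row_sums = P.sum(axis=1, keepdims=True)
P = np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

# 4. One-step-ahead forecast: expected interval midpoint given the current state
forecasts = np.array([P[s] @ mids if P[s].sum() > 0 else mids[s] for s in states[:-1]])
actual = prices[1:]

mape = 100 * np.mean(np.abs((actual - forecasts) / actual))
print("interval length: %.3f" % length)
print("MAPE of one-step forecasts: %.2f%%" % mape)
```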

Keywords: commodity, forecast, fuzzy, Markov

Procedia PDF Downloads 217
47 Bio-Hub Ecosystems: Profitability through Circularity for Sustainable Forestry, Energy, Agriculture and Aquaculture

Authors: Kimberly Samaha

Abstract:

The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding biomass as a feedstock for power plants. The lack of an economically viable business model for bioenergy facilities has resulted in plants continuing to be idled and decommissioned. This study analyzed data and submittals to the Born Global Maine Innovation Challenge, a global innovation challenge to identify process innovations that could support a ‘whole-tree’ approach of maximizing the products, byproducts, energy value and process slip-streams in a circular zero-waste design. Participating companies were at various stages of developing bioproducts and included biofuels, lignin-based products, carbon capture platforms and biochar used both as a filtration medium and as a soil amendment product. This case study presents the QCA (Qualitative Comparative Analysis) methodology of the prequalification process and the resulting techno-economic model that was developed for maximizing the profitability of the Bio-Hub Ecosystem through continuous expansion of system waste streams into valuable process inputs for co-hosts. A full site plan was developed for the integration of co-hosts (a biorefinery, land-based shrimp and salmon aquaculture farms, a tomato greenhouse and a hops farm) at an operating forestry-based biomass-to-energy plant in West Enfield, Maine, USA. This model and process for evaluating profitability not only proposes models for the integration of forestry, aquaculture and agriculture in cradle-to-cradle linkages of what have typically been linear systems, but also allows for the early measurement of circularity, of the impact of resource use, and of investment risk mitigation for these systems. In this particular study, profitability is assessed at two levels: CAPEX (Capital Expenditures) and OPEX (Operating Expenditures). Given that these projects start with repurposing facilities where industrial-level infrastructure is already built, permitted and interconnected to the grid, the addition of co-hosts first realizes a dramatic reduction in permitting, development times and costs. In addition, the co-hosts use the biomass energy plant’s waste streams, such as heat, hot water, CO₂ and fly ash, as valuable inputs to their operations, giving a significant decrease in OPEX costs and increasing overall profitability for each co-host’s bottom line. This case study utilizes a proprietary techno-economic model to demonstrate how utilizing the waste streams of a biomass energy plant and/or biorefinery results in a significant reduction in OPEX for both the biomass plant and the agriculture and aquaculture co-hosts. Economically viable Bio-Hubs with favorable environmental and community impacts may prove critical in garnering local and federal government support for pilot programs and more wide-scale adoption, especially for those living in severely economically depressed rural areas where aging industrial sites have been shuttered and local economies devastated.

Keywords: bio-economy, biomass energy, financing, zero-waste

Procedia PDF Downloads 134
46 Drones, Rebels and Bombs: Explaining the Role of Private Security and Expertise in a Post-piratical Indian Ocean

Authors: Jessica Kate Simonds

Abstract:

The last successful hijacking perpetrated by Somali pirates in 2012 represented a critical turning point for the identity and brand of Indian Ocean (IO) insecurity, coined in this paper as the era of the post-piratical. This paper explores the broadening of the PMSC business model to account for, and contribute to, the design of a new IO security environment that prioritises foreign and insurgency drone activity and Houthi rebel operations as the main threats to merchant shipping in the post-2012 era. The study is situated within a longer history of analysing maritime insecurity and also contributes a bespoke conceptual framework that understands the sea as a space that is produced and reproduced relative to existing and emerging threats to merchant shipping, based on bespoke models of information sharing and intelligence acquisition. The paper also makes a prominent empirical contribution by drawing on a post-positivist methodology: data drawn from original semi-structured interviews with senior maritime insurers and active merchant seafarers are triangulated with industry-produced guidance, such as the BMP series, as primary data sources. Each set is analysed through qualitative discourse and content analysis and supported by the quantitative data sets provided by the IMB Piracy Reporting Centre and intelligence networks. This analysis reveals that mechanisms such as the IGP&I Maritime Security Committee and the intelligence divisions of PMSCs have driven the exchange of knowledge between land and sea, and thus the reproduction of the maritime security environment, through new regulations and guidance that account for drones, rebels and bombs as the key challenges in the IO beyond piracy. A contribution of this paper is the argument that experts who may not be in the highest-profile jobs are the architects of maritime insecurity, based on their detailed knowledge of and connections to vessels in transit. The paper shares the original insights of those who have served in critical decision-making spaces to demonstrate that the development and refinement of industry-produced deterrence guidance, credited with the mitigation of piracy, have shaped new editions such as BMP 5 that now serve to frame a new security environment prioritising the mitigation of risks from drones and WBEIDs from both state and insurgency risk groups. By highlighting the experiences and perspectives of key players both on land and at sea, the key finding of this paper is that, as pirates experienced a financial boom by profiteering from their bespoke business model during the peak of successful hijackings, the private security market encountered a similar level of financial success and a guaranteed risk environment in which to prospect for business. Thus, the reproduction of the Indian Ocean as a maritime security environment reflects a new-found purpose for PMSCs as part of the broader conglomerate of maritime insurers, regulators, shipowners and managers who continue to redirect the security consciousness and the IO brand of insecurity.

Keywords: maritime security, private security, risk intelligence, political geography, international relations, political economy, maritime law, security studies

Procedia PDF Downloads 184
45 Covid-19 Pandemic and Impact on Public Spaces of Tourism and Hospitality in Dubai: An Exploratory Study from a Design Perspective

Authors: Manju Bala Jassi

Abstract:

The Covid-19 pandemic has badly mauled Dubai’s GDP, which is heavily dependent on the hospitality, tourism, entertainment, logistics, property and retail sectors. In the context of the World Health Organization (WHO) protocols on social distancing for better maintenance of health and hygiene, the revival of the battered tourism and hospitality sectors holds serious lessons for the design of interiors and public places. The tangible and intangible aesthetic elements of design (ambiance, materials, furnishings, colors, lighting) and the architectural design issues of tourism and hospitality need a rethink to ensure a memorable tourist experience. Designers ought to experiment with sustainable places of tourism and design, develop and build projects that are aesthetic and leave as little negative impact on the environment and the public as possible. In short, they ought to conceive public spaces that make use of few untouched materials and little energy, and that create minimal pollution and waste. Such spaces can employ healthier and more resource-efficient modes of construction, renovation, operation, maintenance, and demolition, thereby mitigating the environmental impacts of construction activities and making them sustainable. These measures encompass the hospitality sector, including hotels and restaurants, which has taken the hardest fall from the pandemic. The paper sought to examine building energy efficiency and the materials and design employed in public places and green buildings to achieve constructive sustainability, and to establish the benefits of utilizing energy efficiency, green materials and sustainable design; to document diverse policy interventions and the design and spatial dimensions of the tourism and hospitality sectors; to examine changes in the hospitality and aviation sectors, especially from a design perspective, regarding infrastructure or operational constraints and additional risk-mitigation measures; and to elaborate on the implications for interior designers and architects designing public places that facilitate sustainable tourism and hospitality while balancing convenient space and their operations’ natural surroundings. A qualitative research approach was adopted for the study. The researcher collected and analyzed data in continuous iteration. Secondary data were collected from articles in journals, trade publications, government reports, newspaper and magazine articles, policy documents, etc. In-depth interviews were conducted with diverse stakeholders. Preliminary data indicate that designers have started reimagining public places of tourism and hospitality against the backdrop of the government push and the WHO guidelines. For instance, with regard to health, safety, hygiene and sanitation, Emirates, the Dubai-based airline, has augmented health measures at Dubai International Airport and on board its aircraft. It has leveraged high-tech and nano-tech solutions, social distancing to minimise human contact, and flexible design layouts to limit occupancy. The researcher organized the data into thematic categories and found that the Government of Dubai has initiated comprehensive measures in the hospitality, tourism and aviation sectors in compliance with the WHO guidelines.

Keywords: Covid 19, design, Dubai, hospitality, public spaces, tourism

Procedia PDF Downloads 166
44 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all the above-mentioned issues and helps organizations improve efficiency and deliver faster without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing repetitive work and manual effort. Implementing scalable CI/CD for development using cloud services like ECS (Elastic Container Service), AWS Fargate, ECR (to store Docker images with all dependencies), Serverless Computing (serverless virtual machines), Cloud Log (for monitoring errors and logs), Security Groups (for inside/outside access to the application), Docker Containerization (Docker-based images and container techniques), Jenkins (CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments and accommodate dynamic workloads, increasing efficiency and enabling faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application’s scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, Serverless Computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions and performance issues and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands and allowing teams to scale resources up or down as needed. It optimizes costs by only paying for resources as they are used, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
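
As a rough illustration of the scalable automation testing idea above (fanning test shards out to Fargate tasks so that hundreds of cases run in parallel), the sketch below launches one ECS task per shard with boto3. The cluster name, task definition, subnet, container name, and the TEST_SHARD environment variable are placeholders you would replace with your own infrastructure; this is a sketch of the pattern, not the authors' pipeline.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Placeholder infrastructure identifiers -- substitute your own
CLUSTER = "qa-cluster"
TASK_DEF = "ui-test-runner:3"            # container image holds the test framework
SUBNETS = ["subnet-0123456789abcdef0"]
NUM_SHARDS = 20                          # e.g. 500+ test cases split into 20 shards

task_arns = []
for shard in range(NUM_SHARDS):
    resp = ecs.run_task(
        cluster=CLUSTER,
        taskDefinition=TASK_DEF,
        launchType="FARGATE",
        count=1,
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": SUBNETS, "assignPublicIp": "ENABLED"}},
        overrides={"containerOverrides": [{
            "name": "test-runner",
            "environment": [{"name": "TEST_SHARD", "value": str(shard)},
                            {"name": "TOTAL_SHARDS", "value": str(NUM_SHARDS)}]}]},
    )
    task_arns.extend(t["taskArn"] for t in resp["tasks"])

# Block until every shard has finished, then inspect exit codes / logs as needed
waiter = ecs.get_waiter("tasks_stopped")
for i in range(0, len(task_arns), 10):   # wait in small batches
    waiter.wait(cluster=CLUSTER, tasks=task_arns[i:i + 10])
print("launched and completed", len(task_arns), "test shards")
```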

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 43
43 Ragging and Sludging Measurement in Membrane Bioreactors

Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd

Abstract:

Membrane bioreactor (MBR) technology is challenged by the tendency for the membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors impact costs more significantly than membrane surface fouling which, unlike clogging, is largely mitigated by the chemical clean. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and clogging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can be formed within 24-36 hours from dispersed < 5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred for both a cotton wool standard and samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin (lint from laundering operations formed zero rags) and on the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat sheet MBR. Sludge samples were provided from two local MBRs, one treating municipal and the other industrial effluent. Bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ). The fouling and sludging propensity of the sludge was determined using the test cell, ‘fouling’ being quantified as the pressure incline rate against flux via the flux step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of the clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but less shear-thinning than the municipal. Fouling, as manifested by the pressure increase Δp/Δt as a function of flux from classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples of both sludge origins, the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contribution of fouling and clogging was appraised by adjusting the clogging propensity via increasing the MLSS both with and without a commensurate increase in the COD. Results indicated that, whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in the sludging propensity (or cake formation). The clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging did not relate to fouling.
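
A minimal sketch of the two quantifications described, assuming a grayscale channel photograph and pressure-time readings from one flux step (the threshold and units are illustrative, not the study's values):

    # Quantify sludging as the clogged-to-unclogged area ratio of a thresholded channel
    # image, and fouling as the pressure incline rate dP/dt at a fixed flux step.
    import numpy as np

    def sludging_ratio(gray_image, threshold=0.5):
        """gray_image: 2D array scaled to [0, 1]; darker pixels assumed to be sludge."""
        clogged = gray_image < threshold
        return clogged.sum() / max((~clogged).sum(), 1)

    def pressure_incline_rate(time_s, pressure_mbar):
        """Least-squares slope of transmembrane pressure vs time for one flux step."""
        slope, _ = np.polyfit(time_s, pressure_mbar, 1)
        return slope  # mbar per second

    # Example with synthetic data only
    img = np.random.rand(480, 640)
    print(sludging_ratio(img))
    t = np.arange(0, 600, 10.0)
    p = 50 + 0.02 * t + np.random.normal(0, 0.5, t.size)
    print(pressure_incline_rate(t, p))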

Keywords: clogging, membrane bioreactors, ragging, sludge

Procedia PDF Downloads 178
42 Systematic Review of Technology-Based Mental Health Solutions for Modelling in Low and Middle Income Countries

Authors: Mukondi Esther Nethavhakone

Abstract:

In 2020, the World Health Organization declared the outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes coronavirus disease 2019 (COVID-19), a pandemic. To curb or contain the spread of the novel coronavirus, global governments implemented social distancing and lockdown regulations. Subsequently, it was no longer business as usual: life as we knew it had changed, and many aspects of people's lives were negatively affected, including financial and employment stability. Mainly, because companies and businesses had to put their operations on hold, some had to shut down completely, resulting in the loss of income for many people globally. Financial and employment insecurities are some of the issues that exacerbated many social problems the world was already faced with, such as school drop-outs, teenage pregnancies, sexual assaults, gender-based violence, crime, child abuse and elderly abuse, to name a few. Expectedly, the mental health of the majority of the population was threatened. This resulted in an increased number of people seeking mental healthcare services. The increasing need for mental healthcare services in low and middle-income countries proves to be a challenge because, due to financial constraints and less well-established healthcare systems, mental healthcare provision is not prioritised as highly as primary healthcare in these countries. It is against this backdrop that the researcher seeks to find viable, cost-effective, and accessible mental health solutions for low and middle-income countries amid the pressures of any pandemic. The researcher will undertake a systematic review of the technology-based mental health solutions that have been implemented or adopted by developed countries during COVID-19 lockdown and social distancing periods. This systematic review study aims to determine whether low and middle-income countries can adopt cost-effective versions of digital mental health solutions so that the healthcare system can adequately provide mental healthcare services during critical times such as pandemics, when there is an overwhelming decline in mental health globally. The researcher will undertake a systematic review study through mixed methods. It will adhere to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The mixed-methods approach uses findings from both qualitative and quantitative studies in one review study. It is beneficial to conduct this kind of study using mixed methods because it is a public health topic that involves social interventions and is not purely based on medical interventions. Therefore, the meta-ethnographic (qualitative data) analysis will be crucial in understanding why and which digital methods work and for whom they work, rather than only the meta-analysis (quantitative data) indicating which digital mental health methods work. The data collection process will be extensive, involving the development of a database, a table summarising the evidence and findings, and a quality assessment process. Lastly, the researcher will ensure that ethical procedures are followed and adhered to, ensuring that sensitive data are protected and the study does not pose any harm to the participants.

Keywords: digital, mental health, covid, low and middle-income countries

Procedia PDF Downloads 95
41 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data

Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira

Abstract:

Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in situ and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellite and Unmanned Aerial Vehicles - UAVs) and in situ data (from field surveys). It applies to various purposes, from determining flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. This service was built on components developed in national and European projects, integrated to provide a one-stop-shop service for remote sensing information, integrating data from the Copernicus satellites and drones/unmanned aerial vehicles, validated by existing online in-situ data. Since WORSICA operates on the European Open Science Cloud (EOSC) computational infrastructures, the service can be accessed via a web browser and is freely available to all European public research groups without additional costs. In addition, the private sector will be able to use the service, but some usage costs may be applied, depending on the type of computational resources needed by each application/user. Although the service has three main sub-services, i) coastline detection, ii) inland water detection, and iii) water leak detection in irrigation networks, the present study shows an application of the service to the Óbidos lagoon in Portugal, where the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas without any additional costs. The service has several distinct methodologies implemented based on the computation of water indexes (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from satellite image processing. In conjunction with the tidal data obtained from the FES model, the system can estimate a coastline at the corresponding water level, or even the topography of the intertidal areas, based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful in several intervention areas, such as i) emergency response, by providing fast access to inundated areas to support rescue operations; ii) support of management decisions on hydraulic infrastructure operation to minimize damage downstream; iii) climate change mitigation, by minimizing water losses and reducing water mains operation costs; and iv) early detection of water leakages in difficult-to-access water irrigation networks, promoting their fast repair.
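
A minimal sketch of the water-index computation underlying such methodologies, assuming co-registered band arrays and an illustrative zero threshold (not WORSICA source code):

    # Compute the NDWI and MNDWI water indexes from satellite band arrays and derive a
    # simple water mask. Band arrays are assumed to be co-registered 2D reflectance rasters.
    import numpy as np

    def ndwi(green, nir):
        """McFeeters NDWI = (Green - NIR) / (Green + NIR)."""
        return (green - nir) / np.clip(green + nir, 1e-6, None)

    def mndwi(green, swir):
        """Modified NDWI = (Green - SWIR) / (Green + SWIR)."""
        return (green - swir) / np.clip(green + swir, 1e-6, None)

    def water_mask(index, threshold=0.0):
        """Pixels with index above the threshold are flagged as water."""
        return index > threshold

    # Synthetic example standing in for, e.g., Sentinel-2 green and NIR bands
    green = np.random.rand(100, 100)
    nir = np.random.rand(100, 100)
    print(water_mask(ndwi(green, nir)).mean())  # fraction of pixels classified as water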

Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC

Procedia PDF Downloads 126
40 Simultech - Innovative Country-Wide Ultrasound Training Center

Authors: Yael Rieder, Yael Gilboa, S. O. Adva, Efrat Halevi, Ronnie Tepper

Abstract:

Background: Operation of ultrasound equipment is a core skill for many clinical specialties. As part of the training program at Simultech, a simulation center for Ob/Gyn at the Meir Medical Center, Israel, teaching how to operate ultrasound equipment requires dealing with misunderstandings of spatial and 3D orientation, failure of the operator to hold a transducer correctly, and limited ability to evaluate the data on the screen. We have developed a platform intended to endow physicians and sonographers with the clinical and operational skills of obstetric ultrasound. Simultech's simulations are focused on medical knowledge, risk management, technology operations and physician-patient communication. The simulations encompass extreme work conditions. Setup: Between eight and ten of the eight hundred and fifty physicians and sonographers of the Clalit health services, drawn from seven hospitals and eight community centers across Israel, participate in individual Ob/Gyn training sessions each week. These include Ob/Gyn specialists, experts, interns, and sonographers. Innovative teaching and training methodologies: The six-hour training program includes: (1) An educational computer program that challenges trainees to deal with medical questions based upon ultrasound pictures and films. (2) Sophisticated hands-on simulators that challenge the trainees to practice correct grip of the transducer, elucidate pathology, and practice daily tasks such as biometric measurements and analysis of sonographic data. (3) Participation in a video-taped simulation which focuses on physician-patient communication. In the simulation, the physician is required to diagnose the clinical condition of a hired actress based on the data she provides and by evaluating the assigned ultrasound films accordingly. Giving ‘bad news’ to the patient may put the physician in a stressful situation that must be properly managed. (4) Feedback at the end of each phase is provided by a designated trainer, not a physician, who is specially qualified by Ob/Gyn senior specialists. (5) A group exercise in which the trainer presents a medico-legal case in order to encourage the participants to use their own experience and knowledge to conduct a productive ‘brainstorming’ session. Medical cases are presented and analyzed by the participants together with the trainer's feedback. Findings: (1) The training methods and content that Simultech provides allow trainees to review their medical and communication skills. (2) Simultech training sessions expose physicians to both basic and new, up-to-date cases, refreshing and expanding the trainees' knowledge. (3) Practicing on advanced simulators enables trainees to understand the sonographic space and to implement the basic principles of ultrasound. (4) Communication simulations were found to be beneficial for trainees who were unaware of their interpersonal skills. The trainer's feedback, supported by the recorded simulation, allows the trainee to draw conclusions about his or her performance. Conclusion: Simultech was found to be of value to physicians at all levels of clinical expertise who deal with ultrasound. A break in daily routine together with attendance at a neutral educational center can vastly improve performance and outlook.

Keywords: medical training, simulations, ultrasound, Simultech

Procedia PDF Downloads 280
39 Pre-conditioning and Hot Water Sanitization of Reverse Osmosis Membrane for Medical Water Production

Authors: Supriyo Das, Elbir Jove, Ajay Singh, Sophie Corbet, Noel Carr, Martin Deetz

Abstract:

Water is a critical commodity in the healthcare and medical field. The utility of medical-grade water spans from washing surgical equipment and drug preparation to serving as the key element of life-saving therapies such as hydrotherapy and hemodialysis. Properly treated medical water reduces the bioburden load and mitigates the risk of infection, ensuring patient safety. However, any compromised condition during the production of medical-grade water can create a favorable environment for microbial growth, putting patient safety at high risk. Therefore, proper upstream treatment of the medical water is essential before its application in the healthcare, pharma and medical space. Reverse Osmosis (RO) is one of the most preferred treatments within the healthcare industries and is recommended by all International Pharmacopeias to achieve the quality level demanded by global regulatory bodies. The RO process can remove up to 99.5% of constituents from feed water sources, eliminating bacteria, proteins and particles of 100 Dalton and above. The combination of RO with other downstream water treatment technologies, such as Electrodeionization and Ultrafiltration, meets the quality requirements of various pharmacopeia monographs for producing highly purified water or water for injection for medical use. In the reverse osmosis process, water from a liquid with a high concentration of dissolved solids is forced to flow through a specially engineered semi-permeable membrane to the low-concentration side, resulting in high-quality water. However, these specially engineered RO membranes need to be sanitized either chemically or at high temperatures at regular intervals to keep the bioburden at the minimum required level. In this paper, we discuss DuPont's FilmTec Heat Sanitizable Reverse Osmosis (HSRO) membrane for the production of medical-grade water. An HSRO element must be pre-conditioned prior to initial use by exposure to hot water (80°C-85°C) for stable performance and to meet the manufacturer's specifications. Without pre-conditioning, the membrane will show variations in feed pressure operation and salt rejection. The paper will discuss the critical variables of the pre-conditioning step that can affect the overall performance of the HSRO membrane and present data supporting the need for pre-conditioning of HSRO elements. Our preliminary data suggest that there can be up to a 35% reduction in flow due to the initial heat treatment, which is also accompanied by an increase in salt rejection. The paper will go into detail about the fundamental understanding of the performance change of the HSRO membrane after the pre-conditioning step and its effect on the quality of the medical water produced. The paper will also discuss another critical point, regular hot water sanitization of these HSRO membranes. Regular hot water sanitization (at 80°C-85°C) is necessary to keep the membrane free of bioburden; however, it can negatively impact the performance of the membrane over time. We will present several data points on hot water sanitization using FilmTec HSRO elements and challenge their robustness to produce quality medical water. The last part of this paper will discuss the construction details of the FilmTec HSRO membrane and the features that make it suitable for pre-conditioning and sanitization at high temperatures.
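
A simple illustrative calculation of the two quantities discussed, observed salt rejection and the flow change after pre-conditioning, using hypothetical numbers rather than DuPont test data:

    # Observed salt rejection from feed and permeate concentration (or conductivity),
    # and the relative permeate flow change after the heat pre-conditioning step.
    def salt_rejection_percent(feed_conc, permeate_conc):
        """Rejection (%) = (1 - Cp/Cf) * 100."""
        return (1.0 - permeate_conc / feed_conc) * 100.0

    def flow_change_percent(flow_before, flow_after):
        """Negative values indicate a flow reduction after heat treatment."""
        return (flow_after - flow_before) / flow_before * 100.0

    # Hypothetical numbers consistent with the ~35% flow reduction noted in the abstract
    print(salt_rejection_percent(feed_conc=2000.0, permeate_conc=10.0))   # 99.5 %
    print(flow_change_percent(flow_before=1.0, flow_after=0.65))          # -35.0 %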

Keywords: heat sanitizable reverse osmosis, HSRO, medical water, hemodialysis water, water for Injection, pre-conditioning, heat sanitization

Procedia PDF Downloads 211
38 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets

Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe

Abstract:

Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters, etc.), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research, leveraging existing high-performance computing resources and analysis techniques currently available or being developed. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution that is based upon a novel micro-service architecture and Big Data technologies. The system serves to demonstrate the applicability of micro-service architectures for the development of high-performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e., importing a GWAS dataset) and for the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
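
A conceptual sketch of the document-oriented storage idea described, assuming MongoDB via pymongo and hypothetical subject and SNP identifiers (SPARK's actual schema and database choice may differ):

    # Store one document per subject with a nested map of SNP genotypes; this keeps a
    # subject's calls together and avoids the wide normalized tables that make bulk
    # GWAS imports slow in relational schemas.
    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
    genotypes = client["ark_demo"]["genotypes"]

    genotypes.create_index([("subject_id", ASCENDING)])

    # Bulk import: one document per subject
    genotypes.insert_many([
        {"subject_id": "S001", "chip": "demo-chip", "snps": {"rs123": "AA", "rs456": "AG"}},
        {"subject_id": "S002", "chip": "demo-chip", "snps": {"rs123": "AG", "rs456": "GG"}},
    ])

    # Typical genomics-style query: genotype at a given SNP for one subject
    doc = genotypes.find_one({"subject_id": "S001"}, {"snps.rs123": 1, "_id": 0})
    print(doc)  # {'snps': {'rs123': 'AA'}}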

Keywords: biomedical research, genomics, information systems, software

Procedia PDF Downloads 270
37 Rabies Free Pakistan - Eliminating Rabies Through One Health Approach

Authors: Anzal Abbas Jaffari, Wajiha Javed, Naseem Salahuddin

Abstract:

Rationale: Rabies, a vaccine-preventable disease, continues to be a critical public health issue, killing around 2,000-5,000 people annually in Pakistan. Along with the spread of the disease among animals, the dog population remains a victim of brutal culling practices by the local authorities, which adversely affect the ecosystem (poison seeping into the soil, affecting vegetation and contaminating water) and the spread of the disease. The dog population has been rising exponentially, primarily because of the lack of a consolidated nationwide Animal Birth Control (ABC) program and of awareness among local communities in general and children in particular. This is reflected in Pakistan's low SARE score of 1.5, which places the country behind other developing countries such as Bangladesh (2.5) and the Philippines (3.5). According to an estimate, the province of Sindh alone is home to almost 2.5 million dogs. The clustering of dogs in peri-urban areas and inner-city localities leads to an increase in reported dog bite cases in these areas specifically. Objective: Rabies Free Pakistan (RFP), a joint venture of Getz Pharma Private Limited and Indus Hospital & Health Network (IHHN), was established in 2018 to eliminate rabies from Pakistan by 2030 using the One Health approach. Methodology: The RFP team is actively working on the advocacy and policy front with both the federal and provincial governments to ensure that all stakeholders currently involved in dog culling in Pakistan make a paradigm shift towards humane methods of vaccination and ABC. With the federal government, RFP aims to have rabies declared a notifiable disease, while it works closely with the provincial government of Sindh to initiate a province-wide rabies control program. RFP follows international standards and WHO-approved protocols for this program in Pakistan. The RFP team has achieved various milestones in the fight against rabies after successfully scaling up project operations and has vaccinated more than 30,000 dogs and neutered around 7,000 dogs since 2018. Recommendations: Effective implementation of a rabies program (mass dog vaccination and ABC) requires a concentrated effort to address a variety of structural and policy challenges. This essentially demands a massive shift in the attitude of individuals towards rabies. The most significant challenges in implementing a standard policy at the structural level are the lack of institutional capacity, the shortage of vaccine, and the absence of inter-departmental coordination among major stakeholders: the federal government, the provincial ministries of health and livestock, and local bodies (including local councils). The lack of capacity of healthcare workers to treat dog bite cases emerges as a critical challenge at the clinical level. Conclusion: Pakistan can learn from the successful international models of Sri Lanka and Mexico, which adopted the One Health approach to eliminate rabies, as RFP does. The WHO-advised One Health approach provides policymakers with an interactive and cross-sectoral guide involving all the essential elements of the ecosystem (including animals, humans, and other components).

Keywords: animal birth control, dog population, mass dog vaccination, one health, rabies elimination

Procedia PDF Downloads 180
36 Nanoscale Photo-Orientation of Azo-Dyes in Glassy Environments Using Polarized Optical Near-Field

Authors: S. S. Kharintsev, E. A. Chernykh, S. K. Saikin, A. I. Fishman, S. G. Kazarian

Abstract:

Recent advances in improving information storage performance are inseparably linked with the circumvention of fundamental constraints such as the superparamagnetic limit in heat-assisted magnetic recording, charge loss tolerance in solid-state memory and the Abbe diffraction limit in optical storage. A substantial breakthrough in the development of nonvolatile storage devices with dimensional scaling has been achieved due to phase-change chalcogenide memory, which nowadays meets market needs to the greatest advantage. Further progress is aimed at the development of versatile nonvolatile high-speed memory combining the potential of random access memory and archival storage. The well-established properties of light at the nanoscale empower us to use them for recording optical information with ultrahigh density, scaled down to a single molecule, which is the size of a pit. Indeed, diffraction-limited optics is able to record as much information as ~1 Gb/in². Nonlinear optical effects, for example two-photon fluorescence recording, allow one to decrease the extent of the pit even further, which results in recording densities up to ~100 Gb/in². Going beyond the diffraction limit, due to the sub-wavelength confinement of light, pushes the pit size down to a single chromophore, which is, on average, ~1 nm in length. Thus, the memory capacity can be increased up to the theoretical limit of 1 Pb/in². Moreover, the field confinement provides faster recording and readout operations due to the enhanced light-matter interaction. This, in turn, leads to the miniaturization of optical devices and a decrease in energy supply down to ~1 μW/cm². Intrinsic features of light such as multiple modes, mixed polarization and angular momentum, in addition to the underlying optical and holographic tools for writing and reading, enrich the storage and encryption of optical information. In particular, the finite extent of the near-field penetration, falling into the range of 50-100 nm, gives the possibility to perform 3D volume (layer-to-layer) recording and readout of optical information. In this study, we present comprehensive evidence of the isotropic-to-homeotropic phase transition of an azobenzene-functionalized polymer thin film exposed to light and a dc electric field, using near-field optical microscopy and scanning capacitance microscopy. We unravel the near-field Raman dichroism of sub-10 nm thick epoxy-based side-chain azo-polymer films with polarization-controlled tip-enhanced Raman scattering. In our study, the orientation of the azo-chromophores is controlled with a voltage-biased gold tip rather than with light polarization. Isotropic in-plane and homeotropic out-of-plane arrangements of azo-chromophores in a glassy environment can be distinguished with transverse and longitudinal optical near-fields. We demonstrate that both phases are unambiguously visualized by 2D mapping of their local dielectric properties with scanning capacitance microscopy. The stability of the polar homeotropic phase is strongly sensitive to the thickness of the thin film. We analyze the α-transition of the azo-polymer by detecting a temperature-dependent phase jump of an AFM cantilever as it passes through the glass transition temperature. Overall, we anticipate further improvements in optical storage performance, approaching the single-molecule level.

Keywords: optical memory, azo-dye, near-field, tip-enhanced Raman scattering

Procedia PDF Downloads 177
35 Affordable and Environmental Friendly Small Commuter Aircraft Improving European Mobility

Authors: Diego Giuseppe Romano, Gianvito Apuleo, Jiri Duda

Abstract:

Mobility is one of the most important societal needs for amusement, business activities and health. Thus, transport needs are continuously increasing, with the consequent increase in traffic congestion and pollution. Aeronautic efforts aim at smarter use of infrastructure and at introducing greener concepts. A possible solution to address the abovementioned topics is the development of a Small Air Transport (SAT) system, able to guarantee operability from today's underused airfields in an affordable and green way, while also helping to reduce travel time. In the framework of Horizon 2020, the EU (European Union) has funded the Clean Sky 2 SAT TA (Transverse Activity) initiative to address market innovations able to reduce SAT operational cost and environmental impact while ensuring good levels of operational safety. Nowadays, most of the key technologies to improve passenger comfort and to reduce community noise, DOC (Direct Operating Costs) and pilot workload for SAT have reached an intermediate level of maturity, TRL (Technology Readiness Level) 3/4. Thus, the key technologies must be developed, validated and integrated on dedicated ground and flying aircraft demonstrators to reach higher TRL levels (5/6). In particular, SAT TA focuses on the integration at aircraft level of the following technologies [1]: 1) low-cost composite wing box and engine nacelle using OoA (Out of Autoclave) technology, LRI (Liquid Resin Infusion) and advanced automation processes; 2) innovative high-lift devices, allowing aircraft operations from short airfields (< 800 m); 3) affordable small aircraft manufacturing of metallic fuselage using FSW (Friction Stir Welding) and LMD (Laser Metal Deposition); 4) affordable fly-by-wire architecture for small aircraft (CS23 certification rules); 5) more electric systems replacing pneumatic and hydraulic systems (high-voltage EPGDS (Electrical Power Generation and Distribution System), hybrid de-ice system, landing gear and brakes); 6) advanced avionics for small aircraft, reducing pilot workload; 7) advanced cabin comfort with new interior materials and more comfortable seats; 8) a new generation of turboprop engine with reduced fuel consumption, emissions, noise and maintenance costs for 19-seat aircraft; 9) an alternative diesel engine for 9-seat commuter aircraft. To address the abovementioned market innovations, two different platforms have been designed: the Reference and the Green aircraft. The Reference aircraft is a virtual aircraft designed considering 2014 technologies with an existing engine assuring the requested take-off power; the Green aircraft is designed integrating the technologies addressed in Clean Sky 2. Preliminary integration of the proposed technologies shows an encouraging reduction in the emissions and operational costs of small aircraft: about 20% CO2 reduction, about 24% NOx reduction, about 10 dB(A) noise reduction at the measurement point and about 25% DOC reduction. A detailed description of the performed studies, analyses and validations for each technology, as well as the expected benefits at aircraft level, is reported in the present paper.

Keywords: affordable, European, green, mobility, technologies development, travel time reduction

Procedia PDF Downloads 99
34 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future, or otherwise unknown, events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform (TM) based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training records are plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors associated, for example, with sociodemographic characteristics, habits, and activities. Some are intentionally included to obtain predictive analytics insights on variable selection, such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset of 4,090 observations. Results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods utilizing a part of the observations, such as bootstrap resampling with an appropriate sample size.
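
A minimal sketch of the co-occurrence and surprise-ranking idea, using a toy set of observations and hypothetical item names rather than the platform's data or implementation:

    # Count item pairs across observations and rank them by "surprise" = observed /
    # expected co-occurrence, where expected assumes the two items are independent.
    from collections import Counter
    from itertools import combinations

    observations = [
        {"smoker", "low_activity", "high_bmi"},
        {"smoker", "high_bmi"},
        {"vaccinated", "low_activity"},
        {"smoker", "low_activity", "high_bmi"},
    ]

    n = len(observations)
    item_counts = Counter(item for obs in observations for item in obs)
    pair_counts = Counter(frozenset(p) for obs in observations
                          for p in combinations(sorted(obs), 2))

    def surprise(pair):
        a, b = tuple(pair)
        observed = pair_counts[pair] / n
        expected = (item_counts[a] / n) * (item_counts[b] / n)
        return observed / expected

    for pair in sorted(pair_counts, key=surprise, reverse=True):
        print(sorted(pair), round(surprise(pair), 2))

Because the graph stores only items and their pairwise connections, its size grows with the vocabulary of items rather than with the number of transactions, which is why very large numbers of observations can be represented compactly.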

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 276
33 Exploring Antimicrobial Resistance in the Lung Microbial Community Using Unsupervised Machine Learning

Authors: Camilo Cerda Sarabia, Fernanda Bravo Cornejo, Diego Santibanez Oyarce, Hugo Osses Prado, Esteban Gómez Terán, Belén Diaz Diaz, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

Antimicrobial resistance (AMR) represents a significant and rapidly escalating global health threat. Projections estimate that by 2050, AMR infections could claim up to 10 million lives annually. Respiratory infections, in particular, pose a severe risk not only to individual patients but also to the broader public health system. Despite the alarming rise in resistant respiratory infections, AMR within the lung microbiome (microbial community) remains underexplored and poorly characterized. The lungs, as a complex and dynamic microbial environment, host diverse communities of microorganisms whose interactions and resistance mechanisms are not fully understood. Unlike studies that focus on individual genomes, analyzing the entire microbiome provides a comprehensive perspective on microbial interactions, resistance gene transfer, and community dynamics, which are crucial for understanding AMR. However, this holistic approach introduces significant computational challenges and exposes the limitations of traditional analytical methods, such as the difficulty of identifying AMR. Machine learning has emerged as a powerful tool to overcome these challenges, offering the ability to analyze complex genomic data and uncover novel insights into AMR that might be overlooked by conventional approaches. This study investigates microbial resistance within the lung microbiome using unsupervised machine learning approaches to uncover resistance patterns and potential clinical associations. We downloaded and selected lung microbiome data from HumanMetagenomeDB based on metadata characteristics such as relevant clinical information, patient demographics, environmental factors, and sample collection methods. The metadata were further complemented by details on antibiotic usage, disease status, and other relevant descriptions. The sequencing data underwent stringent quality control, followed by functional profiling focused on identifying resistance genes through specialized databases such as the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. Subsequent analyses employed unsupervised machine learning techniques to unravel the structure and diversity of resistomes in the microbial community. Among the methods employed, clustering techniques such as K-Means and hierarchical clustering enabled the identification of sample groups based on their resistance gene profiles. The work was implemented in Python, leveraging a range of libraries such as Biopython for biological sequence manipulation, NumPy for numerical operations, scikit-learn for machine learning, Matplotlib for data visualization and Pandas for data manipulation. The findings from this study provide insights into the distribution and dynamics of antimicrobial resistance within the lung microbiome. By leveraging unsupervised machine learning, we identified novel resistance patterns and potential drivers within the microbial community.
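
A hedged sketch of the clustering step described, using a synthetic gene-abundance matrix in place of the CARD-derived profiles and illustrative parameter choices (cluster count, scaling):

    # Group samples by their resistance gene profiles with K-Means and hierarchical
    # (agglomerative) clustering from scikit-learn.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans, AgglomerativeClustering

    rng = np.random.default_rng(0)
    # rows = lung microbiome samples, columns = AMR gene families (relative abundances)
    X = rng.random((60, 25))
    X_scaled = StandardScaler().fit_transform(X)

    kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
    hier_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X_scaled)

    print(np.bincount(kmeans_labels))  # sample counts per K-Means cluster
    print(np.bincount(hier_labels))    # sample counts per hierarchical cluster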

Keywords: antibiotic resistance, microbial community, unsupervised machine learning, AMR gene sequences

Procedia PDF Downloads 23
32 Integrated Services Hub for Exploration and Production Industry: An Indian Narrative

Authors: Sunil Arora, Anitya Kumar Jena, S. A. Ravi

Abstract:

India is at the cusp of major reforms in the hydrocarbon sector. The oil and gas sector has been highly liberalised to attract private investment and to increase domestic production. Major hydrocarbon Exploration & Production (E&P) activity here has been undertaken by government-owned companies, but with the easing and reworking of hydrocarbon exploration licensing policies, private players have also joined the fray towards achieving energy security for India. The Government of India has come up with policy and administrative reforms, including the Hydrocarbon Exploration and Licensing Policy (HELP), Sagarmala (port-led development with coastal connectivity), and the Development of Small Discovered Fields, with the intention of creating industry-friendly conditions for investment, easing the doing of business and reducing gestation periods. To harness the potential resources of deepwater and ultra-deepwater regions, High Pressure-High Temperature (HP-HT) regions, Coal Bed Methane (CBM) and shale hydrocarbons, besides gas hydrates, participation shall be required from both domestic and international players. Companies engaged in E&P activities in India have traditionally been managing through their captive supply base, but with crude prices under the hammer, the need is being felt to outsource non-core activities. This necessitates the establishment of robust support services to cater to the E&P industry, which are currently non-existent to meet the burgeoning challenges. This paper outlines an agenda for creating an Integrated Services Hub (ISH) under a Special Economic Zone (SEZ) to facilitate the complete gamut of non-core support activities of the E&P industry. This responsive and proficient multi-usage facility becomes viable through better resource utilization and economies of scale, allowing it to offer cost-effective services. The concept envisages companies bringing in their core technical expertise while outsourcing hardware peripherals entirely to the ISH. The Integrated Services Hub, complying with best-in-class global standards, shall typically provide the following services under a single-window solution, but not limited to: a) Logistics, including supply base operations, transport of manpower and material, helicopters, offshore supply vessels, warehousing, inventory management, sourcing and procurement activities, international freight forwarding, domestic trucking, customs clearance services, etc. b) A trained and experienced pool of competent manpower (technical, security, etc.) available for engagement by companies on either a short- or long-term basis depending upon requirements, with provisions for meeting any training requirements. c) Specialized services, through tie-ups with the best global companies, for crisis management, mud/cement, fishing and floating dry-dock, besides the provision of workshop, repair and testing facilities, etc. d) Tools and tackles, including drill strings, etc. A pre-established Integrated Services Hub shall facilitate an early start-up of activities with substantial savings in timelines. This model can be replicated in other parts of the world to expedite E&P activities.

Keywords: integrated service hub, India, oil gas, offshore supply base

Procedia PDF Downloads 150
31 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by machine learning, precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of machine learning algorithms are based on heuristics, their outputs have contextual validity. This is not very restrictive in the sense that medicine itself is not an exact science. Meanwhile, the progress made in molecular biology, bioinformatics, computational biology, and precise medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of artificial intelligence in precise medicine. In fact, current machine learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known 'a needle in a haystack' approach usually used when machine learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of the input data down to the metabolic and physiological mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates the input data into bio-semantic units with the help of contextual information, iteratively, until bio-logical operations can be performed on the basis of the 'common denominator' rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical 'proofs'. The major impact of this architecture is expressed by the high accuracy of the diagnosis. Rendered as a multiple-condition diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 70
30 Enhancing Strategic Counter-Terrorism: Understanding How Familial Leadership Influences the Resilience of Terrorist and Insurgent Organizations in Asia

Authors: Andrew D. Henshaw

Abstract:

The research examines the influence of familial and kinship-based leadership on the resilience of politically violent organizations. Organizations of this type frequently fight in the same conflicts, though they are called 'terrorist' or 'insurgent' depending on the political foci of the time, and thus different approaches are used to combat them. The research considers them correlated phenomena with significant overlap and identifies strengths and vulnerabilities in resilience processes. The research employs paired case studies to examine resilience in organizations under significant external pressure and achieves this by measuring three variables: 1) organizational robustness in terms of leadership and governance; 2) bounce-back response efficiency to external pressures and adaptation to endogenous and exogenous shock; 3) perpetuity of operational and attack capability, and political legitimacy. The research makes three hypotheses. First, familial/kinship leadership groups have a significant effect on organizational resilience in terms of informal operations. Second, non-familial/kinship organizations suffer in terms of heightened security transaction costs and the social economics surrounding recruitment, retention, and replacement. Third, resilience in non-familial organizations likely stems from critical external supports such as state sponsorship or powerful patrons, rather than from organic resilience dynamics. The case studies pair familial organizations with non-familial organizations. Set 1: the Haqqani Network (HQN), paired with Lashkar-e-Toiba (LeT). Set 2: Jemaah Islamiyah (JI), paired with the Abu Sayyaf Group (ASG). Case studies were selected based on three requirements: contrasting governance types, exposure to significant external pressures, and geographical similarity. The case study sets were examined over 24 months following periods of significantly heightened operational activity. This enabled empirical measurement of the variables as substantial external pressures came into force. The rationale for the research is obvious. Nearly all organizations have some nexus of familial interconnectedness. Examining familial leadership networks does not provide further understanding of how terrorism and insurgency originate; however, the central focus of the research does address how they persist. The sparse attention to this in the existing literature presents an unexplored yet important area of security studies. Furthermore, social capital in familial systems is largely automatic and organic, given at birth or through kinship. It reduces security vetting costs for recruits, fighters and supporters, which lowers liabilities and entry costs while raising organizational efficiency and exit costs. Better understanding of these processes is needed to turn strengths into weaknesses. Outcomes and implications of the research have critical relevance to future operational policy development. Increased clarity on internal trust dynamics, social capital and power flows is essential to fracturing and manipulating kinship nexuses. This is highly valuable to external pressure mechanisms such as counter-terrorism, counterinsurgency, and strategic intelligence methods that penetrate, manipulate, degrade or destroy the resilience of politically violent organizations.

Keywords: Counterinsurgency (COIN), counter-terrorism, familial influence, insurgency, intelligence, kinship, resilience, terrorism

Procedia PDF Downloads 313