Search results for: relative flow depth
73 International Coffee Trade in Solidarity with the Zapatista Rebellion: Anthropological Perspectives on Commercial Ethics within Political Antagonistic Movements
Authors: Miria Gambardella
Abstract:
The influence of solidarity demonstrations towards the Zapatista National Liberation Army has been constantly present over the years, both locally and internationally, guaranteeing visibility to the cause, shaping the movement’s choices, and influencing its hopes of impact worldwide. Most of the coffee produced by the autonomous cooperatives from Chiapas is exported, therefore making coffee trade the main income from international solidarity networks. The question arises about the implications of the relations established between the communities in resistance in Southeastern Mexico and international solidarity movements, specifically on the strategies adopted to conciliate army's demands for autonomy and economic asymmetries between Zapatista cooperatives producing coffee and European collectives who hold purchasing power. In order to deepen the inquiry on those topics, a year-long multi-site investigation was carried out. The first six months of fieldwork were based in Barcelona, where Zapatista coffee was first traded in Spain and where one of the historical and most important European solidarity groups can be found. The last six months of fieldwork were carried out directly in Chiapas, in contact with coffee producers, Zapatista political authorities, international activists as well as vendors, and the rest of the network implicated in coffee production, roasting, and sale. The investigation was based on qualitative research methods, including participatory observation, focus groups, and semi-structured interviews. The analysis did not only focus on retracing the steps of the market chain as if it could be considered a linear and unilateral process, but it rather aimed at exploring actors’ reciprocal perceptions, roles, and dynamics of power. Demonstrations of solidarity and the money circulation they imply aim at changing the system in place and building alternatives, among other things, on the economic level. This work analyzes the formulation of discourse and the organization of solidarity activities that aim at building opportunities for action within a highly politicized economic sphere to which access must be regularly legitimized. The meaning conveyed by coffee is constructed on a symbolic level by the attribution of moral criteria to transactions. The latter participate in the construction of imaginaries that circulate through solidarity movements with the Zapatista rebellion. Commercial exchanges linked to solidarity networks turned out to represent much more than monetary transactions. The social, cultural, and political spheres are invested by ethics, which penetrates all aspects of militant action. It is at this level that the boundaries of different collective actors connect, contaminating each other: merely following the money flow would have been limiting in order to account for a reality within which imaginary is one of the main currencies. The notions of “trust”, “dignity” and “reciprocity” are repeatedly mobilized to negotiate discontinuous and multidirectional flows in the attempt to balance and justify commercial relations in a politicized context that characterizes its own identity through demonizing “market economy” and its dehumanizing powers.Keywords: coffee trade, economic anthropology, international cooperation, Zapatista National Liberation Army
Procedia PDF Downloads 86
72 Knowledge of the Doctors Regarding International Patient Safety Goal
Authors: Fatima Saeed, Abdullah Mudassar
Abstract:
Introduction: Patient safety remains a global priority in the ever-evolving healthcare landscape. At the forefront of this endeavor are the International Patient Safety Goals (IPSGs), a standardized framework designed to mitigate risks and elevate the quality of care. Doctors, positioned as primary caregivers, wield a pivotal role in upholding and adhering to IPSGs, underscoring the critical significance of their knowledge and understanding of these goals. This research embarks on a comprehensive exploration into the depth of Doctors ' comprehension of IPSGs, aiming to unearth potential gaps and provide insights for targeted educational interventions. Established by influential healthcare bodies, including the World Health Organization (WHO), IPSGs represent a universally applicable set of objectives spanning crucial domains such as medication safety, infection control, surgical site safety, and patient identification. Adherence to these goals has exhibited substantial reductions in adverse events, fostering an overall enhancement in the quality of care. This study operates on the fundamental premise that an informed Doctors workforce is indispensable for effectively implementing IPSGs. A nuanced understanding of these goals empowers Doctors to identify potential risks, advocate for necessary changes, and actively contribute to a safety-centric culture within healthcare institutions. Despite the acknowledged importance of IPSGs, there is a growing concern that nurses may need more knowledge to integrate these goals into their practice seamlessly. Methodology: A Comprehensive research methodology covering study design, setting, duration, sample size determination, sampling technique, and data analysis. It introduces the philosophical framework guiding the research and details material, methods, and the analysis framework. The descriptive quantitative cross-sectional study in teaching care hospitals utilized convenient sampling over six months. Data collection involved written informed consent and questionnaires, analyzed with SPSS version 23, presenting results graphically and descriptively. The chapter ensures a clear understanding of the study's design, execution, and analytical processes. Result: The survey results reveal a substantial distribution across hospitals, with 34.52% in MTIKTH and 65.48% in HMC MTI. There is a notable prevalence of patient safety incidents, emphasizing the significance of adherence to IPSGs. Positive trends are observed, including 77.0% affirming the "time-out" procedure, 81.6% acknowledging effective healthcare provider communication, and high recognition (82.7%) of the purpose of IPSGs to improve patient safety. While the survey reflects a good understanding of IPSGs, areas for improvement are identified, suggesting opportunities for targeted interventions. Discussion: The study underscores the need for tailored care approaches and highlights the bio-socio-cultural context of 'contagion,' suggesting areas for further research amid antimicrobial resistance. Shifting the focus to patient safety practices, the survey chapter provides a detailed overview of results, emphasizing workplace distribution, patient safety incidents, and positive reflections on IPSGs. The findings indicate a positive trend in patient safety practices with areas for improvement, emphasizing the ongoing need for reinforcing safety protocols and cultivating a safety-centric culture in healthcare. 
Conclusion: In summary, the survey indicates a positive trend in patient safety practices with a good understanding of IPSGs among participants. However, identifying areas for potential improvement suggests opportunities for targeted interventions to enhance patient safety further. Ongoing efforts to reinforce adherence to safety protocols, address identified gaps, and foster a safety culture will contribute to continuous improvements in patient care and outcomes.
Keywords: infection control, international patient safety, patient safety practices, proper medication
Procedia PDF Downloads 53
71 Implementation of Smart Card Automatic Fare Collection Technology in Small Transit Agencies for Standards Development
Authors: Walter E. Allen, Robert D. Murray
Abstract:
Many large transit agencies have adopted RFID technology and electronic automatic fare collection (AFC) or smart card systems, but small and rural agencies remain tied to obsolete manual, cash-based fare collection. Small countries or transit agencies can benefit from the implementation of smart card AFC technology with the promise of increased passenger convenience, added passenger satisfaction and improved agency efficiency. For transit agencies, it reduces revenue loss, improves passenger flow and bus stop data. For countries, further implementation into security, distribution of social services or currency transactions can provide greater benefits. However, small countries or transit agencies cannot afford expensive proprietary smart card solutions typically offered by the major system suppliers. Deployment of Contactless Fare Media System (CFMS) Standard eliminates the proprietary solution, ultimately lowering the cost of implementation. Acumen Building Enterprise, Inc. chose the Yuma County Intergovernmental Public Transportation Authority (YCIPTA) existing proprietary YCAT smart card system to implement CFMS. The revised system enables the purchase of fare product online with prepaid debit or credit cards using the Payment Gateway Processor. Open and interoperable smart card standards for transit have been developed. During the 90-day Pilot Operation conducted, the transit agency gathered the data from the bus AcuFare 200 Card Reader, loads (copies) the data to a USB Thumb Drive and uploads the data to the Acumen Host Processing Center for consolidation of the data into the transit agency master data file. The transition from the existing proprietary smart card data format to the new CFMS smart card data format was transparent to the transit agency cardholders. It was proven that open standards and interoperability design can work and reduce both implementation and operational costs for small transit agencies or countries looking to expand smart card technology. Acumen was able to avoid the implementation of the Payment Card Industry (PCI) Data Security Standards (DSS) which is expensive to develop and costly to operate on a continuing basis. Due to the substantial additional complexities of implementation and the variety of options presented to the transit agency cardholder, Acumen chose to implement only the Directed Autoload. To improve the implementation efficiency and the results for a similar undertaking, it should be considered that some passengers lack credit cards and are averse to technology. There are more than 1,300 small and rural agencies in the United States. This grows by 10 fold when considering small countries or rural locations throughout Latin American and the world. Acumen is evaluating additional countries, sites or transit agency that can benefit from the smart card systems. Frequently, payment card systems require extensive security procedures for implementation. The Project demonstrated the ability to purchase fare value, rides and passes with credit cards on the internet at a reasonable cost without highly complex security requirements.Keywords: automatic fare collection, near field communication, small transit agencies, smart cards
Procedia PDF Downloads 282
70 Temporal and Spacial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods
Authors: Dario Milani, Guido Morgenthal
Abstract:
Fluid dynamic computation of wind-induced forces on bluff bodies, e.g., light, flexible civil structures or ground-approaching airplane wings at high incidence, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, compact discretization as the vorticity is strongly localized, implicit accounting for the free-space boundary conditions typical for this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the accuracy achievable. In the classical VPM method, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails, or fairings. For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction of the particle-particle interaction in some regions of interest. In this paper, different strategies are presented in order to extend the conventional VPM method to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal substepping, to increase the accuracy of the particle convection in certain regions, as well as dynamic re-discretization of the particle map to control the global and the local number of particles. Finally, these methods are applied to a test case, and the improvements in the efficiency as well as the accuracy of the proposed extensions to the method are presented. The benefits, in terms of accuracy and computational cost, of combining these methods are thus presented along with their relevant applications.
Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method
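A minimal sketch of the direct O(Np²) velocity evaluation and the kind of per-particle temporal substepping described above; this is an illustration under assumed parameters and a hypothetical refinement region, not the authors' solver.

```python
# Minimal 2-D vortex particle sketch: direct O(N^2) Biot-Savart velocity
# evaluation plus per-particle temporal substepping inside a region of
# interest. Illustrative only; parameters and the refinement box are
# assumptions, not the authors' solver.
import numpy as np

def biot_savart_velocity(pos, gamma, delta=0.05):
    """Velocity induced at every particle by all others (regularized kernel)."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]      # pairwise x-distances
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]      # pairwise y-distances
    r2 = dx**2 + dy**2 + delta**2                     # regularization avoids the singularity
    u = -np.sum(gamma[None, :] * dy / (2 * np.pi * r2), axis=1)
    v = np.sum(gamma[None, :] * dx / (2 * np.pi * r2), axis=1)
    return np.column_stack([u, v])

def convect(pos, gamma, dt, refine_box, n_sub=4):
    """Advance particles one global step; particles inside refine_box take
    n_sub smaller sub-steps to better resolve locally fast dynamics."""
    xmin, xmax, ymin, ymax = refine_box
    inside = ((pos[:, 0] > xmin) & (pos[:, 0] < xmax) &
              (pos[:, 1] > ymin) & (pos[:, 1] < ymax))
    for _ in range(n_sub):
        vel = biot_savart_velocity(pos, gamma)
        step = np.where(inside[:, None], dt / n_sub, 0.0)     # only sub-stepped particles move here
        pos = pos + vel * step
    vel = biot_savart_velocity(pos, gamma)
    pos = pos + vel * np.where(inside[:, None], 0.0, dt)      # remaining particles take one full step
    return pos

# toy example: a small cloud of particles with random circulation
rng = np.random.default_rng(0)
positions = rng.uniform(-1, 1, size=(200, 2))
circulation = rng.normal(0, 0.1, size=200)
positions = convect(positions, circulation, dt=0.01, refine_box=(-0.2, 0.2, -0.2, 0.2))
print(positions[:3])
```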
Procedia PDF Downloads 261
69 Transformers in Gene Expression-Based Classification
Authors: Babak Forouraghi
Abstract:
A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason behind using transformers is their innate ability (attention mechanism) to take account of the semantic context present in long DNA chains that are heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, the traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
Keywords: transformers, generative AI, gene expression design, classification
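A minimal sketch of the general approach, assuming k-mer tokenization and a small encoder-only (bidirectional) transformer classification head; this is not the DNABERT variant used in the study, and the vocabulary, dimensions, and labels are illustrative assumptions.

```python
# Minimal sketch of an encoder-only (bidirectional) transformer classifier
# over k-mer tokens of a DNA sequence. This is NOT the DNABERT variant used
# in the study; vocabulary, dimensions, and class count are illustrative.
import itertools
import torch
import torch.nn as nn

K = 6  # k-mer size, as commonly used with DNABERT-style tokenization
VOCAB = {"".join(p): i + 1 for i, p in enumerate(itertools.product("ACGT", repeat=K))}  # 0 = padding

def tokenize(seq, max_len=128):
    """Slide a window of size K over the sequence and map each k-mer to an id."""
    ids = [VOCAB.get(seq[i:i + K], 0) for i in range(len(seq) - K + 1)]
    ids = ids[:max_len] + [0] * max(0, max_len - len(ids))
    return torch.tensor(ids)

class GeneExpressionClassifier(nn.Module):
    def __init__(self, vocab_size=len(VOCAB) + 1, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # self-attention sees the whole sequence
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))   # (batch, seq, d_model)
        return self.head(h.mean(dim=1))           # pool over positions, then classify

model = GeneExpressionClassifier()
batch = torch.stack([tokenize("ATGCGTACGTTAGCATGCGTACGTTAGC"),
                     tokenize("TTGACAGCTAGCTCAGTCCTAGGTATAA")])
logits = model(batch)
print(logits.shape)  # torch.Size([2, 2])
```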
Procedia PDF Downloads 59
68 Simulation-based Decision Making on Intra-hospital Patient Referral in a Collaborative Medical Alliance
Authors: Yuguang Gao, Mingtao Deng
Abstract:
The integration of independently operating hospitals into a unified healthcare service system has become a strategic imperative in the pursuit of hospitals’ high-quality development. Central to the concept of group governance over such transformation, exemplified by a collaborative medical alliance, is the delineation of shared value, vision, and goals. Given the inherent disparity in capabilities among hospitals within the alliance, particularly in the treatment of different diseases characterized by Disease Related Groups (DRG) in terms of effectiveness, efficiency and resource utilization, this study aims to address the centralized decision-making of intra-hospital patient referral within the medical alliance to enhance the overall production and quality of service provided. We first introduce the notion of production utility, where a higher production utility for a hospital implies better performance in treating patients diagnosed with that specific DRG group of diseases. Then, a Discrete-Event Simulation (DES) framework is established for patient referral among hospitals, where patient flow modeling incorporates a queueing system with fixed capacities for each hospital. The simulation study begins with a two-member alliance. The pivotal strategy examined is a "whether-to-refer" decision triggered when the bed usage rate surpasses a predefined threshold for either hospital. Then, the decision encompasses referring patients to the other hospital based on DRG groups’ production utility differentials as well as bed availability. The objective is to maximize the total production utility of the alliance while minimizing patients’ average length of stay and turnover rate. Thus the parameter under scrutiny is the bed usage rate threshold, influencing the efficacy of the referral strategy. Extending the study to a three-member alliance, which could readily be generalized to multi-member alliances, we maintain the core setup while introducing an additional “which-to-refer" decision that involves referring patients with specific DRG groups to the member hospital according to their respective production utility rankings. The overarching goal remains consistent, for which the bed usage rate threshold is once again a focal point for analysis. For the two-member alliance scenario, our simulation results indicate that the optimal bed usage rate threshold hinges on the discrepancy in the number of beds between member hospitals, the distribution of DRG groups among incoming patients, and variations in production utilities across hospitals. Transitioning to the three-member alliance, we observe similar dependencies on these parameters. Additionally, it becomes evident that an imbalanced distribution of DRG diagnoses and further disparity in production utilities among member hospitals may lead to an increase in the turnover rate. In general, it was found that the intra-hospital referral mechanism enhances the overall production utility of the medical alliance compared to individual hospitals without partnership. Patients’ average length of stay is also reduced, showcasing the positive impact of the collaborative approach. However, the turnover rate exhibits variability based on parameter setups, particularly when patients are redirected within the alliance. 
In conclusion, the re-structuring of diagnostic disease groups within the medical alliance proves instrumental in improving overall healthcare service outcomes, providing a compelling rationale for the government's promotion of patient referrals within collaborative medical alliances.
Keywords: collaborative medical alliance, disease related group, patient referral, simulation
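A compact sketch of the "whether-to-refer" rule for a two-member alliance; arrival rates, bed capacities, production utilities, and the usage threshold are hypothetical placeholders, not the study's calibrated inputs.

```python
# Compact discrete-event sketch of the "whether-to-refer" rule: when bed
# usage at the admitting hospital exceeds a threshold, an arriving patient
# is referred to the partner hospital if it has free beds. All numbers are
# hypothetical placeholders.
import heapq
import random

random.seed(1)
CAPACITY = {"A": 30, "B": 50}           # beds per hospital
UTILITY = {"A": 0.8, "B": 0.9}          # assumed production utility per treated patient
THRESHOLD = 0.85                        # bed-usage rate that triggers referral

occupied = {"A": 0, "B": 0}
total_utility, referrals, admitted = 0.0, 0, 0
events = []                             # heap of (time, kind, hospital)

# schedule Poisson-like arrivals over 30 days
t = 0.0
while t < 30.0:
    t += random.expovariate(12.0)       # ~12 arrivals per day alliance-wide
    heapq.heappush(events, (t, "arrival", random.choice(["A", "B"])))

while events:
    time, kind, hosp = heapq.heappop(events)
    if kind == "discharge":
        occupied[hosp] -= 1
        continue
    # arrival: apply the whether-to-refer decision
    usage = occupied[hosp] / CAPACITY[hosp]
    other = "B" if hosp == "A" else "A"
    if usage > THRESHOLD and occupied[other] < CAPACITY[other]:
        hosp, referrals = other, referrals + 1
    if occupied[hosp] < CAPACITY[hosp]:               # otherwise the patient is turned away
        occupied[hosp] += 1
        admitted += 1
        total_utility += UTILITY[hosp]
        los = random.expovariate(1 / 4.0)             # mean length of stay: 4 days
        heapq.heappush(events, (time + los, "discharge", hosp))

print(f"admitted={admitted}, referrals={referrals}, total utility={total_utility:.1f}")
```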
Procedia PDF Downloads 57
67 Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an Early Warning System for Flood Prediction and an efficient Flood Management System for the river basins of India is a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP using advanced computational tools/methods, viz. High-Performance Computing (HPC), Remote Sensing, GIS technologies, and open-source tools for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves shallow water equations using the finite volume method. Considering the complexity of the hydrological modeling and the size of the basins in India, it is always a tug of war between better forecast lead time and optimal resolution at which the simulations are to be run. High-performance computing technology provides a good computational means to overcome this issue for the construction of national-level or basin-level flash flood warning systems having a high resolution at local-level warning analysis with a better lead time. High-performance computers with capacities at the order of teraflops and petaflops prove useful while running simulations on such big areas at optimum resolutions. In this study, a free and open-source, HPC-based 2-D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing CPU nodes from 45 to 135, which shows good scalability and performance enhancement. The simulated flood inundation spread and stage were compared with SAR data and CWC Observed Gauge data, respectively. The system shows good accuracy and better lead time suitable for flood forecasting in near-real-time. To disseminate warning to the end user, a network-enabled solution is developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. System effectively facilitates the management of post-disaster activities caused due to floods, like displaying spatial maps of the area affected, inundated roads, etc., and maintains a steady flow of information at all levels with different access rights depending upon the criticality of the information. It is designed to facilitate users in managing information related to flooding during critical flood seasons and analyzing the extent of the damage.Keywords: flood, modeling, HPC, FOSS
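For reference, a generic statement of the depth-averaged 2-D shallow water equations that such a finite-volume hydrodynamic model typically solves is given below (conservative form, with bed slope and friction as source terms); the exact formulation used in the EWS-FP model is not specified in the abstract.

```latex
% Conservative form of the 2-D shallow water equations; a generic statement,
% not necessarily the exact form implemented in the EWS-FP model.
\[
\frac{\partial}{\partial t}
\begin{pmatrix} h \\ hu \\ hv \end{pmatrix}
+
\frac{\partial}{\partial x}
\begin{pmatrix} hu \\ hu^{2} + \tfrac{1}{2}gh^{2} \\ huv \end{pmatrix}
+
\frac{\partial}{\partial y}
\begin{pmatrix} hv \\ huv \\ hv^{2} + \tfrac{1}{2}gh^{2} \end{pmatrix}
=
\begin{pmatrix} R \\ gh\,(S_{0x} - S_{fx}) \\ gh\,(S_{0y} - S_{fy}) \end{pmatrix}
\]
where $h$ is the flow depth, $(u,v)$ the depth-averaged velocities, $g$ gravity,
$R$ the rainfall/run-off source, $S_{0}$ the bed slope, and $S_{f}$ the friction slope.
```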
Procedia PDF Downloads 88
66 Rheological and Microstructural Characterization of Concentrated Emulsions Prepared by Fish Gelatin
Authors: Helen S. Joyner (Melito), Mohammad Anvari
Abstract:
Concentrated emulsions stabilized by proteins are systems of great importance in food, pharmaceutical, and cosmetic products. Controlling emulsion rheology is critical for ensuring desired properties during formation, storage, and consumption of emulsion-based products. Studies on concentrated emulsions have focused on the rheology of monodispersed systems. However, emulsions used for industrial applications are polydispersed in nature, and this polydispersity is regarded as an important parameter that also governs the rheology of concentrated emulsions. Therefore, the objective of this study was to characterize the rheological (small and large deformation behaviors) and microstructural properties of concentrated emulsions which were not truly monodispersed, as usually encountered in food products such as margarines, mayonnaise, creams, spreads, etc. The concentrated emulsions were prepared at different concentrations of fish gelatin (FG) (0.2, 0.4, and 0.8% w/v in the whole emulsion system), an oil-water ratio of 80:20 (w/w), a homogenization speed of 10,000 rpm, and 25°C. Confocal laser scanning microscopy (CLSM) was used to determine the microstructure of the emulsions. To prepare samples for CLSM analysis, FG solutions were stained with fluorescein isothiocyanate dye. Emulsion viscosity profiles were determined using shear rate sweeps (0.01 to 100 s⁻¹). The linear viscoelastic regions (LVRs) of the emulsions were determined using strain sweeps (0.01 to 100% strain) for each sample. Frequency sweeps were performed in the LVR (0.1% strain) from 0.6 to 100 rad/s. Large amplitude oscillatory shear (LAOS) testing was conducted by collecting raw waveform data at 0.05, 1, 10, and 100% strain at four different frequencies (0.5, 1, 10, and 100 rad/s). All measurements were performed in triplicate at 25°C. The CLSM results revealed that increased fish gelatin concentration resulted in more stable oil-in-water emulsions with homogeneous, finely dispersed oil droplets. Furthermore, the protein concentration had a significant effect on emulsion rheological properties. Apparent viscosity and dynamic moduli at small deformations increased with increasing fish gelatin concentration. These results were related to increased inter-droplet network connections caused by increased fish gelatin adsorption at the surface of oil droplets. Nevertheless, all samples showed shear-thinning and weak gel behaviors over shear rate and frequency sweeps, respectively. Lissajous plots, or plots of stress versus strain, and phase lag values were used to determine the nonlinear behavior of the emulsions in LAOS testing. Greater distortion in the elliptical shape of the plots, along with higher phase lag values, was observed at large strains and frequencies in all samples, indicating increased nonlinear behavior. Shifts from elastic-dominated to viscous-dominated behavior were also observed. These shifts were attributed to damage to the sample microstructure (e.g., gel network disruption), which would lead to viscous-type behaviors such as permanent deformation and flow. Unlike the small deformation results, the LAOS behavior of the concentrated emulsions was not dependent on fish gelatin concentration. Systems with different microstructures showed similar nonlinear viscoelastic behaviors. The results of this study provided valuable information that can be used to incorporate concentrated emulsions in emulsion-based food formulations.
Keywords: concentrated emulsion, fish gelatin, microstructure, rheology
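A small sketch of how shear-thinning from the shear-rate sweeps can be quantified by fitting the power-law (Ostwald-de Waele) model; the data points below are made up for illustration, not the study's measurements.

```python
# Quantifying shear-thinning from a shear-rate sweep by fitting the power-law
# (Ostwald-de Waele) model  eta = K * gamma_dot**(n - 1).
# Data values are hypothetical, not the study's results.
import numpy as np

shear_rate = np.array([0.01, 0.1, 1.0, 10.0, 100.0])      # 1/s
viscosity = np.array([85.0, 14.0, 2.4, 0.41, 0.07])       # Pa.s (hypothetical)

# linear fit in log-log space: log(eta) = log(K) + (n - 1) * log(gamma_dot)
slope, intercept = np.polyfit(np.log10(shear_rate), np.log10(viscosity), 1)
n = slope + 1          # flow behavior index; n < 1 indicates shear-thinning
K = 10**intercept      # consistency coefficient, Pa.s^n

print(f"flow behavior index n = {n:.2f}, consistency K = {K:.2f} Pa.s^n")
```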
Procedia PDF Downloads 272
65 Blue Economy and Marine Mining
Authors: Fani Sakellariadou
Abstract:
The Blue Economy includes all marine-based and marine-related activities. They correspond to established, emerging, as well as unborn ocean-based industries. Seabed mining is an emerging marine-based activity; its operations depend particularly on cutting-edge science and technology. The 21st century will face a crisis in resources as a consequence of the world’s population growth and the rising standard of living. The natural capital stored in the global ocean is decisive for its ability to provide a wide range of sustainable ecosystem services. Seabed mineral deposits were identified as having a high potential for critical elements and base metals. These have a crucial role in the fast evolution of green technologies. The major categories of marine mineral deposits are deep-sea deposits, including cobalt-rich ferromanganese crusts, polymetallic nodules, phosphorites, and deep-sea muds, as well as shallow-water deposits, including marine placers. Seabed mining operations may take place within continental shelf areas of nation-states. In international waters, the International Seabed Authority (ISA) has entered into 15-year contracts for deep-seabed exploration with 21 contractors. These contracts are for polymetallic nodules (18 contracts), polymetallic sulfides (7 contracts), and cobalt-rich ferromanganese crusts (5 contracts). Exploration areas are located in the Clarion-Clipperton Zone, the Indian Ocean, the Mid-Atlantic Ridge, the South Atlantic Ocean, and the Pacific Ocean. Potential environmental impacts of deep-sea mining include habitat alteration, sediment disturbance, plume discharge, toxic compound release, light and noise generation, and air emissions. They could cause burial and smothering of benthic species, health problems for marine species, biodiversity loss, reduced photosynthesis, behavioral changes and masking of acoustic communication for mammals and fish, bioaccumulation of heavy metals up the food web, a decrease in dissolved oxygen content, and climate change. An important concern related to deep-sea mining is our knowledge gap regarding deep-sea bio-communities. The ecological consequences that will be caused to the remote, unique, fragile, and little-understood deep-sea ecosystems and their inhabitants are still largely unknown. The blue economy conceptualizes oceans as developing spaces supplying socio-economic benefits for current and future generations but also protecting, supporting, and restoring biodiversity and ecological productivity. In that sense, people should apply holistic management and make an assessment of marine mining impacts on ecosystem services, including the categories of provisioning, regulating, supporting, and cultural services. The variety in environmental parameters, the range in sea depth, the diversity in the characteristics of marine species, and the possible proximity to other existing maritime industries mean that the impacts of marine mining on the ability of ecosystems to support people and nature span a wide range. In conclusion, the use of the untapped potential of the global ocean demands a responsible and sustainable attitude. Moreover, there is a need to change our lifestyle and move beyond the philosophy of single-use. Living in a throw-away society based on a linear approach to resource consumption, humans are putting too much pressure on the natural environment. By applying modern, sustainable, and eco-friendly approaches according to the principles of the circular economy, substantial natural resource savings can be achieved.
Acknowledgement: This work is part of the MAREE project, financially supported by the Division VI of IUPAC. This work has been partly supported by the University of Piraeus Research Center.
Keywords: blue economy, deep-sea mining, ecosystem services, environmental impacts
Procedia PDF Downloads 82
64 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach
Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert
Abstract:
Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition by proposing a switch from a centralized district heating system towards a distributed heat pumps-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is related to the fact that existing networks are limited with regards to their installed capacities. Additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, has to do with the fact that the indoor comfort conditions can become difficult to handle when the operation of the heat pumps is limited by a risk of overloading on the distribution grid. Furthermore, the uncertainty of the electricity market prices in the future introduces an additional variable. This study aims at assessing the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted. In this way, each energy technology is modeled in its customized simulation environment. As part of the cosimulation methodology: a steady-state power flow analysis in pandapower was used for modeling the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating and a fluid-cycle model of a heat pump was implemented in JModelica to account for the actual heating technology. With the models set in place, different scenarios based on forecasted electricity market prices were developed both for present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and the comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of levelized cost of heat. This indicator enables a techno-economic comparison study among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios related to the current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering of 2°C, the minimum accepted indoor temperature of the apartments, this level of penetration can increase up to 40%. Within the future scenarios, if the electricity prices will increase, as most likely expected within the next decade, the penetration of distributed heat pumps can be limited to 15%. In terms of levelized cost of heat, a residential heat pump technology becomes competitive only within a scenario of decreasing electricity prices. In this case, a district heating system is characterized by an average cost of heat generation 7% higher compared to a distributed heat pumps option.Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems
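A simple sketch of the levelized cost of heat calculation underlying the techno-economic comparison: discounted lifetime costs divided by discounted heat delivered. All input values are placeholders, not the Hammarby Sjöstad case data.

```python
# Levelized cost of heat (LCOH): discounted lifetime costs divided by
# discounted heat delivered. Inputs are hypothetical placeholders, not the
# Hammarby Sjostad case data.
def lcoh(capex, annual_fixed_om, annual_energy_cost, annual_heat_mwh,
         lifetime_years=20, discount_rate=0.05):
    discounted_costs = capex                      # investment occurs in year 0
    discounted_heat = 0.0
    for year in range(1, lifetime_years + 1):
        discount = (1.0 + discount_rate) ** year
        discounted_costs += (annual_fixed_om + annual_energy_cost) / discount
        discounted_heat += annual_heat_mwh / discount
    return discounted_costs / discounted_heat    # currency units per MWh of heat

# hypothetical comparison: distributed heat pumps (electricity cost) vs.
# district heating (purchased heat cost), same heat demand
hp = lcoh(capex=900_000, annual_fixed_om=12_000, annual_energy_cost=60_000, annual_heat_mwh=2_000)
dh = lcoh(capex=150_000, annual_fixed_om=5_000, annual_energy_cost=110_000, annual_heat_mwh=2_000)
print(f"LCOH heat pumps: {hp:.1f} per MWh, district heating: {dh:.1f} per MWh")
```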
Procedia PDF Downloads 150
63 Spatial Assessment of Creek Habitats of Marine Fish Stock in Sindh Province
Authors: Syed Jamil H. Kazmi, Faiza Sarwar
Abstract:
The Indus delta of Sindh Province forms the largest creek zone of Pakistan. The Sindh coast starts from the mouth of the Hab River and terminates at the Sir Creek area. In this paper, we have considered the major creeks from the site of Bin Qasim Port in Karachi to the jetty of Keti Bunder in Thatta District. A general decline in the mangrove forest has been observed within the span of the last 25 years. Unprecedented human interventions have badly damaged the creek habitats; these include haphazard urban development, industrial and sewage disposal, illegal cutting of mangrove forest, and reduced, inconsistent freshwater flow, mainly from the Jhang and Indus rivers. These activities not only harm the creek habitats but have also substantially affected the fish stock. Fishing is the main livelihood of the coastal people, but with the above-mentioned threats it is also under enormous pressure, with fish catches resulting in unchecked overutilization of the fish resources. This pressure becomes almost unbearable when combined with deleterious fishing methods, uncontrolled fleet size, increased trash and by-catch of juveniles, and illegal mesh sizes. Along with these anthropogenic interventions, the study area lies in the red zone of tropical cyclones and active seismicity, causing floods, sea intrusion, damage to mangrove forests, and devastation of fish stock. In order to sustain the natural resources of the Indus Creeks, this study was initiated with the support of FAO, WWF, and NIO; the main purpose was to develop a geo-spatial dataset for fish stock assessment. The study was spread over a year (2013-14) on a monthly basis and mainly included a detailed fish stock survey, water analysis, and a few other environmental analyses. The environmental analysis also included habitat classification of the study area, which was done through remote sensing techniques on a 22-year time series (1992-2014). Furthermore, out of 252 species collected, fifteen species from estuarine and marine groups were short-listed to measure the weight, health, and growth of fish at each creek, with the GIS data analyzed through the SPSS system. Furthermore, habitat suitability analysis was conducted by assessing surface topography and aspect derivation through different GIS techniques. The output variables were then overlaid in a GIS system to measure creek productivity, which provided results in terms of the following classes: extremely productive, highly productive, productive, moderately productive, and less productive. This study has demonstrated the utilization of geospatial tools for the evaluation of fisheries resources and creek habitat risk-zone mapping. It has also been identified that geo-spatial technologies are highly beneficial for identifying areas of high environmental risk in the Sindh creeks. It has been clearly established in this study that creeks with high rugosity are more productive than creeks with low levels of rugosity. The study area has immense potential to boost the economy of Pakistan in terms of fish export if geo-spatial techniques are implemented instead of conventional techniques.
Keywords: fish stock, geo-spatial, productivity analysis, risk
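A sketch of one way to derive a rugosity index from a gridded depth/elevation surface and bin it into productivity classes like those reported; the grid values and class thresholds are illustrative assumptions, not the study's dataset.

```python
# One way to derive a simple rugosity index from a gridded depth/elevation
# surface (true surface area relative to planar area, from local slope) and
# bin it into productivity classes. Grid and thresholds are illustrative.
import numpy as np

def rugosity(dem, cell=30.0):
    """Ratio of true surface area to planar area, approximated from local slope."""
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.sqrt(dzdx**2 + dzdy**2)
    return np.sqrt(1.0 + slope**2)        # >= 1; larger values mean rougher terrain

def classify(r):
    bins = [1.02, 1.05, 1.10, 1.20]        # assumed class breaks
    labels = ["less productive", "moderately productive", "productive",
              "highly productive", "extremely productive"]
    return labels[int(np.digitize(r, bins))]

rng = np.random.default_rng(7)
dem = rng.normal(0, 2.0, size=(50, 50)).cumsum(axis=0)   # synthetic creek-bed surface
r_index = rugosity(dem)
print("mean rugosity:", round(float(r_index.mean()), 3))
print("class of cell (10, 10):", classify(r_index[10, 10]))
```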
Procedia PDF Downloads 243
62 Signature Bridge Design for the Port of Montreal
Authors: Juan Manuel Macia
Abstract:
The Montreal Port Authority (MPA) wanted to build a new road link via Souligny Avenue to increase the fluidity of goods transported by truck in the Viau Street area of Montreal and to mitigate the current traffic problems on Notre-Dame Street. With the purpose of having a better integration and acceptance of this project with the neighboring residential surroundings, this project needed to include an architectural integration, bringing some artistic components to the bridge design along with some landscaping components. The MPA is required primarily to provide direct truck access to Port of Montreal with a direct connection to the future Assomption Boulevard planned by the City of Montreal and, thus, direct access to Souligny Avenue. The MPA also required other key aspects to be considered for the proposal and development of the project, such as the layout of road and rail configurations, the reconstruction of underground structures, the relocation of power lines, the installation of lighting systems, the traffic signage and communication systems improvement, the construction of new access ramps, the pavement reconstruction and a summary assessment of the structural capacity of an existing service tunnel. The identification of the various possible scenarios began by identifying all the constraints related to the numerous infrastructures located in the area of the future link between the port and the future extension of Souligny Avenue, involving interaction with several disciplines and technical specialties. Several viaduct- and tunnel-type geometries were studied to link the port road to the right-of-way north of Notre-Dame Street and to improve traffic flow at the railway corridor. The proposed design took into account the existing access points to Port of Montreal, the built environment of the MPA site, the provincial and municipal rights-of-way, and the future Notre-Dame Street layout planned by the City of Montreal. These considerations required the installation of an engineering structure with a span of over 60 m to free up a corridor for the future urban fabric of Notre-Dame Street. The best option for crossing this span length was identified by the design and construction of a curved bridge over Notre-Dame Street, which is essentially a structure with a deck formed by a reinforced concrete slab on steel box girders with a single span of 63.5m. The foundation units were defined as pier-cap type abutments on drilled shafts to bedrock with rock sockets, with MSE-type walls at the approaches. The configuration of a single-span curved structure posed significant design and construction challenges, considering the major constraints of the project site, a design for durability approach, and the need to guarantee optimum performance over a 75-year service life in accordance with the client's needs and the recommendations and requirements defined by the standards used for the project. These aspects and the need to include architectural and artistic components in this project made it possible to design, build, and integrate a signature infrastructure project with a sustainable approach, from which the MPA, the commuters, and the city of Montreal and its residents will benefit.Keywords: curved bridge, steel box girder, medium span, simply supported, industrial and urban environment, architectural integration, design for durability
Procedia PDF Downloads 64
61 A Rapid and Greener Analysis Approach Based on Carbonfiber Column System and MS Detection for Urine Metabolomic Study After Oral Administration of Food Supplements
Authors: Zakia Fatima, Liu Lu, Donghao Li
Abstract:
The analysis of biological fluid metabolites holds significant importance in various areas, such as medical research, food science, and public health. Investigating the levels and distribution of nutrients and their metabolites in biological samples allows researchers and healthcare professionals to determine nutritional status, find hypovitaminosis or hypervitaminosis, and monitor the effectiveness of interventions such as dietary supplementation. Moreover, analysis of nutrient metabolites provides insight into their metabolism, bioavailability, and physiological processes, aiding in the clarification of their health roles. Hence, the exploration of a distinct, efficient, eco-friendly, and simpler methodology is of great importance to evaluate the metabolic content of complex biological samples. In this work, a green and rapid analytical method based on an automated online two-dimensional microscale carbon fiber/activated carbon fiber fractionation system and time-of-flight mass spectrometry (2DμCFs-TOF-MS) was used to evaluate metabolites of urine samples after oral administration of food supplements. The automated 2DμCFs instrument consisted of a microcolumn system with bare carbon fibers and modified carbon fiber coatings. Carbon fibers and modified carbon fibers exhibit different surface characteristics and retain different compounds accordingly. Three kinds of mobile-phase solvents were used to elute the compounds of varied chemical heterogeneities. The 2DμCFs separation system has the ability to effectively separate different compounds based on their polarity and solubility characteristics. No complicated sample preparation method was used prior to analysis, which makes the strategy more eco-friendly, practical, and faster than traditional analysis methods. For optimum analysis results, mobile phase composition, flow rate, and sample diluent were optimized. Water-soluble vitamins, fat-soluble vitamins, and amino acids, as well as 22 vitamin metabolites and 11 vitamin metabolic pathway-related metabolites, were found in urine samples. All water-soluble vitamins except vitamin B12 and vitamin B9 were detected in urine samples. However, no fat-soluble vitamin was detected, and only one metabolite of Vitamin A was found. The comparison with a blank urine sample showed a considerable difference in metabolite content. For example, vitamin metabolites and three related metabolites were not detected in blank urine. The complete single-run screening was carried out in 5.5 minutes with the minimum consumption of toxic organic solvent (0.5 ml). The analytical method was evaluated in terms of greenness, with an analytical greenness (AGREE) score of 0.72. The method’s practicality has been investigated using the Blue Applicability Grade Index (BAGI) tool, obtaining a score of 77. The findings in this work illustrated that the 2DµCFs-TOF-MS approach could emerge as a fast, sustainable, practical, high-throughput, and promising analytical tool for screening and accurate detection of various metabolites, pharmaceuticals, and ingredients in dietary supplements as well as biological fluids.Keywords: metabolite analysis, sustainability, carbon fibers, urine.
Procedia PDF Downloads 25
60 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification
Authors: Babak Forouraghi
Abstract:
A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason behind using transformers is their innate ability (attention mechanism) to take account of the semantic context present in long DNA chains that are heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous works on genetic circuit design, the traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers
Procedia PDF Downloads 59
59 Design of Experiment for Optimizing Immunoassay Microarray Printing
Authors: Alex J. Summers, Jasmine P. Devadhasan, Douglas Montgomery, Brittany Fischer, Jian Gu, Frederic Zenhausern
Abstract:
Immunoassays have been utilized for several applications, including the detection of pathogens. Our laboratory is developing a tier 1 biothreat panel utilizing Vertical Flow Assay (VFA) technology for simultaneous detection of pathogens and toxins. One method of manufacturing VFA membranes is with non-contact piezoelectric dispensing, which provides advantages such as low-volume and rapid dispensing without compromising the structural integrity of antibody or substrate. Challenges of this process include premature discontinuation of dispensing and misaligned spotting. Preliminary data revealed the Yp 11C7 mAb (11C7) reagent to exhibit a large angle of failure during printing, which may have contributed to variable printing outputs. A Design of Experiment (DOE) was executed using this reagent to investigate the effects of hydrostatic pressure and reagent concentration on microarray printing outputs. A Nano-plotter 2.1 (GeSIM, Germany) was used for printing antibody reagents onto nitrocellulose membrane sheets in a clean room environment. A spotting plan was executed using Spot-Front-End software to dispense volumes of 11C7 reagent (20-50 droplets; 1.5-5 mg/mL) in a 6-test spot array at 50 target membrane locations. Hydrostatic pressure was controlled by raising the Pressure Compensation Vessel (PCV) above or lowering it below our current working level. It was hypothesized that raising or lowering the PCV 6 inches would be sufficient to cause either liquid accumulation at the tip or discontinuation of droplet formation. After aspirating 11C7 reagent, we tested this hypothesis under a stroboscope. Seventy-five percent of the effective raised PCV height and of our hypothesized lowered PCV height were used. Humidity (55%) was maintained using an Airwin BO-CT1 humidifier. The number and quality of membranes were assessed after staining printed membranes with dye. The droplet angle of failure was recorded before and after printing to determine a “stroboscope score” for each run. The DOE set was analyzed using JMP software. Hydrostatic pressure and reagent concentration had a significant effect on the membrane output. As hydrostatic pressure was increased by raising the PCV 3.75 inches or decreased by lowering the PCV 4.5 inches, membrane output decreased. However, with the hydrostatic pressure closest to equilibrium (our current working level), membrane output reached the 50-membrane target. As the reagent concentration increased from 1.5 to 5 mg/mL, the membrane output also increased. Reagent concentration likely affected membrane output due to the associated dispensing volume needed to saturate the membranes. However, only hydrostatic pressure had a significant effect on stroboscope score, which could be due to discontinuation of dispensing, such that the stroboscope check could not find a droplet to record. Our JMP predictive model had a high degree of agreement with our observed results. The JMP model predicted that dispensing the highest concentration of 11C7 at our current PCV working level would yield the highest number of quality membranes, which correlated with our results.
Acknowledgements: This work was supported by the Chemical Biological Technologies Directorate (Contract # HDTRA1-16-C-0026) and the Advanced Technology International (Contract # MCDC-18-04-09-002) from the Department of Defense Chemical and Biological Defense program through the Defense Threat Reduction Agency (DTRA).
Keywords: immunoassay, microarray, design of experiment, piezoelectric dispensing
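A sketch of a two-factor analysis (PCV height level × reagent concentration vs. membrane output), analogous in spirit to the JMP analysis described; the response values below are fabricated placeholders, not the experiment's data.

```python
# Two-factor analysis sketch: PCV height level and reagent concentration vs.
# membrane output, in the spirit of the JMP analysis described above.
# Response values are fabricated placeholders, not the experiment's data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "pcv":    ["raised", "raised", "working", "working", "lowered", "lowered"] * 2,
    "conc":   [1.5, 5.0, 1.5, 5.0, 1.5, 5.0] * 2,                  # mg/mL
    "output": [28, 35, 41, 50, 25, 33, 30, 37, 43, 49, 27, 31],    # membranes produced
})

model = smf.ols("output ~ C(pcv) + conc", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # which factors significantly affect output
print(model.params)                       # estimated effect sizes
```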
Procedia PDF Downloads 181
58 Bio-Inspired Information Complexity Management: From Ant Colony to Construction Firm
Authors: Hamza Saeed, Khurram Iqbal Ahmad Khan
Abstract:
Effective information management is crucial for any construction project and its success. Primary areas of information generation are either the construction site or the design office. There are different types of information required at different stages of construction involving various stakeholders creating complexity. There is a need for effective management of information flows to reduce uncertainty creating complexity. Nature provides a unique perspective in terms of dealing with complexity, in particular, information complexity. System dynamics methodology provides tools and techniques to address complexity. It involves modeling and simulation techniques that help address complexity. Nature has been dealing with complex systems since its creation 4.5 billion years ago. It has perfected its system by evolution, resilience towards sudden changes, and extinction of unadaptable and outdated species that are no longer fit for the environment. Nature has been accommodating the changing factors and handling complexity forever. Humans have started to look at their natural counterparts for inspiration and solutions for their problems. This brings forth the possibility of using a biomimetics approach to improve the management practices used in the construction sector. Ants inhabit different habitats. Cataglyphis and Pogonomyrmex live in deserts, Leafcutter ants reside in rainforests, and Pharaoh ants are native to urban developments of tropical areas. Detailed studies have been done on fifty species out of fourteen thousand discovered. They provide the opportunity to study the interactions in diverse environments to generate collective behavior. Animals evolve to better adapt to their environment. The collective behavior of ants emerges from feedback through interactions among individuals, based on a combination of three basic factors: The patchiness of resources in time and space, operating cost, environmental stability, and the threat of rupture. If resources appear in patches through time and space, the response is accelerating and non-linear, and if resources are scattered, the response follows a linear pattern. If the acquisition of energy through food is faster than energy spent to get it, the default is to continue with an activity unless it is halted for some reason. If the energy spent is rather higher than getting it, the default changes to stay put unless activated. Finally, if the environment is stable and the threat of rupture is low, the activation and amplification rate is slow but steady. Otherwise, it is fast and sporadic. To further study the effects and to eliminate the environmental bias, the behavior of four different ant species were studied, namely Red Harvester ants (Pogonomyrmex Barbatus), Argentine ants (Linepithema Humile), Turtle ants (Cephalotes Goniodontus), Leafcutter ants (Genus: Atta). This study aims to improve the information system in the construction sector by providing a guideline inspired by nature with a systems-thinking approach, using system dynamics as a tool. Identified factors and their interdependencies were analyzed in the form of a causal loop diagram (CLD), and construction industry professionals were interviewed based on the developed CLD, which was validated with significance response. 
These factors and interdependencies in the natural system correspond with man-made systems, providing a guideline for the effective use and flow of information.Keywords: biomimetics, complex systems, construction management, information management, system dynamics
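As an illustrative aside, the toy simulation below encodes the three feedback rules summarised above (non-linear amplification for patchy resources, a linear response for scattered resources, and a default set by the energy gained-to-spent ratio) in the spirit of a system dynamics model; all parameter values are assumptions for illustration, not quantities from the study.

```python
# Toy system-dynamics-style simulation of the ant-inspired feedback rules (illustrative only).
def simulate(steps=50, patchy=True, gain_ratio=1.2, stable_env=True):
    activity = 0.1                       # fraction of the colony (or project team) engaged
    rate = 0.05 if stable_env else 0.2   # stable environment: slow but steady amplification
    history = []
    for _ in range(steps):
        if patchy:
            # resources in patches: accelerating, non-linear (logistic-like) feedback
            response = rate * activity * (1.0 - activity)
        else:
            # scattered resources: response follows a linear pattern
            response = rate * 0.1
        # default behaviour: continue while energy gained exceeds energy spent, else stay put
        drift = 0.01 if gain_ratio > 1.0 else -0.01
        activity = min(1.0, max(0.0, activity + response + drift))
        history.append(activity)
    return history

print("patchy resources, final activity:",    round(simulate(patchy=True)[-1], 2))
print("scattered resources, final activity:", round(simulate(patchy=False)[-1], 2))
```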
Procedia PDF Downloads 136
57 Runoff Estimates of Rapidly Urbanizing Indian Cities: An Integrated Modeling Approach
Authors: Rupesh S. Gundewar, Kanchan C. Khare
Abstract:
Runoff contribution from urban areas comes mainly from manmade structures and a few natural contributors. The manmade structures are buildings, roads and other paved areas, whereas the natural contributors are groundwater, overland flows, etc. Runoff alleviation is provided by manmade as well as natural storages. Manmade storages are storage tanks or other storage structures such as soakaways or soak pits, which are more common in western and European countries. Natural storages are catchment slope, infiltration, catchment length, channel rerouting, drainage density, depression storage, etc. A literature survey on the manmade and natural storages/inflows has presented the percentage contribution of each individually. Sanders et al. have reported that a vegetation canopy reduces runoff by 7% to 12%. Nassif et al. have reported that catchment slope has an impact of 16% on bare standard soil and 24% on grassed soil on rainfall runoff. Infiltration, being a parameter dependent on the pervious/impervious ratio, is catchment specific, but the literature presents a range of 15% to 30% loss of rainfall runoff in various catchment study areas. Catchment length and channel rerouting also play a considerable role in the reduction of rainfall runoff. Groundwater infiltration inflow adds to the runoff where the groundwater table is very shallow and the soil saturates even in a lower intensity storm; this inflow, together with surface inflow, contributes approximately 2% of the total runoff volume. Considering the various factors contributing to runoff, the literature survey indicates that an integrated modelling approach needs to be considered. The traditional storm water network models are able to predict to a fair/acceptable degree of accuracy provided no interactions with receiving water (river, sea, canal, etc.), ground infiltration, treatment works, etc. are assumed. When such interactions are significant, it becomes difficult to reproduce the actual flood extent using the traditional discrete modelling approach. As a result, the correct flooding situation is very rarely addressed accurately. Since the development of spatially distributed hydrologic models, predictions have become more accurate at the cost of requiring more accurate spatial information. The integrated approach provides a greater understanding of the performance of the entire catchment. It enables identification of the source of flow in the system, understanding of how it is conveyed, and assessment of its impact on the receiving body. It also confirms important pain points, hydraulic controls and the source of flooding, which could not be easily understood with a discrete modelling approach. This also enables decision makers to identify solutions which can be spread throughout the catchment rather than being concentrated at the single point where the problem exists. Thus it can be concluded from the literature survey that the representation of urban details can be a key differentiator to the successful understanding of flooding issues. The intent of this study is to accurately predict the runoff from impermeable areas in urban India. A representative area has been selected for which data was available, and predictions have been made which are corroborated with the actual measured data.Keywords: runoff, urbanization, impermeable response, flooding
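As an illustrative aside, the back-of-the-envelope calculation below strings together the literature percentages quoted above into an indicative net runoff volume for a hypothetical urban catchment; every input value is an assumption for illustration, and the study's integrated model is not reproduced here.

```python
# Indicative runoff volume combining the quoted literature loss/inflow percentages
# (all inputs are illustrative assumptions, not the study's data).
rainfall_mm     = 50.0    # design storm depth (assumed)
area_km2        = 10.0    # urban catchment area (assumed)
impervious_frac = 0.65    # paved/built fraction (assumed)

gross_runoff_m3 = (rainfall_mm / 1000.0) * (area_km2 * 1e6) * impervious_frac

canopy_loss  = 0.10       # vegetation canopy reduction, mid-point of the quoted 7-12%
infiltration = 0.20       # infiltration loss, within the quoted 15-30% range
depression   = 0.05       # depression storage (assumed)
gw_inflow    = 0.02       # shallow groundwater plus surface inflow, ~2% of total volume

net_runoff_m3 = gross_runoff_m3 * (1 - canopy_loss) * (1 - infiltration) * (1 - depression)
net_runoff_m3 *= (1 + gw_inflow)
print(f"indicative net runoff volume: {net_runoff_m3:,.0f} m3")
```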
Procedia PDF Downloads 248
56 Modern Day Second Generation Military Filipino Amerasians and Ghosts of the U.S. Military Prostitution System in West Central Luzon's 'AMO Amerasian Triangle'
Authors: P. C. Kutschera, Elena C. Tesoro, Mary Grace Talamera-Sandico, Jose Maria G. Pelayo III
Abstract:
Second generation military Filipino Amerasians comprise a formidable contemporary segment of the estimated 250,000-plus biracial Amerasians in the Philippines today. Overall, they are a stigmatized and socioeconomically marginalized diaspora; historically, they were abandoned or estranged by U.S. military personnel fathers assigned during the century-long Colonial, Post-World War II and Cold War Era of permanent military basing (1898-1992). Indeed, U.S. military personnel remain stationed in smaller numbers in the Philippines today. This inquiry is an outgrowth of two recent small sample studies. The first surfaced the impact of the U.S. military prostitution system on the formation of the ‘Derivative Amerasian Family Construct’ among first generation Amerasians; a second, qualitative case study suggested the continued effect of the prostitution system's destructive impetus on second generation Amerasians. The intent of this current qualitative, multiple-case study was to actively seek out second generation sex industry toilers. The purpose was to focus further on this human phenomenon in the post-basing and post-military prostitution system eras. As background, the former military prostitution apparatus has transformed into a modern dynamic of rampant sex tourism and prostitution nationwide. This is characterized by hotels and resorts offering unrestricted carnal access, urban and provincial brothels (casas), discos, bars and pickup clubs, massage parlors, local barrio karaoke bars and street prostitution. A small case study sample (N = 4) of female and male second generation Amerasians was selected. Sample formation employed a non-probability ‘snowball’ technique drawing respondents from the notorious Angeles, Metro Manila, Olongapo City ‘AMO Amerasian Triangle’ where most former U.S. military installations were sited and modern sex tourism thrives. A six-month study and analysis of in-depth interviews of female and male sex laborers, their families and peers revealed a litany of disturbing and troublesome experiences. Results showed profiles of debilitating human poverty, histories of family disorganization, stigmatization, social marginalization and the ghost of the military prostitution system and its harmful legacy on Amerasian family units. Emerging were testimonials of wayward young people ensnared in a maelstrom of deep economic deprivation, familial dysfunction, psychological desperation and societal indifference. The paper recommends that more study is needed and that the unstudied psychosocial and socioeconomic experiences of distressed younger generations of military Amerasians require specific research. Heretofore apathetic or disengaged U.S. institutions need to confront the issue and formulate activist and solution-oriented social welfare, human services and immigration easement policies and alternatives. These institutions specifically include academic and social science research agencies, corporate foundations, the U.S. Congress, and the Departments of State, Defense, Health and Human Services, and Homeland Security (i.e., Citizenship and Immigration Services). It is they who continue to endorse a laissez-faire policy of non-involvement over the entire Filipino Amerasian question. Such apathy, the paper concludes, relegates this consequential but neglected blood progeny to the status of humiliating destitution and exploitation. Amerasians thus remain entrapped in their former colonial, and neo-colonial habitat. Ironically, they are unwitting victims of a U.S.
American homeland that fancies itself geo-politically as a strong and strategic military treaty ally of the Philippines in the Western Pacific.Keywords: Asian Americans, diaspora, Filipino Amerasians, military prostitution, stigmatization
Procedia PDF Downloads 486
55 Anesthesia for Spinal Stabilization Using Neuromuscular Blocking Agents in Dog: Case Report
Authors: Agata Migdalska, Joanna Berczynska, Ewa Bieniek, Jacek Sterna
Abstract:
Muscle relaxation is considered important during general anesthesia for spine stabilization. In the presented case, a peripherally acting muscle relaxant was applied during general anesthesia for spine stabilization surgery. The patient was a dog, 11 years old, 26 kg, male, mixed breed. The spine fracture was situated at Th13-L1-L2, probably due to a car accident. Preanesthetic physical examination revealed no signs of underlying health issues. The dog was premedicated with midazolam 0.2 mg IM and butorphanol 2.4 mg IM. General anesthesia was induced with propofol IV. After induction, the dog was intubated with an endotracheal tube, connected to an open-ended rebreathing system, and maintained with inhalation anesthesia with isoflurane in oxygen. Rocuronium at 0.5 mg/kg was given IV. Use of the muscle relaxant was accompanied by an assessment of the degree of neuromuscular blockade with a peripheral nerve stimulator. Electrodes were attached to the skin overlying the peroneal nerve at the lateral cranial tibia. Four electrical pulses were applied to the nerve over a 2 second period. When a satisfactory nerve block was detected, the dog was prepared for surgery. No further monitoring of the effectiveness of the blockade was performed during surgery. Mechanical ventilation was maintained throughout anesthesia. During surgery the dog remained stable, and no anesthesiological complications occurred. Intraoperatively, the surgeon reported that the neuromuscular blockade resulted in a better approach to the spine and easier muscle manipulation, which was helpful in order to see the fracture and replace bone fragments. Finally, euthanasia was performed intraoperatively as a result of an extensive myelomalacia process of the spinal cord. This prevented examination of the recovery process. Neuromuscular blocking agents act at the neuromuscular junction to provide profound muscle relaxation throughout the body. Muscle blocking agents are neither anesthetic nor analgesic; therefore, used inappropriately, they may cause paralysis in a fully conscious patient who can feel pain. They cause paralysis of all skeletal muscles, including the diaphragm and intercostal muscles when given in higher doses. Intraoperative management includes maintaining stable physiological conditions, which involves adjusting hemodynamic parameters, ensuring proper ventilation, avoiding variations in temperature, and maintaining normal blood flow to promote proper oxygen exchange. Neuromuscular blocking agents can cause many side effects, such as residual paralysis, anaphylactic or anaphylactoid reactions, delayed recovery from anesthesia, histamine release, and recurarization. Therefore, a reversal drug such as neostigmine (with glycopyrrolate) or edrophonium (with atropine) should be used in case of a life-threatening situation. Another useful drug is sugammadex, although the cost of this drug strongly limits its use. Muscle relaxants improve surgical conditions during spinal surgery, especially in heavily muscled individuals. They are also used to facilitate the replacement of dislocated joints, as they improve conditions during fracture reduction. It is important to emphasize that in a patient with muscle weakness, neuromuscular blocking agents may result in intraoperative and early postoperative cardiovascular and respiratory complications, as well as prolonged recovery from anesthesia. This should not appear in patients with a recent spine fracture or luxation.
Therefore it is believed that neuromuscular blockers could be useful during spine stabilization procedures.Keywords: anesthesia, dog, neuromuscular block, spine surgery
Procedia PDF Downloads 180
54 Differential Survival Rates of Pseudomonas aeruginosa Strains on the Wings of Pantala flavescens
Authors: Banu Pradheepa Kamarajan, Muthusamy Ananthasubramanian
Abstract:
Biofilm-forming pseudomonads occupy the top third position in causing hospital-acquired infections. P. aeruginosa is notorious for its tendency to develop drug resistance. Major classes of drugs such as β-lactams, aminoglycosides, quinolones, and polymyxins are found ineffective against multi-drug resistant Pseudomonas. To combat the infections, rather than administration of a single antibiotic, the use of combinations (tobramycin and essential oils from plants and/or silver nanoparticles, chitosan, nitric oxide, cis-2-decenoic acid) in a single formulation is suggested to control P. aeruginosa biofilms. Conventional techniques to prevent hospital-acquired implant infections, such as coatings with antibiotics, controlled release of antibiotics from the implant material, contact-killing surfaces, coating the implants with functional DNase I, and coating with glycoside hydrolase, are being followed. Coatings with bioactive components, besides having limited shelf-life, require a cold chain and are likely to fail when bacteria develop resistance. Recently identified nano-scale physical architectures on insect wings are expected to have potential bactericidal properties. Nanopillars are bactericidal to Staphylococcus aureus, Bacillus subtilis, K. pneumoniae and a few species of Pseudomonas. Our study aims to investigate the survival rate of a biofilm-forming Pseudomonas aeruginosa strain relative to a non-biofilm-forming strain on the nanopillar architecture of the dragonfly (Pantala flavescens) wing. Dragonflies were collected near household areas, and insect identification was carried out by the Department of Entomology, Tamilnadu Agricultural University, Coimbatore, India. Two strains of P. aeruginosa, PAO1 (a potent biofilm former) and MTCC 1688 (a weak/non-biofilm former), were tested against glass coverslips (control) and the wings of the dragonfly (test) for 48 h. The wings/glass coverslips were incubated with bacterial suspension in a 48-well plate. The plates were incubated at 37 °C under static conditions. Bacterial attachment on the nanopillar architecture of the wing surface was visualized using FESEM. The survival rate of P. aeruginosa was tested using the colony counting technique and flow cytometry at 0.5 h, 1 h, 2 h, 7 h, 24 h, and 48 h post-incubation. Cell death was analyzed using propidium iodide staining and DNA quantification. The results indicated that the survival rate of non-biofilm-forming P. aeruginosa is 0.2%, whilst that of the biofilm former is 45% on the dragonfly wings at the end of 48 h. The reduction in the survival rate of biofilm- and non-biofilm-forming P. aeruginosa was 20% and 40%, respectively, on the wings compared to the glass coverslip. In addition, Fourier Transform Infrared (FTIR) spectroscopy was used to study modification of the surface chemical composition of the wing during bacterial attachment and post-sonication. The conserved characteristic peaks of chitin pre- and post-sonication indicated that chemical moieties are not involved in the bactericidal property of the nanopillars. The nanopillar architecture of the dragonfly wing efficiently deters the survival of non-biofilm-forming P. aeruginosa, but not the biofilm-forming strain. The study highlights the ability of biofilm formers to survive on the wing architecture. Understanding this survival strategy will help in designing architectures that combat the colonization of biofilm-forming pathogens.Keywords: biofilm, nanopillars, Pseudomonas aeruginosa, survival rate
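As an illustrative aside, the snippet below shows the survival-rate and reduction arithmetic reported above, computed from colony counts; the CFU values are hypothetical placeholders, not the study's measurements.

```python
# Survival rate and reduction from colony-forming-unit (CFU) counts (hypothetical values).
def survival_rate(cfu_t, cfu_0):
    """Percentage of the inoculum still viable at time t."""
    return 100.0 * cfu_t / cfu_0

inoculum_cfu   = 1.0e6
pao1_wing_48h  = 4.5e5    # biofilm former (PAO1) on the wing, 48 h (hypothetical)
pao1_glass_48h = 5.6e5    # biofilm former on the glass coverslip, 48 h (hypothetical)
mtcc_wing_48h  = 2.0e3    # non-biofilm former (MTCC 1688) on the wing, 48 h (hypothetical)

wing  = survival_rate(pao1_wing_48h, inoculum_cfu)
glass = survival_rate(pao1_glass_48h, inoculum_cfu)
print(f"PAO1 survival on wing: {wing:.1f}%")
print(f"PAO1 reduction on wing relative to glass: {100.0 * (glass - wing) / glass:.0f}%")
print(f"MTCC 1688 survival on wing: {survival_rate(mtcc_wing_48h, inoculum_cfu):.2f}%")
```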
Procedia PDF Downloads 172
53 From Modelled Design to Reality through Material and Machinery Lab and Field Tests: Porous Concrete Carparks at the Wanda Metropolitano Stadium in Madrid
Authors: Manuel de Pazos-Liano, Manuel Cifuentes-Antonio, Juan Fisac-Gozalo, Sara Perales-Momparler, Carlos Martinez-Montero
Abstract:
The first-ever game in the Wanda Metropolitano Stadium, the new home of Club Atletico de Madrid, was played on September 16, 2017, thanks to the work of a multidisciplinary team that made it possible to combine urban development with sustainability goals. The new football ground sits on a 1.2 km² site owned by the city of Madrid. Its construction has dramatically increased the sealed area of the site (raising the runoff coefficient from 0.35 to 0.9), and the surrounding sewer network has no capacity for that extra flow. As an alternative to enlarging the existing 2.5 m diameter pipes, it was decided to detain runoff on site by means of an integrated and durable infrastructure that would not inflate the construction cost nor represent a burden on the municipality’s maintenance tasks. Instead of the more conventional option of building a large concrete detention tank, the decision was taken to use pervious pavement on the 3013 car parking spaces for sub-surface water storage, a solution aligned with the city water ordinance and the Madrid + Natural project. Making the idea a reality, in only five months and during the summer season (which forced pouring the porous concrete only overnight), was a challenge never faced before in Spain, which required innovation on both the material and the machinery side. The process consisted of: a) defining the characteristics required for the porous concrete (compressive strength of 15 N/mm2 and 20% voids); b) testing different porous concrete dosages at the construction company laboratory; c) establishing the cross section in order to provide structural strength and sufficient water detention capacity (20 cm porous concrete over 5 cm of 5/10 gravel, sitting on a 50 cm coarse 40/50 aggregate sub-base separated by a virgin fiber polypropylene geotextile fabric); d) hydraulic computer modelling (using the Full Hydrograph Method based on the Wallingford Procedure) to estimate the design peak flow decrease (an average of 69% at the three car parking lots); e) use of a variety of machinery for the application of the porous concrete to achieve both structural strength and a permeable surface (including an inverse rotating roller imported from the USA, and the so-called CMI, a sliding concrete paver used in the construction of motorways with rigid pavements); f) full-scale pilots and final construction testing by an accredited laboratory (pavement compressive strength average value of 15 N/mm2 and 0.0032 m/s permeability). The continuous testing and innovating construction process explained in detail within this article allowed for a growing performance with time, finally proving the use of the CMI valid also for large porous car park applications. All this resulted in a success story that converts the Wanda Metropolitano Stadium into a great demonstration site that will help the application of the Spanish Royal Decree 638/2016 (the site also counts with rainwater harvesting for grass irrigation).Keywords: construction machinery, permeable carpark, porous concrete, SUDS, sustainable development
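As an illustrative aside, the calculation below estimates the water storage offered per square metre of the stated cross section and compares it with a design storm; only the 20% voids of the porous concrete comes from the text, while the void fractions of the bedding and sub-base, the storm depth and the run-on ratio are assumptions for illustration.

```python
# Indicative sub-surface storage of the car-park build-up, per square metre of pavement.
layers = [
    # (thickness_m, void_fraction)
    (0.20, 0.20),   # porous concrete, 20% voids (from the text)
    (0.05, 0.35),   # 5 cm of 5/10 gravel bedding (void fraction assumed)
    (0.50, 0.40),   # 50 cm coarse 40/50 aggregate sub-base (void fraction assumed)
]
storage_mm = sum(t * v for t, v in layers) * 1000.0
print(f"storage capacity: {storage_mm:.0f} mm of water per m2 of car park")

# Compare with an assumed 40 mm design storm falling on the car park plus an equal
# run-on contribution from adjacent sealed surfaces draining onto it (1:1 ratio assumed).
design_storm_mm = 40.0
inflow_mm = design_storm_mm * (1.0 + 1.0)
print("storm fully detained" if inflow_mm <= storage_mm else "overflow expected")
```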
Procedia PDF Downloads 144
52 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection
Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément
Abstract:
The need for single-cell detection and analysis techniques has increased in the past decades because of the heterogeneity of individual living cells, which increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection, high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and monitoring the variation of these species mainly fall into two types. One is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example the dielectrophoresis technique, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is presented based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false positive signal arising from the pressure exerted by all (including non-target) cells pushing down on the aptamers, but also to stabilize the aptamer in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, an LOD of 13 cells (with 5.4 μL of cell suspension) was estimated. The nano-supported cell technology using redox-labeled aptasensors was then pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling over millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with an LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at a large scale. Here, the introduced nanopillar array technology combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM) is perfectly suited for such implementation. Combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars
Procedia PDF Downloads 115
51 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task
Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes
Abstract:
For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing - both oral and written - is an important factor. Language disturbances might serve as a strong indicator of an underlying neurodegenerative disorder like AD. However, the current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, our aim is to describe and test differences between cognitively healthy and cognitively impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables are mainly related to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants enrolled in our research were chosen on the basis of a number of basic criteria necessary to collect reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). Also, a questionnaire was used to collect socio-demographic information (age, gender, education) on the subjects as well as more details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences. Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and time-stamp keystroke activity to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause (length, number, distribution, location, etc.) and revision (number, type, operation, embeddedness, location, etc.) characteristics. As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analysis already showed some promising results concerning pause times before, within and after words. For all variables, mixed effects models were used that included participants as a random effect and MMSE scores, GDS scores and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between the group participants belonged to (cognitively impaired or healthy elderly) and word categories. However, pause times within words did show an interaction effect, which indicates that pause times within certain word categories differ significantly between patients and healthy elderly.Keywords: Alzheimer's disease, keystroke logging, matching, writing process
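As an illustrative aside, the sketch below fits the kind of mixed-effects model described above, with a random intercept per participant and a group-by-word-category interaction on within-word pause times; the synthetic data, effect sizes and simplified covariate set are assumptions for illustration and do not reproduce the study's dataset.

```python
# Mixed-effects model of pause times with participants as a random effect (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for i in range(26):                                  # 13 impaired + 13 matched healthy elderly
    group = "impaired" if i < 13 else "healthy"
    for cat in ["determiner", "noun", "verb"]:
        for _ in range(20):                          # 20 within-word pauses per category
            base = 600 if group == "impaired" else 400
            rows.append({"participant": f"p{i:02d}", "group": group,
                         "word_category": cat,
                         "pause_within_ms": rng.normal(base, 80)})
df = pd.DataFrame(rows)

# The group x word-category interaction asks whether within-word pauses separate
# patients from healthy elderly for specific word categories.
model = smf.mixedlm("pause_within_ms ~ group * word_category",
                    df, groups=df["participant"]).fit()
print(model.summary())
```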
Procedia PDF Downloads 365
50 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic
Abstract:
The earth is constantly bombarded by cosmic rays that can be of either galactic or solar origin. Thus, humans are exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. On the contrary, estimations differ by one order of magnitude for the contribution induced by certain solar particle events. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is of the order of 1 mSv and more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate is of the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e. comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surface of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) describing the extensive air shower characteristics and allowing assessment of the ambient dose equivalent. In this approach, the GCR is based on the force-field approximation model. The physical description of the Solar Cosmic Rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD allows determination of the spectral fluence rates of secondary particles induced by extensive showers, considering altitudes ranging from ground level to 45 km. Ambient dose equivalent can be determined using fluence-to-ambient dose equivalent conversion coefficients. The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a large statistical sample of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.Keywords: cosmic ray, human dose, solar flare, aviation
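As an illustrative aside, the snippet below shows the dose bookkeeping implied above: spectral fluence rates of secondary particles are folded with fluence-to-ambient-dose-equivalent conversion coefficients, integrated over energy, and accumulated over the flight duration; all numerical values (spectra, coefficients, flight time) are placeholders for illustration, not ATMORAD outputs.

```python
# Fold secondary-particle fluence rates with fluence-to-H*(10) conversion coefficients
# and accumulate over a flight (all numbers are illustrative placeholders).
import numpy as np

energies = np.array([10.0, 100.0, 1000.0])            # MeV
fluence_rate = {                                       # cm^-2 s^-1 MeV^-1 at one waypoint
    "neutron": np.array([2e-3, 8e-4, 1e-4]),
    "proton":  np.array([5e-4, 3e-4, 5e-5]),
}
h_coeff = {                                            # pSv cm^2, fluence-to-H*(10)
    "neutron": np.array([350.0, 450.0, 600.0]),
    "proton":  np.array([300.0, 500.0, 900.0]),
}

def dose_rate_psv_per_s(phi, h, e):
    # trapezoidal integration of phi(E) * h(E) over the energy grid
    integrand = phi * h
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e)))

rate = sum(dose_rate_psv_per_s(fluence_rate[p], h_coeff[p], energies) for p in fluence_rate)
flight_hours = 7.5                                     # e.g. a Paris - New York leg (assumed)
print(f"ambient dose equivalent ~ {rate * 3600 * flight_hours / 1e6:.1f} uSv")
```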
Procedia PDF Downloads 204
49 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process
Authors: Reyna Singh, David Lokhat, Milan Carsky
Abstract:
The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. Via hydrogenation, a temperature-staged approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives included maximising thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently promoting hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100 μm, and Tetralin mixed in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250℃ and 300℃ and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first stage liquid product was pumped into the second stage vertical reactor, which was designed to counter-currently contact the hydrogen-rich gas stream and incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts, Cobalt-Molybdenum (CoMo) and Nickel-Molybdenum (NiMo), were compared in terms of their conversion, selectivity and HDS performance at temperatures 50℃ higher than the respective first stage tests. The catalysts were activated at 300°C with a hydrogen flowrate of approximately 10 ml/min prior to the testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS). Internal standard quantification of the sulphur content, the BTX (benzene, toluene, and xylene) and alkene quality, and the alkanes and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products was guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, increased coal-to-liquid conversion was favoured by the lower operating temperature of 250℃, 60 minutes, and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long chain alkanes undecane and dodecane, the unsaturated alkenes octene and nonene, and PAH compounds such as indene. The second stage product distribution showed an increase in the BTX quality of the liquid product and in branched chain alkanes, and a reduction in the sulphur concentration. In terms of HDS performance and selectivity to the production of long and branched chain alkanes, NiMo performed better than CoMo; CoMo is selective to a higher concentration of cyclohexane. Over 16 days on stream each, NiMo had a higher activity than CoMo. The potential to cover the demand for low-sulphur, crude diesel and solvents from the production of high value hydrocarbon liquid in the said process is thus demonstrated.Keywords: catalyst, coal, liquefaction, temperature-staged
Procedia PDF Downloads 646
48 The Impacts of New Digital Technology Transformation on Singapore Healthcare Sector: Case Study of a Public Hospital in Singapore from a Management Accounting Perspective
Authors: Junqi Zou
Abstract:
As one of the world’s most tech-ready countries, Singapore has initiated the Smart Nation plan to harness the full power and potential of digital technologies to transform the way people live and work, through more efficient government and business processes, to make the economy more productive. The key evolutions of digital technology transformation in healthcare and the increasing deployment of the Internet of Things (IoT), Big Data, AI/cognitive, Robotic Process Automation (RPA), Electronic Health Record Systems (EHR), Electronic Medical Record Systems (EMR), and Warehouse Management Systems (WMS) in the most recent decade have significantly stepped up the move towards an information-driven healthcare ecosystem. The advances in information technology not only bring benefits to patients but also act as a key force in changing management accounting in the healthcare sector. The aim of this study is to investigate the impacts of digital technology transformation on Singapore’s healthcare sector from a management accounting perspective. Adopting a Balanced Scorecard (BSC) analysis approach, this paper conducted an exploratory case study of a newly launched Singapore public hospital, which has been recognized as amongst the most digitally advanced healthcare facilities in the Asia-Pacific region. Specifically, this study gains insights into how the new technology is changing healthcare organizations’ management accounting from four perspectives under the Balanced Scorecard approach: 1) Financial Perspective, 2) Customer (Patient) Perspective, 3) Internal Processes Perspective, and 4) Learning and Growth Perspective. Based on a thorough review of archival records from the government and the public, and the interview reports with the hospital’s CIO, this study finds improvements from all four perspectives under the Balanced Scorecard framework as follows: 1) Learning and Growth Perspective: the Government (Ministry of Health) works with the hospital to open up multiple training pathways for health professionals that upgrade and develop new IT skills among the healthcare workforce to support the transformation of healthcare services. 2) Internal Process Perspective: the hospital achieved digital transformation through Project OneCare to integrate clinical, operational, and administrative information systems (e.g., EHR, EMR, WMS, EPIB, RTLS) that enable the seamless flow of data and the implementation of a JIT system to help the hospital operate more effectively and efficiently. 3) Customer Perspective: the fully integrated EMR suite enhances the patient’s experience by achieving the 5 Rights (Right Patient, Right Data, Right Device, Right Entry and Right Time). 4) Financial Perspective: cost savings are achieved from improved inventory management and effective supply chain management. The use of process automation also results in a reduction of manpower and logistics costs. To summarize, these improvements identified under the Balanced Scorecard framework confirm the success of utilizing the integration of advanced ICT to enhance a healthcare organization’s customer service, productivity, efficiency, and cost savings. Moreover, the Big Data generated from this integrated EMR system can be particularly useful in aiding the management control system to optimize decision making and strategic planning.
To conclude, the new digital technology transformation has extended the usefulness of management accounting to both financial and non-financial dimensions, taking it to new heights in the area of healthcare management.Keywords: balanced scorecard, digital technology transformation, healthcare ecosystem, integrated information system
Procedia PDF Downloads 161
47 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation
Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy
Abstract:
The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment faces numerous obstacles, whether they be topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most favourable pipeline route crossing of a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative and subjective and is liable to bias towards the discipline and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites an automated, multi-criteria, quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing the hazard posed by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows) to the pipeline. All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending an escarpment, but the vulnerability of these spurs to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and, of course, the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis
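As an illustrative aside, the sketch below shows the core of the least-cost-routing step: normalized constraint rasters are combined into a composite geocost surface with chosen weightings, and the least-accumulated-geocost path between the two terminals is traced over it. The random rasters, weightings and terminal cells are assumptions for illustration, not the project's data.

```python
# Composite geocost surface and least-cost path between two terminals (illustrative data).
import numpy as np
from skimage.graph import route_through_array

shape = (200, 300)
rng = np.random.default_rng(0)
slope       = rng.random(shape)      # stand-ins for normalized (0-1) constraint rasters
rugosity    = rng.random(shape)
debris_vuln = rng.random(shape)

weights = {"slope": 0.4, "rugosity": 0.2, "debris": 0.4}      # assumed weightings
geocost = (weights["slope"] * slope
           + weights["rugosity"] * rugosity
           + weights["debris"] * debris_vuln) + 0.01          # small floor keeps costs positive

start, end = (5, 5), (190, 290)                               # terminal cells (assumed)
path, total_cost = route_through_array(geocost, start, end,
                                       fully_connected=True, geometric=True)
print(f"route length: {len(path)} cells, accumulated geocost: {total_cost:.1f}")
```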
Procedia PDF Downloads 405
46 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV) without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current ‘beaker’ method were to reduce the amount of radioactive substance handled by the laboratory personnel, to ease instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, used to create a linear concentration gradient inside a 200 μm x 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500 nm - 600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, its generation can be achieved in under one second, a more time-efficient gradient generation process compared to other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
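As an illustrative aside, the snippet below evaluates the classical Taylor-Aris result for a circular tube, D_eff = D_m (1 + Pe²/48) with Pe = U·a/D_m, for the stated 200 μm x 5 cm channel; the mean velocity and molecular diffusivity are assumed values, so the numbers indicate orders of magnitude only.

```python
# Taylor-Aris effective axial dispersion in the 200 um x 5 cm circular micro-channel.
radius_m = 100e-6        # channel radius (200 um diameter, from the text)
length_m = 0.05          # channel length (from the text)
u_mean   = 5e-3          # mean flow velocity, m/s (assumed)
d_mol    = 1e-9          # molecular diffusivity, m^2/s (assumed, small-molecule scale)

peclet = u_mean * radius_m / d_mol
d_eff  = d_mol * (1.0 + peclet**2 / 48.0)    # Taylor-Aris result for a circular tube

transit_s   = length_m / u_mean
spreading_m = (2.0 * d_eff * transit_s) ** 0.5
print(f"Pe = {peclet:.0f}, D_eff = {d_eff:.2e} m^2/s")
print(f"transit time {transit_s:.0f} s, axial spreading ~ {spreading_m * 1e3:.0f} mm")
```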
Procedia PDF Downloads 385
45 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories
Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy
Abstract:
Auditory alarms play an integral role in intensive care nurses’ daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses’ presence - both physically and mentally. For ICU nurses, especially the ones who work with stationary alarm devices at the patient bed space, it is a challenge to display ‘appropriate’ alarm responses at all times as they have to navigate with great flexibility in a complex work environment. While being primarily responsible for a small number of allocated patients, they are often required to engage with other nurses’ patients, relatives, and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients’ alarms when not directly exposed to the alarms? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses’ location, social interaction, and alarm exposure at any point in time was coded in a multi-channel replay interface. The data show that responsible nurses’ direct exposure to and awareness of the alarms of their allocated patients vary significantly depending on workload, social relationships, and the location of the patient’s bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label to designate the integration of temporal, spatial and auditory load information. As a mixed-method metric, it provides tangible evidence of how nurses’ individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. Thereby it bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data of the medical sound environment without consideration for any human experience, and, on the other hand, initiatives to study subjective experiences of the medical sound environment without detailed evidence of the objective characteristics of the environment.Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography
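As an illustrative aside, the sketch below shows one minimal way such a trajectory could be assembled in software: time-stamped nurse positions are joined with alarm events to flag whether the responsible nurse was within direct hearing range when an alarm sounded. The hearing radius, coordinates and records are hypothetical illustrations, not the coding scheme used in the study.

```python
# Joining time-stamped nurse positions with alarm events to flag direct auditory exposure
# (hearing radius, coordinates and records are hypothetical).
from dataclasses import dataclass
from math import dist

HEARING_RADIUS_M = 6.0                       # assumed audibility distance at a bed space

@dataclass
class Position:
    t: float                                 # seconds into the shift
    xy: tuple                                # (x, y) location on the unit floor plan

@dataclass
class Alarm:
    t: float
    bed_xy: tuple
    responsible: str

def directly_exposed(alarm, tracks, window_s=5.0):
    """True if the responsible nurse was within hearing range while the alarm sounded."""
    nearby_in_time = [p for p in tracks[alarm.responsible] if abs(p.t - alarm.t) <= window_s]
    return any(dist(p.xy, alarm.bed_xy) <= HEARING_RADIUS_M for p in nearby_in_time)

tracks = {"nurse_A": [Position(100.0, (2.0, 3.0)), Position(105.0, (8.0, 12.0))]}
alarm = Alarm(t=103.0, bed_xy=(3.0, 3.5), responsible="nurse_A")
print(directly_exposed(alarm, tracks))       # True: nurse_A was near the bed at t = 100 s
```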
Procedia PDF Downloads 189
44 A Comparison of Videography Tools and Techniques in African and International Contexts
Authors: Enoch Ocran
Abstract:
Film Pertinence maintains consistency in storytelling by sustaining the natural flow of action while evoking a particular feeling or emotion from the viewers with selected motion pictures. This study presents a thorough investigation of "Film Pertinence" in videography that examines its influence in Africa and around the world. This research delves into the dynamic realm of visual storytelling through film, with a specific focus on the concept of Film Pertinence (FP). The study’s primary objectives are to conduct a comparative analysis of videography tools and techniques employed in both African and international contexts, examining how they contribute to the achievement of organizational goals and the enhancement of cultural awareness. The research methodology includes a comprehensive literature review, interviews with videographers from diverse backgrounds in Africa and the international arena, and the examination of pertinent case studies. The investigation aims to elucidate the multifaceted nature of videographic practices, with particular attention to equipment choices, visual storytelling techniques, cultural sensitivity, and adaptability. This study explores the impact of cultural differences on videography choices, aiming to promote understanding between African and foreign filmmakers and create more culturally sensitive films. It also explores the role of technology in advancing videography practices, resource allocation, and the influence of globalization on local filmmaking practices. The research also contributes to film studies by analyzing videography's impact on storytelling, guiding filmmakers to create more compelling narratives. The findings can inform film education, tailoring curricula to regional needs and opportunities. The study also encourages cross-cultural collaboration in the film industry by highlighting convergence and divergence in videography practices. At its core, this study seeks to explore the implications of film pertinence as a framework for videographic practice. It scrutinizes how cultural expression, education, and storytelling transcend geographical boundaries on a global scale. By analyzing the interplay between tools, techniques, and context, the research illuminates the ways in which videographers in Africa and worldwide apply film Pertinence principles to achieve cross-cultural communication and effectively capture the objectives of their clients. One notable focus of this paper is on the techniques employed by videographers in West Africa to emphasize storytelling and participant engagement, showcasing the relevance of FP in highlighting cultural awareness in visual storytelling. Additionally, the study highlights the prevalence of film pertinence in African agricultural documentaries produced for esteemed organizations such as the Roundtable on Sustainable Palm Oil (RSPO), Proforest, World Food Program, Fidelity Bank Ghana, Instituto BVRio, Aflatoun International, and the Solidaridad Network. These documentaries serve to promote prosperity, resilience, human rights, sustainable farming practices, community respect, and environmental preservation, underlining the vital role of film in conveying these critical messages. 
In summary, this research offers valuable insights into the evolving landscape of videography in different contexts, emphasizing the significance of film pertinence as a unifying principle in the pursuit of effective visual storytelling and cross-cultural communication.Keywords: film pertinence, Africa, cultural awareness, videography tools
Procedia PDF Downloads 65