Search results for: storage costs
30 Revolutionizing Oil Palm Replanting: Geospatial Terrace Design for High-precision Ground Implementation Compared to Conventional Methods
Authors: Nursuhaili Najwa Masrol, Nur Hafizah Mohammed, Nur Nadhirah Rusyda Rosnan, Vijaya Subramaniam, Sim Choon Cheak
Abstract:
Replanting in oil palm cultivation is vital for introducing new planting materials and provides an opportunity to improve road, drainage, and terrace design as well as planting density. Oil palm must be replanted approximately every 25 years. The adoption of a digital replanting blueprint is imperative, as it can assist the Malaysian oil palm industry in addressing challenges such as labour shortages and limited expertise related to replanting tasks. Effective replanting planning should commence at least six months prior to the actual replanting process. This study therefore helps to plan and design a replanting blueprint that can be translated to the ground with high precision. With the advancement of geospatial technology, it is now feasible to engage in thoroughly researched planning, which can help maximize potential yield. A blueprint designed before replanting enhances management’s ability to optimize the planting program, address manpower issues, and increase productivity. In terrace planting blueprints, geographic tools have been utilized to design the roads, drainage, terraces, and planting points based on the ARM standards. These designs are mapped with location information and undergo statistical analysis. The geospatial approach is essential to precision agriculture and to ensuring an accurate translation of the design to the ground through high-accuracy technologies. In this study, geospatial and remote sensing technologies played a vital role. LiDAR data were employed to derive the Digital Elevation Model (DEM), enabling the precise selection of terraces, while ortho imagery was used for validation. Throughout the design process, Geographical Information System (GIS) tools were extensively utilized. To assess the design’s reliability on the ground compared with the current conventional method, high-precision GPS instruments such as the EOS Arrow Gold and HIPER VR GNSS were used, both offering accuracy levels between 0.3 cm and 0.5 cm. A Nearest Distance Analysis was performed to compare the design with the actual planting on the ground. The analysis could not be applied to the roads because of discrepancies between the actual roads and the blueprint design, which resulted in minimal variance. In contrast, the terraces closely adhered to the GPS markings, with the largest deviation being less than 0.5 meters from the terraces actually constructed. Considering that the slope required for terrace planting must be greater than 6 degrees, the study found that approximately 65% of the terracing was constructed at a 12-degree slope, while over 50% of the terracing was constructed at slopes exceeding the minimum. Blueprint-based replanting offers promising strategies for optimizing land utilization in agriculture. This approach harnesses technology and meticulous planning to yield advantages including increased efficiency, enhanced sustainability, and cost reduction. Practical implementation of this technique can lead to tangible and significant improvements in the agricultural sector. To boost efficiency further, future initiatives will require more sophisticated techniques and the incorporation of precision GPS devices into upcoming blueprint replanting projects, together with strategic progression that guarantees precision at both the blueprint design stage and its subsequent implementation in the field.
Looking ahead, automating digital blueprints is necessary to reduce time, workforce, and costs in commercial production.
Keywords: replanting, geospatial, precision agriculture, blueprint
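As a rough illustration of the Nearest Distance Analysis described above, the following minimal sketch compares designed terrace points against GPS-surveyed as-built points and reports the deviation per point. The coordinates are hypothetical placeholders, and a projected coordinate system in metres (e.g., UTM) is assumed so that distances come out in metres; this is not the study's own tooling.

```python
# Minimal sketch of a nearest-distance comparison between blueprint terrace
# points and GNSS-surveyed as-built points (hypothetical coordinates;
# a projected CRS in metres is assumed so distances are in metres).
import numpy as np
from scipy.spatial import cKDTree

# Designed terrace/planting points from the blueprint (x, y in metres)
design_xy = np.array([[500100.0, 300200.0],
                      [500108.5, 300201.2],
                      [500117.0, 300202.4]])

# As-built points recorded with the GNSS rover
asbuilt_xy = np.array([[500100.3, 300200.2],
                       [500108.9, 300201.0],
                       [500117.4, 300202.9]])

tree = cKDTree(design_xy)                 # index the blueprint points
dist, idx = tree.query(asbuilt_xy, k=1)   # nearest designed point for each as-built point

print("per-point deviation (m):", np.round(dist, 3))
print("max deviation (m):", dist.max())   # the study reports < 0.5 m for terraces
```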
Procedia PDF Downloads 84
29 Artificial Intelligence Impact on the Australian Government Public Sector
Authors: Jessica Ho
Abstract:
AI has helped governments, businesses and industries transform the way they do things. AI is used to automate tasks to improve decision-making and efficiency. AI is embedded in sensors and used in automation to help save time and eliminate human errors in repetitive tasks. Today, we see AI growing through the collection of vast amounts of data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions and offer more personalised services based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can also help streamline government processes to deliver more seamless and intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as we are unable to determine the risk brought by the unprecedented pace of adoption of AI solutions in government. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property rights and its impact on job security. Within NSW’s public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are also attracted to the ease of use and accessibility of AI solutions that do not require specialised technical skills. This increased accessibility, however, must be balanced against a higher risk and exposure to the health and safety of citizens. On the other side, public agencies struggle to keep up with this pace while minimising risks, yet the low entry cost and open-source nature of generative AI have led to a rapid, organic increase in the development of AI-powered apps – “There is an AI for That” in Government. Other challenges include the fact that there appear to be no legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are too many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Therefore, traditional corporate risk and governance frameworks and regulation and legislation frameworks will need to be evaluated against AI's unique challenges, given its rapidly evolving nature, ethical considerations, and heightened regulatory scrutiny impacting the safety of consumers and increased risks for Government. Creating an effective, efficient NSW Government governance regime, adapted to the range of different approaches to the application of AI, is not a mere matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence to show that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage. If such harm is not addressed, the public's confidence in this kind of innovation will be weakened. This paper suggests several AI regulatory approaches for consideration that are forward-looking and agile while simultaneously fostering innovation and human rights.
The anticipated outcome is to ensure that the NSW Government matches the rising levels of innovation in AI technologies with appropriate and balanced innovation in AI governance.
Keywords: artificial intelligence, machine learning, rules, governance, government
Procedia PDF Downloads 71
28 Applying Concept Mapping to Explore Temperature Abuse Factors in the Processes of Cold Chain Logistics Centers
Authors: Marco F. Benaglia, Mei H. Chen, Kune M. Tsai, Chia H. Hung
Abstract:
As societal and family structures, consumer dietary habits, and awareness about food safety and quality continue to evolve in most developed countries, the demand for refrigerated and frozen foods has been growing, and the issues related to their preservation have gained increasing attention. A well-established cold chain logistics system is essential to avoid any temperature abuse; therefore, assessing potential disruptions in the operational processes of cold chain logistics centers becomes pivotal. This study preliminarily employs HACCP to find disruption factors in cold chain logistics centers that may cause temperature abuse. Then, concept mapping is applied: selected experts engage in brainstorming sessions to identify any further factors. The panel consists of ten experts, including four from logistics and home delivery, two from retail distribution, one from the food industry, two from low-temperature logistics centers, and one from the freight industry. Disruptions include equipment-related aspects, human factors, management aspects, and process-related considerations. The areas of observation encompass freezer rooms, refrigerated storage areas, loading docks, sorting areas, and vehicle parking zones. The experts also categorize the disruption factors based on perceived similarities and build a similarity matrix. Each factor is evaluated for its impact, frequency, and investment importance. Next, multiple scale analysis, cluster analysis, and other methods are used to analyze these factors. Simultaneously, key disruption factors are identified based on their impact and frequency, and, subsequently, the factors that companies prioritize and are willing to invest in are determined by assessing investors’ risk aversion behavior. Finally, Cumulative Prospect Theory (CPT) is applied to verify the risk patterns. 66 disruption factors are found and categorized into six clusters: (1) "Inappropriate Use and Maintenance of Hardware and Software Facilities", (2) "Inadequate Management and Operational Negligence", (3) "Product Characteristics Affecting Quality and Inappropriate Packaging", (4) "Poor Control of Operation Timing and Missing Distribution Processing", (5) "Inadequate Planning for Peak Periods and Poor Process Planning", and (6) "Insufficient Cold Chain Awareness and Inadequate Training of Personnel". This study also identifies five critical factors in the operational processes of cold chain logistics centers: "Lack of Personnel’s Awareness Regarding Cold Chain Quality", "Personnel Not Following Standard Operating Procedures", "Personnel’s Operational Negligence", "Management’s Inadequacy", and "Lack of Personnel’s Knowledge About Cold Chain". The findings show that cold chain operators prioritize prevention and improvement efforts in the "Inappropriate Use and Maintenance of Hardware and Software Facilities" cluster, particularly focusing on the factors of "Temperature Setting Errors" and "Management’s Inadequacy". However, through the application of CPT theory, this study reveals that companies are not usually willing to invest in the improvement of factors related to the "Inappropriate Use and Maintenance of Hardware and Software Facilities" cluster due to its low occurrence likelihood, but they acknowledge the severity of the consequences if it does occur. 
Hence, the main implication is that the key disruption factors in cold chain logistics centers’ processes are associated with personnel issues; therefore, comprehensive training, periodic audits, and the establishment of reasonable incentives and penalties for both new employees and managers may significantly reduce disruption issues.
Keywords: concept mapping, cold chain, HACCP, cumulative prospect theory
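To make the Cumulative Prospect Theory step more concrete, the sketch below scores two hypothetical disruption factors (one rare but severe, one frequent but mild) by combining a value function for losses with an inverse-S probability weighting function. The loss amounts and probabilities are invented, and the alpha, lambda and gamma parameters are the commonly cited Tversky–Kahneman (1992) estimates rather than values fitted in this study.

```python
# Illustrative CPT scoring of disruption factors: value function for losses
# plus inverse-S probability weighting. All figures are hypothetical.
def value(x, alpha=0.88, lam=2.25):
    """CPT value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.69):
    """Probability weighting: small probabilities are overweighted."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

# Two single-outcome loss "prospects": (loss in arbitrary monetary units, probability)
prospects = {
    "temperature setting error (rare, severe)": (-500_000, 0.02),
    "minor handling delay (frequent, mild)":    (-5_000, 0.40),
}

for name, (loss, p) in prospects.items():
    cpt_value = weight(p) * value(loss)   # subjective (CPT-weighted) loss
    print(f"{name}: weighted CPT value = {cpt_value:,.0f}")
```

Ranking factors by such a weighted loss is one way to compare how decision makers subjectively perceive low-probability, high-severity disruptions against frequent, mild ones.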
Procedia PDF Downloads 70
27 Biotech Processes to Recover Valuable Fraction from Buffalo Whey Usable in Probiotic Growth, Cosmeceutical, Nutraceutical and Food Industries
Authors: Alberto Alfano, Sergio D’ambrosio, Darshankumar Parecha, Donatella Cimini, Chiara Schiraldi
Abstract:
The main objective of this study concerns the setup of an efficient small-scale platform for the conversion of local renewable waste materials, such as whey, into added-value products, thereby reducing the environmental impact and costs deriving from the disposal of processing waste. Buffalo milk whey derived from the cheese-making process, called second cheese whey, is the main by-product of the dairy industry. Whey is the main and most polluting by-product obtained from cheese manufacturing; it consists of lactose, lactic acid, proteins, and salts, which makes it a potential source of added-value products. In Italy, and in particular in the Campania region, soft cheese production generates a large volume of liquid waste, especially during late spring and summer. This project is part of a circular economy perspective focused on the conversion of potentially polluting and difficult-to-purify waste into a resource to be exploited, and it embodies the concept of the three “R”s: reduce, recycle, and reuse. Special focus was paid to the production of health-promoting biomolecules and biopolymers, which may be exploited in different segments of the food and pharmaceutical industries. These biomolecules may be recovered through appropriate processes and reused to obtain added-value products. Therefore, ultrafiltration and nanofiltration processes were performed to fractionate bioactive components starting from buffalo milk whey. In this direction, the present study focused on the implementation of a downstream process that converts waste generated by the food and food processing industries into added-value products with potential applications. Owing to innovative downstream and biotechnological processes, whey may be considered a resource rather than a waste product, yielding high added-value products such as food supplements (probiotics), cosmeceuticals, biopolymers, and recyclable purified water. Besides targeting gastrointestinal disorders, probiotics such as Lactobacilli have been reported to improve immunomodulation and protection of the host against infections caused by viral and bacterial pathogens. Interestingly, inactivated microbial (probiotic) cells and their metabolic products, referred to as parabiotics and postbiotics, respectively, also have a crucial role and act as mediators in the modulation of the host’s immune function. To boost the production of biomass (both viable and/or heat-inactivated cells) and/or the synthesis of growth-related postbiotics, such as EPS, efficient and sustainable fermentation processes are necessary. Based on a “zero-waste” approach, wastes generated by local industries can be recovered and recycled to develop sustainable biotechnological processes to obtain probiotics as well as postbiotics and parabiotics, to be tested as bioactive compounds against gastrointestinal disorders. The results have shown that it was possible to recover an ultrafiltration retentate with suitable characteristics to be used in skin dehydration, to form films (i.e., packaging for the food industry), or as a wound repair agent, and a nanofiltration retentate to recover lactic acid and carbon sources (e.g., lactose, glucose) used for microbial cultivation. Finally, the last goal is to obtain purified water that can be reused throughout the process. In fact, water reclamation and reuse provide a unique and viable opportunity to augment traditional water supplies, a key issue nowadays.
Keywords: biotech process, downstream process, probiotic growth, from waste to product, buffalo whey
Procedia PDF Downloads 69
26 Mobi-DiQ: A Pervasive Sensing System for Delirium Risk Assessment in Intensive Care Unit
Authors: Subhash Nerella, Ziyuan Guan, Azra Bihorac, Parisa Rashidi
Abstract:
Intensive care units (ICUs) provide care to critically ill patients in severe and life-threatening conditions. However, patient monitoring in the ICU is limited by the time and resource constraints imposed on healthcare providers. Many critical care indices such as mobility are still manually assessed, which can be subjective, prone to human errors, and lacking in granularity. Other important aspects, such as environmental factors, are not monitored at all. For example, critically ill patients often experience circadian disruptions due to the absence of effective environmental “timekeepers” such as the light/dark cycle and the systemic effect of acute illness on chronobiologic markers. Although the occurrence of delirium is associated with circadian disruption risk factors, these factors are not routinely monitored in the ICU. Hence, there is a critical unmet need to develop systems for precise and real-time assessment through novel enabling technologies. We have developed the mobility and circadian disruption quantification system (Mobi-DiQ) by augmenting biomarker and clinical data with pervasive sensing data to generate cues related to mobility, nightly disruptions, and light and noise exposure. We hypothesize that Mobi-DiQ can provide accurate mobility and circadian cues that correlate with bedside clinical mobility assessments and circadian biomarkers, ultimately important for delirium risk assessment and prevention. The collected multimodal dataset consists of depth images, electromyography (EMG) data, patient extremity movement captured by accelerometers, ambient light levels, Sound Pressure Level (SPL), and indoor air quality measured by volatile organic compounds and the equivalent CO₂ concentration. For delirium risk assessment, the system recognizes mobility cues (axial body movement features and body key points) and circadian cues, including nightly disruptions, ambient SPL, and light intensity, as well as other environmental factors such as indoor air quality. The Mobi-DiQ system consists of three major components: the pervasive sensing system, a data storage and analysis server, and a data annotation system. For data collection, six local pervasive sensing systems were deployed, each including a local computer and sensors. A video recording tool with a graphical user interface (GUI), developed in Python, was used to capture depth image frames for analyzing patient mobility. All sensor data are encrypted and then automatically uploaded to the Mobi-DiQ server through a secured VPN connection. Several data pipelines were developed to automate data transfer, curation, and preparation for annotation and model training. Data curation and post-processing are performed on the server. A custom secure annotation tool with a GUI was developed to annotate depth activity data. The annotation tool is linked to a MongoDB database to record the data annotations and to provide summarization. Docker containers are also utilized to manage services and pipelines running on the server in an isolated manner. The processed clinical data and annotations are used to train and develop real-time pervasive sensing systems to augment clinical decision-making and promote targeted interventions. In the future, we intend to evaluate our system in a clinical implementation trial, as well as to refine and validate it by using other data sources, including neurological data obtained through continuous electroencephalography (EEG).
Keywords: deep learning, delirium, healthcare, pervasive sensing
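The following minimal sketch shows one way circadian cues such as nightly light exposure and noise disruptions could be derived from ambient sensor logs. The column names, thresholds, sampling interval, and the simulated data are hypothetical illustrations, not the Mobi-DiQ schema or pipeline.

```python
# Sketch of deriving simple circadian cues (nightly light exposure and
# sound-pressure-level disruptions) from ambient sensor time series.
import pandas as pd
import numpy as np

rng = pd.date_range("2023-01-01", periods=24 * 60, freq="min")  # one day at 1-min resolution
df = pd.DataFrame({
    "lux": np.random.gamma(2.0, 30.0, len(rng)),   # ambient light (simulated)
    "spl_db": np.random.normal(48, 6, len(rng)),   # sound pressure level (simulated)
}, index=rng)

night = df.between_time("22:00", "06:00")          # nominal nighttime window
cues = {
    "mean_night_lux": float(night["lux"].mean()),
    # count minutes where noise exceeds a disruption threshold (e.g., 60 dB)
    "night_spl_disruptions": int((night["spl_db"] > 60).sum()),
}
print(cues)
```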
Procedia PDF Downloads 93
25 Northern Nigeria Vaccine Direct Delivery System
Authors: Evelyn Castle, Adam Thompson
Abstract:
Background: In 2013, the Kano State Primary Health Care Management Board redesigned its routine immunization supply chain from a diffused pull model to a direct delivery push model. It addressed issues around stockouts and reduced the time health facility staff spent collecting vaccines and reporting on vaccine usage. The health care board sought the help of a 3PL for twice-monthly deliveries from its cold store to 484 facilities across 44 local governments. eHA’s Health Delivery Systems group formed a 3PL to serve 326 of these new facilities in partnership with the State. We focused on designing and implementing a technology system throughout. Basic methodologies: GIS Mapping: - Planning the delivery of vaccines to hundreds of health facilities requires detailed route planning for delivery vehicles. Mapping the road networks across Kano and Bauchi with a custom routing tool provided information for the optimization of deliveries, reducing the number of kilometers driven each round by 20% and thereby reducing cost and delivery time. Direct Delivery Information System: - Vaccine Direct Deliveries are facilitated through pre-round planning (driven by a health facility database, extensive GIS, and inventory workflow rules), a manager and driver control panel for customizing delivery routines and reporting, a progress dashboard, schedules/routes, packing lists, delivery reports, and driver data collection applications. Move: Last Mile Logistics Management System: - MOVE has made vaccine supply information management timely, accurate and actionable. It provides stock management workflow support, alerts management for cold chain exceptions/stock-outs, and on-device analytics for health and supply chain staff. The software was built to be offline-first with a user-validated interface and experience. Deployed to hundreds of vaccine storage sites, the improved information tools help facilitate the process of system redesign and change management. Findings: - Stock-outs reduced from 90% to 33%. - Redesigned current health systems and managed vaccine supply for 68% of Kano’s wards. - Near real-time reporting and data availability to track stock. - Paperwork burdens of health staff have been dramatically reduced. - Medicine available when the community needs it. - Consistent vaccination dates for children under one to prevent polio, yellow fever, tetanus. - Higher immunization rates = lower infection rates. - Hundreds of millions of Naira worth of vaccines successfully transported. - Fortnightly service to 326 facilities in 326 wards across 30 Local Government areas. - 6,031 cumulative deliveries. - Over 3.44 million doses transported. - Minimum travel distance covered in a round of delivery is 2,000 km and the maximum is 6,297 km. - 153,409 km travelled by 6 drivers. - 500 facilities in 326 wards. - Data captured and synchronized for the first time. - Data-driven decision-making now possible. Conclusion: eHA’s Vaccine Direct Delivery has met the challenges in Kano and Bauchi States and provided a reliable vaccine delivery service that ensures health facilities can run vaccination clinics for children under one. eHA uses innovative technology that delivers vaccines from Northern Nigerian zonal stores straight to healthcare facilities. It has helped healthcare workers spend less time managing supplies and more time delivering care, and it will be rolled out nationally across Nigeria.
Keywords: direct delivery information system, health delivery system, GIS mapping, Northern Nigeria, vaccines
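The kind of route-ordering logic behind such delivery planning can be illustrated with a tiny nearest-neighbour tour heuristic over facility coordinates, as in the sketch below. The facilities, coordinates, and distances are hypothetical stand-ins and the heuristic is only indicative of how re-ordering visits can shorten a delivery round; it is not the custom routing tool used in the programme.

```python
# Rough sketch: nearest-neighbour ordering of delivery stops from a cold store,
# compared with visiting facilities in listing order. Coordinates are hypothetical (km).
import numpy as np

facilities = {"cold_store": (0.0, 0.0), "hf_a": (12.0, 5.0),
              "hf_b": (3.0, 14.0), "hf_c": (9.0, 9.0)}
names = list(facilities)
xy = np.array([facilities[n] for n in names])

def tour_length(order):
    pts = xy[order]
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def nearest_neighbour(start=0):
    unvisited, route = set(range(1, len(names))), [start]
    while unvisited:
        last = route[-1]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(xy[last] - xy[j]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

naive = list(range(len(names)))          # visit in listing order
optimised = nearest_neighbour()
print("naive km:", round(tour_length(naive), 1),
      "| heuristic km:", round(tour_length(optimised), 1))
```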
Procedia PDF Downloads 374
24 Human Bone Marrow Stem Cell Behavior on 3D Printed Scaffolds as Trabecular Bone Grafts
Authors: Zeynep Busra Velioglu, Deniz Pulat, Beril Demirbakan, Burak Ozcan, Ece Bayrak, Cevat Erisken
Abstract:
Bone tissue has the ability to perform a wide array of functions including providing posture, load-bearing capacity, protection for the internal organs, initiating hematopoiesis, and maintaining the homeostasis of key electrolytes via calcium/phosphate ion storage. The most common cause of bone defects is extensive trauma and subsequent infection. Bone tissue has the capability to self-heal without scar tissue formation for the majority of injuries. However, some may result in delayed union or fracture non-union. Such cases include the reconstruction of large bone defects or cases of a compromised regenerative process as a result of avascular necrosis and osteoporosis. Several surgical methods exist to treat bone defects, including the Ilizarov method, the Masquelet technique, growth factor stimulation, and bone replacement. Unfortunately, these are technically demanding and come with noteworthy disadvantages such as lengthy treatment duration, adverse effects on the patient’s psychology, repeated surgical procedures, and often long hospitalization times. These limitations associated with surgical techniques make bone substitutes an attractive alternative. Here, it was hypothesized that a 3D printed scaffold will mimic trabecular bone in terms of biomechanical properties and that such scaffolds will support cell attachment and survival. To test this hypothesis, this study aimed at fabricating poly(lactic acid), PLA, structures using 3D printing technology for trabecular bone defects, characterizing the scaffolds and comparing them with bovine trabecular bone. The capacity of the scaffolds to support human bone marrow stem cell (hBMSC) attachment and survival was also evaluated. Cubes with a volume of 1 cm³ having pore sizes of 0.50, 1.00 and 1.25 mm were printed. The scaffolds/grafts were characterized in terms of porosity, contact angle, and compressive mechanical properties, as well as cell response. Porosities of the 3D printed scaffolds were calculated based on apparent densities. For contact angles, 50 µl of distilled water was dropped onto the surface of the scaffolds, and contact angles were measured using ImageJ software. Mechanical characterization under compression was performed on scaffolds and native trabecular bone (bovine, 15 months) specimens using a universal testing machine at a rate of 0.5 mm/min. hBMSCs were seeded onto the 3D printed scaffolds. After 3 days of incubation with fully supplemented Dulbecco’s modified Eagle’s medium, the cells were fixed using a 2% formaldehyde and glutaraldehyde mixture. The specimens were then imaged under scanning electron microscopy. Cell proliferation was determined using the EZQuant dsDNA Quantitation kit. Fluorescence was measured using a SpectraMax M2 microplate reader at excitation and emission wavelengths of 485 nm and 535 nm, respectively. Findings suggested that the porosity of scaffolds with pore dimensions of 0.5 mm, 1.0 mm and 1.25 mm was not affected by pore size, while contact angle and compressive modulus decreased with increasing pore size. Biomechanical characterization of trabecular bone yielded higher modulus values as compared to scaffolds of all pore sizes studied. Cells attached and survived on all surfaces, demonstrating higher proliferation on scaffolds with 1.25 mm pores compared with those with 1.0 mm pores.
Collectively, given the lower mechanical properties of the scaffolds compared with native bone and the biocompatibility of the scaffolds, the 3D printed PLA scaffolds of this study appear to be candidate substitutes for bone repair and regeneration.
Keywords: 3D printing, biomechanics, bone repair, stem cell
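A back-of-the-envelope sketch of the porosity calculation from apparent density is given below. The scaffold masses are made-up example values, and 1.24 g/cm³ is a commonly cited bulk density for PLA; the study's own measured values are not reproduced here.

```python
# Porosity from apparent density: porosity = 1 - (apparent density / bulk material density).
PLA_BULK_DENSITY = 1.24   # g/cm^3, commonly cited value for PLA
SCAFFOLD_VOLUME = 1.0     # cm^3 printed cubes

example_masses = {"0.50 mm pores": 0.62,   # g (hypothetical)
                  "1.00 mm pores": 0.60,
                  "1.25 mm pores": 0.61}

for label, mass in example_masses.items():
    apparent_density = mass / SCAFFOLD_VOLUME
    porosity = 1.0 - apparent_density / PLA_BULK_DENSITY
    print(f"{label}: porosity = {porosity:.1%}")
```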
Procedia PDF Downloads 174
23 Successful Optimization of a Shallow Marginal Offshore Field and Its Applications
Authors: Kumar Satyam Das, Murali Raghunathan
Abstract:
This note discusses the feasibility of developing a challenging shallow offshore field in South East Asia and how its learnings can be applied to marginal field development across the world, especially to developing marginal fields in this low oil price environment. The field was found to be economically challenging even during high oil prices, and the project was put on hold. Shell started a development study with the aim of significantly reducing cost through competitive scoping and reviving stranded projects. The proposed strategy to achieve this involved improving per-platform recovery and reducing CAPEX. Methodology: Based on various benchmarking tools, such as Woodmac, for similar projects in the region, and on economic affordability, a challenging target of a 50% reduction in unit development cost (UDC) was set for the project. The technical scope was defined to the minimum, namely a wellhead platform with minimum functionality to ensure production. The evaluation of key project decisions such as well location and number, well design, artificial lift methods and wellhead platform type under different development concepts was carried out through an integrated multi-discipline approach. Key elements influencing per-platform recovery were wellhead platform (WHP) location, well count, well reach and well productivity. Major Findings: The shallow reservoir posed challenges in well design (dog-leg severity, casing size and the achievable step-out), choice of artificial lift and sand-control method. An integrated approach amongst relevant disciplines with a challenging mind-set enabled an optimized set of development decisions to be achieved. This led to a significant improvement in per-platform recovery. It was concluded that platform recovery largely depended on the reach of the well. The choice of a slim well design enabled the design of high-inclination and better-productivity wells. However, there is a trade-off between high-inclination gas lift (GL) wells and low-inclination wells in terms of long-term value, operational complexity, well reach, recovery and uptime. Well design elements such as casing size, well completion, artificial lift and sand control were added successively over the minimum technical scope design, leading to a value and risk staircase. Logical combinations of options (slim well, GL) were competitively screened to achieve a 25% reduction in well cost. Facility cost reduction was achieved through sourcing a standardized low-cost facilities platform in combination with portfolio execution to maximize execution efficiency; this approach is expected to reduce facilities cost by ~23% with respect to the development costs. Further cost reductions were achieved by maximizing the use of existing facilities nearby, changing reliance on existing water injection wells, and utilizing an existing water injector (W.I.) platform for new injectors. Conclusion: The study provides a spectrum of technically feasible options. It also made clear that different drivers lead to different development concepts, and the cost-value trade-off staircase made this very visible. Scoping the project in a competitive way has proven to be valuable for decision makers by creating a transparent view of value and associated risks/uncertainties/trade-offs for difficult choices: some elements of the project can be competitive, whilst other parts will struggle, even though they contribute significant volumes.
Reduction in UDC through proper scoping of present projects and their benchmarking serves as a learning for the development of marginal fields across the world, especially in this low oil price scenario. This way of developing a field has delivered, on average, a 40% cost reduction for Shell projects.
Keywords: benchmarking, full field development, CAPEX, feasibility
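The arithmetic behind a UDC target can be illustrated with the toy calculation below. All figures (baseline CAPEX, recovery, and the assumed recovery uplift) are hypothetical; only the ~25% well-cost and ~23% facilities-cost reductions echo the percentages quoted above.

```python
# Toy unit development cost (UDC) calculation: UDC = development CAPEX / recoverable volume.
wells_capex = 100.0        # $ million, baseline (hypothetical)
facilities_capex = 80.0    # $ million, baseline (hypothetical)
recovery = 20.0            # million barrels per platform, baseline (hypothetical)

baseline_udc = (wells_capex + facilities_capex) / recovery

# Apply the reductions discussed (~25% on wells, ~23% on facilities) plus an
# assumed 30% improvement in per-platform recovery from better well reach.
new_udc = (wells_capex * 0.75 + facilities_capex * 0.77) / (recovery * 1.30)

print(f"baseline UDC: {baseline_udc:.2f} $/bbl")
print(f"optimised UDC: {new_udc:.2f} $/bbl ({1 - new_udc / baseline_udc:.0%} reduction)")
```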
Procedia PDF Downloads 159
22 Settings of Conditions Leading to Reproducible and Robust Biofilm Formation in vitro in Evaluation of Drug Activity against Staphylococcal Biofilms
Authors: Adela Diepoltova, Klara Konecna, Ondrej Jandourek, Petr Nachtigal
Abstract:
A loss of control over antibiotic-resistant pathogens has become a global issue due to severe and often untreatable infections. This state is reflected in complicated treatment, health costs, and higher mortality. All these factors emphasize the urgent need for the discovery and development of new anti-infectives. Among the most common pathogens associated with the phenomenon of antibiotic resistance are bacteria of the genus Staphylococcus. These bacterial agents have developed several mechanisms against the effect of antibiotics. One of them is biofilm formation. In staphylococci, biofilms are associated with infections such as endocarditis, osteomyelitis, catheter-related bloodstream infections, etc. To the authors' best knowledge, no validated and standardized methodology evaluating candidate compound activity against staphylococcal biofilms exists. However, a variety of protocols for in vitro drug activity testing have been suggested, yet there are often fundamental differences among them. Based on our experience, a key methodological step that leads to credible results is to form a robust biofilm with appropriate attributes such as firm adherence to the substrate, a complex arrangement in layers, and the presence of an extracellular polysaccharide matrix. At first, for the purpose of drug anti-biofilm activity evaluation, the focus was put on various conditions (supplementation of cultivation media with human plasma/fetal bovine serum, shaking mode, density of the initial inoculum) that should lead to reproducible and robust in vitro staphylococcal biofilm formation in a microtiter plate model. Three model staphylococcal reference strains were included in the study: Staphylococcus aureus (ATCC 29213), methicillin-resistant Staphylococcus aureus (ATCC 43300), and Staphylococcus epidermidis (ATCC 35983). The total biofilm biomass was quantified using the Christensen method with crystal violet, and results obtained from at least three independent experiments were statistically processed. Attention was also paid to the viability of the biofilm-forming staphylococcal cells and the presence of the extracellular polysaccharide matrix. The conditions that led to robust biofilm biomass formation with the attributes for biofilms mentioned above were then applied by introducing an alternative method analogous to the commercially available test system, the Calgary Biofilm Device. In this test system, biofilms are formed on pegs that are incorporated into the lid of the microtiter plate. This system provides several advantages (in situ detection and quantification of biofilm microbial cells that have retained their viability after drug exposure). Based on our preliminary studies, it was found that attention should also be paid to the peg surface and the substrate on which the bacterial biofilms are formed. Therefore, further optimization steps were introduced. The surface of the pegs was coated with human plasma, fetal bovine serum, and L-polylysine. Subsequently, the willingness of bacteria to adhere and form biofilm was monitored. In conclusion, suitable conditions were revealed, leading to the formation of reproducible, robust staphylococcal biofilms in vitro for both the microtiter plate model and the system analogous to the Calgary Biofilm Device. The robustness and typical slime texture could be detected visually.
Likewise, analysis by confocal laser scanning microscopy revealed a complex three-dimensional arrangement of biofilm-forming organisms surrounded by an extracellular polysaccharide matrix.
Keywords: anti-biofilm drug activity screening, in vitro biofilm formation, microtiter plate model, the Calgary biofilm device, staphylococcal infections, substrate modification, surface coating
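The statistical processing of crystal-violet biomass readings from at least three independent experiments could look like the minimal sketch below: summarise each condition as mean ± SD and compare conditions with a two-sample test. The absorbance values are invented for illustration only, not data from this study.

```python
# Sketch of summarising crystal-violet biomass readings (e.g., OD570) from
# three independent experiments and comparing two peg-surface conditions.
import numpy as np
from scipy import stats

uncoated = np.array([0.82, 0.91, 0.78])       # OD570, plain peg surface (hypothetical)
plasma_coated = np.array([1.35, 1.28, 1.41])  # OD570, human-plasma-coated pegs (hypothetical)

for label, od in [("uncoated", uncoated), ("plasma-coated", plasma_coated)]:
    print(f"{label}: mean OD570 = {od.mean():.2f} ± {od.std(ddof=1):.2f} (SD)")

# Two-sided Welch's t-test for a difference in biofilm biomass
t_stat, p_value = stats.ttest_ind(plasma_coated, uncoated, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```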
Procedia PDF Downloads 156
21 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning
Authors: Pei Yi Lin
Abstract:
Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the associated mortality rate has been about 2.22% over the past three years. Therefore, this study aims to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients and let the machine learn from them. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment <4 points, those admitted to the ICU for less than 24 hours, and those without a CAM-ICU evaluation. Each CAM-ICU delirium assessment, performed every 8 hours within 30 days of hospitalization, is regarded as an event, and the cumulative data from ICU admission to the prediction time point are extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 research case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, restraint use, and sedative and hypnotic drugs. Through feature data cleaning, processing, and supplementation with the KNN interpolation method, a total of 54,595 research case events were extracted for machine learning model analysis. The research events from May 1 to November 30, 2022, were used as the model training data, of which 80% formed the training set for model training and 20% the validation set for internal verification; the ICU research events from December 1 to December 31, 2022, formed the external verification set. Finally, model inference and performance evaluation were performed, and the model was then retrained by adjusting the model parameters. Results: In this study, XGBoost, Random Forest, Logistic Regression, and Decision Tree models were analyzed and compared. The average accuracy of internal verification was highest for Random Forest (AUC=0.86); the average accuracy of external verification was highest for Random Forest and XGBoost, with an AUC of 0.86; and the average accuracy of cross-validation was highest for Random Forest (ACC=0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with the real-time assessment of ICU patients, resulting in the inability to provide more objective and continuous monitoring data to help clinical staff more accurately identify and predict the occurrence of delirium in patients.
It is hoped that the development and construction of predictive models through machine learning can predict delirium early and immediately, enable clinical decisions to be made at the best time, and work together with PADIS delirium care measures to provide individualized non-drug interventional care to maintain patient safety and thereby improve the quality of care.
Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model
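A minimal sketch of the modelling pipeline described above (KNN imputation of missing features, an 80/20 split, and a Random Forest evaluated by AUC) is shown below. The data are synthetic stand-ins for illustration, not the study's ICU records, and the hyperparameters are arbitrary.

```python
# Sketch of the classifier workflow: KNN imputation, train/validation split,
# Random Forest, AUC evaluation. Synthetic data only.
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n, n_features = 2000, 12            # 12 features: age, sex, RASS, APACHE-II, etc.
X = rng.normal(size=(n, n_features))
X[rng.random(X.shape) < 0.05] = np.nan          # simulate missing values
y = (X[:, 0] + 0.5 * np.nan_to_num(X[:, 3]) + rng.normal(scale=1.0, size=n) > 1).astype(int)

X = KNNImputer(n_neighbors=5).fit_transform(X)  # KNN imputation of gaps
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2,
                                                  stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"internal validation AUC: {auc:.2f}")
```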
Procedia PDF Downloads 79
20 Design of DNA Origami Structures Using LAMP Products as a Combined System for the Detection of Extended Spectrum B-Lactamases
Authors: Kalaumari Mayoral-Peña, Ana I. Montejano-Montelongo, Josué Reyes-Muñoz, Gonzalo A. Ortiz-Mancilla, Mayrin Rodríguez-Cruz, Víctor Hernández-Villalobos, Jesús A. Guzmán-López, Santiago García-Jacobo, Iván Licona-Vázquez, Grisel Fierros-Romero, Rosario Flores-Vallejo
Abstract:
The group of β-lactam antibiotics includes some of the most frequently used small drug molecules against bacterial infections. Nevertheless, an alarming decrease in their efficacy has been reported due to the emergence of antibiotic-resistant bacteria. Infections caused by bacteria expressing extended-spectrum β-lactamases (ESBLs) are difficult to treat and account for higher morbidity and mortality rates, delayed recovery, and a high economic burden. According to the Global Report on Antimicrobial Resistance Surveillance, it is estimated that mortality due to resistant bacteria will rise to 10 million cases per year worldwide. These facts highlight the importance of developing low-cost and readily accessible detection methods for drug-resistant ESBL-producing bacteria to prevent their spread and promote accurate and fast diagnosis. Bacterial detection is commonly done using molecular diagnostic techniques, where PCR stands out for its high performance. However, this technique requires specialized equipment not available everywhere, is time-consuming, and has a high cost. Loop-Mediated Isothermal Amplification (LAMP) is an alternative technique that works at a constant temperature, significantly decreasing the equipment cost. It yields double-stranded DNA of several lengths with repetitions of the target DNA sequence as a product. Although positive and negative results from LAMP can be discriminated by colorimetry, fluorescence, and turbidity, there is still large room for improvement in point-of-care implementation. DNA origami is a technique that allows the formation of 3D nanometric structures by folding a large single-stranded DNA (scaffold) into a determined shape with the help of short DNA sequences (staples), which hybridize with the scaffold. This research aimed to generate DNA origami structures using LAMP products as scaffolds to improve the sensitivity of ESBL detection in point-of-care diagnosis. For this study, the coding sequence of the CTX-M-15 ESBL of E. coli was used to generate the LAMP products. The set of LAMP primers was designed using PrimerExplorer V5. As a result, a target sequence of 200 nucleotides from the CTX-M-15 ESBL was obtained. Afterward, eight different DNA origami structures were designed using the target sequence in SDCadnano and analyzed with CanDo to evaluate the stability of the 3D structures. The designs were constructed minimizing the total number of staples to reduce costs and complexity for point-of-care applications. After analyzing the DNA origami designs, two structures were selected. The first one was a zig-zag flat structure, while the second one had a wall-like shape. Given the sequence repetitions in the scaffold, both could be assembled with only six different staples each, ranging from 18 to 80 nucleotides. Simulations of both structures were performed using scaffolds of different sizes, yielding stable structures in all cases. The generation of the LAMP products was tested by colorimetry and electrophoresis. The formation of the DNA structures was analyzed using electrophoresis and colorimetry. The modeling of novel detection methods through bioinformatics tools allows reliable control and prediction of results.
To our knowledge, this is the first study that uses LAMP products and DNA origami in combination to detect ESBL-producing bacterial strains, which represents a promising methodology for point-of-care diagnosis.
Keywords: beta-lactamases, antibiotic resistance, DNA origami, isothermal amplification, LAMP technique, molecular diagnosis
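Two design utilities implied by this workflow, proposing a staple as the reverse complement of a scaffold segment and counting target repeats in a LAMP-style concatemeric scaffold, are sketched below. The sequences are short hypothetical placeholders, not the CTX-M-15 target or the study's actual staples.

```python
# Sketch: reverse-complement a scaffold segment to propose a staple, and count
# repeats of the target available in a concatemeric LAMP-product scaffold.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

target = "ATGGTTAAACGCTCACTG"              # hypothetical stand-in for the 200-nt target
scaffold = target * 5                      # LAMP product: tandem repeats of the target

# A staple hybridising to scaffold positions 10..40 (0-based, half-open interval)
staple = reverse_complement(scaffold[10:40])
print("staple (5'->3'):", staple)

# Because the scaffold is repetitive, one staple sequence can bind at several sites
repeats = scaffold.count(target)
print("target repeats in scaffold:", repeats)
```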
Procedia PDF Downloads 223
19 Designing and Simulation of the Rotor and Hub of the Unmanned Helicopter
Authors: Zbigniew Czyz, Ksenia Siadkowska, Krzysztof Skiba, Karol Scislowski
Abstract:
Today’s progress in rotorcraft is mostly associated with optimization of aircraft performance achieved by active and passive modifications of the main rotor assembly and the tail propeller. The key task is to improve their performance and the hover quality factor of the rotors without changing specific fuel consumption. One way to improve the helicopter is active optimization of the main rotor across flight stages, i.e., ascent, level flight, and descent. Active interference with the airflow around the rotor blade section can significantly change the characteristics of the aerodynamic airfoil. The efficiency of actuator systems modifying aerodynamic coefficients in current solutions is relatively high and significantly affects the increase in strength. The solution of actively changing aerodynamic characteristics assumes a periodic change of the geometric features of the blades depending on the flight stage. Changing the geometric parameters of blade warping enables optimization of main rotor performance depending on helicopter flight stage. Structurally, an adaptation of shape memory alloys does not significantly affect rotor blade fatigue strength, which contributes to reducing the costs associated with adapting the system to the existing blades, and gains from better performance can easily amortize such a modification and improve the profitability of such a structure. In order to obtain quantitative and qualitative data to solve this research problem, a number of numerical analyses have been necessary. The main problem is the selection of the design parameters of the main rotor and a preliminary optimization of its performance to improve the hover quality factor. This design concept assumes a three-bladed main rotor with a chord of 0.07 m and a radius R = 1 m. The rotor speed is a calculated parameter of the optimization function. To specify the initial distribution of geometric warping, special software has been created that uses the numerical blade element method and respects dynamic design features such as the fluctuations of a blade in its joints. A number of performance analyses as a function of rotor speed, forward speed, and altitude have been performed. The calculations were carried out for the full model assembly. This approach makes it possible to observe the behavior of the components and their mutual interaction resulting from the acting forces. The key elements of each rotor are the shaft, the hub, and the pins holding the joints and blade yokes. These components are exposed to the highest loads. As a result of the analysis, the safety factor was determined at the level of k > 1.5, which gives grounds to obtain certification for the strength of the structure. The joint rotor construction has numerous moving elements in its structure. Despite the high safety factor, the places with the highest stresses, where signs of wear and tear may appear, have been indicated. The numerical analysis carried out showed that the most loaded element is the pin connecting the modular bearing of the blade yoke with the element of the horizontal oscillation joint. The stresses in this element result in a safety factor of k = 1.7. The other analysed rotor components have a safety factor of more than 2, and in the case of the shaft, this factor is more than 3. However, it must be remembered that a structure is only as strong as its weakest element.
The designed rotor for unmanned aerial vehicles, adapted to work with blades incorporating intelligent materials in their structure, meets the requirements for certification testing. Acknowledgement: This work has been financed by the Polish National Centre for Research and Development under the LIDER program, Grant Agreement No. LIDER/45/0177/L-9/17/NCBR/2018.
Keywords: main rotor, rotorcraft aerodynamics, shape memory alloy, materials, unmanned helicopter
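The safety-factor check reported above, k = material yield strength divided by the maximum equivalent (von Mises) stress from the analysis, can be sketched as below. The stress and strength values are hypothetical placeholders chosen only to reproduce the factors quoted in the abstract (k ≈ 1.7 for the joint pin, > 2 for other parts, > 3 for the shaft); they are not the study's FEA results.

```python
# Safety-factor check: k = yield strength / max von Mises stress per component.
YIELD_STRENGTH_MPA = 900.0   # assumed alloy yield strength (hypothetical)

max_von_mises_mpa = {        # hypothetical peak stresses from an FEA run
    "horizontal-joint pin": 530.0,
    "blade yoke":           420.0,
    "hub":                  380.0,
    "shaft":                280.0,
}

for part, stress in max_von_mises_mpa.items():
    k = YIELD_STRENGTH_MPA / stress
    status = "OK" if k > 1.5 else "REVIEW"
    print(f"{part}: k = {k:.2f} ({status})")
```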
Procedia PDF Downloads 159
18 Development of a Core Set of Clinical Indicators to Measure Quality of Care for Thyroid Cancer: A Modified-Delphi Approach
Authors: Liane J. Ioannou, Jonathan Serpell, Cino Bendinelli, David Walters, Jenny Gough, Dean Lisewski, Win Meyer-Rochow, Julie Miller, Duncan Topliss, Bill Fleming, Stephen Farrell, Andrew Kiu, James Kollias, Mark Sywak, Adam Aniss, Linda Fenton, Danielle Ghusn, Simon Harper, Aleksandra Popadich, Kate Stringer, David Watters, Susannah Ahern
Abstract:
BACKGROUND: There are significant variations in the management, treatment and outcomes of thyroid cancer, particularly in the role of: diagnostic investigation and pre-treatment scanning; optimal extent of surgery (total or hemi-thyroidectomy); use of active surveillance for small low-risk cancers; central lymph node dissections (therapeutic or prophylactic); outcomes following surgery (e.g. recurrent laryngeal nerve palsy, hypocalcaemia, hypoparathyroidism); post-surgical hormone, calcium and vitamin D therapy; and provision and dosage of radioactive iodine treatment. A proven strategy to reduce variations in outcomes and to improve survival is to measure and compare them using high-quality clinical registry data. Clinical registries provide the most effective means of collecting high-quality data and are a tool for quality improvement. Where they have been introduced at a state or national level, registries have become one of the most clinically valued tools for quality improvement. To benchmark clinical care, clinical quality registries require systematic measurement at predefined intervals and the capacity to report back information to participating clinical units. OBJECTIVE: The aim of this study was to develop a core set of clinical indicators that enables measurement and reporting of quality of care for patients with thyroid cancer. We hypothesise that measuring clinical quality indicators, developed to identify differences in quality of care across sites, will reduce variation and improve patient outcomes and survival, thereby lessening costs and the healthcare burden to the Australian community. METHOD: Preparatory work and scoping were conducted to identify existing high-quality clinical guidelines and best practice for thyroid cancer both nationally and internationally, as well as relevant literature. A bi-national panel was invited to participate in a modified Delphi process. Panelists were asked to rate each proposed indicator on a Likert scale of 1–9 in a three-round iterative process. RESULTS: A total of 236 potential quality indicators were identified. One hundred and ninety-two indicators were removed to reflect the data captured by the Australian and New Zealand Thyroid Cancer Registry (ANZTCR) (from diagnosis to 90 days post-surgery). The remaining 44 indicators were presented to the panelists for voting. A further 21 indicators were later added by the panelists, bringing the total number of potential quality indicators to 65. Of these, 21 were considered the most important and feasible indicators to measure quality of care in thyroid cancer, of which 12 were recommended for inclusion in the final set. The consensus indicator set spans the spectrum of care, including: preoperative; surgery; surgical complications; staging and post-surgical treatment planning; and post-surgical treatment. CONCLUSIONS: This study provides a core set of quality indicators to measure quality of care in thyroid cancer. This indicator set can be applied as a tool for internal quality improvement, comparative quality reporting, public reporting and research. Inclusion of these quality indicators into monitoring databases such as clinical quality registries will enable opportunities for benchmarking and feedback on best practice care to clinicians involved in the management of thyroid cancer.
Keywords: clinical registry, Delphi survey, quality indicators, quality of care
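One way 1–9 Likert ratings from a modified-Delphi round can be summarised into a retain/drop decision is sketched below. The indicator names, ratings, and the consensus rule (median ≥ 7 and at least 70% of panellists rating in the 7–9 band) are hypothetical illustrations, not the criteria used in this study.

```python
# Sketch of consensus scoring for modified-Delphi ratings on a 1-9 Likert scale.
import statistics

ratings = {  # hypothetical indicators and panellist scores
    "pre-operative ultrasound documented": [8, 9, 7, 8, 9, 8, 7, 9, 8, 8],
    "RAI dose recorded within 90 days":    [6, 5, 7, 8, 4, 6, 7, 5, 6, 7],
}

CONSENSUS_SHARE = 0.70   # assumed proportion of panellists rating 7-9 required

for indicator, scores in ratings.items():
    share_high = sum(s >= 7 for s in scores) / len(scores)
    consensus = share_high >= CONSENSUS_SHARE and statistics.median(scores) >= 7
    decision = "retain" if consensus else "drop/revise"
    print(f"{indicator}: median={statistics.median(scores)}, "
          f"7-9 band={share_high:.0%} -> {decision}")
```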
Procedia PDF Downloads 181
17 Feasibility and Acceptability of an Emergency Department Digital Pain Self-Management Intervention: A Randomized Controlled Trial Pilot Study
Authors: Alexandria Carey, Angela Starkweather, Ann Horgas, Hwayoung Cho, Jason Beneciuk
Abstract:
Background/Significance: Over 3.4 million acute axial low back pain (aLBP) cases are treated annually in United States (US) emergency departments (EDs). ED patients with aLBP receive varying verbal and written discharge routine care (RC), leading to ineffective patient self-management. Ineffective self-management increases the risk of transition to chronic low back pain (cLBP), a chief cause of worldwide disability, with associated costs >$60 million annually. This research addresses this significant problem by evaluating an ED digital pain self-management intervention (EDPSI) focused on improving self-management through improved knowledge retention, skills, and self-efficacy (confidence) (KSC), thus reducing the aLBP-to-cLBP transition in ED patients discharged with aLBP. The research has significant potential to increase self-efficacy, one of the most potent mechanisms of behavior change, and improve health outcomes. Focusing on accessibility and usability, the intervention may reduce discharge disparities in aLBP self-management, especially among patients with low health literacy. Study Questions: This research will answer the following questions: 1) Will an EDPSI focused on improving KSC progress patient self-management behaviors and health status?; 2) Is the EDPSI sustainable in improving pain severity, interference, and pain recurrence?; 3) Will an EDPSI reduce the aLBP-to-cLBP transition in patients discharged with aLBP? Aims: The pilot randomized-controlled trial (RCT) study’s objectives assess the effects of a 12-week digital self-management discharge tool in patients with aLBP. We aim to 1) primarily assess the feasibility [recruitment, enrollment, and retention] and [intervention] acceptability, and the sustainability of EDPSI effects on participants’ pain self-management; 2) determine the effectiveness and sustainability of EDPSI on pain severity/interference among participants; 3) explore patient preferences, health literacy, and changes among participants experiencing the transition to cLBP. We anticipate that the EDPSI intervention will increase the likelihood of achieving self-management milestones and significantly improve pain-related symptoms in aLBP. Methods: The study uses a two-group pilot RCT to enroll 30 individuals who have been seen in the ED with aLBP. Participants are randomized into RC (n=15) or RC + EDPSI (n=15) and receive follow-up surveys for 12 weeks post-intervention. EDPSI innovative content focuses on 1) highlighting discharge education; 2) providing self-management treatment options; 3) actor demonstrations of ergonomics, range-of-motion movements, safety, and sleep; 4) complementary and alternative medicine (CAM) options including acupuncture, yoga, and Pilates; 5) combination therapies including thermal application, spinal manipulation, and PT treatments. The intervention group receives booster sessions via Zoom in weeks two and eight to assess and reinforce their knowledge retention of techniques and to provide return demonstrations reinforcing ergonomics. Outcome Measures: All participants are followed for 12 weeks, with pain severity/interference assessed using the Brief Pain Inventory short form (BPI-sf) survey, self-management (measuring KSC) using the short 13-item Patient Activation Measure (PAM), and self-efficacy using the Pain Self-Efficacy Questionnaire (PSEQ) at weeks 1, 6, and 12. Feasibility is measured by recruitment, enrollment, and retention percentages. Acceptability and education satisfaction are measured using the Education-Preference and Satisfaction Questionnaire (EPSQ) post-intervention.
Self-management sustainment is measured using the PSEQ, the PAM, and a patient satisfaction and healthcare utilization (PSHU) measure capturing overall patient satisfaction, additional healthcare utilization, and pain management related to continued back pain or complications post-injury.
Keywords: digital, pain self-management, education, tool
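For the two-group allocation described above (RC vs RC + EDPSI, n=15 per arm), a permuted-block randomization could be sketched as below. The block size and seed are arbitrary choices for illustration, not the trial's actual randomisation scheme.

```python
# Sketch of 1:1 permuted-block allocation for a 30-participant pilot RCT.
import random

def permuted_block_allocation(n_participants, block_size=6, seed=2024):
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        # each block contains equal numbers of both arms, then is shuffled
        block = ["RC"] * (block_size // 2) + ["RC + EDPSI"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

arms = permuted_block_allocation(30)
print(arms[:6], "...")
# 30 is a multiple of the block size, so the arms stay balanced at 15 / 15
print({arm: arms.count(arm) for arm in set(arms)})
```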
Procedia PDF Downloads 53
16 Introducing Global Navigation Satellite System Capabilities into IoT Field-Sensing Infrastructures for Advanced Precision Agriculture Services
Authors: Savvas Rogotis, Nikolaos Kalatzis, Stergios Dimou-Sakellariou, Nikolaos Marianos
Abstract:
As precision holds the key to introducing distinct benefits in agriculture (e.g., energy savings, reduced labor costs, optimal application of inputs, improved products and yields), it steadily becomes evident that new initiatives should focus on rendering Precision Agriculture (PA) more accessible to the average farmer. PA leverages technologies such as the Internet of Things (IoT), earth observation, robotics and positioning systems (e.g., the Global Navigation Satellite System – GNSS – as well as individual systems like GPS, GLONASS, and Galileo) that allow everything from simple data georeferencing to optimal navigation of agricultural machinery and even more complex tasks like Variable Rate Applications. An identified customer pain point is that, on the one hand, typical triangulation-based positioning systems are not accurate enough (with errors up to several meters), while on the other hand, high-precision positioning systems reaching centimeter-level accuracy are very costly (up to thousands of euros). Within this paper, a Ground-Based Augmentation System (GBAS) is introduced that can be adapted to any existing IoT field-sensing station infrastructure. The latter should cover a minimum set of requirements; in particular, each station should operate as a fixed, energy-supplying unit with an unobstructed view of the sky. Station augmentation will allow the stations to function in pairs with GNSS rovers following the differential GNSS base-rover paradigm. This constitutes a key innovation element of the proposed solution, which incorporates differential GNSS capabilities into an IoT field-sensing infrastructure. Integrating this kind of information supports the provision of several additional beneficial PA services such as spatial mapping, route planning, and automatic field navigation of unmanned vehicles (UVs). Right at the heart of the designed system is a high-end GNSS toolkit with base-rover variants and Real-Time Kinematic (RTK) capabilities. The GNSS toolkit had to tackle all availability, performance, interfacing, and energy-related challenges faced in real-time, low-power, and reliable in-field operation. Specifically, in terms of performance, preliminary findings exhibit a high rover positioning precision that can even reach below 10 centimeters. As this precision is propagated to the full dataset collection, it enables tractors, UVs, Android-powered devices, and measuring units to deal with challenging real-world scenarios. The system is validated with the help of Gaiatrons, a mature network of agro-climatic telemetry stations with a presence all over Greece and beyond (>60,000 ha of agricultural land covered) that constitutes part of the “gaiasense” (www.gaiasense.gr) smart farming (SF) solution. Gaiatrons constantly monitor atmospheric and soil parameters, thus providing an exact fit to the operational requirements of modern SF infrastructures. Gaiatrons are ultra-low-cost, compact, and energy-autonomous stations with a modular design that enables the integration of advanced GNSS base station capabilities on top of them. A set of demanding pilot demonstrations has been initiated in Stimagka, Greece, an area with a diverse geomorphological landscape where grape cultivation is particularly popular.
Pilot demonstrations are in the course of validating the preliminary system findings in its intended environment, tackle all technical challenges, and effectively highlight the added-value offered by the system in action.Keywords: GNSS, GBAS, precision agriculture, RTK, smart farming
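The abstract centres on the differential GNSS base-rover paradigm; the following is a minimal sketch, in plain Python with hypothetical coordinates, of the simpler position-domain differential correction idea, a simplification of the carrier-phase RTK actually used by such a toolkit.

```python
# Illustrative sketch of the differential GNSS base-rover idea (position-domain
# DGNSS rather than full carrier-phase RTK): a fixed station at precisely known
# coordinates estimates the instantaneous positioning error and broadcasts it as
# a correction that nearby rovers subtract from their own fixes.
from dataclasses import dataclass

@dataclass
class EnuPosition:          # local East-North-Up coordinates in meters
    e: float
    n: float
    u: float

def base_correction(known: EnuPosition, measured: EnuPosition) -> EnuPosition:
    """Correction = surveyed (true) base position minus the base's own GNSS fix."""
    return EnuPosition(known.e - measured.e, known.n - measured.n, known.u - measured.u)

def apply_correction(rover_fix: EnuPosition, corr: EnuPosition) -> EnuPosition:
    """Rover adds the broadcast correction; valid while errors are spatially correlated."""
    return EnuPosition(rover_fix.e + corr.e, rover_fix.n + corr.n, rover_fix.u + corr.u)

if __name__ == "__main__":
    surveyed_base = EnuPosition(0.0, 0.0, 0.0)    # known from surveying
    base_fix = EnuPosition(1.8, -2.3, 3.1)        # meter-level standalone error
    rover_fix = EnuPosition(101.7, 48.0, 2.9)     # nearby rover sees a similar error
    corr = base_correction(surveyed_base, base_fix)
    print(apply_correction(rover_fix, corr))      # ~ (99.9, 50.3, -0.2)
```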
Procedia PDF Downloads 116
15 Analyzing Spatio-Structural Impediments in the Urban Trafficscape of Kolkata, India
Authors: Teesta Dey
Abstract:
Integrated transport development with proper traffic management leads to the sustainable growth of any urban sphere. Appropriate mass transport planning is essential for populous cities in third-world countries like India. The exponential growth of motor vehicles on an unplanned road network is now a common feature of major urban centres in India. Kolkata, the third largest megacity in India, is no exception. The imbalance between demand and supply of unplanned transport services in this city is manifested in the high economic and environmental costs borne by the associated society. With the passage of time, the growth and extent of passenger demand for rapid urban transport have outstripped proper infrastructural planning and caused severe transport problems in the overall urban realm. Hence Kolkata stands out as one of the most crisis-ridden metropolises in the world. The urban transport crisis of this city involves severe traffic congestion, disparity in mass transport services across changing peripheral land uses, route overlapping, lowering of travel speed and faulty implementation of governmental plans, mostly induced by the rapid growth of private vehicles on limited road space with a huge carbon footprint. Therefore, the paper will critically analyze the extant road network pattern for improving regional connectivity and accessibility, assess the degree of congestion, identify the deviation from the demand-supply balance and finally evaluate the emerging alternate transport options promoted by the government. For this purpose, linear, nodal and spatial transport networks have been assessed based on selected indices, viz. Road Degree, Traffic Volume, Shimbel Index, Direct Bus Connectivity, Average Travel and Waiting Time Indices, Route Variety, Service Frequency, Bus Intensity, Concentration Analysis, Delay Rate, Quality of Traffic Transmission, Lane Length Duration Index and Modal Mix. A total of 20 Traffic Intersection Points (TIPs) have been selected for the measurement of nodal accessibility. Critical Congestion Zones (CCZs) are delineated based on one-km buffer zones around each TIP for congestion pattern analysis. A total of 480 bus routes are assessed to identify deficiencies in network planning. Apart from bus services, the combined effects of other mass and para-transit modes, including metro rail, auto, cab and ferry services, are also analyzed. Based on a systematic random sampling method, the perceptions of 1,500 daily urban passengers were studied to check the ground realities. The outcome of this research identifies the spatial disparity among the 15 boroughs of the city, with severe route overlapping and congestion problems. Mass transport services based in North and Central Kolkata exceed the transport strength of south and peripheral Kolkata. Faulty infrastructural conditions, service inadequacy, economic loss and workers' inefficiency are the most dominant reasons behind the defective mass transport network plan. Hence there is an urgent need to revive the extant road-based mass transport system of this city through a holistic management approach: upgrading traffic infrastructure, designing new roads, better cooperation among different mass transport agencies, better coordination of transport and changing land-use policies, a large increase in funding and, finally, general passengers' awareness. Keywords: carbon footprint, critical congestion zones, direct bus connectivity, integrated transport development
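Among the nodal accessibility indices listed above is the Shimbel Index, the sum of shortest-path distances from one intersection to all others. A minimal sketch follows, assuming the networkx package and a hypothetical toy network with illustrative travel-time weights.

```python
# Minimal sketch: Shimbel index (sum of shortest-path distances from a node to all
# others) as a nodal-accessibility measure for traffic intersection points (TIPs).
# The toy network and edge travel times below are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("TIP1", "TIP2", 4.0),   # edge weights: travel time in minutes (illustrative)
    ("TIP2", "TIP3", 3.0),
    ("TIP1", "TIP4", 6.0),
    ("TIP3", "TIP4", 2.0),
    ("TIP4", "TIP5", 5.0),
])

def shimbel_index(graph, node):
    """Sum of weighted shortest-path distances from `node` to every other node."""
    dist = nx.single_source_dijkstra_path_length(graph, node, weight="weight")
    return sum(d for target, d in dist.items() if target != node)

ranking = sorted(G.nodes, key=lambda n: shimbel_index(G, n))
for n in ranking:
    print(n, round(shimbel_index(G, n), 1))   # lower value = more accessible TIP
```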
Procedia PDF Downloads 273
14 Predicting Acceptance and Adoption of Renewable Energy Community Solutions: The Prosumer Psychology
Authors: Francois Brambati, Daniele Ruscio, Federica Biassoni, Rebecca Hueting, Alessandra Tedeschi
Abstract:
This research, in the frame of social acceptance of renewable energies and community-based production and consumption models, aims at (1) supporting a data-driven approach to dealing with climate change and (2) identifying and quantifying the psycho-sociological dimensions and factors that could support the transition from a technology-driven approach to a consumer-driven approach through the emerging “prosumer business models.” In addition to the existing social acceptance dimensions, this research tries to identify a purely individual psychological fourth dimension to understand the processes and factors underlying individual acceptance and adoption of renewable energy business models, realizing a Prosumer Acceptance Index. Questionnaire data collection was performed through an online survey platform, combining standardized and ad hoc questions adapted for the research purposes. To identify the main factors (individual/social) influencing the relation with renewable energy technology (RET) adoption, a factorial analysis was conducted to identify the latent variables that are related to each other, revealing five latent psychological factors. Factor 1. Concern about environmental issues: global environmental issues awareness, strong beliefs and pro-environmental attitudes raising concern about environmental issues. Factor 2. Interest in energy sharing: attentiveness to solutions for the local community's collective consumption, to reduce individual environmental impact, sustainably improve the local community, and sell extra energy to the general electricity grid. Factor 3. Concern about climate change: awareness of the consequences of environmental issues for climate change, especially on a global scale, developing pro-environmental attitudes on the course of global climate change and sensitivity about behaviours aimed at mitigating such human impact. Factor 4. Social influence: social support seeking from peers. With RET, advice from significant others is sought, internalizing the common perceived social norms of the national/geographical region. Factor 5. Impact on bill cost: inclination to adopt a RET when economic incentives perceived from the behaviour affect the decision-making process and could result in less expensive or unvaried bills. A linear regression was conducted to identify and quantify the factors that could best predict the behavioural intention to become a prosumer. An overall scale measuring “acceptance of a renewable energy solution” was used as the dependent variable, allowing the quantification of the five factors that contribute to its measurement: awareness of environmental issues and climate change, environmental attitudes, social influence, and environmental risk perception. Three variables significantly measure and predict the scores of the “Acceptance in becoming a prosumer” ad hoc scale. Variable 1. Attitude: agreement with specific environmental issues and global climate change concerns and evaluations towards a behavioural intention. Variable 2. Economic incentive: the perceived behavioural control and its related environmental risk perception, in terms of perceived short-term benefits and long-term costs, both part of the decision-making process as expected outcomes of the behaviour itself. Variable 3. Age: despite fewer economic possibilities, younger adults seem to be more sensitive to environmental dimensions and issues than older adults.
This research can help policymakers and relevant stakeholders better understand which psycho-sociological factors intervene in these processes and what specifically to target, and how, when proposing change towards sustainable energy production and consumption. Keywords: behavioural intention, environmental risk perception, prosumer, renewable energy technology, social acceptance
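As an illustration of the analysis pipeline described (factor analysis of survey items followed by linear regression on an overall acceptance score), the following sketch uses synthetic data and scikit-learn; the number of items, the age variable, and all names are hypothetical, not the study's instrument.

```python
# Sketch of the described pipeline: extract latent factors from Likert-style survey
# items, then regress an "acceptance" score on the factor scores (plus age).
# Data are synthetic and all variable names are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_respondents, n_items = 300, 20
items = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)  # 1-5 Likert
age = rng.integers(18, 75, size=(n_respondents, 1)).astype(float)
acceptance = rng.normal(3.0, 1.0, size=n_respondents)                    # overall scale

# Step 1: factor analysis with five latent psychological factors
fa = FactorAnalysis(n_components=5, random_state=0)
factor_scores = fa.fit_transform(items)          # shape: (respondents, 5)

# Step 2: linear regression predicting acceptance from factor scores and age
X = np.hstack([factor_scores, age])
reg = LinearRegression().fit(X, acceptance)
print("R^2:", round(reg.score(X, acceptance), 3))
print("coefficients (5 factors + age):", np.round(reg.coef_, 3))
```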
Procedia PDF Downloads 132
13 Transforming Emergency Care: Revolutionizing Obstetrics and Gynecology Operations for Enhanced Excellence
Authors: Lolwa Alansari, Hanen Mrabet, Kholoud Khaled, Abdelhamid Azhaghdani, Sufia Athar, Aska Kaima, Zaineb Mhamdia, Zubaria Altaf, Almunzer Zakaria, Tamara Alshadafat
Abstract:
Introduction: The Obstetrics and Gynecology Emergency Department at Alwakra Hospital has faced significant challenges, which have been further worsened by the impact of the COVID-19 pandemic. These challenges involve issues such as overcrowding, extended wait times, and a notable surge in demand for emergency care services. Moreover, prolonged waiting times have emerged as a primary factor contributing to situations where patients leave without receiving attention, known as left without being seen (LWBS), and unexpectedly abscond. Addressing the issue of insufficient patient mobility in the obstetrics and gynecology emergency department has brought about substantial improvements in patient care, healthcare administration, and overall departmental efficiency. These changes have not only alleviated overcrowding but have also elevated the quality of emergency care, resulting in higher patient satisfaction, better outcomes, and operational rewards. Methodology: The COVID-19 pandemic has served as a catalyst for substantial transformations in the obstetrics and gynecology emergency, aligning seamlessly with the strategic direction of Hamad Medical Corporation (HMC). The fundamental aim of this initiative is to revolutionize the operational efficiency of the OB-GYN ED. To accomplish this mission, a range of transformations has been initiated, focusing on essential areas such as digitizing systems, optimizing resource allocation, enhancing budget efficiency, and reducing overall costs. The project utilized the Plan-Do-Study-Act (PDSA) model, involving a diverse team collecting baseline data and introducing throughput improvements. Post-implementation data and feedback were analysed, leading to the integration of effective interventions into standard procedures. These interventions included optimized space utilization, real-time communication, bedside registration, technology integration, pre-triage screening, enhanced communication and patient education, consultant presence, and a culture of continuous improvement. These strategies significantly reduced waiting times, enhancing both patient care and operational efficiency. Results: Results demonstrated a substantial reduction in overall average waiting time, dropping from 35 to approximately 14 minutes by August 2023. The wait times for priority 1 cases have been reduced from 22 to 0 minutes, and for priority 2 cases, the wait times have been reduced from 32 to approximately 13.6 minutes. The proportion of patients spending less than 8 hours in the OB ED observation beds rose from 74% in January 2022 to over 98% in 2023. Notably, there was a remarkable decrease in LWBS and absconded patient rates from 2020 to 2023. Conclusion: The project initiated a profound change in the department's operational environment. Efficiency became deeply embedded in the unit's culture, promoting teamwork among staff that went beyond the project's original focus and had a positive influence on operations in other departments. This effectiveness not only made processes more efficient but also resulted in significant cost reductions for the hospital. These cost savings were achieved by reducing wait times, which in turn led to fewer prolonged patient stays and reduced the need for additional treatments. 
These continuous improvement initiatives have now become an integral part of the Obstetrics and Gynecology Division's standard operating procedures, ensuring that the positive changes brought about by the project persist and evolve over time. Keywords: overcrowding, waiting time, person-centered care, quality initiatives
Procedia PDF Downloads 65
12 Extracellular Polymeric Substances Study in an MBR System for Fouling Control
Authors: Dimitra C. Banti, Gesthimani Liona, Petros Samaras, Manasis Mitrakas
Abstract:
Municipal and industrial wastewaters are often treated biologically, by the activated sludge process (ASP). The ASP not only requires large aeration and sedimentation tanks, but also generates large quantities of excess sludge. An alternative technology is the membrane bioreactor (MBR), which replaces two stages of the conventional ASP—clarification and settlement—with a single, integrated biotreatment and clarification step. The advantages offered by the MBR over conventional treatment include a reduced footprint and lower sludge production through maintaining a high biomass concentration in the bioreactor. Notwithstanding these advantages, the widespread application of the MBR process is constrained by membrane fouling. Fouling leads to permeate flux decline, making more frequent membrane cleaning and replacement necessary and resulting in increased operating costs. In general, membrane fouling results from the interaction between the membrane material and the components in the activated sludge liquor. The latter includes substrate components, cells, cell debris and microbial metabolites, such as Extracellular Polymeric Substances (EPS) and Soluble Microbial Products (SMPs). The challenge for effective MBR operation is to minimize the rate of Transmembrane Pressure (TMP) increase. This can be achieved in several ways, one of which is the addition of specific additives that enhance the coagulation and flocculation of the compounds responsible for fouling, hence reducing biofilm formation on the membrane surface and limiting the fouling rate. In this project, the effectiveness of a non-commercial composite coagulant was studied as an agent for fouling control in a lab-scale MBR system consisting of two aerated tanks. A flat sheet membrane module with 0.40 μm pore size was submerged into the second tank. The system was fed by 50 L/d of municipal wastewater collected from the effluent of the primary sedimentation basin. The TMP increase rate, which is directly related to fouling growth, was monitored by a PLC system. EPS, MLSS and MLVSS measurements were performed on samples of mixed liquor; in addition, influent and effluent samples were collected for the determination of physicochemical characteristics (COD, BOD5, NO3-N, NH4-N, Total N and PO4-P). The coagulant was added at concentrations of 2, 5 and 10 mg/L over a period of 2 weeks, and the results were compared with the control system (without coagulant addition). EPS fractions were extracted by a three-stage physical-thermal treatment allowing the identification of Soluble EPS (SEPS) or SMP, Loosely Bound EPS (LBEPS) and Tightly Bound EPS (TBEPS). Protein and carbohydrate concentrations were measured in the EPS fractions by the modified Lowry method and the Dubois method, respectively. Addition of a 2 mg/L coagulant concentration did not affect SEPS proteins in comparison with the control process, and their values varied between 32 and 38 mg/g VSS. However, a coagulant dosage of 5 mg/L resulted in a slight increase of SEPS proteins to 35-40 mg/g VSS, while 10 mg/L coagulant further increased SEPS to 44-48 mg/g VSS. Similar results were obtained for SEPS carbohydrates. Carbohydrate values without coagulant addition were similar to the corresponding values measured for 2 mg/L coagulant; the addition of 5 mg/L coagulant resulted in a slight increase of SEPS carbohydrates to 6-7 mg/g VSS, while a dose of 10 mg/L further increased the carbohydrate content to 9-10 mg/g VSS.
Total LBEPS and TBEPS, consisting of the proteins and carbohydrates of LBEPS and TBEPS respectively, presented similar variations upon the addition of the coagulant. Total LBEPS at the 2 mg/L dose were almost equal to 17 mg/g VSS, and their values increased to 22 and 29 mg/g VSS with the addition of 5 mg/L and 10 mg/L of coagulant, respectively. Total TBEPS were almost 37 mg/g VSS at a coagulant dose of 2 mg/L and increased to 42 and 51 mg/g VSS at the 5 mg/L and 10 mg/L doses, respectively. Therefore, it can be concluded that coagulant addition could potentially affect microorganisms' activities, leading them to excrete EPS in greater amounts. Nevertheless, the EPS increase, mainly the SEPS increase, resulted in a higher membrane fouling rate, as indicated by the corresponding TMP increase rate. However, the addition of the coagulant, although it affected the EPS content in the reactor mixed liquor, did not change the filtration process: an effluent of high quality was produced, with COD values as low as 20-30 mg/L. Keywords: extracellular polymeric substances, MBR, membrane fouling, EPS
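Since the fouling rate is tracked through the TMP increase rate logged by the PLC, the short sketch below (synthetic data, hypothetical sampling interval) shows one simple way to estimate dTMP/dt by least-squares regression and compare it across dosing periods.

```python
# Sketch: estimate the TMP increase rate (a proxy for the fouling rate) from
# logged transmembrane-pressure readings. Data and sampling interval are synthetic.
import numpy as np

hours = np.arange(0, 24 * 7, 1.0)                      # one week, hourly samples
tmp_kpa = 5.0 + 0.02 * hours + np.random.default_rng(1).normal(0, 0.05, hours.size)

slope_kpa_per_h, intercept = np.polyfit(hours, tmp_kpa, 1)   # least-squares fit
print(f"TMP increase rate ≈ {slope_kpa_per_h:.3f} kPa/h "
      f"({slope_kpa_per_h * 24:.2f} kPa/day)")
# Comparing this slope between dosing periods (0, 2, 5, 10 mg/L coagulant) gives a
# simple quantitative handle on how each dose affects the fouling rate.
```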
Procedia PDF Downloads 268
11 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping
Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung
Abstract:
A lightweight unmanned aerial vehicle (UAV) carrying novel sensors offers a low-cost approach for data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping in a complex environment and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in the wetland area of Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain, and the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum aureum and Acanthus ilicifolius. Other species included various graminaceous plants, arbor, shrub and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change, which might increase the difficulty of distinguishing species. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm and 0.06 m spatial resolution. A sequence of multi-view RGB images was captured with 0.02 m spatial resolution and 75% overlap. The hyperspectral imagery was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds. Further, a 5 cm grid digital surface model (DSM) was derived from the dense point clouds. Multiple feature reduction methods were compared to identify the most efficient method and to explore the spectral bands significant for distinguishing different species. The examined methods included stepwise discriminant analysis (DA), support vector machine (SVM) and minimum noise fraction (MNF) transformation. Subsequently, spectral subsets composed of the 20 most important bands extracted by SVM, DA and MNF, and multi-source subsets adding the DSM to the 20 spectral bands, served as input to a maximum likelihood classifier (MLC) and an SVM classifier to compare the classification results. The classification results showed that the feature reduction methods, from best to worst, are MNF transformation, DA and SVM. The MNF transformation accuracy was even higher than the all-bands input result. The selected bands frequently lay along the green peak, red edge and near infrared. Additionally, DA found that the chlorophyll-absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminative capacity among low plants, arbor and mangrove. Meanwhile, the DSM largely reduced misclassification due to the shadow effect and inter-species morphological variation. With respect to the classifier, the nonparametric SVM outperformed the MLC for high-dimension and multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it cost more time than the MLC. The best result was obtained by combining the MNF components and the DSM in the SVM classifier. This study offers a precise species distribution survey solution for inaccessible wetland areas at a low cost in time and labour. In addition, findings on the positive effect of the DSM as well as on spectral feature identification indicate that the utility of UAV-borne hyperspectral and photogrammetry-derived 3D data is promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring. Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)
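To illustrate the multi-source classification step (stacking selected bands with DSM height and comparing an SVM against a Gaussian maximum-likelihood-style classifier), the sketch below uses synthetic arrays and scikit-learn; QuadraticDiscriminantAnalysis stands in for the MLC, and all dimensions and values are assumptions, not the study's data.

```python
# Sketch of the multi-source classification step: stack selected hyperspectral bands
# with the DSM height and compare an SVM against a Gaussian maximum-likelihood-style
# classifier (QDA used as an MLC stand-in). All arrays below are synthetic.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_selected_bands, n_species = 2000, 20, 13
spectra = rng.normal(size=(n_pixels, n_selected_bands))       # 20 selected bands
dsm_height = rng.uniform(0, 8, size=(n_pixels, 1))            # canopy height from DSM (m)
labels = rng.integers(0, n_species, size=n_pixels)

X = np.hstack([spectra, dsm_height])                          # multi-source feature stack
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
mlc = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)         # Gaussian per-class model

print("SVM overall accuracy:", round(svm.score(X_te, y_te), 3))
print("MLC-style accuracy:  ", round(mlc.score(X_te, y_te), 3))
```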
Procedia PDF Downloads 257
10 Turn Organic Waste to Green Fuels with Zero Landfill
Authors: Xu Fei (Philip) WU
Abstract:
As the waste recycling concept has become more and more accepted in modern societies, the organic portion of municipal waste has become a serious issue in today's life. Depending on location and season, organic waste can be anywhere between 40-65% of total municipal solid waste. Although composting and anaerobic digestion technologies have been applied in this field for years, both processes face difficulties in being selected on economic and environmental grounds. Besides environmental pollution and the risk of virus spread, compost is not a product welcomed by people, even when waste management has to give it away at no cost. An anaerobic digester has to contain 70% water and be kept at 35 degrees C or above; under these conditions, the retention time can only be up to two weeks, and the remaining solids have to be dewatered and composted again. Enhanced wastewater treatment has to be added afterwards. For these reasons, voices have been raised suggesting cancelling recycling programs and sending all waste to mass-burn incineration, a process already proven to have the least energy efficiency and the most air pollution problems. The newly developed WXF Bio-energy process employs recently developed and patented pre-designed separation, multi-layer and multi-cavity successive bioreactor landfill technology. It features an improved leachate recycling technology, technologies to maximize the biogas generation rate and a reduced overall turnaround period on the land. A single properly designed and operated site can be used indefinitely. In this process, all collected biogas will be processed to eliminate H2S and other hazardous gases. The methane, carbon dioxide and hydrogen will be utilized in a proprietary process to manufacture methanol, which can be sold to mitigate the operating costs of the landfill. This integration of new processes offers a more advanced alternative to current sanitary landfill, incineration and compost technology. Xu Fei (Philip) Wu is founder and Chief Scientist of W&Y Environmental International Inc. (W&Y), a Canadian environmental and sustainable energy technology company with patented landfill processes and proprietary waste-to-energy technologies. He has worked in the environmental and sustainable energy fields over the last 25 years. Before W&Y, he worked for Conestoga-Rovers & Associates Limited, Microbe Environmental Science and Technology Inc. of Canada and The Ministry of Nuclear Industry and Ministry of Space Flight Industry of China. Xu Fei Wu holds a Master of Engineering Science degree from The University of Western Ontario. I wish to present this paper as an oral presentation only. Selected Conference Presentations: • “Removal of Phenolic Compounds with Algae” Presented at the 25th Canadian Symposium on Water Pollution Research (CAWPRC Conference), Burlington, Ontario, Canada. February 1990 • “Removal of Phenolic Compounds with Algae” Presented at the Annual Conference of the Pollution Control Association of Ontario, London, Ontario, Canada. April 1990 • “Removal of Organochlorine Compounds in a Flocculated Algae Photo-Bioreactor” Presented at the International Symposium on Low Cost and Energy Saving Wastewater Treatment Technologies (IAWPRC Conference), Kyoto, Japan, August 1990 • “Maximizing Production and Utilization of Landfill Gas” 2009 Wuhan International Conference on Environment (CAWPRC Conference, sponsored by US EPA), Wuhan, China. October 2009.
• “WXF Bio-Energy: A Green, Sustainable Waste to Energy Process” Presented at the 9th International Conference Cooperation for Waste Issues, Kharkiv, Ukraine, March 2012 • “A Landfill Site Can Be Recycled Indefinitely” Presented at the 28th International Conference on Solid Waste Technology and Management, Philadelphia, Pennsylvania, USA, March 2013. Hosted by The Journal of Solid Waste Technology and Management. Keywords: green fuel, waste management, bio-energy, sustainable development, methanol
Procedia PDF Downloads 279
9 Blockchain Based Hydrogen Market (BBH₂): A Paradigm-Shifting Innovative Solution for Climate-Friendly and Sustainable Structural Change
Authors: Volker Wannack
Abstract:
Regional, national, and international strategies focusing on hydrogen (H₂) and blockchain are driving significant advancements in hydrogen and blockchain technology worldwide. These strategies lay the foundation for the groundbreaking "Blockchain Based Hydrogen Market (BBH₂)" project. The primary goal of this project is to develop a functional Blockchain Minimum Viable Product (B-MVP) for the hydrogen market. The B-MVP will leverage blockchain as an enabling technology with a common database and platform, facilitating secure and automated transactions through smart contracts. This innovation will revolutionize logistics, trading, and transactions within the hydrogen market. The B-MVP has transformative potential across various sectors. It benefits renewable energy producers, surplus energy-based hydrogen producers, hydrogen transport and distribution grid operators, and hydrogen consumers. By implementing standardized, automated, and tamper-proof processes, the B-MVP enhances cost efficiency and enables transparent and traceable transactions. Its key objective is to establish the verifiable integrity of climate-friendly "green" hydrogen by tracing its supply chain from renewable energy producers to end users. This emphasis on transparency and accountability promotes economic, ecological, and social sustainability while fostering a secure and transparent market environment. A notable feature of the B-MVP is its cross-border operability, eliminating the need for country-specific data storage and expanding its global applicability. This flexibility not only broadens its reach but also creates opportunities for long-term job creation through the establishment of a dedicated blockchain operating company. By attracting skilled workers and supporting their training, the B-MVP strengthens the workforce in the growing hydrogen sector. Moreover, it drives the emergence of innovative business models that attract additional company establishments and startups and contributes to long-term job creation. For instance, data evaluation can be utilized to develop customized tariffs and provide demand-oriented network capacities to producers and network operators, benefitting redistributors and end customers with tamper-proof pricing options. The B-MVP not only brings technological and economic advancements but also enhances the visibility of national and international standard-setting efforts. Regions implementing the B-MVP become pioneers in climate-friendly, sustainable, and forward-thinking practices, generating interest beyond their geographic boundaries. Additionally, the B-MVP serves as a catalyst for research and development, facilitating knowledge transfer between universities and companies. This collaborative environment fosters scientific progress, aligns with strategic innovation management, and cultivates an innovation culture within the hydrogen market. Through the integration of blockchain and hydrogen technologies, the B-MVP promotes holistic innovation and contributes to a sustainable future in the hydrogen industry. The implementation process involves evaluating and mapping suitable blockchain technology and architecture, developing and implementing the blockchain, smart contracts, and depositing certificates of origin. 
It also includes creating interfaces to existing systems such as nomination, portfolio management, trading, and billing systems, testing the scalability of the B-MVP to other markets and user groups, developing data formats for process-relevant data exchange, and conducting field studies to validate the B-MVP. BBH₂ is part of the "Technology Offensive Hydrogen" funding call within the research funding of the Federal Ministry of Economics and Climate Protection in the 7th Energy Research Programme of the Federal Government. Keywords: hydrogen, blockchain, sustainability, innovation, structural change
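As a conceptual illustration only, not the project's B-MVP implementation, the following sketch shows the core idea of a tamper-evident, hash-chained ledger of green-hydrogen certificates of origin; all field names are hypothetical.

```python
# Minimal sketch of a tamper-evident certificate-of-origin ledger: each block stores
# a hydrogen batch record plus the hash of the previous block, so altering any past
# entry invalidates the chain. Illustration of the concept only; field names hypothetical.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_certificate(chain: list, producer: str, kg_h2: float, renewable_source: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({
        "timestamp": time.time(),
        "producer": producer,
        "kg_h2": kg_h2,
        "renewable_source": renewable_source,
        "prev_hash": prev,
    })

def chain_is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger: list = []
append_certificate(ledger, "WindPark-A", 120.5, "onshore wind")
append_certificate(ledger, "PV-Plant-B", 80.0, "solar PV")
print(chain_is_valid(ledger))          # True
ledger[0]["kg_h2"] = 999.0             # tampering with an earlier record...
print(chain_is_valid(ledger))          # ...breaks the chain: False
```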
Procedia PDF Downloads 172
8 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications
Authors: Atish Bagchi, Siva Chandrasekaran
Abstract:
Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real-time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and consequential financial losses. Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant's SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors.
Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's 'process control system' in real-time to perform forecasting and classification tasks, aiding asset management engineers to operate their machines more efficiently and reduce unplanned downtimes. A series of trials is planned for this model in other manufacturing industries in the future. Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning
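A minimal sketch of the kind of building blocks described (a sliding-window entropy feature per sensor, combined with simple statistics and passed through one graph convolution over a small sensor graph) follows, in plain PyTorch and NumPy; the sensor graph, window length, and dimensions are assumptions, not the authors' architecture.

```python
# Minimal sketch of the building blocks described: a sliding-window Shannon-entropy
# feature per sensor, combined with raw statistics and passed through one graph
# convolution over a small sensor graph. Illustrative only; not the authors' model.
import numpy as np
import torch
import torch.nn as nn

def window_entropy(x: np.ndarray, bins: int = 10) -> float:
    """Shannon entropy of one sensor's window of readings."""
    p, _ = np.histogram(x, bins=bins, density=False)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

class SimpleGraphConv(nn.Module):
    """One GCN-style layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(1).pow(-0.5))
        return torch.relu(d_inv_sqrt @ a_hat @ d_inv_sqrt @ self.lin(h))

# 4 flow sensors connected in a small (hypothetical) graph
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float32)

readings = np.random.default_rng(0).normal(10.0, 1.0, size=(4, 60))   # 60-step window
features = torch.tensor(
    [[r.mean(), r.std(), window_entropy(r)] for r in readings], dtype=torch.float32
)
out = SimpleGraphConv(in_dim=3, out_dim=8)(features, adj)
print(out.shape)   # torch.Size([4, 8]) - per-sensor embeddings for a downstream head
```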
Procedia PDF Downloads 150
7 Employee Engagement
Authors: Jai Bakliya, Palak Dhamecha
Abstract:
Today, customer satisfaction is given the utmost priority in every industry. In the hospitality industry this applies even more, as employees come in direct contact with customers while providing them services. Employee engagement is a relatively new concept adopted by Human Resource departments which impacts customer satisfaction. To satisfy your customers, it is necessary to see that the employees in the organisation are satisfied and engaged enough in their work that they meet the company's expectations and contribute to the process of achieving the company's goals and objectives. After all, employees are the human capital of the organisation. Employee engagement has become a top business priority for every organisation. In this fast-moving economy, business leaders know that having a potential and high-performing human resource is important for growth and survival. They recognize that a highly engaged workforce can increase innovation, productivity, and performance, while reducing costs related to retention and hiring in highly competitive talent markets. But while most executives see a clear need to improve employee engagement, many have yet to develop tangible ways to measure and tackle this goal. Employee engagement is an approach applied to establish an emotional connection between an employee and the organisation, which ensures the employee's commitment towards his work and thereby affects the productivity and overall performance of the organisation. The study was conducted in the hospitality industry, and a popular branded hotel was chosen as the sample unit. Data, both qualitative and quantitative, were collected from respondents. It was found that the employee engagement level of the organisation (hotel) is quite low. This means that employees are not emotionally connected with the organisation, which may, in turn, affect the performance of the employees. It is important to note that in the hospitality industry an individual employee's performance, specifically in terms of emotional engagement, is critical; therefore, a low engagement level may contribute to low organisational performance. One objective of this study was to identify the employee engagement level; further objectives were to explore the factors impeding employee engagement and to explore facilitators of employee engagement. In the hospitality industry, where people tend to work for as long as 16 to 18 hours, concepts like employee engagement are essential, because employees get tired of their routine job, and in cases where job rotation cannot be done, employee engagement acts as a solution. The study was conducted at the Trident Hotel, Udaipur, on a sample of 30 in-house employees from 6 different departments: Accounts and General, Front Office, Food & Beverage Service, Housekeeping, Food & Beverage Production and Engineering. It was conducted with the help of a research instrument, a questionnaire, and the data collection source was primary. Trident Udaipur is one of the busiest hotels in Udaipur, with a guest occupancy rate of nearly 80%. Due to the high occupancy rate, the staff of the hotel remain very busy and occupied all the time with their work. They work for their remuneration only; as a result, they do not have any encouragement in their work, nor are they interested in going an extra mile for the organisation.
The study results show that working environment factors, including recognition and appreciation, the opinions of the employee, counselling, feedback from superiors, treatment by managers and respect from the organisation, are capable of increasing the employee engagement level in the hotel. The above results encouraged us to explore the factors contributing to low employee engagement. It was found that factors such as recognition and appreciation, feedback from supervisors, the opinion of the employee, counselling, and treatment by managers contributed negatively to the employee engagement level. A probable reason for this low contribution is the number of employees who gave negative feedback on the factors stated above. It seems that the structure of the organisation itself is responsible for the low level of employee engagement. The scope of this study is limited to the Trident hotel situated in Udaipur. A limitation of the study is that the results and findings are based only on the responses of respondents of Trident, Udaipur, and so the recommendations are applicable to Trident, Udaipur, and not to all similar organisations across the country. The data collected were further analysed, interpreted and concluded upon. On the basis of the findings, suggestions were provided to the hotel for improvement. Keywords: human resource, employee engagement, research, study
Procedia PDF Downloads 308
6 Characterization of Aluminosilicates and Verification of Their Impact on Quality of Ceramic Proppants Intended for Shale Gas Output
Authors: Joanna Szymanska, Paulina Wawulska-Marek, Jaroslaw Mizera
Abstract:
Nowadays, the rapid growth of global energy consumption and uncontrolled depletion of natural resources become a serious problem. Shale rocks are the largest and potential global basins containing hydrocarbons, trapped in closed pores of the shale matrix. Regardless of the shales origin, mining conditions are extremely unfavourable due to high reservoir pressure, great depths, increased clay minerals content and limited permeability (nanoDarcy) of the rocks. Taking into consideration such geomechanical barriers, effective extraction of natural gas from shales with plastic zones demands effective operations. Actually, hydraulic fracturing is the most developed technique based on the injection of pressurized fluid into a wellbore, to initiate fractures propagation. However, a rapid drop of pressure after fluid suction to the ground induces a fracture closure and conductivity reduction. In order to minimize this risk, proppants should be applied. They are solid granules transported with hydraulic fluids to locate inside the rock. Proppants act as a prop for the closing fracture, thus gas migration to a borehole is effective. Quartz sands are commonly applied proppants only at shallow deposits (USA). Whereas, ceramic proppants are designed to meet rigorous downhole conditions to intensify output. Ceramic granules predominate with higher mechanical strength, stability in strong acidic environment, spherical shape and homogeneity as well. Quality of ceramic proppants is conditioned by raw materials selection. Aim of this study was to obtain the proppants from aluminosilicates (the kaolinite subgroup) and mix of minerals with a high alumina content. These loamy minerals contain a tubular and platy morphology that improves mechanical properties and reduces their specific weight. Moreover, they are distinguished by well-developed surface area, high porosity, fine particle size, superb dispersion and nontoxic properties - very crucial for particles consolidation into spherical and crush-resistant granules in mechanical granulation process. The aluminosilicates were mixed with water and natural organic binder to improve liquid-bridges and pores formation between particles. Afterward, the green proppants were subjected to sintering at high temperatures. Evaluation of the minerals utility was based on their particle size distribution (laser diffraction study) and thermal stability (thermogravimetry). Scanning Electron Microscopy was useful for morphology and shape identification combined with specific surface area measurement (BET). Chemical composition was verified by Energy Dispersive Spectroscopy and X-ray Fluorescence. Moreover, bulk density and specific weight were measured. Such comprehensive characterization of loamy materials confirmed their favourable impact on the proppants granulation. The sintered granules were analyzed by SEM to verify the surface topography and phase transitions after sintering. Pores distribution was identified by X-Ray Tomography. This method enabled also the simulation of proppants settlement in a fracture, while measurement of bulk density was essential to predict their amount to fill a well. Roundness coefficient was also evaluated, whereas impact on mining environment was identified by turbidity and solubility in acid - to indicate risk of the material decay in a well. The obtained outcomes confirmed a positive influence of the loamy minerals on ceramic proppants properties with respect to the strict norms. 
This research offers prospects for producing higher-quality proppants at reduced cost. Keywords: aluminosilicates, ceramic proppants, mechanical granulation, shale gas
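The roundness coefficient mentioned above can be illustrated with a circularity-style definition, 4πA/P², computed from a granule outline's projected area and perimeter; this assumed definition and the values below are for illustration and may differ from the exact coefficient used in the study.

```python
# Sketch: circularity-style roundness coefficient 4*pi*A / P^2 for granule outlines
# (1.0 for a perfect circle). The definition assumed here may differ from the exact
# coefficient used in the study; area/perimeter values below are illustrative.
import math

def roundness(area_um2: float, perimeter_um: float) -> float:
    """Circularity: 4*pi*A / P^2, dimensionless, <= 1."""
    return 4.0 * math.pi * area_um2 / perimeter_um ** 2

granules = [
    {"id": "G1", "area_um2": 282_743.0, "perimeter_um": 1_885.0},   # near-spherical
    {"id": "G2", "area_um2": 240_000.0, "perimeter_um": 2_100.0},   # more irregular
]
for g in granules:
    print(g["id"], round(roundness(g["area_um2"], g["perimeter_um"]), 3))
```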
Procedia PDF Downloads 163
5 Light Sensitive Plasmonic Nanostructures for Photonic Applications
Authors: Istvan Csarnovics, Attila Bonyar, Miklos Veres, Laszlo Himics, Attila Csik, Judit Kaman, Julia Burunkova, Geza Szanto, Laszlo Balazs, Sandor Kokenyesi
Abstract:
In this work, the performance of gold nanoparticles was investigated for the stimulation of photosensitive materials for photonic applications. Gold has been widely used for surface plasmon resonance experiments, not least because of the manifestation of optical resonances in the visible spectral region. The localized surface plasmon resonance is rather easily observed in nanometer-sized metallic structures and is widely used for measurements, sensing, in semiconductor devices and even in optical data storage. Firstly, gold nanoparticles on a silica glass substrate satisfy the conditions for surface plasmon resonance in the green-red spectral range, where chalcogenide glasses have the highest sensitivity. The gold nanostructures influence and enhance the optical, structural and volume changes and promote exciton generation in the gold nanoparticle/chalcogenide layer structure. The experimental results support the importance of localized electric fields in the photo-induced transformation of chalcogenide glasses and suggest new approaches to improve the performance of these optical recording media. Results may be utilized for direct, micrometre- or submicron-size geometrical and optical pattern formation and for further developing the explanation of these effects in chalcogenide glasses. Besides that, gold nanoparticles can be added to an organic light-sensitive material. Acrylate-based materials are frequently used for optical, holographic recording of optoelectronic elements due to photo-stimulated structural transformations. The holographic recording process and the photo-polymerization effect can be enhanced by the localized plasmon field of the created gold nanostructures. Finally, gold nanoparticles are widely used for electrochemical and optical sensor applications. Although these NPs can be synthesized in several ways, perhaps one of the simplest methods is the thermal annealing of pre-deposited thin films on glass or silicon surfaces. With this method, the parameters of the annealing process (time, temperature) and the pre-deposited thin film thickness influence and define the resulting size and distribution of the NPs on the surface. Localized surface plasmon resonance (LSPR) is a very sensitive optical phenomenon and can be utilized for a large variety of sensing purposes (chemical sensors, gas sensors, biosensors, etc.). Surface-enhanced Raman spectroscopy (SERS) is an analytical method which can significantly increase the yield of Raman scattering of target molecules adsorbed on the surface of metallic nanoparticles. The sensitivity of LSPR- and SERS-based devices strongly depends on the material used and also on the size and geometry of the metallic nanoparticles. By controlling these parameters, the plasmon absorption band can be tuned and the sensitivity can be optimized. The technological parameters of the generated gold nanoparticles were investigated, and their influence on the SERS and LSPR sensitivity was established. The LSPR sensitivity was simulated for gold nanocubes and nanospheres with the MNPBEM Matlab toolbox. It was found that the enhancement factor (which characterizes the increase in the peak shift for multi-particle arrangements compared to single-particle models) depends on the size of the nanoparticles and on the distance between the particles. This work was supported by the GINOP-2.3.2-15-2016-00041 project, which is co-financed by the European Union and the European Social Fund.
Istvan Csarnovics is grateful for support through the New National Excellence Program of the Ministry of Human Capacities (ÚNKP-17-4). Attila Bonyár and Miklós Veres are grateful for the support of the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. Keywords: light sensitive nanocomposites, metallic nanoparticles, photonic application, plasmonic nanostructures
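For a rough sense of how particle size and the surrounding medium tune the LSPR band, the sketch below applies the standard quasi-static dipole approximation to a small gold sphere with an illustrative Drude dielectric function; the study's own simulations used the MNPBEM Matlab toolbox, and the parameters here are textbook-style assumptions, not the study's values.

```python
# Sketch: quasi-static (dipole) estimate of the LSPR extinction of a small gold
# sphere in a dielectric medium. Drude parameters are illustrative textbook values,
# not those of the study; the actual simulations used the MNPBEM Matlab toolbox.
import numpy as np

eps_inf, wp_ev, gamma_ev = 9.0, 9.0, 0.07      # crude Drude model for gold (approx.)
radius_nm, eps_medium = 20.0, 1.77             # 20 nm sphere in a water-like medium

wavelength_nm = np.linspace(400, 800, 401)
energy_ev = 1239.84 / wavelength_nm            # photon energy E = hc / lambda
eps_gold = eps_inf - wp_ev**2 / (energy_ev**2 + 1j * gamma_ev * energy_ev)

# Dipole polarizability of a sphere and its extinction cross-section (Bohren-Huffman form)
k = 2.0 * np.pi * np.sqrt(eps_medium) / wavelength_nm          # wavenumber in medium (1/nm)
alpha = 4.0 * np.pi * radius_nm**3 * (eps_gold - eps_medium) / (eps_gold + 2.0 * eps_medium)
sigma_ext = k * np.imag(alpha) + k**4 / (6.0 * np.pi) * np.abs(alpha) ** 2   # nm^2

peak = wavelength_nm[np.argmax(sigma_ext)]
# A pure Drude model blue-shifts the resonance relative to measured gold
# (~520-530 nm in water) because it neglects interband transitions.
print(f"Estimated LSPR peak near {peak:.0f} nm")
```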
Procedia PDF Downloads 306
4 Recent Developments in E-waste Management in India
Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhay, Ananya Mukhopadhyay, Harendra Nath Bhattacharya
Abstract:
This study investigates the global issue of electronic waste (e-waste), focusing on its prevalence in India and other regions. E-waste has emerged as a significant worldwide problem, with India contributing a substantial share of annual e-waste generation. The primary sources of e-waste in India are computer equipment and mobile phones. Many developed nations utilize India as a dumping ground for their e-waste, with major contributions from the United States, China, Europe, Taiwan, South Korea, and Japan. The study identifies Maharashtra, Tamil Nadu, Mumbai, and Delhi as prominent contributors to India's e-waste crisis. This issue is contextualized within the broader framework of the United Nations' 2030 Agenda for Sustainable Development, which encompasses 17 Sustainable Development Goals (SDGs) and 169 associated targets to address poverty, environmental preservation, and universal prosperity. The study underscores the interconnectedness of e-waste management with several SDGs, including health, clean water, economic growth, sustainable cities, responsible consumption, and ocean conservation. Central Pollution Control Board (CPCB) data reveals that e-waste generation surpasses that of plastic waste, increasing annually at a rate of 31%. However, only 20% of electronic waste is recycled through organized and regulated methods in underdeveloped nations. In Europe, efficient e-waste management stands at just 35%. E-waste pollution poses serious threats to soil, groundwater, and public health due to toxic components such as mercury, lead, bromine, and arsenic. Long-term exposure to these toxins, notably arsenic in microchips, has been linked to severe health issues, including cancer, neurological damage, and skin disorders. Lead exposure, particularly concerning for children, can result in brain damage, kidney problems, and blood disorders. The study highlights the problematic transboundary movement of e-waste, with approximately 352,474 metric tonnes of electronic waste illegally shipped from Europe to developing nations annually, mainly to Africa, including Nigeria, Ghana, and Tanzania. Effective e-waste management, underpinned by appropriate infrastructure, regulations, and policies, offers opportunities for job creation and aligns with the objectives of the 2030 Agenda for SDGs, especially in the realms of decent work, economic growth, and responsible production and consumption. E-waste represents hazardous pollutants and valuable secondary resources, making it a focal point for anthropogenic resource exploitation. The United Nations estimates that e-waste holds potential secondary raw materials worth around 55 billion Euros. The study also identifies numerous challenges in e-waste management, encompassing the sheer volume of e-waste, child labor, inadequate legislation, insufficient infrastructure, health concerns, lack of incentive schemes, limited awareness, e-waste imports, high costs associated with recycling plant establishment, and more. To mitigate these issues, the study offers several solutions, such as providing tax incentives for scrap dealers, implementing reward and reprimand systems for e-waste management compliance, offering training on e-waste handling, promoting responsible e-waste disposal, advancing recycling technologies, regulating e-waste imports, and ensuring the safe disposal of domestic e-waste. A mechanism, Buy-Back programs, will compensate customers in cash when they deposit unwanted digital products. 
This e-waste could include any portable electronic device, such as cell phones, computers, tablets, etc. Addressing the e-waste predicament necessitates a multi-faceted approach involving government regulations, industry initiatives, public awareness campaigns, and international cooperation to minimize environmental and health repercussions while harnessing the economic potential of recycling and responsible management. Keywords: e-waste management, sustainable development goal, e-waste disposal, recycling technology, buy-back policy
Procedia PDF Downloads 88
3 Effect of Inoculation with Consortia of Plant-Growth Promoting Bacteria on Biomass Production of the Halophyte Salicornia ramosissima
Authors: Maria João Ferreira, Natalia Sierra-Garcia, Javier Cremades, Carla António, Ana M. Rodrigues, Helena Silva, Ângela Cunha
Abstract:
Salicornia ramosissima, a halophyte that grows naturally in coastal areas of the northern hemisphere, is often considered the most promising halophyte candidate for extensive crop cultivation and saline agriculture practices. The expanding interest in this plant surpasses its use as gourmet food and includes their potential application as a source of bioactive compounds for the pharmaceutical industry. Despite growing well in saline soils, sustainable and ecologically friendly techniques to enhance crop production and the nutritional value of this plant are still needed. The root microbiome of S. ramosissima proved to be a source of taxonomically diverse plant growth-promoting bacteria (PGPB). Halotolerant strains of Bacillus, Salinicola, Pseudomonas, and Brevibacterium, among other genera, exhibit a broad spectrum of plant-growth promotion traits [e.g., 3-indole acetic acid (IAA), 1-aminocyclopropane-1-carboxylic acid (ACC) deaminase, siderophores, phosphate solubilization, Nitrogen fixation] and express a wide range of extracellular enzyme activities. In this work, three plant growth-promoting bacteria strains (Brevibacterium casei EB3, Pseudomonas oryzihabitans RL18, and Bacillus aryabhattai SP20) isolated from the rhizosphere and the endosphere of S. ramosissima roots from different saltmarshes along the Portuguese coast were inoculated in S. ramosissima seeds. Plants germinated from inoculated seeds were grown for three months in pots filled with a mixture of perlite and estuarine sediment (1:1) in greenhouse conditions and later transferred to a growth chamber, where they were maintained two months with controlled photoperiod, temperature, and humidity. Pots were placed on trays containing the irrigation solution (Hoagland’s solution 20% added with 10‰ marine salt). Before reaching the flowering stage, plants were collected, and the fresh and dry weight of aerial parts was determined. Non-inoculated seeds were used as a negative control. Selected dried stems from the most promising treatments were later analyzed by GC-TOF-MS for primary metabolite composition. The efficiency of inoculation and persistence of the inoculum was assessed by Next Generation Sequencing. Inoculations with single strain EB3 and co-inoculations with EB3+RL18 and EB3+RL18+SP20 (All treatment) resulted in significantly higher biomass production (fresh and dry weight) compared to non-inoculated plants. Considering fresh weight alone, inoculation with isolates SP20 and RL18 also caused a significant positive effect. Combined inoculation with the consortia SP20+EB3 or SP20+RL18 did not significantly improve biomass production. The analysis of the profile of primary metabolites will provide clues on the mechanisms by which the growth-enhancement effect of the inoculants operates in the plants. These results sustain promising prospects for the use of rhizospheric and endophytic PGPB as biofertilizers, reducing environmental impacts and operational costs of agrochemicals and contributing to the sustainability and cost-effectiveness of saline agriculture. Acknowledgments: This work was supported by project Rhizomis PTDC/BIA-MIC/29736/2017 financed by Fundação para a Ciência e Tecnologia (FCT) through the Regional Operational Program of the Center (02/SAICT/2017) with FEDER funds (European Regional Development Fund, FNR, and OE) and by FCT through CESAM (UIDP/50017/2020 + UIDB/50017/2020), LAQV-REQUIMTE (UIDB/50006/2020). 
We also acknowledge FCT/FSE for the financial support to Maria João Ferreira through a PhD grant (PD/BD/150363/2019). We are grateful to Horta dos Peixinhos for their help and support during sampling and seed collection. We also thank Glória Pinto for her collaboration in providing us the use of the growth chambers during the final months of the experiment, and Enrique Mateos-Naranjo and Jennifer Mesa-Marín of the Departamento de Biología Vegetal y Ecología, University of Sevilla, for their advice regarding the growth of Salicornia plants under greenhouse conditions. Keywords: halophytes, PGPB, rhizosphere engineering, biofertilizers, primary metabolite profiling, plant inoculation, Salicornia ramosissima
Procedia PDF Downloads 160
2 Glycyrrhizic Acid Inhibits Lipopolysaccharide-Stimulated Bovine Fibroblast-Like Synoviocyte Invasion through Suppression of TLR4/NF-κB-Mediated Matrix Metalloproteinase-9 Expression
Authors: Hosein Maghsoudi
Abstract:
Rheumatois arthritis (RA) is progressive inflammatory autoimmune diseases that primarily affect the joints, characterized by synovial hyperplasia and inflammatory cell infiltration, deformed and painful joints, which can lead tissue destruction, functional disability systemic complications, and early dead and socioeconomic costs. The cause of rheumatoid arthritis is unknown, but genetic and environmental factors are contributory and the prognosis is guarded. However, advances in understanding the pathogenesis of the disease have fostered the development of new therapeutics, with improved outcomes. The current treatment strategy, which reflects this progress, is to initiate aggressive therapy soon after diagnosis and to escalate the therapy, guided by an assessment of disease activity, in pursuit of clinical remission. The pathobiology of RA is multifaceted and involves T cells, B cells, fibroblast-like synoviocyte (FLSc) and the complex interaction of many pro-inflammatory cytokine. Novel biologic agents that target tumor necrosis or interlukin (IL)-1 and Il-6, in addition T- and B-cells inhibitors, have resulted in favorable clinical outcomes in patients with RA. Despite this, at least 30% of RA patients are résistance to available therapies, suggesting novel mediators should be identified that can target other disease-specific pathway or cell lineage. Among the inflammatory cell population that might participated in RA pathogenesis, FLSc are crucial in initiaing and driving RA in concert of cartilage and bone by secreting metalloproteinase (MMPs) into the synovial fluid and by direct invasion into extracellular matrix (ECM), further exacerbating joint damage. Invasion of fibroblast-like synoviocytes (FLSc) is critical in the pathogenesis of rheumatoid-arthritis. The metalloproteinase (MMPs) and activator of Toll-like receptor 4 (TLR4)/nuclear factor- κB pthway play a critical role in RA-FLS invasion induced by lipopolysaccharide (LPS). The present study aimed to explore the anti-invasion activity of Glycyrrhizic Acid as a pharmacologically safe phytochemical agent with potent anti-inflammatory properties on IL-1beta and TNF-alpha signalling pathways in Bovine fibroblast-like synoviocyte ex- vitro, on LPS-stimulated bovine FLS migration and invasion as well as MMP expression and explored the upstream signal transduction. Results showed that Glycyrrhizic Acid suppressed LPS-stimulated bovine FLS migration and invasion by inhibition MMP-9 expression and activity. In addition our results revealed that Glycyrrhizic Acid inhibited the transcriptional activity of MMP-9 by suppression the nbinding activity of NF- κB in the MMP-9 promoter pathway. The extract of licorice (Glycyrrhiza glabra L.) has been widely used for many centuries in the traditional Chinese medicine as native anti-allergic agent. Glycyrrhizin (GL), a triterpenoidsaponin, extracted from the roots of licorice is the most effective compound for inflammation and allergic diseases in human body. The biological and pharmacological studies revealed that GL possesses many pharmacological effects, such as anti-inflammatory, anti-viral and liver protective effects, and the biological effects, such as induction of cytokines (interferon-γ and IL-12), chemokines as well as extrathymic T and anti-type 2 T cells. GL is known in the traditional Chinese medicine for its anti-inflammatory effect, which is originally described by Finney in 1959. 
The mechanism of the GL-induced anti-inflammatory effect is based on several pathways: selective inhibition of prostaglandin E2 production, CK-II-mediated activation of both GL-binding lipoxygenase (gbLOX) and PLA2, an anti-thrombin action of GL, and modulation of reactive oxygen species (ROS); GL exerts liver-protective properties by inhibiting PLA2 or by trapping hydroxyl radicals, leading to lower serum alanine and aspartate transaminase levels. The present study was undertaken to examine the possible mechanism of the anti-inflammatory properties of GL on IL-1β and TNF-α signalling pathways in bovine fibroblast-like synoviocytes ex vivo, examining LPS-stimulated bovine FLS migration and invasion as well as MMP expression, and to explore the upstream signal transduction. Our results clearly showed that treatment of bovine fibroblast-like synoviocytes with GL suppressed LPS-induced cell migration and invasion. Furthermore, they revealed that GL inhibited the transcriptional activity of MMP-9 by suppressing the binding activity of NF-κB at the MMP-9 promoter. MMP-9 is an important ECM-degrading enzyme, and overexpression of MMPs is an important feature of RA-FLSs. LPS can stimulate bovine FLSs to secrete MMPs, and this induction is regulated at the transcriptional and translational levels. In this study, LPS treatment of bovine FLSs caused an increase in MMP-2 and MMP-9 levels. The increase in MMP-9 expression and secretion was inhibited by GL ex vivo. Furthermore, these effects were mimicked by MMP-9 siRNA. These results therefore indicate that the inhibition of LPS-induced bovine FLS invasion by GL occurs primarily through inhibition of MMP-9 expression and activity. Next, we analyzed the functional significance of NF-κB in the transcriptional activation of MMP-9 in bovine FLSs. Results from EMSA showed that GL suppressed LPS-induced NF-κB binding to the MMP-9 promoter. As NF-κB regulates the transcriptional activation of multiple inflammatory cytokines, we predicted that GL might target NF-κB to suppress LPS-induced MMP-9 transcription. Myeloid differentiation factor 88 (MyD88) and TIR-domain-containing adaptor protein (TIRAP) are critical proteins in the LPS-induced NF-κB and apoptotic signaling pathways; GL inhibited the expression of TLR4 and MyD88. These results demonstrate that GL suppresses LPS-induced MMP-9 expression through inhibition of the TLR4/NF-κB signaling pathway. Taken together, our results provide evidence that GL exerts anti-inflammatory effects by inhibiting LPS-induced bovine FLS migration and invasion, and the mechanism may involve suppression of TLR4/NF-κB-mediated MMP-9 expression. Although further work is needed to clarify the complicated mechanism of the GL-induced anti-invasion effect on bovine FLSs, GL might be useful as an anti-invasion drug with therapeutic efficacy in the treatment of immune-mediated inflammatory diseases such as RA. Keywords: glycyrrhizic acid, bovine fibroblast-like synoviocyte, tlr4/nf-κb, metalloproteinase-9
Procedia PDF Downloads 3911 Developing VR-Based Neurorehabilitation Support Tools: A Step-by-Step Approach for Cognitive Rehabilitation and Pain Distraction during Invasive Techniques in Hospital Settings
Authors: Alba Prats-Bisbe, Jaume López-Carballo, David Leno-Colorado, Alberto García Molina, Alicia Romero Marquez, Elena Hernández Pena, Eloy Opisso Salleras, Raimon Jané Campos
Abstract:
Neurological disorders are a leading cause of disability and premature mortality worldwide. Neurorehabilitation (NRHB) is a clinical process aimed at reducing functional impairment, promoting societal participation, and improving the quality of life for affected individuals. Virtual reality (VR) technology is emerging as a promising NRHB support tool. Its immersive nature fosters a strong sense of agency and embodiment, motivating patients to engage in meaningful tasks and increasing adherence to therapy. However, the clinical benefits of VR interventions are challenging to determine due to the high heterogeneity among health applications. This study explores a stepwise development approach for creating VR-based tools to assist individuals with neurological disorders in medical practice, aiming to enhance reproducibility, facilitate comparison, and promote the generalization of findings. Building on previous research, the step-by-step methodology encompasses: Needs Identification – conducting cross-disciplinary meetings to brainstorm problems and solutions and to address barriers. Intervention Definition – defining the target population, setting goals, and conceptualizing the VR system (equipment and environments). Material Selection and Placement – choosing appropriate hardware and software, placing the device within the hospital setting, and testing the equipment. Co-design – collaboratively creating VR environments, user interfaces, and data management strategies. Prototyping – developing VR prototypes, conducting user testing, and making iterative redesigns. Usability and Feasibility Assessment – designing protocols and conducting trials with stakeholders in the hospital setting. Efficacy Assessment – conducting clinical trials to evaluate outcomes and long-term effects. Cost-Effectiveness Validation – assessing reproducibility, sustainability, and the balance between costs and benefits. NRHB is complex due to the multifaceted needs of patients and the interdisciplinary healthcare architecture. VR has the potential to support various applications, such as motor skill training, cognitive tasks, pain management, unilateral spatial neglect (diagnosis and treatment), mirror therapy, and ecologically valid activities of daily living. Following this methodology was crucial for launching a VR-based system in a real hospital environment. Collaboration with neuropsychologists led to the development of (A) a VR-based tool for cognitive rehabilitation in patients with acquired brain injury (ABI). The system comprises a head-mounted display (HTC Vive Pro Eye) and 7 tasks targeting attention, memory, and executive functions. A desktop application facilitates session configuration, while a database records in-game variables. The VR tool's usability and feasibility were demonstrated in proof-of-concept trials with 20 patients, and effectiveness is being tested through a clinical protocol with 12 patients completing a 24-session treatment. Another case involved collaboration with nurses and paediatric physiatrists to create (B) a VR-based distraction tool for use during invasive techniques. The goal is to alleviate the pain and anxiety associated with botulinum toxin (BTX) injections, blood tests, or intravenous placements. An all-in-one headset (HTC Vive Focus 3) deploys 360º videos to improve the experience for paediatric patients and their families. This study presents a framework for developing clinically relevant and technologically feasible VR-based support tools for hospital settings. 
Despite differences in patient type, intervention purpose, and VR system, the methodology demonstrates usability, viability, reproducibility, and preliminary clinical benefits. It highlights the importance of an approach centred on clinician and patient needs for any aspect of NRHB within a real hospital setting. Keywords: neurological disorders, neurorehabilitation, stepwise development approach, virtual reality
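To make the data-management step described in the abstract more concrete, the following is a minimal, hypothetical Python sketch, not taken from the paper, of how a desktop application might store a session configuration and log in-game variables to a local SQLite database; all class, table, field, and file names are assumptions for illustration only.

# Hypothetical sketch: session configuration plus in-game variable logging.
# Names and schema are illustrative assumptions, not the authors' implementation.
import sqlite3
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SessionConfig:
    patient_id: str          # pseudonymized patient identifier
    task_name: str           # one of the cognitive tasks (e.g., attention, memory)
    difficulty_level: int    # therapist-selected difficulty
    duration_min: int        # planned session length in minutes

def init_db(path: str = "nrhb_sessions.db") -> sqlite3.Connection:
    """Create the table used to record in-game variables."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS game_events (
               patient_id TEXT, task_name TEXT, timestamp TEXT,
               variable TEXT, value REAL)"""
    )
    return conn

def log_event(conn: sqlite3.Connection, cfg: SessionConfig,
              variable: str, value: float) -> None:
    """Store a single in-game measurement (e.g., reaction time, error count)."""
    conn.execute(
        "INSERT INTO game_events VALUES (?, ?, ?, ?, ?)",
        (cfg.patient_id, cfg.task_name,
         datetime.now(timezone.utc).isoformat(), variable, value),
    )
    conn.commit()

if __name__ == "__main__":
    cfg = SessionConfig("P001", "attention_task", difficulty_level=2, duration_min=30)
    conn = init_db()
    log_event(conn, cfg, "reaction_time_ms", 642.0)
    log_event(conn, cfg, "errors", 1.0)

Keeping the configuration in a structured record and the per-event measurements in a single timestamped table is one simple way to support the reproducibility and cross-trial comparison goals the abstract emphasizes; the actual system may organize its data differently.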
Procedia PDF Downloads 36