Search results for: multiple instance learning
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11872

682 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce secondary flow loss. In this paper, the construction and optimization of non-axisymmetric endwall profiles for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations used steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created by applying a perturbation law based on Bezier curves. Each cut, carrying multiple control points, was constructed along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated from values chosen automatically for the control points defined during parameterization. The optimization was carried out with two algorithms: a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm coupled with an artificial neural network was used in order to reach the global optimum; successive design iterations were evaluated by the artificial neural network before being passed to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming because it exploits derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant.
The performance was quantified by using a multi-objective function. Within these two classes of optimization methods, there were four optimization cases: the hub only, the shroud only, the combination of hub and shroud, and, as a fourth case, the shroud optimized starting from the already-optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization also increased efficiency, with total pressure loss and entropy both reduced. The combination of hub and shroud did not match the results achieved in the individual hub and shroud cases, possibly because there were too many control variables. The fourth case showed the best result, because the optimized hub was used as the initial geometry for optimizing the shroud: the efficiency increased more than in the individual cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.
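The gradient-based branch of such an optimization can be sketched in a few lines. The toy below runs a Fletcher-Reeves nonlinear conjugate-gradient loop on a hypothetical two-parameter quadratic standing in for the negative isentropic efficiency; in the study the objective is evaluated through a 3D CFD solver, so the objective function, its Hessian and the exact line search here are purely illustrative assumptions, not the authors' setup.

```python
# Illustrative conjugate-gradient loop (Fletcher-Reeves) on a toy
# quadratic objective standing in for -isentropic efficiency of two
# hypothetical endwall control parameters. Everything is a stand-in.

def objective(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def gradient(x):
    return [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)]

def hess_vec(d):
    # Hessian-vector product of the toy quadratic (H = diag(2, 4)).
    return [2.0 * d[0], 4.0 * d[1]]

def conjugate_gradient(x, iters=10):
    g = gradient(x)
    d = [-gi for gi in g]                      # initial descent direction
    for _ in range(iters):
        hd = hess_vec(d)
        # Exact line search for a quadratic: alpha = -(g.d) / (d.Hd)
        alpha = -sum(g[i] * d[i] for i in range(2)) / sum(d[i] * hd[i] for i in range(2))
        x = [x[i] + alpha * d[i] for i in range(2)]
        g_new = gradient(x)
        # Fletcher-Reeves update of the search direction.
        beta = sum(v * v for v in g_new) / sum(v * v for v in g)
        d = [-g_new[i] + beta * d[i] for i in range(2)]
        g = g_new
        if sum(v * v for v in g) < 1e-18:      # converged
            break
    return x

x_opt = conjugate_gradient([0.0, 0.0])         # converges to (1.0, -0.5)
```

On a two-variable quadratic, conjugate gradients with exact line search converge in at most two iterations, which is why the method needs far fewer expensive CFD evaluations than a derivative-free search.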

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 225
681 Artificial Intelligence in Ethiopian Universities: The Influence of Technological Readiness, Acceptance, Perceived Risk, and Trust on Implementation - An Integrative Research Approach

Authors: Merih Welay Welesilassie

Abstract:

Understanding educators' readiness to incorporate AI tools into their teaching methods requires a comprehensive examination of the influencing factors. This understanding is crucial, given the potential of these technologies to personalise learning experiences, improve instructional effectiveness, and foster innovative pedagogical approaches. This study evaluated the factors affecting teachers' adoption of AI tools in their English language instruction by extending the Technology Acceptance Model (TAM) to encompass digital readiness support, perceived risk, and trust. A cross-sectional quantitative survey was conducted with 128 English language teachers, supplemented by qualitative data collected from 15 English teachers. The structural model analysis indicated that the implementation of AI tools in Ethiopian higher education was notably influenced by digital readiness support, perceived ease of use, perceived usefulness, perceived risk, and trust. Digital readiness support positively impacted perceived ease of use, usefulness, and trust while reducing safety and privacy risks. Perceived ease of use positively correlated with perceived usefulness but negatively influenced trust. Furthermore, perceived usefulness strengthened trust in AI tools, while perceived safety and privacy risks significantly undermined trust. Trust was crucial in increasing educators' willingness to adopt AI technologies. The qualitative analysis revealed that the teachers exhibited strong content and pedagogical knowledge but lacked technology-related knowledge. Moreover, it was found that the teachers did not utilise digital tools to teach English. The study identified several obstacles to incorporating digital tools into English lessons, such as insufficient digital infrastructure, a shortage of educational resources, inadequate professional development opportunities, and challenging policies and governance.
The findings provide valuable guidance for educators, inform policymakers about creating supportive digital environments, and offer a foundation for further investigation into technology adoption in educational settings in Ethiopia and similar contexts.

Keywords: digital readiness support, AI acceptance, risk, trust

Procedia PDF Downloads 15
680 Emergency Multidisciplinary Continuing Care Case Management

Authors: Mekroud Amel

Abstract:

Emergency departments are known for their workload, the variety of pathologies, and the difficulties of managing a continuous influx of patients. This study examines the role of our department in the management of patients presenting with two or three mild-to-moderate organ failures involving several disciplines at once, and the effect of this management on the skills and efficiency of our team. We describe borderline cases spanning two, three, or more disciplines, with instability of a vital function, that were successfully managed in the emergency room; the therapeutic procedures adopted; and the consequences for the quality and level of care delivered by our team, together with the logistical and pedagogical consequences. The consequences for the emergency teams were positive, and negative only in rare situations. The clinical situations involved the entanglement of haemodynamic distress (right, left or global heart failure, tamponade, low output with acute pulmonary oedema, and/or shock) with respiratory distress (more or less profound hypoxaemia, haematosis disorders related to bacterial or viral lung infection, pleurisy, pneumothorax, bronchoconstrictive crisis),
with neurological disorders (recent stroke, comatose state, or others), and with metabolic disorders (hyperkalaemia, renal insufficiency, severe ionic disorders, anti-vitamin K accidents), with or without septate effusion of one or more serous membranes, with or without tamponade. This is a retrospective, monocentric, descriptive study covering the period 05.01.2022 to 10.31.2022. The purpose of our work was to search for a statistically significant link between the type of moderate-to-severe, multivisceral pathology managed in the emergency room and the efficiency of the healthcare team and the level of care offered to patients. Statistical test used: the chi-square test, to assess the link between the resolution of serious multidisciplinary cases in the emergency room and the effectiveness of the team in managing complicated cases. The management of the most difficult clinical cases for the organ specialties has given general-practitioner emergency teams a broad perspective and has improved their efficiency in the face of the emergencies received.
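The chi-square test the authors cite can be illustrated on a hypothetical 2x2 contingency table; the counts below are invented for the sketch and are not the study's data.

```python
# Chi-square test of independence on a hypothetical 2x2 table:
# rows = multidisciplinary vs. single-organ cases,
# cols = resolved in the ER vs. escalated. Counts are invented.

def chi_square(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / n   # under independence
            stat += (observed - expected) ** 2 / expected
    return stat

table = [[30, 10],   # multidisciplinary: 30 resolved, 10 escalated
         [20, 40]]   # single-organ:      20 resolved, 40 escalated
stat = chi_square(table)   # ~16.67, well above the 3.84 critical
                           # value (alpha = 0.05, 1 degree of freedom)
```

With such counts the null hypothesis of no link between case type and resolution would be rejected; the study's actual conclusion of course depends on its real observed frequencies.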

Keywords: emergency care teams, management of patients with dysfunction of more than one organ, learning curve, quality of care

Procedia PDF Downloads 80
679 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore

Authors: Qiao-Yu Warren Cai

Abstract:

Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners' language output has become a trend. Various empirical research has been conducted using Chinese corpora built by different academic institutes. However, most of this research analyzed the data using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can summarize the subjects or samples a study has actually measured, but the collected data cannot be generalized to the population. Comte, a French positivist, argued in the 19th century that human knowledge, whether in the humanities and social sciences or the natural sciences, should be verified scientifically in order to construct a universal theory that explains the truth and human behavior. Inferential statistics, which can judge the probability that a difference observed between groups is dependable rather than caused by chance (Free Geography Notes, 2015) and can infer from the subjects or examples what the population might think or how it might behave, is the right method to support Comte's argument in the field of TCSOL. Inferential statistics is also a core of quantitative research, but little research has been conducted by combining corpora with inferential statistics. In particular, little research analyzes the differences in Chinese L2 learners' corpus output errors using one-way ANOVA, so the findings of previous research are limited in inferring the population's Chinese errors from the given samples' corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study uses one-way ANOVA to analyze the corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore.
The results show that no significant difference exists in ‘shì (是) sentence’ and word order errors, but compared with American and Singaporean learners, Myanmar learners are significantly more prone to ‘sentence blends.’ Based on these results, the present study provides an instructional approach and contributes to further exploration of how Chinese L2 learners can adopt learning strategies to lower errors.
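The one-way ANOVA used above can be sketched from first principles. The three groups of per-learner error counts below are invented for illustration, not the study's corpus data.

```python
# One-way ANOVA F statistic computed from first principles on
# hypothetical per-learner 'sentence blend' error counts.

def one_way_anova_f(groups):
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group size times squared distance
    # of the group mean from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread around each group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

american = [1, 2, 3]     # invented error counts per learner
singaporean = [2, 3, 4]
myanmar = [5, 6, 7]
f_stat = one_way_anova_f([american, singaporean, myanmar])   # F = 13.0
```

A large F means the variation between nationality groups dominates the variation within them, which is the pattern behind the 'sentence blend' finding; significance would then be read from the F distribution with (k-1, n-k) degrees of freedom.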

Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans

Procedia PDF Downloads 106
678 False Assumptions Made in Cybersecurity Curriculum: K-12

Authors: Nathaniel Evans, Jessica Boersma, Kenneth Kass

Abstract:

With technology and STEM fields growing every day, there is a significant projected shortfall of qualified cybersecurity workers. As such, it is essential to develop a cybersecurity curriculum that builds skills and cultivates interest in cybersecurity early on. With new jobs being created every day and an already significant gap in the job market, it is vital that educators are proactive in introducing a cybersecurity curriculum in which students can learn new skills and engage with age-appropriate material. Students should engage with age-appropriate technology and cybersecurity curricula starting in elementary school (K-5), extending through high school, and ultimately into college. Such practice will provide students with the confidence, skills, and, ultimately, the opportunity to work in the burgeoning information security field. This paper examines educational methods, pedagogical practices, current cybersecurity curricula, and other educational resources, analyzing them for false assumptions and developmental appropriateness. It also identifies common mistakes in current cyber curricula and lessons and discusses strategies for improvement. Several mistakes recurred throughout the lessons reviewed, including assumptions about age appropriateness, about the technology resources available, and about the consistency of students' skill levels. Many lessons were written for the wrong grade levels; those written for the elementary level all had activities assuming that every student in the class could read at grade level and had background knowledge of the cyber activity at hand, which is not always the case. Another major mistake was assuming that all schools have technology resources available: some schools are 1:1, while others are allotted only three computers per classroom that students have to share.
While developing a cyber curriculum, it has to be kept in mind that not all schools, and not all classrooms, are the same. Many students are not reading at grade level or have not had exposure to the digital world. We need to start slowly and ease children into the cyber world; once they have a better understanding, it will be easier to move forward with these lessons and keep students engaged. With a better understanding of the common mistakes being made, a more robust curriculum and lessons can be created that not only spark students' interest in this much-needed career field but also encourage learning while keeping our students safe from cyber-attacks.

Keywords: assumptions, cybersecurity, k-12, teacher

Procedia PDF Downloads 166
677 Recovery of Polyphenolic Phytochemicals From Greek Grape Pomace (Vitis Vinifera L.)

Authors: Christina Drosou, Konstantina E. Kyriakopoulou, Andreas Bimpilas, Dimitrios Tsimogiannis, Magdalini C. Krokida

Abstract:

Rationale: Agiorgitiko is one of the most widely grown and commercially well-established red wine varieties in Greece. Each year the viticulture industry produces a large amount of waste consisting of grape skins and seeds (pomace) over a short period. Grapes contain polyphenolic compounds which are only partially transferred to the wine during winemaking; winery wastes could therefore be a cheap alternative source of such compounds, which have important antioxidant activity. Specifically, red grape waste contains anthocyanins and flavonols, which are characterized by multiple biological activities, including cardioprotective, anti-inflammatory, anti-carcinogenic, antiviral and antibacterial properties, attributed mainly to their antioxidant activity. Ultrasound assisted extraction (UAE) is considered an effective way to recover phenolic compounds, since it combines the advantage of a mechanical effect with low temperature. Moreover, green solvents can be used in order to recover extracts intended for use in the food and nutraceutical industry. Apart from the extraction, pre-treatment processes such as drying can play an important role in the preservation of the grape pomace and the enhancement of its antioxidant capacity. Objective: The aim of this study is to recover natural extracts from winery waste with high antioxidant capacity using green solvents, so that they can be exploited and utilized as enhancers in foods or nutraceuticals. Methods: Agiorgitiko grape pomace was dehydrated by air drying (AD) and accelerated solar drying (ASD) in order to explore the effect of the pre-treatment on the recovery of bioactive compounds. UAE was applied to untreated and dried samples using water and water:ethanol (1:1) as solvents. The total antioxidant potential and phenolic content of the extracts were determined using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging assay and the Folin-Ciocalteu method, respectively.
Finally, the profile of anthocyanins and flavonols was determined by HPLC-DAD analysis. The efficiency of the processes was evaluated in terms of extraction yield, antioxidant activity, phenolic content and the anthocyanin and flavonol profile. Results & Discussion: The experiments indicated that the pre-treatment was essential for the recovery of highly nutritious compounds from the pomace, since the extracts of dried samples showed higher phenolic content and antioxidant capacity. Water:ethanol (1:1) was the more effective solvent for the recovery of phenolic compounds. Moreover, ASD grape pomace extracted with the solvent system exhibited the highest antioxidant activity (IC50 = 0.36±0.01 mg/mL) and phenolic content (TPC = 172.68±0.01 mg GAE/g dry extract), followed by AD and untreated pomace. According to the HPLC analysis, the major compounds recovered were malvidin 3-O-glucoside and quercetin 3-O-glucoside. Conclusions: Winery waste can be exploited for the recovery of nutritious compounds using green solvents such as water or ethanol. The pre-treatment of the pomace can significantly affect the concentration of phenolic compounds, while UAE is a highly effective extraction process.
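An IC50 value like the one reported is commonly read off a DPPH dose-response series by interpolating between the two concentrations that bracket 50% radical scavenging. The concentration/inhibition pairs below are invented for the sketch, not the paper's measurements.

```python
# Estimating IC50 from a DPPH dose-response series by linear
# interpolation between the two concentrations bracketing 50%
# scavenging. The data points are invented for illustration.

def ic50(concentrations, inhibition_pct):
    # Walk over adjacent (concentration, %inhibition) pairs.
    for (c0, i0), (c1, i1) in zip(
            zip(concentrations, inhibition_pct),
            zip(concentrations[1:], inhibition_pct[1:])):
        if i0 <= 50.0 <= i1:
            # Linear interpolation within the bracketing interval.
            return c0 + (50.0 - i0) / (i1 - i0) * (c1 - c0)
    raise ValueError("50% scavenging not bracketed by the data")

conc = [0.1, 0.2, 0.4, 0.8]          # extract concentration, mg/mL
inhib = [20.0, 35.0, 55.0, 80.0]     # % DPPH radical scavenged
ic50_value = ic50(conc, inhib)       # 0.35 mg/mL
```

Since IC50 is the concentration needed to scavenge half the radical, a lower value (as for the ASD extract) means a stronger antioxidant.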

Keywords: Agiorgitiko grape pomace, antioxidants, phenolic compounds, ultrasound assisted extraction

Procedia PDF Downloads 394
676 Therapeutic Power of Words through Reading Writing and Storytelling

Authors: Sakshi Kaul, Sundeep Verma

Abstract:

The focus of the current paper is to evaluate the therapeutic power of words. This will be done by critically evaluating the impact that reading, writing and storytelling have on individuals. When we read, tell or listen to a story, we exercise our imagination. Imagination becomes the source of activation of thoughts and actions, and this helps the reader, writer or listener to express suppressed emotions or desires. Stories, told or untold, may bring various human emotions and attributes to the fore, such as hope, optimism, fear and happiness. Each story narrated evokes different emotions; at times they help us unravel ourselves in the world of the teller, thereby bringing solace. Stories heard or told add to an individual's life by creating a community around them and giving wings to thoughts, enabling individuals to be more imaginative and creative and thereby fostering positivity and happiness. Reading, from the reader's point of view, can broaden the horizon of information and ideas about facts and the laws of life, giving more meaning to life. From ‘once upon a time’ to ‘happily ever after’, what stories talk about is life's learning. The power of words is sometimes discounted; this paper reiterates that power by critically evaluating how words can be powerful and therapeutic in various structures and forms in society. There is a story behind every situation, action and reaction, so it is of prime importance to understand each story in order to help a person deal with whatever he or she may be going through. For example, if a client is going through some trauma, the counsellor needs to know exactly what turmoil is being faced so that the client can be assisted accordingly. Counselling is considered a process of healing through words, or talk therapy, in which we try to heal the client merely through words.
In a counselling session, the counsellor works with the client to bring about positive change. The counsellor allows the client to express themselves, which is referred to as catharsis. Words spoken, written or heard can transcend to heal and be therapeutic. The therapeutic power of words has been seen in various cultural practices and belief systems; the underlying belief that words have the power to heal, save and bring change has existed for ages, and many religious and spiritual practices also acclaim the power of words. Through this empirical paper, we have tried to bring to light how reading, writing, and storytelling have been used as mediums of healing and have been therapeutic in nature.

Keywords: reading, storytelling, therapeutic, words

Procedia PDF Downloads 269
675 Nursing Experience in Caring for a Patient with Terminal Gastric Cancer and Abdominal Aortic Aneurysm

Authors: Pei-Shan Liang

Abstract:

Objective: This article explores the nursing experience of caring for a patient with terminal gastric cancer complicated by an abdominal aortic aneurysm. The patient experienced physical discomfort due to the disease and was initially unable to accept the situation, leading to anxiety, before eventually accepting the need for surgery. Methods: The nursing period was from June 6 to June 10, 2024. Through observation, direct care, conversations, and physical assessments, and using Gordon's eleven functional health patterns for a one-on-one holistic assessment, interdisciplinary team meetings were held with the critical care team and family. Three nursing health issues were identified: pain related to the disease and invasive procedures, anxiety related to uncertainty about disease recovery, and decreased cardiac tissue perfusion related to hemodynamic instability. Results: Open communication techniques and empathetic care were employed to establish a trusting nurse-patient relationship, and patient-centered nursing interventions were developed. Pain was assessed using a 10-point pain scale, and pain medications were adjusted by a pharmacist. Initially, Fentanyl 500 mcg was administered by infusion pump at 1 mL/hr, later changed to oral Ultracet 37.5 mg/325 mg, 1 tablet every 6 hours, reducing the pain score to 3. Lavender aromatherapy and listening to crystal music were used as distractions to alleviate pain, allowing the patient to sleep uninterrupted for at least 7 hours. The patient was encouraged to express feelings and fears through LINE messages or drawings, and a psychologist was invited to provide support. Family members were present at least twice a day for over an hour each time, reducing psychological distress and uncertainty about the prognosis. According to the Beck Anxiety Inventory, the anxiety score dropped from 17 (moderate anxiety) to 6 (no anxiety).
Focused nursing care was implemented with close monitoring of vital signs maintaining systolic blood pressure between 112-118 mmHg to ensure adequate myocardial perfusion. The patient was encouraged to get out of bed for postoperative rehabilitation and to strengthen cardiopulmonary function. A chest X-ray showed no abnormalities, and breathing was smooth with Triflow use, maintaining at least 5 seconds with 2 balls four times a day, and SpO2 >96%. Conclusion: The care process highlighted the importance of addressing psychological care in addition to maintaining life when the patient’s condition changes. The presence of family often provided the greatest source of comfort for the patient, helping to reduce anxiety and pain. Nurses must play multiple roles, including advocate, coordinator, educator, and consultant, using various communication techniques and fostering hope by listening to and accepting the patient’s emotional responses. It is hoped that this report will provide a reference for clinical nursing staff and contribute to improving the quality of care.

Keywords: intensive care, gastric cancer, aortic aneurysm, quality of care

Procedia PDF Downloads 24
674 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System

Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar

Abstract:

Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction at the trailing edge. A jet at the leading edge uses the Coanda effect to energise the boundary layer, delaying flow separation and creating high lift with low drag. The SWAP concept originated with the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out in the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and investigate the approximate lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibit clear trends of increasing lift as injection momentum increases, with critical flow attachment points identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°.
The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows, over-predicting surface pressures and lift coefficients for fully attached flow cases. Work continues on finding an all-encompassing modelling approach which predicts surface pressures well for all combinations of jet injection momentum and AOA.
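The two normalised quantities the comparison hinges on can be written out explicitly. The sketch below uses the conventional definitions of the surface pressure coefficient and the jet momentum coefficient for circulation-control aerofoils; all numerical values are invented for illustration and are not measurements from the de Havilland tunnel.

```python
# Normalised surface pressure (Cp) and jet momentum coefficient (Cmu)
# as conventionally defined for circulation-control aerofoils.
# All numerical inputs below are invented for illustration.

def pressure_coefficient(p, p_inf, rho, u_inf):
    q_inf = 0.5 * rho * u_inf ** 2          # freestream dynamic pressure
    return (p - p_inf) / q_inf

def jet_momentum_coefficient(m_dot, v_jet, rho, u_inf, s_ref):
    # Cmu = jet momentum flux normalised by q_inf * reference area.
    q_inf = 0.5 * rho * u_inf ** 2
    return m_dot * v_jet / (q_inf * s_ref)

cp = pressure_coefficient(p=101294.375, p_inf=101325.0,
                          rho=1.225, u_inf=10.0)
cmu = jet_momentum_coefficient(m_dot=0.05, v_jet=49.0, rho=1.225,
                               u_inf=10.0, s_ref=0.2)
# cp = -0.5; cmu = 0.2, inside the 0-0.7 range tested in the tunnel
```

Normalising in this way is what allows the 1:20 model, the CFD runs and any full-scale device to be compared on the same Cp axes despite different absolute pressures.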

Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel

Procedia PDF Downloads 135
673 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports

Authors: Jonardan Koner, Avinash Purandare

Abstract:

In today’s globalized world, international business is a key driver of a country's growth, and among the strategic enablers of international business are connecting ports, road networks, and rail networks. India's international business is booming in both exports and imports. Ports play a central part in the growth of international trade, so ensuring competitive ports is of critical importance. India has a long coastline, a major asset that has enabled the development of a large number of major and minor ports contributing to the development of maritime trade. The national economic development of India requires a well-functioning seaport system. To gauge the comparative strength of Indian ports against similar South-East Asian ports, the study has three objectives: (i) to identify the key parameters of an international mega container port, (ii) to compare the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to users of the ports, and (iii) to measure and compare the growth of the selected ports' throughput over time. The study is based on both primary and secondary databases. Linear time trend analysis is used to show the trend in the quantum of exports, imports and total goods/services handled by individual ports over the years. Comparative trend analysis is carried out for the five selected ports on cargo traffic handled in terms of tonnage (weight) and number of containers (TEUs), and between containerized and non-containerized cargo traffic in the five selected ports.
The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference of factor ratings for the selected five ports, consolidated comparative line and bar charts of factor ratings, and the distribution of ratings (in frequency terms). A linear regression model is used to forecast the container capacities required for JNPT Port and Chennai Port by the year 2030. Multiple regression analysis is carried out to measure the impact of 34 selected explanatory variables on the ‘Overall Performance of the Port’ for each of the five ports. The research outcome is of high significance to the stakeholders of Indian container handling ports: the Indian container ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, the competing ports in the neighbouring region. The study has analysed the feedback ratings for 35 selected factors regarding physical infrastructure and services rendered to port users. This feedback provides valuable data for improving the facilities offered to port users; such improvements would help port users carry out their work in a more efficient manner.
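The linear-regression forecasting step can be sketched as an ordinary least-squares trend line extrapolated to 2030. The annual throughput figures below are invented for illustration; they are not JNPT or Chennai data.

```python
# Least-squares linear trend fitted to hypothetical annual container
# throughput (million TEUs) and extrapolated to 2030, mirroring the
# paper's forecasting step. The figures are invented.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form OLS slope: cov(x, y) / var(x).
    slope = (sum(x * y for x, y in zip(xs, ys)) - n * mean_x * mean_y) / \
            (sum(x * x for x in xs) - n * mean_x ** 2)
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2016, 2017, 2018, 2019, 2020, 2021]
teu = [4.50, 4.75, 5.00, 5.25, 5.50, 5.75]   # million TEUs, invented
slope, intercept = fit_line(years, teu)
forecast_2030 = slope * 2030 + intercept     # 8.0 million TEUs
```

The forecast value then translates directly into the berth and yard capacity the port must plan for by 2030, which is how the study uses it.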

Keywords: throughput, twenty-foot equivalent units (TEUs), cargo traffic, shipping lines, freight forwarders

Procedia PDF Downloads 131
672 Endometrial Biopsy Curettage vs Endometrial Aspiration: Better Modality in Female Genital Tuberculosis

Authors: Rupali Bhatia, Deepthi Nair, Geetika Khanna, Seema Singhal

Abstract:

Introduction: Genital tract tuberculosis is a chronic disease, caused by reactivation of organisms from systemic distribution of Mycobacterium tuberculosis, that often presents with low-grade symptoms and non-specific complaints. Patients with genital tuberculosis are usually young women seeking workup and treatment for infertility. Infertility is the commonest presentation, due to involvement of the fallopian tubes and endometrium and to ovarian damage with poor ovarian volume and reserve. The diagnosis of genital tuberculosis is difficult because it is a silent invader of the genital tract. Since tissue cannot be obtained from the fallopian tubes, the diagnosis is made by isolation of bacilli from endometrial tissue obtained by endometrial biopsy curettage and/or aspiration. Problems are associated with the sampling technique as well as the diagnostic modality, owing to inadequate sample volumes and the segregation of the sample for various diagnostic tests, which results in non-uniform distribution of microorganisms. Moreover, the lack of an efficient sampling technique universally applicable to all specific diagnostic tests adds to the diagnostic challenge. Endometrial sampling plays a key role in the accurate diagnosis of female genital tuberculosis. It may be done by two methods: endometrial curettage and endometrial aspiration. Both have their own limitations: curettage picks up a strip of the endometrium from one wall of the uterine cavity, including the tubal ostial areas, whereas aspiration obtains total tissue with the exfoliated cells present in the secretory fluid of the endometrial cavity. Further, the sparse and uneven distribution of the bacilli remains a major factor limiting both techniques. The sample obtained by either technique is subjected to histopathological examination, AFB staining, culture and PCR. Aim: Comparison of the two sampling techniques,
endometrial biopsy curettage and endometrial aspiration using different laboratory methods of histopathology, cytology, microbiology and molecular biology. Method: In a hospital-based observational study, 75 Indian females suspected of genital tuberculosis were selected on the basis of inclusion criteria. The women underwent endometrial tissue sampling using a Novak's biopsy curette and a Karman's cannula. One part of the specimen obtained was sent in formalin solution for histopathological testing, and another part was sent in normal saline for acid-fast bacilli smear, culture and polymerase chain reaction. The results so obtained were correlated using the coefficient of correlation and the chi-square test. Result: Concordance of results showed moderate agreement between the two sampling techniques. Among HPE, AFB and PCR, maximum sensitivity was observed for PCR, though its specificity was not as high as that of the other techniques. Conclusion: Statistically, no significant difference was observed between the results obtained by the two sampling techniques. Therefore, one may use either endometrial aspiration (EA) or endometrial biopsy curettage (EB) to obtain endometrial samples and avoid multiple sampling, as both techniques are equally efficient in diagnosing genital tuberculosis by HPE, AFB, culture or PCR.
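The "moderate agreement" reported between the two sampling techniques is conventionally quantified with Cohen's kappa on a 2x2 concordance table. A minimal sketch of that calculation follows; the counts are entirely hypothetical, since the abstract does not report the underlying table:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table between two raters
    (here: two sampling techniques, rows/columns = positive/negative)."""
    total = sum(sum(row) for row in table)
    # observed agreement: fraction of cases on the diagonal
    observed = sum(table[i][i] for i in range(len(table))) / total
    # chance agreement: product of marginal proportions, summed over classes
    expected = sum(
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(len(table))
    )
    return (observed - expected) / (1 - expected)

# Hypothetical counts for 75 patients: both techniques positive (30),
# both negative (25), discordant (10 + 10).  Kappa between 0.41 and
# 0.60 is conventionally labelled "moderate" agreement.
kappa = cohens_kappa([[30, 10], [10, 25]])
```

With these made-up counts the kappa works out to about 0.46, i.e. squarely in the "moderate" band.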

Keywords: acid fast bacilli (AFB), histopathology examination (HPE), polymerase chain reaction (PCR), endometrial biopsy curettage

Procedia PDF Downloads 326
671 Community Engagement Strategies to Assist with the Development of an RCT Among People Living with HIV

Authors: Joyce K. Anastasi, Bernadette Capili

Abstract:

Our research team focuses on developing and testing protocols to manage chronic symptoms. For many years, our team has designed and implemented symptom management studies for people living with HIV (PLWH). We identify symptoms that have no curative treatment and are not adequately controlled by conventional therapies. As an exemplar, we describe how we successfully engaged PLWH in developing and refining our research feasibility protocol for distal sensory peripheral neuropathy (DSP) associated with HIV. With input from PLWH with DSP, our research received National Institutes of Health (NIH) research funding support. Significance: DSP is one of the most common neurologic complications in HIV. It is estimated that DSP affects 21% to 50% of PLWH. The pathogenesis of DSP in HIV is complex and unclear. Proposed mechanisms include cytokine dysregulation, viral protein-produced neurotoxicity, and mitochondrial dysfunction associated with antiretroviral medications. There are no FDA-approved treatments for DSP in HIV. Purpose: Aims: 1) to explore the impact of DSP on the lives of PLWH, 2) to identify patients’ perspectives on successful treatments for DSP, 3) to identify interventions considered feasible and sensitive to the needs of PLWH with DSP, and 4) to obtain participant input for protocol/study design. Description of Process: We conducted a needs assessment with PLWH with DSP. From the needs assessment, we learned, from the patients’ perspective, detailed descriptions of their symptoms; their physical functioning with DSP; self-care remedies tried; and desired interventions. We also asked about protocol scheduling, instrument clarity, study compensation, study-related burdens, and willingness to participate in a randomized controlled trial (RCT) with a placebo and a waitlist group. Implications: We incorporated many of the suggestions learned from the needs assessment. 
We developed and completed a feasibility study that provided us with invaluable information that informed subsequent NIH-funded studies. In addition to our extensive clinical and research experience working with PLWH, learning from the patient perspective helped in developing our protocol and promoting a successful plan for recruitment and retention of study participants.

Keywords: clinical trial development, peripheral neuropathy, traditional medicine, HIV, AIDS

Procedia PDF Downloads 84
670 Digital Immunity System for Healthcare Data Security

Authors: Nihar Bheda

Abstract:

Protecting digital assets such as networks, systems, and data from advanced cyber threats is the aim of Digital Immunity Systems (DIS), which are a subset of cybersecurity. With features like continuous monitoring, coordinated reactions, and long-term adaptation, DIS seeks to mimic biological immunity. This minimizes downtime by automatically identifying and eliminating threats. Traditional security measures, such as firewalls and antivirus software, are insufficient for enterprises, such as healthcare providers, given the rapid evolution of cyber threats. The number of medical record breaches that have occurred in recent years is proof that attackers are finding healthcare data to be an increasingly valuable target. However, obstacles to enhancing security include outdated systems, financial limitations, and a lack of knowledge. DIS is an advancement in cyber defenses designed specifically for healthcare settings. Protection akin to an "immune system" is produced by core capabilities such as anomaly detection, access controls, and policy enforcement. Coordination of responses across IT infrastructure to contain attacks is made possible by automation and orchestration. Massive amounts of data are analyzed by AI and machine learning to find new threats. After an incident, self-healing enables services to resume quickly. The implementation of DIS is consistent with the healthcare industry's urgent requirement for resilient data security in light of evolving risks and strict guidelines. With resilient systems, it can help organizations lower business risk, minimize the effects of breaches, and preserve patient care continuity. DIS will be essential for protecting a variety of environments, including cloud computing and the Internet of medical devices, as healthcare providers quickly adopt new technologies. DIS lowers traditional security overhead for IT departments and offers automated protection, even though it requires an initial investment. 
In the near future, DIS may prove to be essential for small clinics, blood banks, imaging centers, large hospitals, and other healthcare organizations. Cyber resilience can become attainable for the whole healthcare ecosystem with customized DIS implementations.

Keywords: digital immunity system, cybersecurity, healthcare data, emerging technology

Procedia PDF Downloads 67
669 Music Responsiveness and Cultural Practice: Tarok Ethnic Group of Plateau State in Focus

Authors: Johnson-Egemba Helen Amaka

Abstract:

Music is emotional in the sense that it shapes people’s feelings. The way and manner in which people react to music at a given moment depend on the type of music that is playing. Music can make someone march or dance, cry or laugh, be happy or sad, fight or make peace, and so on. It can therefore lead someone to exhibit certain behaviours, either positive or negative. Even dangerous animals have been found to be controlled by music. In psychiatric homes, patients are often observed dancing to music. During funeral ceremonies, music, singing and dancing are sources of comfort to the bereaved. As a background to the study, the Tarok ethnic group in Plateau State was used. The Tarok comprise the Langtang North and Langtang South Local Government Areas. The Tarok ethnic group integrates music into almost all the activities of their lives. A total of six (6) types of folk songs were identified. These songs relate to marriage, funerals, royalty, togetherness, war, rituals, festivals, and farming. This paper points out the significance of the basic responsiveness of the Tarok people towards these folk songs and their reactions generally, whether positive or negative. The methods of data collection employed in this work include an oral interview approach, recording of various types of Tarok folk songs, and consultation of journals, magazines and textbooks. The researcher used oral interviews as her primary source of information, which was found to be the most effective procedure for carrying out this task. The songs were textually analyzed with a view to unveiling their meanings and thought processes, and conveying their direction and functions within the context of their rendition. The major findings of the study are that music in Tarok culture covers physical, mental, emotional and social experiences. The physical aspect is the motor skills, which include dancing and demonstration of the songs. 
The mental experiences are intellectual levels which include construction and manufacturing of musical instruments, composing songs, teaching and learning etc. Furthermore, this research provided in addition to musical activities, the literature, history and culture of the Tarok communities.

Keywords: cultural, music, practice, responsiveness

Procedia PDF Downloads 296
668 3D Label-Free Bioimaging of Native Tissue with Selective Plane Illumination Optical Microscopy

Authors: Jing Zhang, Yvonne Reinwald, Nick Poulson, Alicia El Haj, Chung See, Mike Somekh, Melissa Mather

Abstract:

Biomedical imaging of native tissue using light offers the potential to obtain excellent structural and functional information in a non-invasive manner with good temporal resolution. Image contrast can be derived from intrinsic absorption, fluorescence, or scatter, or through the use of extrinsic contrast agents. A major challenge in applying optical microscopy to in vivo tissue imaging is light attenuation, which limits penetration depth and achievable imaging resolution. Recently, Selective Plane Illumination Microscopy (SPIM) has been used to map the 3D distribution of fluorophores dispersed in biological structures. In this approach, a focused sheet of light illuminates the sample from the side to excite fluorophores within the sample of interest. Images are formed by detecting fluorescence emission orthogonal to the illumination axis. By scanning the sample along the detection axis and acquiring a stack of images, 3D volumes can be obtained. The combination of rapid image acquisition with the low photon dose that optical sectioning delivers to samples makes SPIM an attractive approach for imaging biological samples in 3D. To date, all implementations of SPIM rely on the use of fluorescence reporters, whether endogenous or exogenous. This approach has the disadvantage that, in the case of exogenous probes, the specimens are altered from their native state, rendering them unsuitable for in vivo studies, and, in general, fluorescence emission is weak and transient. Here we present, for the first time to our knowledge, a label-free implementation of SPIM that has downstream applications in the clinical setting. The experimental setup used in this work incorporates both label-free and fluorescent illumination arms, in addition to a high-specification camera that can be partitioned for simultaneous imaging of both fluorescent emission and light scattered from intrinsic sources of optical contrast in the sample being studied. 
This work first involved calibration of the imaging system and validation of the label-free method with well characterised fluorescent microbeads embedded in agarose gel. 3D constructs of mammalian cells cultured in agarose gel with varying cell concentrations were then imaged. A time course study to track cell proliferation in the 3D construct was also carried out and finally a native tissue sample was imaged. For each sample multiple images were obtained by scanning the sample along the axis of detection and 3D maps reconstructed. The results obtained validated label-free SPIM as a viable approach for imaging cells in a 3D gel construct and native tissue. This technique has the potential use in a near-patient environment that can provide results quickly and be implemented in an easy to use manner to provide more information with improved spatial resolution and depth penetration than current approaches.

Keywords: bioimaging, optics, selective plane illumination microscopy, tissue imaging

Procedia PDF Downloads 249
667 Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database

Authors: Yosep Chong, Yejin Kim, Jingyun Choi, Hwanjo Yu, Eun Jung Lee, Chang Suk Kang

Abstract:

For the past decades, immunohistochemistry (IHC) has played an important role in the diagnosis of human neoplasms by helping pathologists make clearer decisions on differential diagnosis, subtyping, personalized treatment planning, and, finally, prognosis prediction. However, the IHC performed on the various tumors of daily practice often yields conflicting results that are very challenging to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic and immunohistochemical findings can be helpless in some twisted cases. Another important issue is that IHC data are increasing exponentially, and more and more information has to be taken into account. For these reasons, we conceived an expert supporting system to help pathologists make better decisions when diagnosing human neoplasms with IHC results. We devised a probabilistic decision tree algorithm and tested it with real case data on lymphoid neoplasms, in which the IHC profile is more important for making a proper diagnosis than in other human neoplasms. We designed the probabilistic decision tree based on Bayes' theorem, programmed the computational process using MATLAB (The MathWorks, Inc., USA), and prepared an IHC profile database (about 104 disease categories and 88 IHC antibodies) based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set with the epidemiologic data on lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 out of 131 were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for discordant cases was the similarity of the IHC profile between two or three different neoplasms. 
The expert supporting system algorithm presented in this study is at an elementary stage and needs more optimization using more advanced technology, such as deep learning with real case data, especially for differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision-making in the future. A further application, determining the IHC antibodies for a certain subset of differential diagnoses, might be possible in the near future.
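The Bayesian update at the heart of such a probabilistic decision tree can be sketched as a naive sequential posterior update: start from epidemiologic priors and multiply in the likelihood of each observed IHC result. The diagnoses, antibody names, and positivity rates below are invented for illustration and are not taken from the study's database:

```python
def update_posterior(priors, likelihoods, results):
    """Naive sequential Bayesian update over candidate diagnoses.

    priors:      {diagnosis: prior probability} (e.g. epidemiologic data)
    likelihoods: {diagnosis: {antibody: P(positive | diagnosis)}}
    results:     {antibody: True/False} observed IHC results
    """
    post = dict(priors)
    for antibody, positive in results.items():
        for dx in post:
            p = likelihoods[dx][antibody]
            post[dx] *= p if positive else (1 - p)
    z = sum(post.values())                     # normalize
    return {dx: v / z for dx, v in post.items()}

# Hypothetical profile fragment (illustrative positivity rates only)
priors = {"DLBCL": 0.40, "CLL/SLL": 0.30, "T-LBL": 0.30}
lik = {
    "DLBCL":   {"CD20": 0.95, "CD5": 0.10},
    "CLL/SLL": {"CD20": 0.90, "CD5": 0.90},
    "T-LBL":   {"CD20": 0.05, "CD5": 0.80},
}
post = update_posterior(priors, lik, {"CD20": True, "CD5": True})
top = max(post, key=post.get)  # presumptive diagnosis
```

Ranking the normalized posteriors then yields the "top three presumptive diagnoses" described above; with these made-up numbers a CD20+/CD5+ profile favours CLL/SLL.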

Keywords: database, expert supporting system, immunohistochemistry, probabilistic decision tree

Procedia PDF Downloads 224
666 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before transmission over wireless channels. High Efficiency Video Coding (HEVC, also called H.265) is the latest state-of-the-art video coding standard, developed by the joint effort of the ITU-T and ISO/IEC teams. HEVC targets high-resolution videos, such as 4K or 8K, that can fulfil the recent demand for video services. The compression ratio achieved by HEVC is twice that of its predecessor, H.264/AVC, for the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra- and inter-prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel-coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel-decoded to obtain the video bitstream, which is then used to reconstruct the video using the HEVC decoder. 
It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing an optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
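The modulation-and-noise part of the transmission chain described above can be sketched with a simple Monte Carlo simulation: Gray-coded 4-QAM (QPSK) symbols sent over an AWGN channel with hard-decision demodulation. Channel coding (FEC) is deliberately omitted, and the function name and parameters are illustrative, not from the paper:

```python
import numpy as np

def qam4_awgn_ber(snr_db, n_bits=200_000, seed=0):
    """Bit error rate of Gray-coded 4-QAM (QPSK) over an AWGN channel.
    snr_db is the symbol SNR Es/N0 in dB (2 bits per symbol)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    # Map bit pairs to I/Q components: 0 -> +1, 1 -> -1,
    # scaled for unit average symbol energy.
    sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
    snr = 10 ** (snr_db / 10)
    sigma = np.sqrt(1 / (2 * snr))  # noise std per real dimension
    rx = sym + sigma * (rng.standard_normal(sym.size)
                        + 1j * rng.standard_normal(sym.size))
    # Hard-decision demodulation: sign of I and Q recovers the bit pair
    est = np.empty(n_bits, dtype=int)
    est[0::2] = (rx.real < 0).astype(int)
    est[1::2] = (rx.imag < 0).astype(int)
    return float(np.mean(est != bits))
```

Sweeping `snr_db` downward reproduces the qualitative behaviour reported above: the bit error rate, and with it the decoded video quality, degrades sharply as channel SNR drops, which is exactly the gap FEC is meant to close.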

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 149
665 A Systematic Review of Antimicrobial Resistance in Fish and Poultry – Health and Environmental Implications for Animal Source Food Production in Egypt, Nigeria, and South Africa

Authors: Ekemini M. Okon, Reuben C. Okocha, Babatunde T. Adesina, Judith O. Ehigie, Babatunde M. Falana, Boluwape T. Okikiola

Abstract:

Antimicrobial resistance (AMR) has evolved to become a significant threat to global public health and food safety. The development of AMR in animals has been associated with antimicrobial overuse. In recent years, the quantity of antimicrobials used in food animals such as fish and poultry has escalated. It therefore becomes imperative to understand the patterns of AMR in fish and poultry and map out future directions for better surveillance efforts. This study used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to assess the trend, patterns, and spatial distribution of AMR research in Egypt, Nigeria, and South Africa. A literature search was conducted through the Scopus and Web of Science databases, in which published studies on AMR between 1989 and 2021 were assessed. A total of 172 articles were relevant to this study. The results showed progressively increasing attention to AMR studies in fish and poultry from 2018 to 2021 across the selected countries. The period between 2018 (23 studies) and 2021 (25 studies) showed a significant increase in AMR publications, with a peak in 2019 (28 studies). Egypt was the leading exponent of AMR research (43%, n=74), followed by Nigeria (40%, n=69) and then South Africa (17%, n=29). AMR studies in fish received relatively little attention across countries; the majority of the AMR studies were on poultry in Egypt (82%, n=61), Nigeria (87%, n=60), and South Africa (83%, n=24). Further, most of the studies were on Escherichia and Salmonella species. The antimicrobials most frequently researched were the ampicillin, erythromycin, tetracycline, trimethoprim, chloramphenicol, and sulfamethoxazole groups. Multiple drug resistance was prevalent, as demonstrated by the antimicrobial resistance patterns. 
In poultry, Escherichia coli isolates were resistant to cefotaxime, streptomycin, chloramphenicol, enrofloxacin, gentamycin, ciprofloxacin, oxytetracycline, kanamycin, nalidixic acid, tetracycline, trimethoprim/sulphamethoxazole, erythromycin, and ampicillin. Salmonella enterica serovars were resistant to tetracycline, trimethoprim/sulphamethoxazole, cefotaxime, and ampicillin. Staphylococcus aureus showed high-level resistance to streptomycin, kanamycin, erythromycin, cefoxitin, trimethoprim, vancomycin, ampicillin, and tetracycline. Campylobacter isolates were resistant to ceftriaxone, erythromycin, ciprofloxacin, tetracycline, and nalidixic acid to varying degrees. In fish, Enterococcus isolates showed resistance to penicillin, ampicillin, chloramphenicol, vancomycin, and tetracycline but were sensitive to ciprofloxacin, erythromycin, and rifampicin. Isolated strains of Vibrio species showed sensitivity to florfenicol and ciprofloxacin but resistance to trimethoprim/sulphamethoxazole and erythromycin. Isolates of Aeromonas and Pseudomonas species exhibited resistance to ampicillin and amoxicillin. Specifically, Aeromonas hydrophila isolates showed sensitivity to cephradine, doxycycline, erythromycin, and florfenicol; however, resistance was also exhibited against Augmentin and tetracycline. These findings constitute public and environmental health threats and suggest the need to promote and advance AMR research in other countries, particularly those in the global hotspot for antimicrobial use.

Keywords: antibiotics, antimicrobial resistance, bacteria, environment, public health

Procedia PDF Downloads 200
664 Pharmacokinetics and Safety of Pacritinib in Patients with Hepatic Impairment and Healthy Volunteers

Authors: Suliman Al-Fayoumi, Sherri Amberg, Huafeng Zhou, Jack W. Singer, James P. Dean

Abstract:

Pacritinib is an oral kinase inhibitor with specificity for JAK2, FLT3, IRAK1, and CSF1R. In clinical studies, pacritinib was well tolerated with clinical activity in patients with myelofibrosis. The most frequent adverse events (AEs) observed with pacritinib are gastrointestinal (diarrhea, nausea, and vomiting; mostly grade 1-2 in severity) and typically resolve within 2 weeks. A human ADME mass balance study demonstrated that pacritinib is predominantly cleared via hepatic metabolism and biliary excretion (>85% of administered dose). The major hepatic metabolite identified, M1, is not thought to materially contribute to the pharmacological activity of pacritinib. Hepatic diseases are known to impair hepatic blood flow, drug-metabolizing enzymes, and biliary transport systems and may affect drug absorption, disposition, efficacy, and toxicity. This phase 1 study evaluated the pharmacokinetics (PK) and safety of pacritinib and the M1 metabolite in study subjects with mild, moderate, or severe hepatic impairment (HI) and matched healthy subjects with normal liver function to determine if pacritinib dosage adjustments are necessary for patients with varying degrees of hepatic insufficiency. Study participants (aged 18-85 y) were enrolled into 4 groups based on their degree of HI as defined by Child-Pugh Clinical Assessment Score: mild (n=8), moderate (n=8), severe (n=4), and healthy volunteers (n=8) matched for age, BMI, and sex. Individuals with concomitant renal dysfunction or progressive liver disease were excluded. A single 400 mg dose of pacritinib was administered to all participants. Blood samples were obtained for PK evaluation predose and at multiple time points postdose through 168 h. 
Key PK parameters evaluated included maximum plasma concentration (Cmax), time to Cmax (Tmax), area under the plasma concentration-time curve (AUC) from hour zero to the last measurable concentration (AUC0-t), AUC extrapolated to infinity (AUC0-∞), and apparent terminal elimination half-life (t1/2). Following treatment, pacritinib was quantifiable in all study participants from 1 h through 168 h postdose. Systemic pacritinib exposure was similar between healthy volunteers and individuals with mild HI. However, there was a significant difference between those with moderate and severe HI and healthy volunteers with respect to peak concentration (Cmax) and plasma exposure (AUC0-t, AUC0-∞). Mean Cmax decreased by 47% and 57%, respectively, in participants with moderate and severe HI vs. matched healthy volunteers. Similarly, mean AUC0-t decreased by 36% and 45%, and mean AUC0-∞ decreased by 46% and 48%, respectively, in individuals with moderate and severe HI vs. healthy volunteers. Mean t1/2 ranged from 51.5 to 74.9 h across all groups. The variability in exposure ranged from 17.8% to 51.8% across all groups. Systemic exposure to M1 was also significantly decreased in study participants with moderate or severe HI vs. healthy participants and individuals with mild HI. These changes were not significantly dissimilar from the inter-patient variability in these parameters observed in healthy volunteers. All AEs were grade 1-2 in severity. Diarrhea and headache were the only AEs reported in >1 participant (n=4 each). Based on these observations, it is unlikely that dosage adjustments would be warranted in patients with mild, moderate, or severe HI treated with pacritinib.
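The parameters listed (Cmax, Tmax, AUC0-t, t1/2) are computed from each participant's concentration-time profile by standard noncompartmental analysis. A minimal sketch of that calculation, using an invented profile rather than study data:

```python
import numpy as np

def nca_params(t, c):
    """Basic noncompartmental PK parameters from a concentration-time
    profile: Cmax, Tmax, AUC0-t (linear trapezoidal rule), and terminal
    t1/2 from a log-linear fit over the last three sampling points."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    cmax, tmax = c.max(), t[c.argmax()]
    # linear trapezoidal AUC from first to last measurable concentration
    auc = float(np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2)
    # terminal elimination rate constant (negated slope of ln C vs t)
    lam = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
    return float(cmax), float(tmax), auc, float(np.log(2) / lam)

# Invented profile: sampling times (h) vs. plasma concentration (ng/mL)
t = [0.5, 1, 2, 4, 8, 24, 48, 72]
c = [40, 95, 160, 140, 110, 60, 30, 15]
cmax, tmax, auc, t_half = nca_params(t, c)
```

For this made-up profile the concentration halves every 24 h in the terminal phase, so the fitted t1/2 comes out at 24 h; AUC0-∞ would add the extrapolated tail `c_last / lam`, omitted here for brevity.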

Keywords: pacritinib, myelofibrosis, hepatic impairment, pharmacokinetics

Procedia PDF Downloads 299
663 Primary and Secondary Big Bangs Theory of Creation of Universe

Authors: Shyam Sunder Gupta

Abstract:

The current theory for the creation of the universe, the Big Bang theory, is widely accepted but leaves some questions unanswered. It does not explain the origin of the singularity or what causes the Big Bang. The Big Bang theory also does not explain why there is such a huge amount of dark energy and dark matter in our universe. There is also the question of whether there is one universe or multiple universes, which needs to be answered. This research addresses these questions using the Bhagvat Puran and other Vedic scriptures as the basis. There is a Unique Pure Energy Field that is eternal, infinite, and the finest of all, and that never transforms when in its original form. The carrier particles of the Unique Pure Energy are Param-anus, the Fundamental Energy Particles. Param-anus and combinations of these particles create bigger particles, from which the Universe gets created. For creation to initiate, the Unique Pure Energy is represented in three phases: positive phase energy, neutral phase eternal time energy, and negative phase energy. The positive phase energy further expands into three forms of creative energies (CE1, CE2, and CE3). From the CE1 energy, three energy modes, the mode of activation, the mode of action, and the mode of darkness, were created. From these three modes, 16 Principles, the subtlest forms of energies, namely Pradhan, Mahat-tattva, Time, Ego, Intellect, Mind, Sound, Space, Touch, Air, Form, Fire, Taste, Water, Smell, and Earth, get created. In the Mahat-tattva, dominant in the Mode of Darkness, the CE1 energy creates innumerable primary singularities from seven principles: Pradhan, Mahat-tattva, Ego, Sky, Air, Fire, and Water. The CE1 energy gets divided as CE2 and enters each singularity along with the three modes and time, the primary Big Bang takes place, and innumerable Invisible Universes get created. Each Universe has seven coverings of the 7 principles, and each layer is 10 times thicker than the previous layer. 
By the energy CE2, the space in the Invisible Universe under the coverings is divided into two halves. In the lower half, the process of evolution gets initiated and the seeds of 24 elements get created, out of which 5 fundamental elements, the building blocks of matter, Sky, Air, Fire, Water and Earth, create the seeds of stars, planets, galaxies and all other matter. Since the 5 fundamental elements get created out of the mode of darkness, this explains why there is so much dark energy and dark matter in our Universe. This process of creation in the lower half of the Invisible Universe continues for 2.16 billion years. Further, in the lower part of the energy field, exactly at the centre of the Invisible Universe, a Secondary Singularity is created, through which, by the force of the Mode of Action, the secondary Big Bang takes place and the Visible Universe gets created in the shape of a lotus flower, expanding into the upper part. Visible matter starts appearing after a gap of 360,000 years. Within the Visible Universe, a small part gets created, known as the Phenomenal Material World, which is our Solar System, with the sun at its centre. The diameter of the solar planetary system is 6.4 billion km.

Keywords: invisible universe, phenomenal material world, primary Big Bang, secondary Big Bang, singularities, visible universe

Procedia PDF Downloads 90
662 Case-Based Reasoning Application to Predict Geological Features at Site C Dam Construction Project

Authors: Shahnam Behnam Malekzadeh, Ian Kerr, Tyson Kaempffer, Teague Harper, Andrew Watson

Abstract:

The Site C Hydroelectric dam is currently being constructed in north-eastern British Columbia on sub-horizontal sedimentary strata that dip approximately 15 meters from one bank of the Peace River to the other. More than 615 pressure sensors (vibrating wire piezometers) have been installed on bedding planes (BPs) since construction began, with over 80 more planned before project completion. These pressure measurements are essential for monitoring the stability of the rock foundation during and after construction and for dam safety purposes. BPs are identified by their clay gouge infilling, which varies in thickness from less than 1 mm to 20 mm and can be challenging to identify, as the core drilling process often disturbs or washes away the gouge material. Without depth predictions from nearby boreholes, stratigraphic markers, and downhole geophysical data, it is difficult to confidently identify BP targets for the sensors. In this paper, a Case-Based Reasoning (CBR) method was used to develop an empirical model, called the Bedding Plane Elevation Prediction (BPEP) model, to help geologists and geotechnical engineers predict geological features and bedding planes at new locations quickly and accurately. To develop the CBR model, a database was built from 64 pressure sensors already installed on key bedding planes BP25, BP28, and BP31 on the Right Bank, including bedding plane elevations and coordinates. Thirteen (20%) of the most recent cases were selected to validate and evaluate the accuracy of the developed model, with similarity defined by the distance between previous cases and recent cases used to predict the depth of significant BPs. The average difference between actual and predicted BP elevations for the above BPs was ±55 cm; 69% of predicted elevations were within ±79 cm of actual BP elevations, and 100% of predicted elevations for new cases were within the ±99 cm range. 
Eventually, the actual results will be used to develop the database and improve BPEP to perform as a learning machine to predict more accurate BP elevations for future sensor installations.
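The distance-based retrieval at the core of such a CBR model can be sketched as an inverse-distance-weighted nearest-neighbour prediction: retrieve the k most similar installed sensors on the same bedding plane and combine their elevations. The sensor coordinates and elevations below are hypothetical, not Site C data:

```python
import math

def predict_bp_elevation(cases, x, y, k=3):
    """Case-based prediction of a bedding-plane elevation at (x, y):
    retrieve the k nearest cases (installed sensors on the same BP) and
    blend their elevations with inverse-distance weights."""
    ranked = sorted(cases, key=lambda c: math.hypot(c[0] - x, c[1] - y))
    nearest = ranked[:k]
    # small epsilon avoids division by zero when (x, y) matches a case
    weights = [1.0 / (math.hypot(cx - x, cy - y) + 1e-9)
               for cx, cy, _ in nearest]
    return sum(w * z for w, (_, _, z) in zip(weights, nearest)) / sum(weights)

# Hypothetical cases on one bedding plane: (easting, northing, elevation m)
cases = [(100, 200, 412.3), (140, 205, 411.8),
         (180, 260, 410.9), (60, 150, 413.0)]
z_hat = predict_bp_elevation(cases, 120, 210)
```

As each new sensor is installed, its surveyed elevation would simply be appended to `cases`, which is the "learning machine" behaviour described above: the case base grows and predictions tighten.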

Keywords: case-based reasoning, geological feature, geology, piezometer, pressure sensor, core logging, dam construction

Procedia PDF Downloads 80
661 Improving the Constructability of Highway Design Plans

Authors: R. Edward Minchin Jr.

Abstract:

The U.S. Federal Highway Administration (FHWA) Every Day Counts (EDC) program has resulted in state DOTs putting ever more emphasis on speeding the delivery of highway and bridge construction projects for use by the driving public. This has resulted in increased use of alternative construction delivery systems such as design-build (D-B), construction manager at-risk (CMR) or construction manager/general contractor (CM/GC), and the addition of alternative technical concepts (ATCs) to traditional design-bid-build (DBB) contracts. ATCs have exhibited great potential for delivering substantial benefits such as cost savings, increased constructability, and quicker project delivery. Previous research has found that knowledge of project constructability was lacking in state Department of Transportation (DOT) planning, programming, and environmental staffs. Many agencies have therefore relied on a set of ‘acceptable’ design solutions over their years of working with local resource agencies. The result is that the permitting process of several government agencies has become increasingly restrictive, so that DOTs and their industry partners lose the ability to innovate after a permit is approved. The intent of this paper is to report on the research team’s progress in an ongoing effort to furnish the United States government with a uniform set of guidelines for the application of constructability reviews during all phases of project development and delivery. The research uses surveys and interviews to determine which states have implemented formal programs to ensure that the constructor is furnished with a set of contract documents that affords the constructor the best possible opportunity to successfully construct the project to the highest quality standards, within the contract duration, and without exceeding the construction budget. 
Once these states are identified, workshops are held all over the nation, resulting in the team learning the best current practices and giving the team the ability to recommend new practices that will improve the process. The plan is for the FHWA to encourage or require state DOTs to use these practices on all federally funded highway and bridge construction projects. The project deliverable is a Guidebook for FHWA to use in disseminating the recommended practices to the states.

Keywords: alternative construction delivery, alternative technical concepts, constructability, construction design plans

Procedia PDF Downloads 216
660 Exploring Digital Media’s Impact on Sports Sponsorship: A Global Perspective

Authors: Sylvia Chan-Olmsted, Lisa-Charlotte Wolter

Abstract:

With the continuous proliferation of media platforms, there have been tremendous changes in media consumption behaviors. From the perspective of sports sponsorship, while there is now a multitude of platforms on which to create brand associations, the changing media landscape and shift of message control also mean that sports sponsors will have to take into account the nature of, and consumer responses toward, these emerging digital media to devise effective marketing strategies. Utilizing a personal interview methodology, this study is qualitative and exploratory in nature. A total of 18 experts from European and American academia, the sports marketing industry, and sports leagues/teams were interviewed to address three main research questions: 1) What are the major changes in digital technologies that are relevant to sports sponsorship; 2) How have digital media influenced the channels and platforms of sports sponsorship; and 3) How have these technologies affected the goals, strategies, and measurement of sports sponsorship. The study found that sports sponsorship has moved from consumer engagement, engagement measurement, and consequences of engagement on brand behaviors to one-on-one micro-targeting, engagement by context, time, and space, and activation and leveraging based on tracking and databases. From the perspective of platforms and channels, the use of mobile devices is prominent during sports content consumption. Increasing multiscreen media consumption means that sports sponsors need to optimize their investment decisions in leagues, teams, or game-related content sources, as they need to go where the fans are most engaged. The study observed an imbalanced strategic leveraging of technology and digital infrastructure.
While sports leagues have placed less emphasis on brand value management via technology, sports sponsors have been much more active in utilizing technologies such as mobile/LBS tools, big data/user information, real-time and programmatic marketing, and social media activation. Regardless of the new media/platforms, the study found that integration and contextualization are the two essential means of improving sports sponsorship effectiveness through technology: how sponsors effectively integrate social media/mobile/second screen into their existing legacy media sponsorship plan so that technology works for the experience/message instead of distracting fans. Additionally, technological advancement and the attention economy amplify the importance of consumer data gathering, but sports consumer data does not equate to loyalty or engagement. This study also affirms the benefit of digital media in offering viral and pre-event activations through storytelling well before the actual event, which is critical for leveraging brand association before and after it. That is, sponsors now have multiple opportunities and platforms to tell stories about their brands over a longer time period. In summary, digital media facilitate fan experience, access to the brand message, multiplatform/channel presentations, storytelling, and content sharing. Nevertheless, rather than focusing on technology and media, today’s sponsors need to define what they want to focus on in terms of content themes that connect with their brands and then identify the channels/platforms. The big challenge for sponsors is to play to each venue’s or medium’s specificity and its fit with the target audience, and not uniformly deliver the same message in the same format on different platforms/channels.

Keywords: digital media, mobile media, social media, technology, sports sponsorship

Procedia PDF Downloads 294
659 Company's Orientation and Human Resource Management Evolution in Technological Startup Companies

Authors: Yael Livneh, Shay Tzafrir, Ilan Meshoulam

Abstract:

Technological startup companies have been recognized as bearing tremendous potential for business and economic success. However, many entrepreneurs who produce promising innovative ideas fail to implement them as successful businesses. A key argument for such failure is the entrepreneurs' lack of competence in adapting the relevant level of formality of human resource management (HRM). The purpose of the present research was to examine multiple antecedents and consequences of HRM formality in growing startup companies. A review of the research literature identified two central components of HRM formality: HR control and professionalism. The effect of three contextual predictors was examined. The first was an intra-organizational factor: the development level of the organization. We drew on the differentiation between knowledge exploration and knowledge exploitation. At a given time, the organization chooses to focus on a specific mix of these orientations, a choice which requires an appropriate level of HRM formality in order to efficiently overcome the challenges. It was hypothesized that the mix of knowledge exploration and knowledge exploitation orientations would predict HRM formality. The second predictor was the personal characteristics of the organization's leader. According to the idea of a blueprint effect of CEOs on HRM, it was hypothesized that the CEO's cognitive style would predict HRM formality. The third contextual predictor was an external organizational factor: the level of investor involvement. Drawing on agency theory and transaction cost economics, it was hypothesized that the level of investor involvement in general management and HRM would be positively related to HRM formality. The effect of formality on trust was examined directly and indirectly through the mediating role of procedural justice. The research method included a time-lagged field study.
In the first study, data were obtained using three questionnaires, each directed to a different source: the CEO, the HR position-holder, and employees. 43 companies participated in this study. The second study was conducted approximately a year later; data were collected again using the three questionnaires from the same sample, and 41 companies participated. Both samples consisted of technological startup companies, and the two studies together included 884 respondents. The results indicated consistency between the two studies. HRM formality was predicted by the intra-organizational factor as well as the personal characteristics of the CEO, but not at all by the external organizational context. Specifically, the organizational orientation was the greatest contributor to both components of HRM formality. The cognitive style predicted formality to a lesser extent, and the investor's involvement was found to have no predictive effect on HRM formality. The results also indicated that HRM formality contributed positively to trust, mainly via the mediation of procedural justice. This study contributed a new concept for technological startup company development based on a mixture of organizational orientations. Practical implications indicate that the level of HRM formality should be matched to that of the company's development. This match should be challenged and adjusted periodically by referring to the organizational orientation, relevant HR practices, and HR function characteristics. A relevant match could further enhance trust and business success.

Keywords: control, formality, human resource management, organizational development, professionalism, technological startup company

Procedia PDF Downloads 264
658 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments.
It acknowledges that big data processing requires distributed and parallel computing capabilities that span cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
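The workload-placement trade-off discussed above (run the computation where the data lives, or pay egress to move the data to a cheaper cloud) can be sketched as a minimal cost model; the cloud names and per-TB rates below are hypothetical, not drawn from the paper:

```python
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    compute_cost: float  # $/TB processed (hypothetical rate)
    egress_cost: float   # $/TB transferred out (hypothetical rate)

def place_workload(data_location: str, data_tb: float, clouds: list[Cloud]) -> str:
    """Choose the cloud minimising compute cost plus any cross-cloud transfer.

    Running the job where the data lives avoids egress charges; moving the
    data to a cheaper cloud pays egress from the source cloud once.
    """
    source = next(c for c in clouds if c.name == data_location)

    def total_cost(c: Cloud) -> float:
        transfer = 0.0 if c.name == data_location else source.egress_cost * data_tb
        return c.compute_cost * data_tb + transfer

    return min(clouds, key=total_cost).name

clouds = [
    Cloud("cloud_a", compute_cost=50.0, egress_cost=90.0),
    Cloud("cloud_b", compute_cost=30.0, egress_cost=80.0),
]
# Staying on cloud_a costs 50/TB; moving to cloud_b costs 30 + 90 egress = 120/TB
print(place_workload("cloud_a", 1.0, clouds))
```

A production placement engine would fold in latency, bandwidth, and compliance constraints alongside price, but the same compare-total-cost structure applies.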

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 68
657 Leveraging Community Partnerships for Social Impact

Authors: T. Moody, E. Mitchell, T. Dang, A. Barry, T. Proshan, S. Andrisse, V. Odero-Marah

Abstract:

Women’s prison and reentry programs focus primarily on reducing recidivism but neglect how an individual’s intersecting identities influence their risk of violence and the ways that histories of gender-based violence (GBV) must be addressed for these women to recover from their traumas. The Light To Life (LTL) and From Prison Cells to Ph.D. (P2P) Womxn’s Cohort program recognizes this need, providing national gender-responsive programming (GRP) and trauma-informed programming to justice-impacted survivors through digital resources, leadership opportunities, educational workshops, and healing justice approaches for positive health outcomes. Through the support of a community-university partnership (CUP), a comparative evaluation study is being conducted among intimate-partner violence (IPV) survivors with histories of incarceration who have or have not participated in the cohort. The objectives of the partnership are to provide mutually beneficial training and consultation for evaluating GRP through a rigorously tested research methodology. This collaboration applies a rigorous methodology of semi-structured interviews with an intervention group and a control group to evaluate the impact of LTL’s programming in the P2P Womxn’s Cohort. The CUP is essential to achieving the expected results of the project. It will measure primary outcomes, including participants' level of engagement and satisfaction with programming, reduction in attitudes that accept violence in relationships, and increases in the interpersonal and intrapersonal skills that lead to healthy relationships. This community-based approach will provide opportunities to evaluate the effectiveness of the program. The results addressed in the hypothesis will provide lessons for improving this program, scaling it up, and applying it to other similarly affected populations.
The partnership experience and anticipated outcomes contribute to knowledge in women’s health and criminal justice by raising public awareness of the importance of developing new partnerships and by establishing a framework, available to academic institutions, for leveraging CUPs for social impact.

Keywords: community-university partnership, gender-responsive programming, incarceration, intimate-partner violence, POC, women

Procedia PDF Downloads 65
656 Enhancing Academic and Social Skills of Elementary School Students with Autism Spectrum Disorder by an Intensive and Comprehensive Teaching Program

Authors: Piyawan Srisuruk, Janya Boonmeeprasert, Romwarin Gamlunglert, Benjamaporn Choikhruea, Ornjira Jaraepram, Jarin Boonsuchat, Sakdadech Singkibud, Kusalaporn Chaiudomsom, Chanatiporn Chonprai, Pornchanaka Tana, Suchat Paholpak

Abstract:

Objective: To develop an intensive and comprehensive program (ICP) for the inclusive class teacher (ICPICT) to teach elementary students (ES) with ASD in order to enhance the students’ academic and social skills (ASS), and to study the effect of the teaching program. Methods: The purposive sample included 15 Khon Kaen inclusive class teachers and their 15 elementary students. All the students were diagnosed by a child and adolescent psychiatrist as having DSM-5 level 1 ASD. The study tools included: 1) an ICP to teach the teachers about ASD and a teaching method to enhance the academic and social skills of ES with ASD, together with an assessment tool to assess the teachers’ knowledge before and after the ICP; 2) the ICPICT itself, used to teach ES with ASD to enhance their ASS. The program consisted of 10 sessions of 3 hours each and had its own teaching structure; teaching media included pictures, storytelling, songs, and plays. The authors taught and demonstrated the ICPICT to the participating teachers until they could display the correct teaching method, after which the teachers taught the ICPICT at school by themselves; and 3) an assessment tool to assess the students’ ASS before and after the completion of the study. The ICP for the teachers, the ICPICT, and the relevant assessment tools were developed by the authors and adjusted until three experts in curricula for teaching children with ASD agreed by consensus that they were appropriate for the research. The data were analyzed by descriptive and analytic statistics via SPSS version 26. Results: After the briefing, the teachers increased their mean score of knowledge of ASD and of how to teach ASS to ES with ASD, though not with statistical significance (p = 0.13).
Teaching ES with ASD with the ICPICT increased the mean scores of the students’ skills in learning and expressing social emotions, relationships with friends, transitioning, and academic function by 3.33, 2.27, 2.94, and 3.00 points respectively (full scores were 18, 12, 15, and 12; paired t-test p = 0.007, 0.013, 0.028, and 0.003). Conclusion: A program teaching academic and social skills simultaneously in an intensive and comprehensive structure could enhance both the academic and social skills of elementary students with ASD.
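The paired t-tests reported above compare each student's pre- and post-program scores. A minimal sketch of that computation, using hypothetical scores rather than the study's data, might look like:

```python
from statistics import mean, stdev

def paired_t(pre: list[float], post: list[float]) -> tuple[float, int]:
    """Paired t statistic and degrees of freedom for pre/post scores."""
    diffs = [after - before for before, after in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / n ** 0.5)
    return t, n - 1

# Hypothetical pre/post academic-function scores (out of 12) for 15 students;
# the study's actual data are not reproduced here.
pre  = [5, 6, 4, 7, 5, 6, 3, 8, 5, 6, 4, 7, 5, 6, 5]
post = [8, 9, 7, 9, 8, 9, 6, 11, 8, 9, 7, 10, 8, 9, 8]
t, df = paired_t(pre, post)
print(round(t, 1), df)
```

The p-value would then be read from the t distribution with n - 1 degrees of freedom, as statistical software such as SPSS does automatically.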

Keywords: academic and social skills, students with autism, intensive and comprehensive, teaching program

Procedia PDF Downloads 64
655 Sustainable Urban Growth of Neighborhoods: A Case Study of Alryad-Khartoum

Authors: Zuhal Eltayeb Awad

Abstract:

Alryad neighborhood is located in Khartoum town, the administrative center of the capital of Sudan. The neighborhood is one of the high-income residential areas, originally of low-density villa-type development. It was planned and developed in 1972 with large plots (600-875 m²), wide crossing roads, and a balanced environment. Recently the area has transformed into a more compact urban form of high-density, mixed-use integrated development with more intensive use of land, including multi-storied apartments. The most important socio-economic process in the neighborhood has been the commercialization and densification of the area in connection with the displacement of the residential function. This transformation has affected the quality of the neighborhood and the inter-related features of the built environment. A case study approach was chosen to gather the necessary qualitative and quantitative data. A detailed survey of the existing development pattern was carried out over the whole area of Alryad. Data on the built and social environment of the neighborhood were collected through observations, interviews, and secondary data sources. The paper reflects a theoretical and empirical interest in the particular characteristics of the compact neighborhood, with high density and mixed land uses, and their effect on the social wellbeing of the residents, all in the context of sustainable development. The research problem focuses on the challenges of this transformation, which has created multiple urban problems, e.g., strain on essential services (water supply, electricity, and drainage), congestion of streets, and demand for parking. The main objective of the study is to analyze the transformation of this area from residential use to commercial and administrative use. The study analyzed the current situation of the neighborhood against the five principles of sustainable neighborhood planning prepared by UN-Habitat.
The study found that the neighborhood has experienced the changes that occur to inner-city residential areas, and that the process of change was triggered by external forces stemming from the declining economic situation of the whole country. It is evident that non-residential uses have taken place in an uncontrolled, unregulated, and haphazard way, damaging the residential environment and leaving deficiencies in infrastructure. The quality of urban life, and in particular the level of privacy, has been reduced; the neighborhood has changed gradually into a central business district that provides services to the whole of Khartoum town. The change of house type may be attributed to a demand-led housing market and an absence of policy. The results showed that Alryad is not fully sustainable and self-contained, although its street network characteristics and mixed land-use development are compatible with the principles of sustainability. The area of streets represents 27.4% of the total area of the neighborhood. Residential density is 4,620 people/km², lower than the recommended level, and the limit on block land-use specialization is exceeded, with more than 10% of the blocks given over to a single function. Most inhabitants have a high income, so there is no social mix in the neighborhood. The study recommended revision of the current zoning regulations in order to control and regulate undesirable development in the neighborhood and to provide new solutions that promote the neighborhood's sustainable development.
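The comparison against UN-Habitat's five principles can be sketched as a simple threshold check. The thresholds below are the values commonly cited from that guidance and should be treated as assumptions, and the block-specialisation figure is illustrative, since the abstract reports only that it exceeds 10%:

```python
# Indicative thresholds drawn from UN-Habitat's five principles of sustainable
# neighbourhood planning (commonly cited values; treat as assumptions here).
THRESHOLDS = {
    "street_share_pct": 30.0,        # at least 30% of land allocated to streets
    "density_per_km2": 15000.0,      # at least 15,000 people per km2
    "specialised_blocks_pct": 10.0,  # under 10% single-function blocks
}

def check_neighbourhood(street_share_pct: float,
                        density_per_km2: float,
                        specialised_blocks_pct: float) -> dict[str, bool]:
    """Compare a neighbourhood's metrics against the assumed thresholds."""
    return {
        "adequate_streets": street_share_pct >= THRESHOLDS["street_share_pct"],
        "high_density": density_per_km2 >= THRESHOLDS["density_per_km2"],
        "limited_specialisation":
            specialised_blocks_pct < THRESHOLDS["specialised_blocks_pct"],
    }

# Alryad's reported figures: 27.4% streets, 4,620 people/km2; the block
# specialisation value (12%) is hypothetical, as the abstract reports only
# that it exceeds 10%.
print(check_neighbourhood(27.4, 4620.0, 12.0))
```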

Keywords: compact neighborhood, land uses, mixed use, residential area, transformation

Procedia PDF Downloads 129
654 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis

Authors: Jui-Teng Liao

Abstract:

The selected-response (SR) format has been commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized and classroom assessment) because of its strengths in content validity, construct validity, and scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test (e.g., the more test items it has), the higher the reliability and validity it is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and different cognitive processes. Therefore, it is important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which such a question format can reliably measure learners’ L2 reading comprehension. The present study therefore adopted generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study aimed to investigate the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework for estimating the score reliability of tests and validating their results. Data were collected from 108 English as a second language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format.
The computer program mGENOVA was utilized to analyze the data using multivariate designs (i.e., scenarios). Based on the results of G theory analyses, the findings indicated that the number of test items had a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions required varying numbers of test items for reliable assessment of learners’ L2 reading proficiency. Further implications for teaching practice and classroom-based assessments are discussed.
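A one-facet (persons x items) G-study of the kind described can be sketched as follows; this is an illustrative reconstruction with simulated scores, not the study's mGENOVA multivariate analysis. The decision-study step shows how the generalizability coefficient grows with the projected number of items:

```python
import numpy as np

def g_coefficient(scores: np.ndarray, n_items_projected: int) -> float:
    """Relative generalizability coefficient for a persons x items design.

    scores: (n_persons, n_items) matrix of item scores.
    n_items_projected: test length assumed in the decision (D) study.
    """
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)
    ss_person = n_i * ((person_means - grand) ** 2).sum()
    ss_item = n_p * ((item_means - grand) ** 2).sum()
    ss_resid = ((scores - grand) ** 2).sum() - ss_person - ss_item
    ms_person = ss_person / (n_p - 1)
    ms_resid = ss_resid / ((n_p - 1) * (n_i - 1))
    var_person = max((ms_person - ms_resid) / n_i, 0.0)  # universe-score variance
    return var_person / (var_person + ms_resid / n_items_projected)

# Simulated scores for 108 examinees on a 12-item test (illustration only)
rng = np.random.default_rng(0)
ability = rng.normal(size=(108, 1))
scores = ability + rng.normal(scale=1.0, size=(108, 12))
for n in (6, 12, 24):
    print(n, round(g_coefficient(scores, n), 3))
```

Because only the error term is divided by the projected test length, lengthening the test raises the coefficient with diminishing returns, which is why a study of the appropriate item count per question type is informative.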

Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format

Procedia PDF Downloads 88
653 Increasing Adherence to Preventative Care Bundles for Healthcare-Associated Infections: The Impact of Nurse Education

Authors: Lauren G. Coggins

Abstract:

Catheter-associated urinary tract infections (CAUTI) and central line-associated bloodstream infections (CLABSI) are among the most common healthcare-associated infections (HAI), contributing to prolonged lengths of stay, greater costs of patient care, and increased patient mortality. Evidence-based preventative care bundles exist to establish consistent, safe patient-care practices throughout an entire organization, helping to ensure the collective application of care strategies that aim to improve patient outcomes and minimize complications. The cardiac intensive care unit at a nationally ranked teaching and research hospital in the United States exceeded its annual CAUTI and CLABSI targets in fiscal year 2019, prompting an examination of the unit’s infection prevention efforts, which included preventative care bundles for both HAIs. Adherence to the CAUTI and CLABSI preventative care bundles was evaluated through frequent audits conducted over three months, using standards and resources from The Joint Commission, a globally recognized leader in healthcare quality improvement and patient care safety. The bundle elements with the lowest scores were identified as the most commonly missed elements. Three elements from each bundle, six elements in total, served as key content areas for the educational interventions targeted at bedside nurses. The CAUTI elements included appropriate urinary catheter orders, appropriate continuation criteria, and urinary catheter care. The CLABSI elements included primary tubing compliance, needleless connector compliance, and dressing change compliance. An integrated, multi-platform education campaign featured content on each CAUTI and CLABSI preventative care bundle in its entirety, with additional reinforcement focused on the lowest-scoring elements.
One-on-one educational materials included an informational pamphlet, badge buddy, a presentation to reinforce nursing care standards, and real-time application through case studies and electronic health record demonstrations. A digital hub was developed on the hospital’s Intranet for quick access to unit resources, and a bulletin board helped track the number of days since the last CAUTI and CLABSI incident. Audits continued to be conducted throughout the education campaign, and staff were given real-time feedback to address any gaps in adherence. Nearly every nurse in the cardiac intensive care unit received all educational materials, and adherence to all six key bundle elements increased after the implementation of educational interventions. Recommendations from this implementation include providing consistent, comprehensive education across multiple teaching tools and regular audits to track adherence. The multi-platform education campaign brought focus to the evidence-based CAUTI and CLABSI bundles, which in turn will help to reduce CAUTI and CLABSI rates in clinical practice.
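The audit workflow described above (score adherence to each bundle element across audit records, then target the lowest-scoring elements with education) can be sketched as follows; the element names and audit records are hypothetical:

```python
from collections import defaultdict

def adherence_by_element(audits: list[dict[str, bool]]) -> dict[str, float]:
    """Percent adherence per bundle element across a set of audit records."""
    passed: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for audit in audits:
        for element, compliant in audit.items():
            total[element] += 1
            passed[element] += compliant
    return {e: 100.0 * passed[e] / total[e] for e in total}

def lowest_scoring(rates: dict[str, float], n: int = 3) -> list[str]:
    """Elements to prioritize in education, lowest adherence first."""
    return sorted(rates, key=rates.get)[:n]

# Hypothetical CAUTI bundle audit records (element names are illustrative)
audits = [
    {"catheter order": True,  "continuation criteria": False, "catheter care": True},
    {"catheter order": True,  "continuation criteria": False, "catheter care": False},
    {"catheter order": False, "continuation criteria": True,  "catheter care": True},
]
rates = adherence_by_element(audits)
print(lowest_scoring(rates, 1))
```

Rerunning the same tally on audits collected during the education campaign gives the before/after adherence comparison used to judge the intervention.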

Keywords: education, healthcare-associated infections, infection, nursing, prevention

Procedia PDF Downloads 116