Search results for: influence function
Paper Count: 12012

1872 Investigation of Existing Guidelines for Four-Legged Angular Telecommunication Tower

Authors: Sankara Ganesh Dhoopam, Phaneendra Aduri

Abstract:

Lattice towers are lightweight structures whose design is primarily governed by the effects of wind loading. Ensuring a precise assessment of wind loads on the tower structure, antennas, and associated equipment is vital for the safety and efficiency of tower design. Until recently, no Indian standard was available for the design of telecom towers. Instead, the industry conventionally relied on the general building wind loading standard for calculating loads on tower components and on the transmission line tower design standard for designing the angular members of the towers. Subsequently, the Bureau of Indian Standards (BIS) revised these wind loading and angular member design standards. While transmission line towers designed to the above standard are proven with a full-scale model test, telecom angular towers are designed using the same standard with an overload factor/factor of safety but without full-scale tower model testing. The general construction in steel design code follows a limit state design approach and is applicable to the design of general structures involving angles and tubes, but it is not used for the angle member design of towers. Recently, in response to evolving industry needs, the BIS introduced a new standard titled “Isolated Towers, Masts, and Poles using Structural Steel - Code of Practice” for the design of telecom towers. This study focuses on a 40 m four-legged angular tower to compare loading calculations and member designs between the old and new standards. Additionally, a comparative analysis of the new code provisions against international loading and design standards, with a specific focus on American standards, has been carried out. This paper elaborates on the code-based provisions used for load and member design calculations, including the influence of the area averaging factor "Ka" introduced in the new wind load cases.
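As a hedged illustration of where the area averaging factor enters the wind load calculation, the minimal sketch below follows the IS 875 (Part 3):2015-style chain pz = 0.6·Vz² and pd = Kd·Ka·Kc·pz; all numerical inputs are placeholders chosen for illustration, not values taken from the codes or from the 40 m tower studied here.

```python
# Illustrative sketch only: design wind pressure on a tower panel, showing where
# the area averaging factor "Ka" enters in an IS 875 (Part 3):2015-style calculation.
# All factor values below are placeholders, not code-verified design values.

def design_wind_pressure(Vb, k1, k2, k3, k4, Kd, Ka, Kc):
    """Return design wind pressure pd in N/m^2."""
    Vz = Vb * k1 * k2 * k3 * k4          # design wind speed at height z (m/s)
    pz = 0.6 * Vz ** 2                   # wind pressure (N/m^2)
    return Kd * Ka * Kc * pz             # design pressure with directionality,
                                         # area averaging and combination factors

# Hypothetical example: a tower panel in a 44 m/s basic wind speed zone
pd_without_Ka = design_wind_pressure(44, 1.0, 1.05, 1.0, 1.0, 0.9, 1.0, 0.9)
pd_with_Ka    = design_wind_pressure(44, 1.0, 1.05, 1.0, 1.0, 0.9, 0.8, 0.9)
print(f"pd without Ka: {pd_without_Ka:.1f} N/m^2, with Ka = 0.8: {pd_with_Ka:.1f} N/m^2")
```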

Keywords: telecom, angular tower, PLS tower, GSM antenna, microwave antenna, IS 875(Part-3):2015, IS 802(Part-1/sec-2):2016, IS 800:2007, IS 17740:2022, ANSI/TIA-222G, ANSI/TIA-222H.

Procedia PDF Downloads 71
1871 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study

Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni

Abstract:

Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To define the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were performed in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High scores of internal consistency (Cronbach’s α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found. Interrater agreement in scores on individual items fell between good and excellent levels of agreement. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05). Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia compared to those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for the assessment of functional communication of Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, orient aphasia rehabilitation goal setting towards the activity and participation level, and serve as an outcome measure of everyday communication. Future studies will focus on the measurement of sensitivity to change in PWA with severe non-fluent aphasia.
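For readers unfamiliar with the internal consistency statistic reported above, the following minimal sketch shows how Cronbach's α is computed from a participants-by-items score matrix; the data in the example are hypothetical and not the Scenario Test-GR scores.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (participants x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of participants' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-participant x 4-item example
example = [[3, 4, 3, 4], [1, 1, 2, 1], [4, 4, 4, 3], [2, 3, 2, 2], [3, 3, 4, 4]]
print(round(cronbach_alpha(example), 2))
```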

Keywords: the scenario test GR, functional communication assessment, people with aphasia (PWA), tool validation

Procedia PDF Downloads 123
1870 Two-Dimensional Hardy-Type Inequalities on Time Scales via the Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities are at the core of mathematical analysis and are used in almost all branches of mathematics as well as in various areas of science and engineering. The 1934 monograph Inequalities by Hardy, Littlewood and Pólya was the first significant systematic treatment of the subject; it presents fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated for particular operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved using differential operators. The Hardy inequality has been one of the tools used to study solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is defined as an arbitrary closed subset of the real numbers. Time-scale versions of these inequalities have received a lot of attention and form a major field in both pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale Copson-type inequalities driven by the Steklov operator. They will be applied in the solution of the Cauchy problem for the wave equation. The proofs are carried out by introducing restrictions on the operator in several cases. In addition, the inequalities are obtained using concepts from the time-scale setting such as time-scale calculus, Fubini's theorem, and Hölder's inequality.
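For reference, the classical integral inequality of Hardy, the starting point for the Copson, Steklov and time-scale extensions discussed above, is stated below in its standard one-dimensional form (not the authors' two-dimensional time-scale version):

```latex
\[
\int_0^\infty \left( \frac{1}{x}\int_0^x f(t)\,dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^{p}\,dx,
\qquad p > 1,\; f \ge 0,
\]
with the constant \(\left(\tfrac{p}{p-1}\right)^{p}\) being sharp.
```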

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 71
1869 The Impact of Socio-Cultural and Religious Factors on Omanis Employment in the Hotel Sector

Authors: Masooma Al-Balushi, Tamer Mohamed Atef

Abstract:

The Sultanate of Oman is located on the south-eastern tip of the Arabian Peninsula. It is bordered by the Gulf of Oman and the Arabian Sea and has borders with the United Arab Emirates, Saudi Arabia and Yemen. Arabic is the official language and Islam is the official religion. Islam has a great impact on most Omanis, and Shari’a law is the law of Oman. The tribal structure plays an essential role in the lives of Omanis. Most people in the Gulf States bear a tribal name rather than a family name. Religion, tribe, and family are highly influential in shaping individuals’ values and behaviors, and have a very noticeable influence on a person’s career choices. Tourism development has been given special attention by the Sultanate of Oman’s government in the hope that the industry would assist in creating direct job opportunities as well as boost the economy through the provision of hard currency to improve the balance of payments. This study aims to assess the impact of socio-cultural and religious factors on Omanis' employment in the hotel sector. These factors have serious impacts on Omani employment in the hotel sector. Some employees are concerned about the source of income: because the hotel business involves activities such as serving alcohol and pork, gambling, and accommodating unmarried couples, their source of income could be considered religiously questionable. For females, the designated job uniform and the interaction with males are major concerns. The ability to fulfil family obligations for married Omanis, and marriage prospects for singles, were other concerns raised. Whilst the future prosperity of the hotel industry depends on the quality of its people, the hospitality industry in Oman has failed, for a number of reasons, to project an image that could generate interest amongst Omanis. Furthermore, the characteristics and the very nature of the hotel sector are in direct conflict with Islamic doctrines, which are embedded in Omani life and society.

Keywords: culture, society, hotel, hospitality, Islam, Oman

Procedia PDF Downloads 302
1868 Creating Energy Sustainability in an Enterprise

Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala

Abstract:

As we enter the new era of Artificial Intelligence (AI) and Cloud Computing, we rely heavily on the machine learning and natural language processing capabilities of AI and on energy-efficient hardware and software devices in almost every industry sector. In these industry sectors, much emphasis is placed on developing new and innovative methods for producing and conserving energy and for limiting the depletion of natural resources. The core pillars of sustainability are economic, environmental, and social, which are also informally referred to as the 3 P's (People, Planet and Profits). The 3 P's play a vital role in creating a core sustainability model in the enterprise. Natural resources are continually being depleted, so there is more focus on and growing demand for renewable energy. With this growing demand, there is also a growing concern in many industries about how to reduce carbon emissions and conserve natural resources while adopting sustainability in corporate business models and policies. In this paper, we discuss the driving forces, such as climate change, natural disasters, pandemics, disruptive technologies, corporate policies, scaled business models, and emerging social media and AI platforms, that influence the three main pillars of sustainability (3 P's). Through this paper, we would like to provide an overall perspective on enterprise strategies, with a primary focus on bringing about cultural shifts in adopting energy-efficient operational models. Overall, many industries across the globe are incorporating core sustainability principles such as reducing energy costs, reducing greenhouse gas (GHG) emissions, reducing waste and increasing recycling, adopting advanced monitoring and metering infrastructure, and reducing server footprint and compute resources (shared IT services, cloud computing, and application modernization), with the vision of a sustainable environment.

Keywords: climate change, pandemic, disruptive technology, government policies, business model, machine learning and natural language processing, AI, social media platform, cloud computing, advanced monitoring, metering infrastructure

Procedia PDF Downloads 101
1867 Environmental Conditions Simulation Device for Evaluating Fungal Growth on Wooden Surfaces

Authors: Riccardo Cacciotti, Jiri Frankl, Benjamin Wolf, Michael Machacek

Abstract:

Moisture fluctuations govern the occurrence of fungi-related problems in buildings, which may pose significant health risks for users and even lead to structural failures. Several numerical engineering models attempt to capture the complexity of mold growth on building materials. From real-life observations, in cases with suppressed daily variations of boundary conditions, e.g. in crawlspaces, mold growth model predictions correspond well with the observed mold growth. On the other hand, in cases with substantial diurnal variations of boundary conditions, e.g. in the ventilated cavity of a cold flat roof, mold growth predicted by the models is significantly overestimated. This study, funded by the Grant Agency of the Czech Republic (GAČR 20-12941S), aims at gaining a better understanding of mold growth behavior on solid wood under varying boundary conditions. In particular, the experimental investigation focuses on the response of mold to changing conditions in the boundary layer and its influence on heat and moisture transfer across the surface. The main results include the design and construction, at the facilities of ITAM (Prague, Czech Republic), of an innovative device allowing for the simulation of changing environmental conditions in buildings. It consists of a closed circuit of square cross-section, with overall dimensions of roughly 200 × 180 cm and a cross-section of roughly 30 × 30 cm. The circuit is thermally insulated and equipped with an electric fan to control the air flow inside the tunnel and a heat and humidity exchange unit to control the internal RH and variations in temperature. Several measuring points, including an anemometer, temperature and humidity sensors, and a load cell in the test section for recording mass changes, are provided to monitor the variations of parameters during the experiments. The research is ongoing, and the final results of the experimental investigation are expected at the end of 2022.

Keywords: moisture, mold growth, testing, wood

Procedia PDF Downloads 123
1866 The Effects of Cooling during Baseball Games on Perceived Exertion and Core Temperature

Authors: Chih-Yang Liao

Abstract:

Baseball is usually played outdoors in the warmest months of the year. Therefore, baseball players are susceptible to the influence of a hot environment. It has been shown that hitting performance in Major League Baseball is higher in games played in warm weather than in cold weather. Intermittent cooling during sporting events can reduce the risk of hyperthermia and increase endurance performance. However, the effects of cooling during baseball games played in a hot environment are unclear. This study adopted a cross-over design. Ten Division I collegiate male baseball players in Taiwan volunteered to participate in this study. Each player played two simulated baseball games, with one day in between. Five of the players received intermittent cooling during the first simulated game, while the other five players received intermittent cooling during the second simulated game. The participants' neck and forehead regions were covered for 6 min with towels that had been soaked in icy salt water, 3 to 4 times during the games. The participants received the cooling treatment in the dugout when they were not on the field for defense or hitting. During the two simulated games, the temperature was 31.1-34.1°C and the humidity was 58.2-61.8%, with no difference between the two games. Ratings of perceived exertion, thermal sensation, and tympanic and forehead skin temperature were recorded immediately after each defensive half-inning and after cooling treatments. Ratings of perceived exertion were measured using the Borg 10-point scale. Thermal sensation was measured with a 6-point scale. The tympanic and skin temperatures were measured with infrared thermometers. The data were analyzed with a two-way analysis of variance with repeated measures. The results showed that intermittent cooling significantly reduced ratings of perceived exertion and thermal sensation. Forehead skin temperature was also significantly decreased after cooling treatments. However, the tympanic temperature was not significantly different between the two trials. In conclusion, intermittent cooling of the neck and forehead regions was effective in alleviating perceived exertion and heat sensation. However, this cooling intervention did not affect the core temperature. Whether intermittent cooling has any impact on hitting or pitching performance in baseball players warrants further investigation.

Keywords: baseball, cooling, ratings of perceived exertion, thermal sensation

Procedia PDF Downloads 139
1865 Microwave-Assisted Alginate Extraction from Portuguese Saccorhiza polyschides – Influence of Acid Pretreatment

Authors: Mário Silva, Filipa Gomes, Filipa Oliveira, Simone Morais, Cristina Delerue-Matos

Abstract:

Brown seaweeds are abundant along the Portuguese coastline and represent an almost unexploited marine economic resource. One of the most common species, easily available for harvesting on the northwest coast, is Saccorhiza polyschides, which grows on the lower shore and coastal rocky reefs. It is almost exclusively used by local farmers as a natural fertilizer, but it contains a substantial amount of valuable compounds, particularly alginates, natural biopolymers of high interest for many industrial applications. Alginates are natural polysaccharides present in the cell walls of brown seaweed; they are highly biocompatible, with particular properties that make them of high interest to the food, biotechnology, cosmetics and pharmaceutical industries. Conventional extraction processes are based on thermal treatment; they are lengthy and consume high amounts of energy and solvents. In recent years, microwave-assisted extraction (MAE) has shown enormous potential to overcome the major drawbacks of conventional (thermal and/or solvent-based) plant material extraction techniques, and it has also been successfully applied to the extraction of agar, fucoidans and alginates. In the present study, the acid pretreatment of the brown seaweed Saccorhiza polyschides for subsequent microwave-assisted extraction of alginate was optimized. Seaweeds were collected in the northwest Portuguese coastal waters of the Atlantic Ocean between May and August 2014. Experimental design was used to assess the effect of temperature and acid pretreatment time on alginate extraction. Response surface methodology allowed the determination of the optimum MAE conditions: 40 mL of HCl 0.1 M per g of dried seaweed with constant stirring at 20°C during 14 h. Optimal acid pretreatment conditions significantly enhanced the MAE of alginates from Saccorhiza polyschides, thus contributing to the development of a viable, more environmentally friendly alternative to conventional processes.

Keywords: acid pretreatment, alginate, brown seaweed, microwave-assisted extraction, response surface methodology

Procedia PDF Downloads 366
1864 The Influence of Partial Replacement of Hydrated Lime by Pozzolans on Properties of Lime Mortars

Authors: Przemyslaw Brzyski, Stanislaw Fic

Abstract:

Hydrated lime, because of its life cycle (it returns to its natural form as a result of setting and hardening), has a positive environmental impact. The lime binder is used in mortars. Lime is a slow-setting binder with low mechanical properties. The aim of the study was to evaluate the possibility of improving the properties of the lime binder by using different pozzolanic materials as a partial replacement of the hydrated lime binder. Pozzolanic materials are natural materials or industrial wastes, so they do not negatively affect the environmental impact of the lime binder. The following laboratory tests were performed: analysis of the physical characteristics of the tested samples of lime mortars (bulk density, porosity), flexural and compressive strength, water absorption and capillary rise of the samples, and consistency of the fresh mortars. Metakaolin, silica fume, and zeolite were used as partial replacements of hydrated lime (in amounts of 10%, 20%, and 30% by weight of lime). The shortest setting and hardening time was shown by the mortars with the addition of metakaolin. All additives noticeably improved the strength characteristics of the lime mortars. With the increase in the amount of additive, an increase in strength was also observed. The highest flexural strength was obtained with the addition of metakaolin in an amount of 20% by weight of lime (2.08 MPa). The highest compressive strength was also obtained with metakaolin, but in an amount of 30% by weight of lime (9.43 MPa). The addition of pozzolan increased the mortar tightness, which limited its absorbability. Due to their different surface areas, the pozzolanic additives affected the consistency of the fresh mortars. The initial consistency was assumed as plastic. Only the addition of silica fume in amounts of 20% and 30% by weight of lime changed the consistency to thick-plastic. The study demonstrated the possibility of producing lime mortars with satisfactory properties. The features of these lime mortars do not differ significantly from the properties of cement-based mortars, and they show a lower environmental impact due to CO₂ absorption during lime hardening. Taking into consideration the setting time, strength and consistency, the best results can be obtained with metakaolin addition to the lime mortar.

Keywords: lime, binder, mortar, pozzolan, properties

Procedia PDF Downloads 190
1863 Civil Discourse in the Digital Age: Perceptions of Age as a Barrier to Civic Engagement

Authors: Julianne Viola

Abstract:

Young people are at a critical stage in their lives, developing from young participants into adult participants in democratic society. At this time, civic engagement is crucial for young people’s sense of belonging and future participation in their communities. In adolescence, individuals form their own identities and associations with others, and may accomplish this with the help of technology and social media. In the Digital Age, young people and adults use technology as a platform to discuss political issues, including human rights and social justice, but do not always engage in civil discourse. There is an urgent need to investigate this complex interplay of social media, identity formation, and civil discourse as it relates to how teenagers become participants in democratic society and how they engage in civil discourse. This qualitative study draws on theories of identity formation in adolescence and is situated within the literature surrounding teen civic engagement and technology use. Through in-depth interviews with participants ages 14 through 17, this study investigates the ways in which teens conceptualize their civic identities and engagement, their presence online, and civil discourse. The context in which the young people in this study have grown up has the potential to impact and inform these processes. Early results of this study illustrate what it means to be a young person in today’s world and how perceptions of others’ opinions may influence young people’s engagement in their communities and online. Participants in this study often described their age as a constraint on participation in their communities and in society, and reported a self-imposed restriction on the people with whom they engage in conversation about political and social issues. While the participants shared common concerns and experiences, each participant’s unique perspectives and beliefs are viewed with equal importance. The results from this research will help students, teachers, and community groups learn about the reasons for engagement and disengagement among this age group, and how technology has influenced teens’ dialogue about political issues. With this knowledge, academics and school leaders can devise new ways to best teach citizenship skills and civil discourse to students in the Digital Age.

Keywords: civics, digital age, discourse, sociology of youth, youth studies

Procedia PDF Downloads 251
1862 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research addresses which transit network structure is most suitable to serve specific demand requirements under an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, there is the traditional answer, widespread in our cities, which develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transferring is essential to complete most trips. To answer which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the two alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way by considering that only a central area attracts all trips. If this area is small, we have a highly concentrated mobility pattern; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when the demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If the urban dispersion advances, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, defined by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to determine the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
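As a minimal sketch of the concentration dimension mentioned above, the snippet below computes a Gini coefficient over zonal trip-end totals; the zone values are hypothetical, and the paper's actual index may be defined over different spatial units or variables.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array (0 = perfectly even, 1 = fully concentrated)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    total = x.sum()
    # Standard rank-based formula on the sorted values
    return (2 * np.sum(np.arange(1, n + 1) * x) - (n + 1) * total) / (n * total)

# Hypothetical trip attractions per zone: a centralized city vs. a dispersed one
centralized = [900, 30, 20, 20, 15, 15]
dispersed = [180, 170, 170, 160, 160, 160]
print(round(gini(centralized), 2), round(gini(dispersed), 2))
```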

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 227
1861 Structural and Functional Comparison of Untagged and Tagged EmrE Protein

Authors: S. Junaid S. Qazi, Denice C. Bay, Raymond Chew, Raymond J. Turner

Abstract:

EmrE, a member of the small multidrug resistance (SMR) protein family in bacteria, is considered the archetypical member of its family. It confers host resistance to a wide variety of quaternary cation compounds (QCCs), driven by the proton motive force. Generally, purification yield is a challenge for all membrane proteins because of the difficulties in their expression, isolation and solubilization. EmrE is extremely hydrophobic, which makes purification yields particularly challenging. We have purified EmrE protein using two different approaches: organic solvent membrane extraction and hexahistidine (his6)-tagged Ni-affinity chromatographic methods. We have characterized differences in ligand affinity between the untagged and his6-tagged EmrE proteins in similar membrane-mimetic environments using biophysical experimental techniques. The purified proteins were solubilized in a buffer containing n-dodecyl-β-D-maltopyranoside (DDM), and the conformations of the proteins were explored in the presence of four QCCs: methyl viologen (MV), ethidium bromide (EB), cetylpyridinium chloride (CTP) and tetraphenyl phosphonium (TPP). SDS-Tricine PAGE and dynamic light scattering (DLS) analysis revealed that the addition of QCCs did not induce higher multimeric forms of either protein at any of the QCC:EmrE molar ratios examined under the solubilization conditions applied. QCC binding curves obtained from the Trp fluorescence quenching spectra gave the values of the dissociation constant (Kd) and the maximum specific one-site binding (Bmax). The lower Bmax values for QCC binding to his6-tagged EmrE show that binding sites remained unoccupied. This lower saturation suggests that the his6-tagged version adopts a conformation that prevents saturated binding. Our data demonstrate that tagging an integral membrane protein can significantly influence its properties.
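As a minimal sketch of how Kd and Bmax can be estimated from quenching-derived binding data, the snippet below fits the standard one-site model B([L]) = Bmax·[L]/(Kd + [L]); the data points are hypothetical and not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site_binding(L, Bmax, Kd):
    """Specific one-site binding as a function of free ligand concentration L."""
    return Bmax * L / (Kd + L)

# Hypothetical ligand concentrations (uM) and fractional binding signal
L = np.array([0.5, 1, 2, 5, 10, 20, 50, 100])
B = np.array([0.09, 0.17, 0.28, 0.47, 0.62, 0.74, 0.85, 0.90])

popt, pcov = curve_fit(one_site_binding, L, B, p0=[1.0, 5.0])
Bmax_fit, Kd_fit = popt
print(f"Bmax = {Bmax_fit:.2f}, Kd = {Kd_fit:.1f} uM")
```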

Keywords: small multidrug resistance (SMR) protein, EmrE, integral membrane protein folding, quaternary ammonium compounds (QAC), quaternary cation compounds (QCC), nickel affinity chromatography, hexahistidine (His6) tag

Procedia PDF Downloads 372
1860 Prevalence and Clinical Significance of Antiphospholipid Antibodies in COVID-19 Patients Admitted to Intensive Care Units

Authors: Mostafa Najim, Alaa Rahhal, Fadi Khir, Safae Abu Yousef, Amer Aljundi, Feryal Ibrahim, Aliaa Amer, Ahmed Soliman Mohamed, Samira Saleh, Dekra Alfaridi, Ahmed Mahfouz, Sumaya Al-Yafei, Faraj Howady, Mohamad Yahya Khatib, Samar Alemadi

Abstract:

Background: Coronavirus disease 2019 (COVID-19) increases the risk of coagulopathy among critically ill patients. Although the presence of antiphospholipid antibodies (aPLs) has been proposed as a possible mechanism of COVID-19-induced coagulopathy, their clinical significance among critically ill patients with COVID-19 remains uncertain. Methods: This prospective observational study included patients with COVID-19 admitted to intensive care units (ICU) to evaluate the prevalence and clinical significance of aPLs, including anticardiolipin IgG/IgM, anti-β2-glycoprotein IgG/IgM, and lupus anticoagulant. The study outcomes included the prevalence of aPLs and a primary composite outcome of all-cause mortality and arterial or venous thrombosis among aPL-positive patients versus aPL-negative patients during their ICU stay. Multiple logistic regression was used to assess the influence of aPLs on the primary composite outcome of mortality and thrombosis. Results: A total of 60 critically ill patients were enrolled. Of these, 57 (95%) were male, with a mean age of 52.8 ± 12.2 years, and the majority were from Asia (68%). Twenty-two patients (37%) were found to have positive aPLs, of whom 21 were positive for lupus anticoagulant and one was positive for anti-β2-glycoprotein IgG/IgM. The composite outcome of mortality and thrombosis during the ICU stay did not differ between patients with positive aPLs and those with negative aPLs (4 (18%) vs. 6 (16%), aOR = 0.98, 95% CI 0.1-6.7; p-value = 0.986). Likewise, the secondary outcomes, including all-cause mortality, venous thrombosis, arterial thrombosis, discharge from the ICU, time to mortality, and time to discharge from the ICU, did not differ between those with positive aPLs upon ICU admission and patients with negative aPLs. Conclusion: The presence of aPLs does not seem to affect the outcomes of critically ill patients with COVID-19 in terms of all-cause mortality and thrombosis. Therefore, clinicians may not need to screen critically ill patients with COVID-19 for aPLs unless deemed clinically appropriate.

Keywords: antiphospholipid antibodies, critically ill patients, coagulopathy, coronavirus

Procedia PDF Downloads 159
1859 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a separate process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need of a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by just deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
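The toy sketch below illustrates the two composition decisions on a declarative architecture description: communicating pairs are embedded instance-by-instance on shared machines, and the remaining services are aggregated by scaling group. The input format and field names are hypothetical and are not i2kit's actual interface.

```python
# Toy illustration of aggregation and embedding decisions over a declarative
# microservice architecture. Names and fields are hypothetical, not i2kit's format.

from itertools import count

services = {
    "A": {"replicas": 2, "scaling_group": "web"},
    "B": {"replicas": 2, "scaling_group": "web"},
    "C": {"replicas": 1, "scaling_group": "batch"},
}
channels = [("A", "B")]  # pairs that need a high-bandwidth communication channel

def plan_deployment(services, channels):
    """Return a machine -> containers mapping.

    Embedding: instance i of A is co-located with instance i of B for each
    communicating pair, so they talk over localhost without a load balancer.
    Aggregation: the remaining services in the same scaling group share a machine.
    """
    machines, next_id = {}, count(1)
    embedded = set()
    for a, b in channels:
        n = min(services[a]["replicas"], services[b]["replicas"])
        for i in range(n):
            machines[f"m{next(next_id)}"] = [f"{a}-{i+1}", f"{b}-{i+1}"]
        embedded.update([a, b])
    # Aggregate what is left by scaling group
    groups = {}
    for name, spec in services.items():
        if name in embedded:
            continue
        for i in range(spec["replicas"]):
            groups.setdefault(spec["scaling_group"], []).append(f"{name}-{i+1}")
    for group, containers in groups.items():
        machines[f"m{next(next_id)}"] = containers
    return machines

print(plan_deployment(services, channels))
# {'m1': ['A-1', 'B-1'], 'm2': ['A-2', 'B-2'], 'm3': ['C-1']}
```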

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 198
1858 Engineering Microstructural Evolution during Arc Wire Directed Energy Deposition of Magnesium Alloy (AZ31)

Authors: Nivatha Elangovan, Lakshman Neelakantan, Murugaiyan Amirthalingam

Abstract:

Magnesium and its alloys are widely used for various lightweight engineering and biomedical applications, as they offer a high strength-to-weight ratio and excellent corrosion resistance. These alloys possess good biocompatibility and mechanical properties similar to those of natural bone. However, manufacturing magnesium alloy components by conventional formative and subtractive methods is challenging due to their poor castability, oxidation potential, and machinability. Therefore, efforts are being made to produce magnesium alloy components containing complex designs by additive manufacturing (AM). Arc-wire directed energy deposition (AW-DED), also known as wire arc additive manufacturing (WAAM), is more attractive than other AM techniques for producing large-volume components with increased productivity. In this research work, efforts were made to optimise the deposition parameters to build thick-walled (about 10 mm) AZ31 magnesium alloy components by a gas metal arc (GMA) based AW-DED process. By using controlled dip short-circuiting metal transfer in the GMA process, depositions were carried out without defects or spatter formation. Current and voltage waveforms were suitably modified to achieve stable metal transfer. Moreover, the droplet transfer behaviour was analysed using high-speed imaging and correlated with the arc energy. Optical and scanning electron microscopy analyses were carried out to correlate the influence of the deposition parameters with the microstructural evolution during deposition. The investigation reveals that, by carefully controlling the current-voltage waveform and droplet transfer behaviour, it is possible to stabilise equiaxed grain microstructures in the deposited AZ31 components. The printed component exhibited improved mechanical properties, as equiaxed grains improve the ductility and enhance the toughness. The equiaxed grains also improved the corrosion resistance of the component compared with conventionally manufactured components.

Keywords: arc wire directed energy deposition, AZ31 magnesium alloy, equiaxed grain, corrosion

Procedia PDF Downloads 116
1857 Effect of Sodium Aluminate on Compressive Strength of Geopolymer at Elevated Temperatures

Authors: Ji Hoi Heo, Jun Seong Park, Hyo Kim

Abstract:

Geopolymer is an inorganic material synthesized by alkali activation of source materials rich in soluble SiO2 and Al2O3. Many studies have investigated the effect of aluminum species on the synthesis of geopolymer. However, the influence of Al additives on the properties of geopolymer is still unclear. The current study identified the role of an Al additive in the thermal performance of fly ash based geopolymer and observed the microstructure development of the composite. NaOH pellets were dissolved in water to prepare a 14 M (14 mol/L) sodium hydroxide solution, which was used as the alkali activator. The weight ratio of alkali activator to fly ash was 0.40. Sodium aluminate powder was employed as the Al additive and added in amounts of 0.5 wt.% to 2 wt.% by weight of fly ash. The mixture of alkali activator and fly ash was cured in a 75°C dry oven for 24 hours. Then, the hardened geopolymer samples were exposed to 300°C, 600°C and 900°C for 2 hours, respectively. The initial compressive strength after oven curing increased with increasing sodium aluminate content. SEM results also showed that a larger amount of geopolymer composite was synthesized as sodium aluminate was added. The compressive strength increased with increasing heating temperature from 300°C to 600°C regardless of sodium aluminate addition. This was consistent with the ATR-FTIR results, in which the peak position related to the asymmetric stretching vibrations of Si-O-T (T: Si or Al) shifted to a higher wavenumber as the heating temperature increased, indicating further geopolymer reaction. In addition, the geopolymer samples with higher contents of sodium aluminate showed better compressive strength. This was also reflected in the IR results by a larger shift of the peak assigned to Si-O-T toward higher wavenumbers. However, the compressive strength decreased after exposure to 900°C in all samples. The degree of reduction in compressive strength decreased with increasing sodium aluminate content. The deterioration in compressive strength was most severe in the geopolymer sample without sodium aluminate, while the samples with sodium aluminate addition showed better thermal durability at 900°C. This is related to the phase transformation, with the occurrence of the nepheline phase at 900°C, which was most predominant in the sample without sodium aluminate. In this work, it was concluded that sodium aluminate can be a good additive in geopolymer synthesis, as shown by the improved compressive strength at elevated temperatures.

Keywords: compressive strength, fly ash based geopolymer, microstructure development, Na-aluminate

Procedia PDF Downloads 118
1856 Activity of Resveratrol on the Influence of Aflatoxin B1 on the Testes of Sprague Dawley Rats

Authors: Ali D. Omur, Betul Apaydin Yildirim, Yavuz S. Saglam, Selim Comakli, Mustafa Ozkaraca

Abstract:

Twenty-eight male Sprague Dawley rats (aged 3 months) were used in the study. The animals were given feed and water ad libitum. The rats were randomly divided into 4 groups of 7 rats each. Aflatoxin B1 (7.5 μg/200 g) and resveratrol (60 mg/kg) were administered to rats in the groups other than the control group. At the end of the 16th day, blood, semen and tissue specimens were taken by decapitation under ether anesthesia. The effects of aflatoxin B1 and resveratrol on spermatological, pathological and biochemical parameters were determined in rats. Evaluation of the spermatological parameters showed that resveratrol produced a statistically significant difference in sperm motility and viability (membrane integrity) compared to the control and aflatoxin B1 administration groups, indicating a protective effect on spermatological parameters (groups: control, resveratrol, aflatoxin B1 and AfB1 + res; respectively, values of motility: 71.42 ± 0.52b, 72.85 ± 1.48c, 60.71 ± 1.30a, 57.14 ± 2.40a; values of viability: 63.85 ± 1.33b, 70.42 ± 2.61c, 55.00 ± 1.54a, 56.57 ± 0.89a). In terms of pathological parameters (histopathological examination), the seminiferous tubules in the control and resveratrol groups were observed to have a normal structure. In the group treated with aflatoxin, the regular structure of the spermatogenic cells deteriorated, and the seminiferous tubules became necrotic and degenerative. In the group treated with AfB1 + res, necrotic and degenerative changes were reduced compared with the group treated with aflatoxin alone. On immunohistochemical examination, cleaved caspase 3 expression was found to be very low in the control and resveratrol groups. Cleaved caspase 3 expression was severely exacerbated in the seminiferous tubules of the aflatoxin group, but the expression level decreased in the AfB1 + res group. Biochemically, resveratrol was shown to inhibit the adverse effects of aflatoxin on antioxidant levels (GSH, mmol/L; CAT, kU/L; GPx, U/mL; SOD, EU/mL) and to show a protective effect. For this purpose, the use of resveratrol with antioxidant activity was investigated for preventing or ameliorating the damage caused by aflatoxin B1. It was concluded that resveratrol effectively prevents aflatoxin-induced testicular damage and lipid peroxidation. It was also shown that resveratrol has protective effects on sperm motility and viability.

Keywords: Aflatoxin B1, rat, resveratrol, sperm

Procedia PDF Downloads 354
1855 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the best known classical algorithm of the time for Max-Cut on certain graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmark and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover's or Shor's, highlights the potential that quantum computing holds. It also points to a real quantum advantage which, if the hardware continues to improve, could usher in a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear approximation based method, and in some instances, it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
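A minimal, self-contained sketch of the idea is given below: a hand-rolled statevector QAOA for a small Max-Cut instance whose (γ, β) angles are tuned by a simple (μ+λ) evolution strategy instead of a gradient-based or COBYLA optimizer. This is an illustrative toy on a 5-node ring, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small Max-Cut instance: a 5-node ring graph
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

# Pre-compute the cut value of every computational basis state
states = np.arange(2 ** n)
cuts = np.zeros(2 ** n)
for i, j in edges:
    cuts += ((states >> i) & 1) != ((states >> j) & 1)

def qaoa_expectation(params, p):
    """Expected cut value of the depth-p QAOA state with parameters (gammas, betas)."""
    gammas, betas = params[:p], params[p:]
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+>^n
    for gamma, beta in zip(gammas, betas):
        psi = psi * np.exp(-1j * gamma * cuts)                 # cost layer (diagonal)
        for q in range(n):                                     # mixer: exp(-i*beta*X) on each qubit
            mask = 1 << q
            idx0 = states[(states & mask) == 0]
            idx1 = idx0 | mask
            a, b = psi[idx0].copy(), psi[idx1].copy()
            psi[idx0] = np.cos(beta) * a - 1j * np.sin(beta) * b
            psi[idx1] = np.cos(beta) * b - 1j * np.sin(beta) * a
    return float(np.sum(np.abs(psi) ** 2 * cuts))

def evolve(p=2, mu=8, lam=24, generations=60, sigma=0.3):
    """(mu + lambda) evolution strategy over the 2p QAOA angles."""
    pop = rng.uniform(0, np.pi, size=(mu, 2 * p))
    for _ in range(generations):
        children = np.repeat(pop, lam // mu, axis=0) + rng.normal(0, sigma, (lam, 2 * p))
        all_params = np.vstack([pop, children])
        fitness = np.array([qaoa_expectation(x, p) for x in all_params])
        pop = all_params[np.argsort(fitness)[-mu:]]            # keep the best mu individuals
    best = pop[-1]
    return best, qaoa_expectation(best, p)

best_params, best_cut = evolve()
print(f"best expected cut: {best_cut:.3f} (the max cut of the 5-ring is 4)")
```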

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 55
1854 Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System

Authors: Nicolas M. Beleski, Gustavo A. G. Lugo

Abstract:

Robotic agents are taking on more, and increasingly important, roles in society. In order to make these robots and agents more autonomous and efficient, their systems have grown to be considerably complex and convoluted. This growth in complexity has led recent researchers to investigate ways to explain the AI behavior behind these systems in search of more trustworthy interactions. A current problem in explainable AI concerns the inner workings of the logic inference process and how to conduct a sensitivity analysis of the process of valuation and alteration of beliefs. In a social HRI (human-robot interaction) setup, theory of mind is crucial to ease the intentionality gap, and to achieve that we should be able to make inferences over observed human behaviors, such as cases of cognitive dissonance. One specific case inspired by human cognition is the role emotions play in our belief system and the effects caused when observed behavior does not match the expected outcome. In such scenarios, emotions can make a person wrongly assume the antecedent P for an observed consequent Q and, as a result, incorrectly assert that P is true. This form of cognitive dissonance, where an unproven cause is taken as true, induces changes in the belief base which can directly affect future decisions and actions. If we aim to be inspired by human thought in order to apply levels of theory of mind to these artificial agents, we must find the conditions to replicate these observable cognitive mechanisms. To achieve this, a computational architecture is proposed to model the modulation effect emotions have on the belief system and how it affects the logic inference process and, consequently, the decision making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development. The hypothesis to be tested involves two main points: how emotions, modeled as internal argument strength modulators, can alter inference outcomes, and how explainable outcomes can be produced under specific forms of cognitive dissonance.
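Purely as a toy illustration of the kind of mechanism described (and not the proposed architecture itself), the sketch below treats an emotional weight as an argument-strength modulator: when the modulated strength of an abductive step crosses a threshold, the agent asserts the antecedent P after observing only the consequent Q. All names, weights and thresholds are hypothetical.

```python
# Toy illustration only: emotion as an argument-strength modulator in a belief base.
# All rule names, weights and the threshold are hypothetical.

def update_beliefs(beliefs, rules, observation, emotion_weight, threshold=0.7):
    """Add abductively inferred antecedents when emotion pushes their strength over a threshold."""
    for antecedent, consequent, base_strength in rules:
        if observation == consequent:
            strength = min(1.0, base_strength * (1.0 + emotion_weight))
            if strength >= threshold:
                # Cognitive-dissonance case: P is asserted although only Q was observed
                beliefs[antecedent] = strength
    return beliefs

rules = [("opponent_defected", "low_payoff", 0.5)]   # P -> Q with a base argument strength
calm = update_beliefs({}, rules, "low_payoff", emotion_weight=0.1)
angry = update_beliefs({}, rules, "low_payoff", emotion_weight=0.8)
print(calm)   # {}  -> the antecedent is not asserted
print(angry)  # {'opponent_defected': 0.9}
```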

Keywords: cognitive architecture, cognitive dissonance, explainable ai, sensitivity analysis, theory of mind

Procedia PDF Downloads 126
1853 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner. This is due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models that are suitable for engineering applications. However, predictions are often inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus will be placed on acceptable simplifications of the general transport equations and on an accurate representation of the eddy viscosity. A wide rectangular open channel seems suitable to begin the study; other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profiles: one from the Reynolds-averaged Navier-Stokes (RANS) equation and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Then, different analytic models for the eddy viscosity, TKE, and mixing length were assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl’s eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing length equation derived from Von Karman’s similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method allows more accurate velocity profiles with the same value of the damping coefficient being valid under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
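A minimal numerical sketch of this kind of profile computation is given below (not the authors' exact formulation): the RANS shear-stress balance (ν + νt)·du/dy = u*²(1 − y/h) is integrated with a Van Driest-damped mixing-length eddy viscosity νt = lm²·|du/dy|, lm = κy(1 − exp(−y⁺/A⁺)), using the usual constants κ = 0.41 and A⁺ = 26; the flow parameters are hypothetical.

```python
import numpy as np

# Hypothetical flow parameters
h = 0.1          # flow depth (m)
u_tau = 0.01     # shear velocity (m/s)
nu = 1e-6        # kinematic viscosity (m^2/s)
kappa, A_plus = 0.41, 26.0

ny = 2000
y = np.linspace(1e-6, h, ny)

# Total shear stress (per unit density) decreases linearly from the bed to the surface
tau = u_tau ** 2 * (1.0 - y / h)

# Van Driest-damped mixing length
y_plus = y * u_tau / nu
lm = kappa * y * (1.0 - np.exp(-y_plus / A_plus))

# (nu + lm^2 |du/dy|) du/dy = tau  ->  solve the quadratic for du/dy
dudy = 2.0 * tau / (nu + np.sqrt(nu ** 2 + 4.0 * lm ** 2 * tau))

# Integrate (trapezoid rule) to get the velocity profile, then normalize by u_tau
u = np.concatenate([[0.0], np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y))])
u_plus = u / u_tau
print(f"u+ at y+ = 100: {np.interp(100.0, y_plus, u_plus):.1f} (the log law gives roughly 16)")
```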

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 107
1852 The Effects of Computer Game-Based Pedagogy on Graduate Students Statistics Performance

Authors: Eva Laryea, Clement Yeboah

Abstract:

A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The results of the t-test for statistics knowledge were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop interest in the subject matter and spend quality time learning the course material as they play, without even realizing that they are learning a supposedly hard course. The future directions of the present study are promising, as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools. It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers, and will continue to be a dynamic and rapidly evolving field for years to come.
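A minimal sketch of the paired-samples analysis described above is shown below, using hypothetical pretest and posttest scores rather than the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical pretest and posttest statistics-knowledge scores for 10 students
pretest = np.array([12, 15, 9, 14, 11, 10, 13, 8, 12, 14])
posttest = np.array([16, 18, 12, 17, 15, 13, 16, 11, 14, 18])

t_stat, p_value = stats.ttest_rel(posttest, pretest)   # paired-samples t-test
mean_gain = (posttest - pretest).mean()
print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```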

Keywords: pretest-posttest within subjects, experimental design, achievement, statistics-related anxiety

Procedia PDF Downloads 57
1851 Heteroatom-Doped Binary Metal Oxide Modified Carbon as a Bifunctional Electrocatalyst for All-Vanadium Redox Flow Batteries

Authors: Anteneh Wodaje Bayeh, Daniel Manaye Kabtamu, Chen-Hao Wang

Abstract:

As one of the most promising electrochemical energy storage systems, vanadium redox flow batteries (VRFBs) have received increasing attention owing to their attractive features for large-scale storage applications. However, their high production cost and relatively low energy efficiency still limit their feasibility. For practical implementation, it is of great interest to improve their efficiency and reduce their cost. One of the key components of VRFBs that can greatly influence the efficiency and final cost is the electrode, which provides the reaction sites for the redox couples (VO²⁺/VO₂⁺ and V²⁺/V³⁺). Carbon-based materials are considered to be the most feasible electrode materials for the VRFB because of their excellent potential in terms of operation range, good permeability, large surface area, and reasonable cost. However, owing to their limited electrochemical activity and reversibility and their poor wettability due to hydrophobic properties, the performance of cells employing carbon-based electrodes has remained limited. To address these challenges, we synthesized a heteroatom-doped bimetallic oxide grown on the surface of carbon through a one-step approach. When applied to VRFBs, the prepared electrode exhibits a significant electrocatalytic effect toward the VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox reactions compared with pristine carbon. It is found that the presence of the heteroatom on the metal oxide promotes the adsorption of vanadium ions. The controlled morphology of the bimetallic oxide also exposes more active sites for the redox reactions of vanadium ions. Hence, the prepared electrode displays the best electrochemical performance, with energy and voltage efficiencies of 74.8% and 78.9%, respectively, which are much higher than the 59.8% and 63.2% obtained with pristine carbon at high current density. Moreover, the electrode exhibits durability and stability in an acidic electrolyte during long-term operation for 1000 cycles at the higher current density.

Keywords: VRFB, VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox couples, graphite felt, heteroatom doping

Procedia PDF Downloads 89
1850 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control device with many practical applications owing to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. It has been shown experimentally in many papers that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behavior of the granular particles in each divided area of the damper container is the same, the contact force of the primary system with all particles can be taken as the product of the number of divided areas and the contact force of the primary system with the granular materials in a single area. This simplification makes it possible to considerably reduce the calculation time. The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and particle material influence the damper performance.
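The acceleration idea described above can be sketched as follows; the function name and the numbers are illustrative assumptions, not the author's implementation.

    def total_wall_force(per_area_contact_forces, n_divided_areas):
        # Assuming every divided area of the damper container behaves identically,
        # the total contact force on the primary system is the per-area force
        # multiplied by the number of divided areas.
        f_per_area = sum(per_area_contact_forces)
        return n_divided_areas * f_per_area

    # Illustrative wall forces (N) from the particles in one representative area.
    forces_one_area = [0.12, -0.05, 0.08, 0.03]
    print(total_wall_force(forces_one_area, n_divided_areas=8))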

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 450
1849 The Impact of the Plagal Cadence on Nineteenth-Century Music

Authors: Jason Terry

Abstract:

Beginning in the mid-nineteenth century, hymns in the Anglo-American tradition often ended with the congregation singing ‘amen,’ most commonly set to a plagal cadence. While the popularity of this tradition is still well known today, this research presents the origins of the custom. In 1861, Hymns Ancient & Modern deepened the convention by concluding each of its hymns with a published plagal-amen cadence. Subsequently, hymnals from a variety of denominations throughout Europe and the United States widely adopted this practice. By the middle of the twentieth century the number of participants singing this cadence had noticeably declined; however, it was not until the 1990s that the plagal-amen cadence all but disappeared from hymnals. Today, it is rare for songs to conclude with the plagal-amen cadence, although instrumentalists have continued to play a plagal cadence regularly underneath the singers’ sustained finalis. After examining a variety of music theory treatises, eighteenth-century newspaper articles, and manuscripts and hymnals from the last five centuries, and conducting interviews with a number of scholars around the world, this study presents the plagal-amen cadence in its historical context. The association of ‘amen’ with the plagal cadence was already being discussed during the late eighteenth century, and the plagal-amen cadence only grew in popularity from that time forward, most notably in the nineteenth and twentieth centuries. Throughout this research, the music of Thomas Tallis, primarily through his Preces and Responses, is shown, with reasonable evidence, to be the basis for the high status of the plagal-amen cadence in nineteenth- and twentieth-century society. Tallis’s immediate influence was felt among his contemporary English composers as well as posterity, all of whom were well aware of his compositional styles and techniques. More important, however, was the revival of his music in nineteenth-century England, which had a greater impact on the plagal-amen tradition. With his historical title as the father of English cathedral music, Tallis was favored by supporters of the Oxford Movement. Thus, given society’s view of Tallis, the simple IV–I cadence he chose to pair with ‘amen’ attained a much greater worth in the history of Western music. A musical device such as the once-revered plagal-amen cadence deserves to be studied and understood in a more factual light than has thus far been available to contemporary scholars.

Keywords: amen cadence, plagal-amen cadence, singing hymns with amen, Thomas Tallis

Procedia PDF Downloads 227
1848 'Coping with Workplace Violence' Workshop: A Commendable Addition to the Curriculum for BA in Nursing

Authors: Ilana Margalith, Adaya Meirowitz, Sigalit Cohavi

Abstract:

Violence against health professionals by patients and their families has recently become a disturbing phenomenon worldwide, exacting psychological as well as economic tolls. Health workplaces in Israel (e.g., hospitals and HMO clinics) provide workshops that supply their employees with coping strategies. However, these workshops do not focus on nursing students, who are also subjected to this violence and whose learning environment is no longer as protective as it used to be. Furthermore, coping with violence was not part of the curriculum for Israeli nursing students. Thus, based on theories of human aggression, which depict the pivotal role of the professional's response in preventing the onset or escalation of violence, a workshop was developed for undergraduate nursing students at the Clalit Nursing Academy, Rabin Campus (Dina), Israel. The workshop aimed at reducing students' anxiety vis-à-vis the aggressive patient or family, in addition to strengthening their ability to cope with such situations. The students practiced interpersonal skills relevant to the early detection of potential violence as well as appropriate responses to it, thereby developing the steps to be implemented when encountering violence in the workplace. In order to assess the effectiveness of the workshop, the participants filled out a questionnaire comprising knowledge and self-efficacy scales. The replies of the 23 participants in the workshop were compared with those of 24 students who attended a standard course on interpersonal communication; students' self-efficacy and knowledge were measured in both groups before and after the course. A statistically significant interaction was found between group (workshop/standard course) and time (before/after) for both students' self-efficacy (p = 0.004) and knowledge (p = 0.007). Nursing students who participated in this ‘coping with workplace violence’ workshop gained knowledge, confidence, and a sense of self-efficacy with regard to workplace violence. Early detection of signs of imminent violence among patients or families, prevention of its escalation, and the ability to manage a threatening situation when it occurs are acquired skills. Encouraging nursing students to learn and practice these skills may enhance their ability to cope with these unfortunate occurrences.
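For readers interested in the analysis, the group-by-time interaction in a two-group pretest-posttest design can be probed equivalently by comparing pre-to-post gain scores between groups; the sketch below uses hypothetical scores, not the study's data.

    import numpy as np
    from scipy import stats

    # Hypothetical self-efficacy scores (illustrative only).
    workshop_pre  = np.array([3.1, 2.8, 3.4, 2.9, 3.2])
    workshop_post = np.array([4.2, 3.9, 4.4, 4.0, 4.3])
    standard_pre  = np.array([3.0, 3.2, 2.9, 3.1, 3.3])
    standard_post = np.array([3.2, 3.3, 3.0, 3.2, 3.4])

    # In a 2x2 mixed design, the group-by-time interaction test is equivalent to an
    # independent-samples t-test on the gain scores (F = t^2).
    t, p = stats.ttest_ind(workshop_post - workshop_pre, standard_post - standard_pre)
    print(f"Interaction via gain scores: t = {t:.2f}, p = {p:.4f}")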

Keywords: early detection of violence, nursing students, patient aggression, self-efficacy, workplace violence

Procedia PDF Downloads 132
1847 An Investigative Study into Good Governance in the Non-Profit Sector in South Africa: A Systems Approach Perspective

Authors: Frederick M. Dumisani Xaba, Nokuthula G. Khanyile

Abstract:

There is a growing demand in the developing world for greater accountability, transparency, and ethical conduct based on sound governance principles. Funders, donors, and sponsors are increasingly demanding more transparency, better value for money, and adherence to good governance standards. The drive toward improved governance measures is largely influenced by the need to ‘plug the leaks’, deal with malfeasance, engender greater levels of accountability and good governance, and ultimately attract further funding or investment. This is the case for Non-Profit Organizations (NPOs) in South Africa in general, and in the province of KwaZulu-Natal in particular. The paper draws on good governance theory, stakeholder theory, and systems thinking to critically examine the requirements for good governance in the NPO sector from a theoretical and legislative standpoint, and to systematically examine the contours of governance currently found among NPOs. It does this through a rigorous examination of vignettes of governance cases among selected NPOs based in KwaZulu-Natal. The study used qualitative and quantitative research methodologies, including document analysis, literature review, semi-structured interviews, focus groups, and statistical analysis of various primary and secondary sources. It found some cases of good governance, but also alarming levels of poor governance. The number of NPOs registered during the period under review grew exponentially, and cases of non-compliance with good governance practices increased correspondingly. NPOs operate in an increasingly complex environment in which there is contestation for influence and access to resources, and stakeholder management is poorly conceptualized and executed. Recognizing that the NPO sector operates in an environment characterized by complexity, constant change, unpredictability, contestation, diversity, and the divergent views of different stakeholders, there is a need to apply legislative and systems thinking approaches to strengthen governance so that it can withstand this turbulence, through a capacity development model that recognizes these contextual and environmental challenges.

Keywords: good governance, non-profit organizations, stakeholder theory, systems theory

Procedia PDF Downloads 118
1845 Remote Learning During the Pandemic: The Malaysian Classroom

Authors: Hema Vanita Kesevan

Abstract:

The global spread of the Covid-19 virus in early 2020 led to major changes in many walks of life, including the education system. Traditional face-to-face lessons that had been carried out for years were replaced by online learning. Although online learning was used before the pandemic, it had never been the sole mode of teaching and learning. This drastic change has had a significant impact on the process of teaching and learning in many classrooms around the world. Likewise, in a country like Malaysia, which has promoted online learning but has not utilized it fully owing to restrictions in technology, accessibility, and online literacy, the sudden shift to fully online learning across all educational sectors has caused issues of adaptation and usage. Although many studies have explored the efficiency and impact of online learning during the pandemic, studies focusing on the Malaysian classroom context, especially English language classrooms, are limited. Thus, this study seeks to explore the efficacy and effectiveness of online learning tools in ESL classroom contexts during the pandemic. The aim of this study is to understand educators' and students' perceptions of the implementation of online learning tools in the teaching and learning process and the types of online learning tools that were used to assist teaching and learning during the pandemic. In particular, the study explores the types of online learning tools used in Malaysian schools and universities during online teaching and learning, and further examines how the various tools used affected students' participation in the lessons conducted. The participants of this study are secondary school students, teachers, and university students. Data will be collected through a survey questionnaire and interviews. The survey data are intended to provide information on the types of online learning tools used in ESL teaching and learning practices during the pandemic and on how the various types of online tools influence students' participation during lessons. The interview data from the teachers provide information about the selection of online learning tools, the challenges of using them to conduct online lessons, and other arising issues. A mixed-methods design will be used to analyse the data: the questionnaire will be analysed quantitatively using descriptive statistics, while the interview data will be analysed qualitatively.

Keywords: Covid-19, online learning tools, ESL classroom, effectiveness, efficacy

Procedia PDF Downloads 230
1845 Knowledge, Perceptions, and Barriers of Preconception Care among Healthcare Workers in Nigeria

Authors: Taiwo Hassanat Bawa-Muhammad, Opeoluwa Hope Adegoke

Abstract:

Introduction: This study aims to examine the knowledge and perceptions of preconception care among healthcare workers in Nigeria, recognizing its crucial role in ensuring safe pregnancies. Despite its significance, awareness of preconception care remains low in the country. The study seeks to assess the understanding of preconception services and identify the barriers that hinder their efficacy. Methods: Through semi-structured interviews, 129 healthcare workers across six states in Nigeria were interviewed between January and March 2023. The interviews explored the healthcare workers' knowledge of preconception care practices, the socio-cultural influences shaping decision-making, and the challenges that limit accessibility and utilization of preconception care services. Results: The findings reveal a limited knowledge of preconception care among healthcare workers, primarily due to inadequate information dissemination within the healthcare system. Additionally, cultural beliefs significantly influence perceptions surrounding preconception care. Furthermore, financial constraints, distance to healthcare facilities, and poor health infrastructure disproportionately restrict access to preconception services, particularly for vulnerable populations. The study also highlights insufficient skills and outdated training among healthcare workers regarding preconception guidance, primarily attributed to limited opportunities for professional development. Discussion: To improve preconception care in Nigeria, comprehensive education programs must be implemented, taking into account the societal influences that shape perceptions and behaviors. These programs should aim to dispel myths and promote evidence-based practices. Additionally, training healthcare workers and integrating preconception care services into primary care settings, with support from religious and community leaders, can help overcome barriers to access. Strategies should prioritize affordability while emphasizing the broader benefits of preconception care beyond fertility concerns alone. Lastly, widespread literacy campaigns utilizing trusted channels are crucial for effectively disseminating information and promoting the adoption of preconception practices in Nigeria.

Keywords: preconception care, knowledge, healthcare workers, Nigeria, barriers, education, training

Procedia PDF Downloads 84
1844 Examining the Effects of Increasing Lexical Retrieval Attempts in Tablet-Based Naming Therapy for Aphasia

Authors: Jeanne Gallee, Sofia Vallila-Rohter

Abstract:

Technology-based applications are increasingly being utilized in aphasia rehabilitation as a means of increasing intensity of treatment and improving accessibility to treatment. These interactive therapies, often available on tablets, lead individuals to complete language and cognitive rehabilitation tasks that draw upon skills such as the ability to name items, recognize semantic features, count syllables, rhyme, and categorize objects. Tasks involve visual and auditory stimulus cues and provide feedback about the accuracy of a person’s response. Research has begun to examine the efficacy of tablet-based therapies for aphasia, yet much remains unknown about how individuals interact with these therapy applications. Thus, the current study aims to examine the efficacy of a tablet-based therapy program for anomia, further examining how strategy training might influence the way that individuals with aphasia engage with and benefit from therapy. Individuals with aphasia are enrolled in one of two treatment paradigms: traditional therapy or strategy therapy. For ten weeks, all participants receive 2 hours of weekly in-house therapy using Constant Therapy, a tablet-based therapy application. Participants are provided with iPads and are additionally encouraged to work on therapy tasks for one hour a day at home (home logins). For those enrolled in traditional therapy, in-house sessions involve completing therapy tasks while a clinician researcher is present. For those enrolled in the strategy training group, in-house sessions focus on limiting cue use in order to maximize lexical retrieval attempts and naming opportunities. The strategy paradigm is based on the principle that retrieval attempts may foster long-term naming gains. Data have been collected from 7 participants with aphasia (3 in the traditional therapy group, 4 in the strategy training group). We examine cue use, latency of responses and accuracy through the course of therapy, comparing results across group and setting (in-house sessions vs. home logins).

Keywords: aphasia, speech-language pathology, traumatic brain injury, language

Procedia PDF Downloads 198
1843 Experimental Study of Sand-Silt Mixtures with Torsional and Flexural Resonant Column Tests

Authors: Meghdad Payan, Kostas Senetakis, Arman Khoshghalb, Nasser Khalili

Abstract:

Dynamic properties of soils, especially in the range of very small strains, are of particular interest in geotechnical engineering practice for characterizing the behavior of geo-structures subjected to a variety of stress states. This study reports on the small-strain dynamic properties of sand-silt mixtures, with particular emphasis on the effect of non-plastic fines content on the small-strain shear modulus (Gmax), Young's modulus (Emax), material damping (Ds,min), and Poisson's ratio (ν). Several clean sands with a wide range of grain size characteristics and particle shapes are mixed with variable percentages of a non-plastic silica silt as fines. Prepared specimens of sand-silt mixtures at different initial void ratios are subjected to sequential torsional and flexural resonant column tests, with elastic dynamic properties measured along an isotropic stress path up to 800 kPa. It is shown that, while at low percentages of fines content there is a significant difference between the dynamic properties of the various samples due to the different characteristics of the sand portion of the mixtures, this variance diminishes as the fines content increases and the soil behavior becomes silt-dominated, so that the sand properties no longer significantly influence the elastic dynamic parameters. Indeed, beyond a specific fines content, around 20% to 30%, typically denoted the threshold fines content, silt controls the behavior of the mixture. Using the experimental results, new expressions for the prediction of the small-strain dynamic properties of sand-silt mixtures are developed, accounting for the percentage of silt and the characteristics of the sand portion. These expressions are general in nature and are capable of evaluating the elastic dynamic properties of sand-silt mixtures with any type of parent sand over the whole range of silt percentages. The inadequacy of the skeleton void ratio concept for estimating the small-strain stiffness of sand-silt mixtures is also illustrated.
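For context, small-strain shear modulus data of this kind are commonly fitted with a Hardin-type expression, Gmax = A · F(e) · (p'/pa)^n, in which the coefficient A and exponent n can be made functions of fines content; the sketch below illustrates that generic form with placeholder coefficients and is not one of the expressions proposed in the paper.

    def gmax_hardin(void_ratio, p_mean_kpa, a_coeff=65.0, n_exp=0.5, pa_kpa=100.0):
        # Generic Hardin-type small-strain shear modulus in MPa.
        # F(e) = (2.17 - e)^2 / (1 + e) is a common void-ratio function for rounded grains.
        f_e = (2.17 - void_ratio) ** 2 / (1.0 + void_ratio)
        return a_coeff * f_e * (p_mean_kpa / pa_kpa) ** n_exp

    # Illustrative use: Gmax for e = 0.65 at an isotropic stress of 400 kPa.
    print(f"Gmax ~ {gmax_hardin(0.65, 400.0):.0f} MPa")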

Keywords: damping ratio, Poisson’s ratio, resonant column, sand-silt mixture, shear modulus, Young’s modulus

Procedia PDF Downloads 248