Search results for: human auditory system model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 34960

1090 Landslide Hazard Zonation Using Satellite Remote Sensing and GIS Technology

Authors: Ankit Tyagi, Reet Kamal Tiwari, Naveen James

Abstract:

Landslides are the major geo-environmental problem of the Himalaya because of its high ridges, steep slopes, deep valleys, and complex system of streams. They are mainly triggered by rainfall and earthquakes and cause severe damage to life and property. In Uttarakhand, the Tehri reservoir rim area, situated in the Lesser Himalaya of the Garhwal hills, was selected for landslide hazard zonation (LHZ). The study utilized different types of data, including geological maps, topographic maps from the Survey of India, Landsat 8 imagery, and Cartosat DEM data. This paper presents the use of a weighted overlay method in LHZ using fourteen causative factors. The data layers generated and co-registered were slope, aspect, relative relief, soil cover, intensity of rainfall, seismic ground shaking, seismic amplification at surface level, lithology, land use/land cover (LULC), normalized difference vegetation index (NDVI), topographic wetness index (TWI), stream power index (SPI), drainage buffer, and reservoir buffer. Seismic analysis is performed using peak horizontal acceleration (PHA) intensity and amplification factors in the evaluation of the landslide hazard index (LHI). Several digital image processing techniques, such as topographic correction, NDVI, and supervised classification, were widely used in the process of terrain factor extraction. Lithological features, LULC, drainage patterns, lineaments, and structural features are extracted using digital image processing techniques. Colour, tone, topography, and stream drainage patterns from the imagery are used to analyse geological features. Slope, aspect, and relative relief maps are created using Cartosat DEM data. DEM data are also used for the detailed drainage analysis, which includes TWI, SPI, drainage buffer, and reservoir buffer. In the weighted overlay method, the comparative importance of the causative factors is assigned from experience. Each class's rating is multiplied by its factor's influence weight, the weighted layers are combined and reclassified, and the LHZ map is prepared. Further, based on the land-use map developed from remote sensing images, a landslide vulnerability study for the study area is carried out and presented in this paper.
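The weighted overlay step described above can be sketched in a few lines. The factor names, class ratings, influence weights, and hazard-class breaks below are illustrative placeholders, not the study's actual values:

```python
import numpy as np

def weighted_overlay(ratings: dict, weights: dict, breaks=(1.5, 2.5, 3.5)):
    """Combine rated causative-factor rasters into a landslide hazard
    index (LHI) and reclassify it into zones (0 = low ... 3 = very high)."""
    total = sum(weights.values())
    # Multiply each factor's class rating by its influence weight and sum.
    lhi = sum(weights[name] * ratings[name] for name in weights) / total
    zones = np.digitize(lhi, breaks)  # reclassify the continuous index
    return lhi, zones

# Two toy 2x2 "rasters" standing in for slope and rainfall-intensity ratings
# (1 = low susceptibility ... 5 = high); weights are expert-assigned.
ratings = {
    "slope":    np.array([[1, 5], [3, 4]], dtype=float),
    "rainfall": np.array([[2, 4], [3, 2]], dtype=float),
}
weights = {"slope": 0.6, "rainfall": 0.4}

lhi, zones = weighted_overlay(ratings, weights)
```

A GIS package applies the same arithmetic per raster cell; the sketch only makes the rating-times-weight-then-reclassify logic explicit.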

Keywords: weighted overlay method, GIS, landslide hazard zonation, remote sensing

Procedia PDF Downloads 118
1089 Governance in the Age of Artificial Intelligence and E-Government

Authors: Mernoosh Abouzari, Shahrokh Sahraei

Abstract:

Electronic government is a way for governments to use new technology to provide people with convenient access to government information and services, to improve the quality of those services, and to offer broad opportunities to participate in democratic processes and institutions. It makes government services available to the public around the clock, which increases people's satisfaction and their participation in political and economic activities. The expansion of e-government services, and their movement towards intelligent automation, can re-establish the relationship between government and citizens and reshape the elements and components of government. Electronic government results from the use of information and communication technology (ICT); implementing it at the government level produces far-reaching changes in the efficiency and effectiveness of government systems and in the way services are delivered, which in turn raises public satisfaction on a wide scale. With the arrival of artificial intelligence systems, the core of electronic government services has now become tangible: recent advances in artificial intelligence represent a revolution in the use of machines to support predictive decision-making and data classification. With deep learning tools, artificial intelligence can significantly improve the delivery of services to citizens and support the work of public service professionals, while also inspiring a new generation of technocrats to enter government.
This smart revolution may set aside some functions of government and change its components; concepts such as governance, policymaking, and democracy will be transformed in the face of artificial intelligence technology, and the top-down position in governance may face serious change. If governments delay in adopting artificial intelligence, the balance of power will shift: private companies, as pioneers in this field, will monopolize the technology, the world order will come to depend on wealthy multinational companies, and algorithmic systems will in effect become the ruling systems of the world. It can be said that the current revolution in information technology and biotechnology has been driven by engineers, large corporations, and scientists who are rarely aware of the political implications of their decisions and certainly do not represent anyone. Therefore, if liberalism, nationalism, or any other ideology hopes to organize the world of 2050, it must not only rationalize the concepts of artificial intelligence and complex data algorithms but also weave them into a new and meaningful narrative. The changes caused by artificial intelligence in the political and economic order will thus lead to a major change in how all countries deal with the phenomenon of digital globalization. In this paper, while debating the role and performance of e-government, we discuss the efficiency and application of artificial intelligence in e-government and consider the resulting developments in the new world and in the concepts of governance.

Keywords: electronic government, artificial intelligence, information and communication technology, system

Procedia PDF Downloads 91
1088 Carbon Footprint of Educational Establishments: The Case of the University of Alicante

Authors: Maria R. Mula-Molina, Juan A. Ferriz-Papi

Abstract:

Environmental concerns are gaining ever higher priority in the sustainability agendas of educational establishments. This is important not only for an institution's environmental performance in its own right as an organization, but also to present a model for its students. At the same time, universities play an important role in research and in innovative solutions for measuring, analyzing, and reducing the environmental impacts of different activities. Assessment and decision-making during the activity of educational establishments depend on the application of robust indicators. The carbon footprint is a developing sustainability indicator that helps in understanding an institution's direct impact on climate change, but it is not easy to implement: a large number of factors are involved, which increases its complexity, such as different simultaneous uses (research, lecturing, administration), different users (students, staff), and different levels of activity (lecturing, exam, and holiday periods). The aim of this research is to develop a simplified methodology for calculating and comparing carbon emissions per user on a university campus, considering the two main aspects of carbon accounting: building operations and transport. Methodologies applied at other Spanish university campuses are analyzed and compared to obtain a final proposal to be developed in this type of establishment. First, the building operations calculation considers the different uses and energy sources consumed. Second, for the transport calculation, the different users and their working hours are considered separately, as well as their origins and travel preferences. For each mode of transport, a different conversion factor is used, depending on the carbon emissions produced. The final result is obtained as an average of the carbon emissions produced per user. The methodology is applied as a case study to the University of Alicante campus in San Vicente del Raspeig (Spain), where the carbon footprint is calculated.
While building operation consumption is known per building and per month, the same is not true for transport: the only survey of users' transport habits was carried out in 2009/2010, so no evolution of results can be shown in this case. Moreover, building operations are not split by use, as building services are not monitored separately. These results are analyzed in depth, considering all factors and limitations, and compared with estimates from other campuses. Finally, the application of the presented methodology is also studied. The recommendations concluded from this study seek to enhance carbon emission monitoring and control; a Carbon Action Plan is a primary solution to be developed. The application developed for the University of Alicante campus can not only further refine the methodology itself but also make its adoption by other educational establishments more readily possible, with a considerable degree of flexibility to cater to their specific requirements.
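The two-part accounting described above reduces to multiplying each consumption figure by an emission factor, summing, and dividing by the number of users. All consumption figures, emission factors, and the user count below are invented for illustration; they are not the University of Alicante's data:

```python
# Building operations: annual consumption per energy source (kWh) paired
# with an emission factor (kg CO2e per kWh).
building = {"electricity": (2_500_000, 0.25), "natural_gas": (800_000, 0.20)}

# Transport: annual distance per mode (passenger-km) paired with a
# mode-specific emission factor (kg CO2e per passenger-km).
transport = {"car": (3_000_000, 0.17), "bus": (1_500_000, 0.09), "bicycle": (400_000, 0.0)}

users = 30_000  # students plus staff

building_kg = sum(amount * factor for amount, factor in building.values())
transport_kg = sum(km * factor for km, factor in transport.values())
per_user_kg = (building_kg + transport_kg) / users  # average footprint per user
```

The per-user average makes campuses of different sizes comparable, which is the point of the proposed indicator.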

Keywords: building operations, built environment, carbon footprint, climate change, transport

Procedia PDF Downloads 286
1087 When the Lights Go Down in the Delivery Room: Lessons From a Ransomware Attack

Authors: Rinat Gabbay-Benziv, Merav Ben-Natan, Ariel Roguin, Benyamine Abbou, Anna Ofir, Adi Klein, Dikla Dahan-Shriki, Mordechai Hallak, Boris Kessel, Mickey Dudkiewicz

Abstract:

Introduction: Over recent decades, technology has become integral to healthcare, with electronic health records and advanced medical equipment now standard. However, this reliance has made healthcare systems increasingly vulnerable to ransomware attacks. On October 13, 2021, Hillel Yaffe Medical Center experienced a severe ransomware attack that disrupted all IT systems, including electronic health records, laboratory services, and staff communications. The attack, carried out by the group DeepBlueMagic, used advanced encryption to lock the hospital's systems and demanded a ransom. This incident caused significant operational and patient-care challenges, particularly in the obstetrics department. Objective: To describe the challenges facing the obstetric division following a cyberattack and to discuss ways of preparing for and overcoming another one. Methods: A retrospective descriptive study was conducted in a mid-sized medical center. Division activities, including the number of deliveries, cesarean sections, emergency room visits, admissions, maternal-fetal medicine department occupancy, and ambulatory encounters, from 2 weeks before the attack to 8 weeks after it (a total of 11 weeks), were compared with the corresponding period in 2019 (pre-COVID-19). In addition, we present the challenges and adaptation measures taken at the division and hospital levels leading up to the resumption of full division activity. Results: On the day of the cyberattack, critical decisions were made. The media announced the event, calling on patients not to come to our hospital, and all elective activity other than cesarean deliveries was stopped. The number of deliveries, admissions, and both emergency room and ambulatory clinic visits decreased by 5%–10% overall across the 11 weeks, reflecting the decrease in division activity.
Nevertheless, at all stations, sufficient adaptation measures were in place to ensure patient safety, sound decision-making, and a fully accounted-for patient workflow. Conclusions: The risk of ransomware cyberattacks is growing. Healthcare systems at all levels should recognize this threat and have protocols in place for dealing with such attacks once they occur.

Keywords: ransomware attack, healthcare cybersecurity, obstetrics challenges, IT system disruption

Procedia PDF Downloads 11
1086 ATR-IR Study of the Mechanism of Aluminum Chloride-Induced Alzheimer Disease - Curative and Protective Effect of Lepidium sativum Water Extract on Rat Hippocampal Brain Tissue

Authors: Maha J. Balgoon, Gehan A. Raouf, Safaa Y. Qusti, Soad S. Ali

Abstract:

The main cause of Alzheimer's disease (AD) is believed to be the accumulation of free radicals owing to oxidative stress (OS) in brain tissue. The mechanism of the neurotoxicity of aluminum chloride (AlCl3)-induced AD in hippocampal tissue of albino Wistar rat brain, together with the curative and protective effects of Lepidium sativum (LS) water extract, was assessed after 8 weeks by attenuated total reflection infrared (ATR-IR) spectroscopy and histologically by light microscopy. ATR-IR results revealed that the membrane phospholipids undergo free radical attacks, mediated by AlCl3, primarily affecting the polyunsaturated fatty acids, as indicated by the increase in the olefinic =CH sub-band area around 3012 cm-1 obtained from curve fitting analysis. The narrowing of the half band width (HBW) of the sνCH2 sub-band around 2852 cm-1 due to Al intoxication indicates the presence of trans-form fatty acids rather than gauche rotamers. The degradation of hydrocarbon chains to shorter chain lengths, the increase in membrane fluidity and disorder, and the decrease in lipid polarity in the AlCl3 group were indicated by the changes detected in certain calculated area ratios compared to the control. Administration of LS greatly improved these parameters compared to the AlCl3 group. Al influences Aβ aggregation and plaque formation, which in turn interferes with and disrupts the membrane structure. The results also showed a marked increase in the parallel and antiparallel β-sheet structures that characterize Aβ formation in Al-induced AD hippocampal brain tissue, indicated by the detected increase in both amide I sub-bands around 1674 and 1692 cm-1. This drastic increase in Aβ formation was greatly reduced in the curative and protective groups compared to the AlCl3 group and approached the control values. These results were also supported by light microscopy: the AlCl3 group showed marked degenerative changes in hippocampal neurons, most cells appearing small, shrunken, and deformed.
Interestingly, the administration of LS in the curative and protective groups markedly decreased the number of degenerated cells compared to the non-treated group, and the intensity of Congo red-stained cells decreased; hippocampal neurons looked more or less similar to those of the control. This study shows a promising therapeutic effect of Lepidium sativum (LS) water extract in an AD rat model, substantially counteracting the signs of oxidative stress on membrane lipids and the associated protein misfolding.
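The band-area ratios referred to above can be illustrated numerically. The synthetic Gaussian "spectrum" below, along with its band centers, heights, widths, and integration windows, is an assumption standing in for real ATR-IR data; only the arithmetic of the ratio is the point:

```python
import numpy as np

def gaussian(x, center, height, width):
    # A single Gaussian sub-band, as produced by curve fitting.
    return height * np.exp(-((x - center) ** 2) / (2 * width ** 2))

wavenumber = np.linspace(2700, 3100, 2000)           # cm^-1 axis
dx = wavenumber[1] - wavenumber[0]
spectrum = (gaussian(wavenumber, 3012, 0.10, 8)      # olefinic =CH band
            + gaussian(wavenumber, 2852, 0.80, 10))  # sym. CH2 stretch band

def band_area(x, y, lo, hi):
    # Simple rectangle-rule integration over one band's window.
    mask = (x >= lo) & (x <= hi)
    return float(np.sum(y[mask]) * dx)

ratio = (band_area(wavenumber, spectrum, 2990, 3034)
         / band_area(wavenumber, spectrum, 2822, 2882))
```

An increase in such a ratio between groups would indicate relatively more olefinic =CH, i.e. lipid peroxidation of polyunsaturated fatty acids, which is how the area ratios in the study are interpreted.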

Keywords: aluminum chloride, Alzheimer disease, ATR-IR, Lepidium sativum

Procedia PDF Downloads 356
1085 Role of Alternative Dispute Resolution (ADR) in Advancing UN-SDG 16 and Pathways to Justice in Kenya: Opportunities and Challenges

Authors: Thomas Njuguna Kibutu

Abstract:

The ability to access justice is an important facet of securing peaceful, just, and inclusive societies, as recognized by Goal 16 of the 2030 Agenda for Sustainable Development. Goal 16 calls for peace, justice, and strong institutions to promote the rule of law and access to justice at a global level. More specifically, Target 16.3 of the Goal aims to promote the rule of law at the national and international levels and ensure equal access to justice for all. On the other hand, it is now widely recognized that Alternative Dispute Resolution (hereafter, ADR) represents an efficient mechanism for resolving disputes outside the adversarial conventional court system of litigation or prosecution. ADR processes include, but are not limited to, negotiation, reconciliation, mediation, arbitration, and traditional conflict resolution. ADR has a number of advantages: it is flexible, cost-efficient, time-effective, and confidential, and it gives the parties more control over the process and the results, thus promoting restorative justice. The methodology of this paper is a desktop review of books, journal articles, reports, and government documents, among others. The paper recognizes that ADR represents a cornerstone of Africa's, and more specifically Kenya's, efforts to promote inclusive, accountable, and effective institutions and to achieve the objectives of Goal 16. In Kenya, as in many African countries, there has been an outcry over the backlog of cases yet to be resolved in the courts, and statistics show that the numbers keep rising. While ADR mechanisms have played a major role in reducing these numbers, access to justice in the country remains a big challenge, especially for the subaltern.
There is, therefore, a need to analyze the opportunities and challenges facing the application of ADR mechanisms as tools for accessing justice in Kenya and further discuss various ways in which we can overcome these challenges to make ADR an effective alternative to dispute resolution. The paper argues that by embracing ADR across various sectors and addressing existing shortcomings, Kenya can, over time, realize its vision of a more just and equitable society. This paper discusses the opportunities and challenges of the application of ADR in Kenya with a view to sharing the lessons and challenges with the wider African continent. The paper concludes that ADR mechanisms can provide critical pathways to justice in Kenya and the African continent in general but come with distinct challenges. The paper thus calls for concerted efforts of respective stakeholders to overcome these challenges.

Keywords: mediation, arbitration, negotiation, reconciliation, traditional conflict resolution, sustainable development

Procedia PDF Downloads 19
1084 Developing Offshore Energy Grids in Norway as Capability Platforms

Authors: Vidar Hepsø

Abstract:

The energy and oil companies on the Norwegian continental shelf come from a situation in which each asset controls and manages its own energy supply (island mode) and are moving towards one in which assets must collaborate and coordinate energy use with others, owing to the increased cost and scarcity of the electric energy being shared. Currently, several areas are electrified either with an onshore grid cable or receive intermittent energy from offshore wind parks. While the onshore grid in Norway is well regulated, the offshore grid is still in the making, with several oil and gas electrification projects and offshore wind developments just started. The paper describes the shift in mindset that comes with operating this new offshore grid. This transition heralds an increase in collaboration across boundaries and the integration of energy management across companies, businesses, technical disciplines, and engagement with stakeholders in the larger society. The transition is described as a function of the new challenges of an increasingly complex energy mix (wind, oil/gas, hydrogen, and others) coupled with increased technical and organizational complexity in energy management. Organizational complexity denotes increasing integration across boundaries, whether these are boundaries of companies, vendors, professional disciplines, regulatory regimes and bodies, businesses, or the numerous societal stakeholders. New practices must be developed, legitimated, and institutionalized across these boundaries. Only part of this complexity can be mitigated technically, e.g., by the use of batteries, mixed energy systems, and simulation and forecasting tools; many challenges must be mitigated through legitimate, institutionalized governance practices at many levels. Offshore electrification supports Norway's 2030 climate targets but is also controversial, since it draws on the larger society's energy resources.
This means that new systems and practices must be transparent, not only for the industry and the authorities, but also acceptable and just for the larger society. The paper reports on ongoing work in Norway: participant observation and interviews in projects and with people working on offshore grid development. The first case presented is the development of an offshore floating wind farm connected to two offshore installations; the second is an offshore grid development initiative providing six installations with electric energy via an onshore cable. The development of the offshore grid is analyzed using a capability platform framework that describes the technical, competence, work process, and governance capabilities under development in Norway. A capability platform is a 'stack' with the following layers: intelligent infrastructure; information and collaboration; knowledge sharing and analytics; and, finally, business operations. The need for better collaboration and energy forecasting tools and capabilities in this stack is given special attention in the two use cases presented.

Keywords: capability platform, electrification, carbon footprint, control rooms, energy forecasting, operational model

Procedia PDF Downloads 62
1083 Outcome of Dacryocystorhinostomy with Peroperative Local Use of Mitomycin-C

Authors: Chandra Shekhar Majumder, Orin Sultana Jamie

Abstract:

Background: Dacryocystorhinostomy (DCR) is a widely accepted surgical intervention for nasolacrimal duct obstruction. Previous studies have demonstrated the potential benefits of the peroperative application of agents such as Mitomycin-C (MMC) during DCR to improve surgical outcomes. Relevant studies are rare in Bangladesh, and there are controversies about the dose and duration of MMC application and the resulting outcomes. Therefore, the present study investigated the comparative efficacy of DCR with and without MMC in a tertiary hospital in Bangladesh. Objective: To determine the outcome of dacryocystorhinostomy with peroperative local use of Mitomycin-C. Methods: An analytical study was conducted in the Department of Ophthalmology, Sir Salimullah Medical College & Mitford Hospital, Dhaka, from January 2023 to September 2023. Seventy patients admitted for DCR operation were included according to the inclusion and exclusion criteria. Patients were divided into two groups: those who underwent DCR with peroperative administration of 0.2 mg/ml Mitomycin-C for 5 minutes (Group I) and those who underwent DCR alone (Group II). All patients were subjected to detailed history-taking, clinical examination, and relevant investigations. All patients underwent DCR according to standard guidelines, with the highest standard of peroperative and postoperative care. Patients were then followed up on the 7th postoperative day (POD) and at 1, 3, and 6 months to compare the success rate between the two groups by assessing tearing, irrigation, height of the tear meniscus, and the fluorescein dye disappearance test (FDDT). Data were recorded using a pre-structured questionnaire, and the collected data were analyzed using SPSS 23. Results: The mean age of the study patients was 42.17±6.7 (SD) years in Group I and 42.29±7.1 (SD) years in Group II, with no significant difference (p=0.945).
At the 6-month follow-up, 94.3% of Group I patients were symptom-free, 85.6% had patency of the lacrimal drainage system, 68.6% had a tear meniscus height <0.1 mm, and 88.6% had a positive FDDT. In Group II, 91.4% were symptom-free, 68.6% showed patency, 57.1% had a tear meniscus height <0.1 mm, and 85.6% had a positive FDDT. However, no statistically significant difference was observed (p>0.05). Conclusion: The peroperative use of Mitomycin-C during DCR offered better postoperative outcomes, particularly in maintaining patency and achieving symptom resolution, with more positive FDDT results and greater improvement of the tear meniscus in the MMC group than in the control group. However, this study did not demonstrate a statistically significant difference between the two groups. Further research with larger sample sizes and longer follow-up periods would be beneficial to corroborate these findings.
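The group comparison reported above can be checked with a standard two-proportion z-test. The counts below (33/35 and 32/35 symptom-free) are inferred from the reported percentages and the 35-patient group size, and are therefore an assumption:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p_value

# Symptom-free at 6 months: Group I vs Group II (counts inferred, see above).
z, p = two_proportion_z(33, 35, 32, 35)
# p comes out well above 0.05, consistent with the study's finding of no
# statistically significant difference between the groups.
```

With such small groups an exact test (e.g. Fisher's) would usually be preferred, but the z-test shows the arithmetic behind the reported non-significance.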

Keywords: dacryocystorhinostomy, mitomycin-c, dacryocystitis, nasolacrimal duct obstruction

Procedia PDF Downloads 41
1082 Empowering Youth Through Pesh Poultry: A Transformative Approach to Addressing Unemployment and Fostering Sustainable Livelihoods in Busia District, Uganda

Authors: Bisemiire Anthony

Abstract:

PESH Poultry is a business project proposed specifically to solve unemployment and income-related problems affecting the youth in Busia district. The project is intended to transform the lives of the youth economically, socially, and behaviorally, as well as the domestic well-being of the community at large. PESH Poultry is a start-up poultry farm that will keep poultry birds, broilers and layers, for the production of quality and affordable poultry meat and eggs, respectively, and other poultry derivatives, targeting consumers in eastern Uganda, for example, hotels, restaurants, households, and bakeries. We intend to use a semi-intensive system of farming, in which water and some food are provided in a separate nighttime shelter for the birds; our location will be in Lumino, Busia district. The poultry project will be established and owned by Bisemiire Anthony, Nandera Patience, Naula Justine, Bwire Benjamin, and other investors. The farm will be managed and directed by Nandera Patience, who has five years of work experience and business administration knowledge. We will sell poultry products, including eggs, chicken meat, feathers, and poultry manure, and we also offer consultancy services for poultry farming. Our eggs and chicken meat are hygienic, rich in protein, and of high quality. We produce, process, and package to meet the standards of Uganda's national standards body and international standards. The business shall comprise five (5) workers on the key management team, who will share the various roles and responsibilities of the identified business functions, such as marketing, finance, and other related poultry farming activities. PESH Poultry seeks 30 million Ugandan shillings in long-term financing to cover start-up costs, equipment, building expenses, and working capital. Funding for the launch of the business will be provided primarily by equity from the investors.
The business will reach positive cash flow in its first year of operation, allowing for the expected repayment of its loan obligations. Revenue will top UGX 11,750,000, and net income will reach about UGX 115,950,000 in the first year of operation. The payback period for our project is 2 years and 3 months. The farm plans to start with 1,000 layer birds, 1,000 broiler birds, and 20 workers in its first year of operation.
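The quoted payback period is the time for cumulative cash flow to cover the initial investment. The annual cash flows below are hypothetical, chosen only to reproduce a 2-year-3-month payback on the stated UGX 30 million start-up financing; they are not the project's actual projections:

```python
def payback_period(investment, annual_cash_flows):
    """Return the payback period in years, interpolating within the
    year in which cumulative cash flow first covers the investment."""
    cumulative = 0.0
    for year, flow in enumerate(annual_cash_flows, start=1):
        if cumulative + flow >= investment:
            # Fraction of this year needed to cover the remainder.
            return year - 1 + (investment - cumulative) / flow
        cumulative += flow
    return None  # not recovered within the horizon

# Hypothetical annual net cash flows (UGX).
years = payback_period(30_000_000, [12_000_000, 13_500_000, 18_000_000])
# years == 2.25, i.e. 2 years and 3 months
```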

Keywords: chicken, pullets, turkey, ducks

Procedia PDF Downloads 80
1081 Transition towards a Market Society: Commodification of Public Health in India and Pakistan

Authors: Mayank Mishra

Abstract:

A market economy can be broadly defined as an economic system in which supply and demand regulate the economy, and in which decisions pertaining to production, consumption, the allocation of resources, prices, and competition are made through the collective actions of individuals or organisations, with limited government intervention. A market society, on the other hand, is one in which, instead of the economy being embedded in social relations, social relations are embedded in the economy. A market economy becomes a market society when land, labour, and capital are all commodified. This transition also affects people's attitudes and values, and it begins to impact non-material aspects of life such as public education and public health. The introduction of neoliberal policies into non-market norms altered the nature of social goods like public health, raising the following questions: What impact would the transition to a market society have on people's access to public health? Is healthcare a commodity that can be subjected to a competitive marketplace? What kinds of private investments are being made in public health, and how do private investments alter the nature of a public good like healthcare? This research will employ an empirical-analytical approach, including deductive reasoning, using the existing concepts of market economy and market society as the foundation for the analytical framework and the hypotheses to be examined. The research also intends to incorporate the naturalistic elements of qualitative methodology, which refers to studying real-world situations as they unfold. The research will analyse the existing literature on the subject. Concomitantly, it will draw on primary sources, including reports from the World Bank, the World Health Organisation (WHO), and the relevant ministries of the respective countries.
This paper endeavours to highlight how the commodification of public health leads to a perpetual increase in its inaccessibility, stratifying healthcare services so that people can avail themselves of better services only to the extent of their ability to pay. Since the fundamental maxim of private investment is to generate profit, these trends would have a detrimental effect on society at large, perpetuating the gap between the haves and the have-nots. Increasing private investment, both domestic and foreign, in the public health sector is leading to increasing inaccessibility of public health services. Despite the increase in various public health schemes, the quality and impact of government public health services are in continuous decline.

Keywords: commodity, India and Pakistan, market society, public health

Procedia PDF Downloads 305
1080 Contentious Politics during a Period of Transition to Democracy from an Authoritarian Regime: The Spanish Cycle of Protest of November 1975-December 1978

Authors: Juan Sanmartín Bastida

Abstract:

When a country experiences a period of transition from authoritarianism to democracy, involving an earlier process of political liberalization and a later process of democratization, a cycle of protest usually breaks out, as there is a reciprocal influence between that kind of political change and the frequency and scale of social protest events. That is what happened in Spain during the first years of its transition to democracy from the Francoist authoritarian regime, roughly between November 1975 and December 1978. The object of this study is thus to show and explain how that cycle of protest started, developed, and finished in relation to this political change, and to offer specific information about the main features of all protest cycles: the social movements that arose during the period, the number of protest events by month, the forms of collective action utilized, the groups of challengers that engaged in contentious politics, the reaction of the authorities to the actions and claims of those groups, and so on. The study of this cycle of protest, using the primary sources and analytical tools that characterize the protest-cycle model of research, contributes to the field of contentious politics and its phenomenon of cycles of contention, and more broadly to the political and social history of contemporary Spain. The cycle of protest and the process of political liberalization of the authoritarian regime began around the same time, but the former concluded long before the process of democratization was completed in 1982.
The ascending phase of the cycle, and therefore the process of liberalization, started with the death of Francisco Franco and the proclamation of Juan Carlos I as King of Spain in November 1975; the peak of the cycle came in the first months of 1977; the descending phase started after the first general election of June 1977; and the level of protest stabilized in the last months of 1978, a year that ended with a referendum in which the Spanish people approved the current democratic constitution. It was then that the cycle of protest can be considered to have come to an end. The primary sources are the reports of protest events and social movements in the three main Spanish newspapers of the time, other written and audiovisual documents, and in-depth interviews. The analytical tools are the political opportunities that encourage social protest, the available repertoire of contention, the organizations and networks that brought together people with the same claims and allowed them to engage in contentious politics, and the interpretative frames that justify, dignify, and motivate their collective action. These four main factors explain the beginning, development, and ending of the cycle of protest, and therefore the accompanying social movements and events of collective action. Among the four, the political opportunities (their opening, exploitation, and closure) proved to be the most decisive.

Keywords: contentious politics, cycles of protest, political opportunities, social movements, Spanish transition to democracy

Procedia PDF Downloads 133
1079 Outreach Intervention Addressing Crack Cocaine Addiction in Users with Co-Occurring Opioid Use Disorder

Authors: Louise Penzenstadler, Tiphaine Robet, Radu Iuga, Daniele Zullino

Abstract:

Context: The outpatient clinic of the psychiatric addiction service of Geneva University Hospital has been providing support to individuals affected by various narcotics for 30 years. However, the increasing consumption of crack cocaine in Geneva has presented a new challenge for the healthcare system. Research Aim: The aim of this research is to evaluate the impact of an outreach intervention on crack cocaine addiction in users with co-occurring opioid use disorder. Methodology: The research utilizes a combination of quantitative and qualitative retrospective data analysis to evaluate the effectiveness of the outreach intervention. Findings: The data collected from October 2023 to December 2023 show that the outreach program successfully made 1,071 contacts with drug users and led to 15 new requests for care and enrollment in treatment. Patients expressed high satisfaction with the intervention, citing easy and rapid access to treatment and social support. Theoretical Importance: This research contributes to the understanding of the challenges and specific needs of a complex group of drug users who face severe health problems. It highlights the importance of outreach interventions in establishing trust, connecting users with care, and facilitating medication-assisted treatment for opioid addiction. Data Collection: Data was collected through the outreach program's interactions with drug users, including street outreach interventions and presence at locations frequented by users. Patient satisfaction surveys were also utilized. Analysis Procedures: The collected data was analyzed using both quantitative and qualitative methods. The quantitative analysis involved examining the number of contacts made, new requests for care, and treatment enrollment. The qualitative analysis focused on patient satisfaction and their perceptions of the intervention. 
Questions Addressed: The research addresses the following questions: What is the impact of an outreach intervention on crack cocaine addiction in users with co-occurring opioid use disorder? How effective is the outreach program in connecting drug users with care and initiating medication-assisted treatment? Conclusion: The outreach program has proven to be an effective intervention in establishing trust with crack users, connecting them with care, and initiating medication-assisted treatment for opioid addiction. It has also highlighted the importance of addressing the specific challenges faced by this group of drug users.

Keywords: crack addiction, outreach treatment, peer intervention, polydrug use

Procedia PDF Downloads 59
1078 Melt-Electrospun Polypropylene Fabrics Functionalized with TiO2 Nanoparticles for Effective Photocatalytic Decolorization

Authors: Z. Karahaliloğlu, C. Hacker, M. Demirbilek, G. Seide, E. B. Denkbaş, T. Gries

Abstract:

The textile industry plays an important role in the world economy, especially in developing countries, and the dyes and pigments it uses are significant pollutants. Most of these are azo dyes, which carry a chromophore (-N=N-) in their structure. Many methods exist for removing dyes from wastewater, such as chemical coagulation, flocculation, precipitation, and ozonation, but these methods have numerous disadvantages, and alternative methods of wastewater decolorization are needed. Titanium-mediated photodegradation is generally used because titanium dioxide (TiO2) is a non-toxic, insoluble, inexpensive, and highly reactive semiconductor. Melt electrospinning is an attractive manufacturing process for producing thin fibers from polypropylene (PP). PP fibers are widely used in filtration owing to their unique properties, such as hydrophobicity, good mechanical strength, chemical resistance, and low-cost production. In this study, we aimed to investigate the effect of titanium nanoparticle localization and amine modification on dye degradation, and we evaluated the applicability of the prepared chemically activated composite and pristine fabrics for a novel treatment of dyeing wastewater. A photocatalyzer material was prepared from TiO2 nanoparticles (nTi) and PP by a melt-electrospinning technique, and the electrospinning parameters of pristine PP and PP/nTi nanocomposite fabrics were optimized. Before functionalization with nTi, the surface of the fabrics was activated using glutaraldehyde (GA) and polyethyleneimine to promote dye degradation. Pristine PP and PP/nTi nanocomposite melt-electrospun fabrics were characterized using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Methyl orange (MO) was used as a model compound for the decolorization experiments. 
The photocatalytic performance of nTi-loaded pristine and nanocomposite melt-electrospun filters was investigated by varying the initial dye concentration (10, 20, and 40 mg/L). nTi-PP composite fabrics were successfully processed into a uniform, fibrous network of beadless fibers with diameters of 800±0.4 nm. The process parameters were determined as a voltage of 30 kV, a working distance of 5 cm, a thermocouple and hot-coil temperature of 260-300 ºC, and a flow rate of 0.07 mL/h. SEM results indicated that the TiO2 nanoparticles were deposited uniformly on the nanofibers, and XPS results confirmed the presence of titanium nanoparticles and the generation of amine groups after modification. According to the photocatalytic decolorization test results, the nTi-loaded GA-treated pristine and nTi-PP nanocomposite fabric filters have superior properties, reaching over 90% decolorization efficiency. In this work, melt-electrospun PP fabrics surface-functionalized with nTi were prepared as a photocatalyzer for wastewater treatment. The results show that melt-electrospun nTi-loaded GA-treated composite or pristine PP fabrics have great potential for use as photocatalytic filters for the decolorization of wastewater and thus warrant further investigation.
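The decolorization efficiency quoted above is conventionally computed from the dye concentration before and after irradiation (via Beer-Lambert proportionality, absorbance at the MO absorption maximum can stand in for concentration); a minimal sketch, with illustrative values rather than the study's measurements:

```python
def decolorization_efficiency(c0, c):
    """Percent decolorization from initial (c0) and final (c) dye
    concentration in mg/L (or proportional absorbance readings)."""
    if c0 <= 0:
        raise ValueError("initial concentration must be positive")
    return 100.0 * (c0 - c) / c0

# Illustrative: 10 mg/L methyl orange reduced to 0.8 mg/L -> 92% removal
print(decolorization_efficiency(10.0, 0.8))
```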

Keywords: titanium oxide nanoparticles, polypropylene, melt-electrospinning

Procedia PDF Downloads 261
1077 Exploring the Relationship Between Past and Present Reviews: The Influence of User Generated Content on Future Hotel Guest Experience Perceptions

Authors: Sacha Joseph-Mathews, Leili Javadpour

Abstract:

In the tourism industry, hoteliers spend millions annually on marketing and positioning efforts for their respective hotels, all in an effort to create a specific image in the minds of the consumer. Yet despite extensive efforts to seduce potential hotel guests with sophisticated advertising messages generated by hotel entities, consumers continue to mistrust corporate branding, preferring instead to place their trust in the reviews of their consumer peers. In today's complex and cluttered marketplace, online reviews can serve as a mediator for consumers who do not have actual knowledge of and experience with the brand but are in the process of deciding whether or not to engage in a consumption exercise. Traditionally, consumers have used online reviews as a source of comfort and confirmation of a product or service's positioning. Today, however, very few customers make any purchase decision without first researching existing user reviews, making reviews more of a necessity than a luxury in the purchase decision process. The influence of user generated content (UGC) is amplified in the tourism industry, as more than a third of potential hotel guests will not book a room without first reading a review. As corporate branding becomes less relevant and online reviews become more important, how much of the consumer's stay expectations are dictated by existing UGC? Moreover, as hotel guests experience a hotel through the lens of existing reviews, how much of their stay, and in turn their own review, is influenced by the reviews they read? Ultimately, there is the potential for UGC to dictate what potential guests will be most critical about and/or most focused on during their stay. If UGC is a stronger influencer in the purchase decision process than corporate branding, does it not have the potential to dictate the entire stay experience by influencing the expectations of the guest prior to their arrival at the property? 
For example, suppose a hotel is an eco-destination whose website branding focuses on sustainability and the retreat nature of the property, yet guest reviews constantly discuss how unsatisfactory the service and food were, with no mention of nature or sustainability; will future reviews then focus primarily on the food? Using text analysis software to examine over 25,000 online reviews, we explore the extent to which new reviews are influenced by the wording used in previous reviews for a hotel property versus content generated by corporate positioning. Additionally, we investigate how distinct hotel-related UGC is across different types of tourism destinations. Our findings suggest that UGC can have a greater impact on future reviews than corporate branding, and that there is more cohesiveness across the UGC of different types of hotel properties than anticipated. A model of User Generated Content Influence is presented, and the managerial impact of the power of online reviews to trump corporate branding and shape future user experiences is discussed.
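The kind of wording overlap examined here can be quantified, for instance, as a bag-of-words cosine similarity between a new review and past reviews; a minimal stdlib-only sketch (the tokenizer and the sample reviews are illustrative, not the authors' actual text-analysis software):

```python
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens; a deliberately simple tokenizer."""
    return re.findall(r"[a-z']+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words term-count vectors."""
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

past = "the food was cold and the service slow"
new = "slow service and disappointing food"
print(round(cosine_similarity(past, new), 2))  # prints 0.57
```

A high similarity between a new review and the pool of past reviews (rather than the hotel's own branding copy) is the sort of signal the study's influence model looks for.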

Keywords: user generated content, UGC, corporate branding, online reviews, hotels and tourism

Procedia PDF Downloads 84
1076 Challenges for Competency-Based Learning Design in Primary School Mathematics in Mozambique

Authors: Satoshi Kusaka

Abstract:

The term 'competency' is attracting considerable scholarly attention worldwide with the advance of globalization in the 21st century and the arrival of a knowledge-based society. In the current world environment, familiarity with varied disciplines is regarded as vital for personal success. The idea of a competency-based educational system was mooted by the 'Definition and Selection of Competencies (DeSeCo)' project conducted by the Organization for Economic Cooperation and Development (OECD). Further, attention to this topic is not limited to developed countries; it can also be observed in developing countries. For instance, the importance of a competency-based curriculum was mentioned in the '2013 Harmonized Curriculum Framework for the East African Community', which recommends key competencies that should be developed in primary schools. The introduction of such curricula and the review of programs are actively being carried out, primarily in the East African Community but also in neighboring nations. Taking Mozambique as a case in point, the present paper examines the conception of 'competency' as a target of frontline education in developing countries. It also aims to discover the manner in which the syllabus, textbooks, and lessons, among other things, in primary-level mathematics education are developed and to determine the challenges faced in the process. This study employs the perspective of competency-based education design to analyze how the term 'competency' is defined in the primary-level mathematics syllabus, how it is reflected in the textbooks, and how lessons are actually developed. 'Practical competency' is mentioned in the syllabus, and the description of the term lays emphasis on learners' ability to interactively apply socio-cultural and technical tools, which is one of the key competencies advocated in the OECD's 'Definition and Selection of Competencies' project. 
However, most of the content of the textbooks pertains to 'basic academic ability', and in actual classroom practice, teachers often impart lessons straight from the textbooks. It is clear that the aptitude of teachers and their classroom routines are greatly dependent on the cultivation of their own 'practical competency' as it is defined in the syllabus. In other words, there is great divergence between the 'syllabus', which is the intended curriculum, and the content of the 'textbooks'. In fact, the material in the textbooks should serve as the bridge between the syllabus, which forms the guideline, and the lessons, which represent the 'implemented curriculum'. Moreover, the results obtained from this investigation reveal that the problem can only be resolved through the cultivation of 'practical competency' in teachers, which is currently insufficient.

Keywords: competency, curriculum, mathematics education, Mozambique

Procedia PDF Downloads 184
1075 Policies to Reduce the Demand and Supply of Illicit Drugs in Latin America: 2004 to 2016

Authors: Ana Caroline Ibrahim Lino, Denise Bomtempo Birche de Carvalho

Abstract:

The background of this research is the international process of control and monitoring of illicit psychoactive substances that commenced in the early 20th century. This process was intensified with the UN Single Convention on Narcotic Drugs of 1961 and culminated in the 1970s with the "War on Drugs", a doctrine undertaken by the United States of America. Since then, the phenomenon of drug prohibition has been pushing debates around alternative public policies to confront its consequences at the global level and in the specific context of Latin America. Previous research has answered the following key questions: a) With what characteristics and models has the international illicit drug control system been consolidated in Latin America with the creation of the Organization of American States (OAS) and the Inter-American Drug Abuse Control Commission (CICAD)? b) What drug policies and programs were determined as guidelines for the member states by the OAS and CICAD? The present paper mainly addresses the analysis of the drug strategies developed by the OAS/CICAD for the Americas from 2004 to 2016. The primary sources were extracted from OAS/CICAD documents and reports listed on the websites of these organizations. Secondary sources refer to bibliographic research on the subject with the following descriptors: illicit drugs, public policies, international organizations, OAS, CICAD, and reducing the demand and supply of illicit drugs. The "content analysis" technique was used to organize the collected material and to choose the axes of analysis. The results show that the policies, strategies, and action plans for Latin America had been focused on anti-drug actions from the creation of the Commission until 2010. In discourse, policies to reduce drug demand and supply were presented as being of great importance for solving the problem. 
However, the real focus was on eliminating the substances by controlling the production, marketing, and distribution of illicit drugs; little attention was given to users and their families. The research is of great relevance to Social Work: the guidelines and parameters of the social worker's profession are in line with the need for the social, ethical, and political strengthening of any dimension that guarantees the rights of users of psychoactive substances. In addition, it contributed to the understanding of the political, economic, social, and cultural factors that structure prohibitionism, whose matrix anchors deprivation of rights and violence.

Keywords: illicit drug policies, international organizations, Latin America, prohibitionism, reduce the demand and supply of illicit drugs

Procedia PDF Downloads 155
1074 Influence of Structured Capillary-Porous Coatings on Cryogenic Quenching Efficiency

Authors: Irina P. Starodubtseva, Aleksandr N. Pavlenko

Abstract:

Quenching is the generally accepted term for the process of rapid cooling of a solid that is overheated above the thermodynamic limit of liquid superheat. The main objective of many previous studies on quenching has been to find ways to reduce the total time of the transient process. Computational experiments were performed to simulate quenching by a falling liquid-nitrogen film of an extremely overheated vertical copper plate with a structured capillary-porous coating, produced by directed plasma spraying. Owing to the complexity of the physical pattern of quenching, from chaotic processes to phase transition, the mechanism of heat transfer during quenching is still not sufficiently understood. To the best of our knowledge, no information exists on when and how the first stable liquid-solid contact occurs and how the local contact area begins to expand; here we have more models and hypotheses than reliably established facts. The peculiarities of the quench front dynamics and of heat transfer in the transient process are studied. The numerical model developed here determines the quench front velocity and the temperature fields in the heater, varying in space and time. The dynamic pattern of the running quench front obtained numerically correlates satisfactorily with the pattern observed in experiments. Capillary-porous coatings with straight and reverse orientation of crests are investigated. The results show that the cooling rate is influenced by the thermal properties of the coating as well as by the structure and geometry of the protrusions. The presence of a capillary-porous coating significantly affects the dynamics of quenching and reduces the total quenching time more than threefold. This effect is due to the fact that the initialization of a quench front on a plate with a capillary-porous coating occurs at a temperature significantly higher than the thermodynamic limit of liquid superheat, at which a stable solid-liquid contact is thermodynamically impossible. 
Waves present on the liquid-vapor interface and protrusions on the complex micro-structured surface destabilize the vapor film and produce local liquid-solid micro-contacts even though the average integral surface temperature is much higher than the liquid superheat limit. The reliability of the results is confirmed by direct comparison with experimental data on the quench front velocity, the quench front geometry, and the change of surface temperature over time. Knowledge of the quench front velocity and of the total time of the transient process is required for solving practically important problems of nuclear reactor safety.
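The structure of such a simulation can be illustrated with a one-dimensional explicit finite-difference sketch in which one face of a superheated slab is suddenly held at the coolant temperature and the quench front is tracked as the locus where the wall drops below an assumed rewetting temperature; all material values and the rewetting threshold here are illustrative, not the model of the study:

```python
# 1-D explicit finite-difference conduction in a slab whose left face
# is suddenly held at the coolant temperature. Material values are
# copper-like but illustrative.
alpha = 1.1e-4               # thermal diffusivity, m^2/s
L, n = 0.1, 101              # slab length (m), number of nodes
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha   # explicit stability: dt <= dx^2 / (2*alpha)

T = [400.0] * n              # initial superheat, deg C
T[0] = -196.0                # liquid-nitrogen-cooled face (fixed)
T_rewet = 150.0              # assumed rewetting (quench) temperature

def step(T):
    """One explicit time step of the 1-D heat equation; both end
    nodes are held fixed."""
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + alpha * dt / (dx * dx) * (T[i+1] - 2*T[i] + T[i-1])
    return Tn

def front_position(T):
    """Rightmost contiguous node already cooled below T_rewet."""
    j = 0
    while j + 1 < n and T[j + 1] < T_rewet:
        j += 1
    return j * dx

for _ in range(2000):
    T = step(T)
print(front_position(T))     # quench front location (m) after 2000 steps
```

The study's model additionally resolves the coating structure and two-dimensional fields, but the same ingredients appear: a stability-limited time step, a cooled boundary, and a front defined by a rewetting criterion.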

Keywords: capillary-porous coating, heat transfer, Leidenfrost phenomenon, numerical simulation, quenching

Procedia PDF Downloads 128
1073 Obesity and Cancer: Current Scientific Evidence and Policy Implications

Authors: Martin Wiseman, Rachel Thompson, Panagiota Mitrou, Kate Allen

Abstract:

Since 1997 World Cancer Research Fund (WCRF) International and the American Institute for Cancer Research (AICR) have been at the forefront of synthesising and interpreting the accumulated scientific literature on the link between diet, nutrition, physical activity and cancer, and deriving evidence-based Cancer Prevention Recommendations. The 2007 WCRF/AICR 2nd Expert Report was a landmark in the analysis of evidence linking diet, body weight and physical activity to cancer and led to the establishment of the Continuous Update Project (CUP). In 2018, as part of the CUP, WCRF/AICR will publish a new synthesis of the current evidence and update the Cancer Prevention Recommendations. This will ensure that everyone - from policymakers and health professionals to members of the public - has access to the most up-to-date information on how to reduce the risk of developing cancer. Overweight and obesity play a significant role in cancer risk, and rates of both are increasing in many parts of the world. This session will give an overview of new evidence relating obesity to cancer since the 2007 report. For example, since the 2007 Report, the number of cancers for which obesity is judged to be a contributory cause has increased from seven to eleven. The session will also shed light on the well-established mechanisms underpinning obesity and cancer links. Additionally, the session will provide an overview of diet and physical activity related factors that promote positive energy imbalance, leading to overweight and obesity. Finally, the session will highlight how policy can be used to address overweight and obesity at a population level, using WCRF International’s NOURISHING Framework. NOURISHING formalises a comprehensive package of policies to promote healthy diets and reduce obesity and non-communicable diseases; it is a tool for policymakers to identify where action is needed and assess if an approach is sufficiently comprehensive. 
The framework brings together ten policy areas across three domains: food environment, food system, and behaviour change communication. The framework is accompanied by a regularly updated database providing an extensive overview of implemented government policy actions from around the world. In conclusion, the session will provide an overview of obesity and cancer, highlighting the links seen in the epidemiology and exploring the mechanisms underpinning these, as well as the influences that help determine overweight and obesity. Finally, the session will illustrate policy approaches that can be taken to reduce overweight and obesity worldwide.

Keywords: overweight, obesity, nutrition, cancer, mechanisms, policy

Procedia PDF Downloads 152
1072 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. 
The relevant length scale is taken to be half of the size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
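The selection rule, reporting the slope from the scale at which the fit variance is minimal, can be sketched as follows. This toy version refits a centered plane at each window size rather than aggregating 2x2 sums additively as the paper's algorithm does, but the minimum-variance selection is the same (window sizes and the DEM are illustrative):

```python
def plane_fit(dem, r0, c0, half):
    """Least-squares plane z = a*i + b*j + c over a (2*half+1)^2
    window centered at (r0, c0); returns (slope magnitude, residual
    variance). With centered integer coords the sums of i, j, and i*j
    vanish, so the normal equations decouple."""
    pts = [(i, j, dem[r0 + i][c0 + j])
           for i in range(-half, half + 1)
           for j in range(-half, half + 1)]
    n = len(pts)
    sx = sum(i * i for i, j, z in pts)
    sy = sum(j * j for i, j, z in pts)
    a = sum(i * z for i, j, z in pts) / sx   # dz / d(row)
    b = sum(j * z for i, j, z in pts) / sy   # dz / d(col)
    c = sum(z for i, j, z in pts) / n        # mean elevation
    var = sum((z - (a * i + b * j + c)) ** 2 for i, j, z in pts) / n
    return (a * a + b * b) ** 0.5, var

def adaptive_slope(dem, r0, c0, halves=(1, 2, 4)):
    """Slope at the window size that minimizes residual variance."""
    best = min((plane_fit(dem, r0, c0, h) for h in halves),
               key=lambda sv: sv[1])
    return best[0]

# Illustrative DEM: an exact plane, so every scale agrees on the slope.
dem = [[0.5 * i + 0.2 * j for j in range(9)] for i in range(9)]
print(adaptive_slope(dem, 4, 4))
```

On noisy data the smallest window tracks the noise and larger windows smooth it out, so the minimum-variance criterion effectively picks the landform's local length scale.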

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 122
1071 Experimental Investigation of Hydrogen Addition in the Intake Air of Compression Ignition Engines Running on a Biodiesel Blend

Authors: Hendrick Maxil Zárate Rocha, Ricardo da Silva Pereira, Manoel Fernandes Martins Nogueira, Carlos R. Pereira Belchior, Maria Emilia de Lima Tostes

Abstract:

This study investigates experimentally the effects of hydrogen addition in the intake manifold of a diesel generator operating with a 7% biodiesel-diesel oil blend (B7). An experimental apparatus was set up to conduct performance and emissions tests on a single-cylinder, air-cooled diesel engine. This setup consisted of a generator set connected to a wire-wound resistor load bank that was used to vary engine load. In addition, a flowmeter was used to determine the hydrogen volumetric flow rate, and a digital anemometer coupled with an air box measured the air flow rate. Furthermore, a digital precision electronic scale was used to measure engine fuel consumption, and a gas analyzer was used to determine exhaust gas composition and exhaust gas temperature. A thermocouple was installed near the exhaust collector to measure cylinder temperature. In-cylinder pressure was measured using an AVL Indumicro data acquisition system with a piezoelectric pressure sensor. An AVL optical encoder was installed on the crankshaft and synchronized with in-cylinder pressure in real time. The experimental procedure consisted of injecting hydrogen into the engine intake manifold at mass concentrations of 2, 6, 8, and 10% of total fuel mass (B7 + hydrogen), which represented energy fractions of 5, 15, 20, and 24% of total fuel energy, respectively. Because of the hydrogen addition, the total amount of fuel energy introduced increased, and the generator's fuel-injection governor prevented any increase in engine speed. Several conclusions can be drawn from the test results. A reduction in specific fuel consumption with increasing hydrogen concentration was noted. Likewise, emissions of carbon dioxide (CO2), carbon monoxide (CO), and unburned hydrocarbons (HC) decreased as the hydrogen concentration increased. On the other hand, nitrogen oxide emissions (NOx) increased because average temperatures inside the cylinder were higher. 
There was also an increase in peak cylinder pressure and in the heat release rate inside the cylinder, since the fuel ignition delay was smaller at higher hydrogen content. All this indicates that hydrogen promotes faster combustion and higher heat release rates and can be an important additive to all kinds of fuels used in diesel generators.
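The mass-to-energy-fraction mapping reported above follows directly from the lower heating values of the two fuels; a quick check, assuming typical LHVs of roughly 120 MJ/kg for hydrogen and 42.5 MJ/kg for the B7 blend (both are textbook approximations, not measurements from the study):

```python
LHV_H2 = 120.0   # MJ/kg, typical lower heating value of hydrogen
LHV_B7 = 42.5    # MJ/kg, assumed for the B7 diesel-biodiesel blend

def energy_fraction(mass_frac_h2):
    """Fraction of total fuel energy supplied by hydrogen, given the
    hydrogen fraction of total fuel mass."""
    e_h2 = mass_frac_h2 * LHV_H2
    e_b7 = (1.0 - mass_frac_h2) * LHV_B7
    return e_h2 / (e_h2 + e_b7)

# 2% mass -> ~5% energy, 6% -> ~15%, 8% -> ~20%, 10% -> ~24%,
# matching the energy fractions quoted in the abstract.
for m in (0.02, 0.06, 0.08, 0.10):
    print(f"{m:.0%} mass -> {energy_fraction(m):.0%} energy")
```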

Keywords: diesel engine, hydrogen, dual fuel, combustion analysis, performance, emissions

Procedia PDF Downloads 347
1070 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets

Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar

Abstract:

Enhancing rocket efficiency by controlling external factors in solid rocket motors has been an active area of research for most terrestrial and extra-terrestrial system operations. Appreciable work has been done, but the complexity of the problem, with its heterogeneous heat and mass transfer, has prevented thorough understanding. Severe incidents are on record, amounting to irreplaceable losses of human life, instruments, and facilities, despite the huge sums invested every year. The coupled effect of an external heat source and an external heat sink is an aspect yet to be articulated in combustion. Better understanding of this coupled phenomenon will lead to higher safety standards, more efficient missions, and reduced hazard risks, with better design, validation, and testing. The experiments will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing to better combustion and fire safety, which are very important for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets, which, compounded by poor combustion efficiency, motivates research efforts to evolve superior rockets. The work has real engineering, scientific, and practical significance for systems and applications. One potential application is solid rocket motors (S.R.M.). The study may help in: (i) understanding the effect on the efficiency of core engines due to the primary boosters if considered as a source, (ii) choosing suitable heat sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission, and (iii) giving an idea of how the preheating of a successive stage by the previous stage acting as a source may affect the mission. The present work governs the resultant temperature and thus the heat transfer, which is expected to be non-linear because of heterogeneous heat and mass transfer. 
The study will deepen the understanding of controlled inter-energy conversions and the coupled effect of external source/sink(s) surrounding the burning fuel eventually leading to better combustion thus, better propulsion. The work is motivated by the need to have enhanced fire safety and better rocket efficiency. The specific objective of the work is to understand the coupled effect of external heat source and sink on propellant burning and to investigate the role of key controlling parameters. Results as of now indicate that there exists a singularity in the coupled effect. The dominance of the external heat sink and heat source decides the relative rocket flight in Solid Rocket Motors (S.R.M).

Keywords: coupled effect, heat transfer, sink, solid rocket motors, source

Procedia PDF Downloads 217
1069 Work Related Musculoskeletal Disorder: A Case Study of Office Computer Users in Nigerian Content Development and Monitoring Board, Yenagoa, Bayelsa State, Nigeria

Authors: Tamadu Perry Egedegu

Abstract:

Rapid growth in the use of electronic data has affected both employees and the workplace. Our experience shows that jobs with multiple risk factors have a greater likelihood of causing work-related musculoskeletal disorders (WRMSDs), depending on the duration, frequency, and/or magnitude of exposure to each; it is therefore important that ergonomic risk factors be considered in light of their combined effect in causing or contributing to WRMSDs. Awkward posture and long hours in front of visual display terminals can result in WRMSDs. The study investigated musculoskeletal disorders among office workers and shall contribute to awareness creation on the causes and consequences of WRMSDs due to a lack of ergonomics training. The study was conducted using an observational cross-sectional design. A sample of 109 respondents was drawn from the target population through a purposive sampling method. The sources of data were both primary and secondary: primary data were collected through questionnaires, and secondary data were sourced from journals, textbooks, and internet materials. Questionnaires were the main instrument for data collection and were designed in a YES or NO format according to the study objectives. Content validity approval was used to ensure that the variables were adequately covered. The reliability of the instrument was established through the test-retest method, yielding a reliability index of 0.84. The data collected from the field were analyzed with descriptive statistics: charts, percentages, and means. The study found that the most affected body regions were the upper back, followed by the lower back, neck, wrist, shoulder, and eyes, while the least affected body parts were the knee, calf, and ankle. 
Furthermore, the prevalence of work-related 'musculoskeletal' malfunctioning was linked with long working hours (6 - 8 hrs.) per day, lack of back support on their seats, glare on the monitor, inadequate regular break, repetitive motion of the upper limbs, and wrist when using the computer. Finally, based on these findings some recommendations were made to reduce the prevalent of WRMSDs among office workers.
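As an illustration of the test-retest approach described above, the reliability index can be computed as the Pearson correlation between two administrations of the same questionnaire. The scores below are hypothetical and only sketch the calculation; the study's actual data are not reproduced here.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two paired lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical total questionnaire scores for 8 respondents,
# administered twice (test and retest).
test_scores   = [12, 9, 15, 7, 11, 14, 8, 10]
retest_scores = [11, 9, 14, 8, 12, 14, 7, 10]

reliability = pearson_r(test_scores, retest_scores)
```

A reliability index near the study's reported 0.84 or higher would indicate that the instrument gives stable scores across administrations.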

Keywords: work related musculoskeletal disorder, Nigeria, office computer users, ergonomic risk factor

Procedia PDF Downloads 233
1068 Enterprises and Social Impact: A Review of the Changing Landscape

Authors: Suzhou Wei, Isobel Cunningham, Laura Bradley McCauley

Abstract:

Social enterprises play a significant role in resolving social issues in the modern world. In contrast to traditional commercial businesses, their main goal is to address social concerns rather than primarily maximize profits. This phenomenon in entrepreneurship is presenting new opportunities and different operating models, and resulting in modified approaches to measuring success beyond traditional market share and margins. This paper explores social enterprises to clarify their roles and approaches in addressing grand challenges related to social issues. In doing so, it analyses the key differences between traditional businesses and social enterprises, such as their operating model and value proposition, to understand their contributions to society. The research presented in this paper responds to calls for research to better understand social enterprises and entrepreneurship, but also to explore the dynamics between profit-driven and socially-oriented entities to deliver mutual benefits. This paper, which examines the features of commercial business, suggests their primary focus is profit generation, economic growth, and innovation. Beyond the chase of profit, it highlights the critical role of innovation typical of successful businesses. This, in turn, promotes economic growth, creates job opportunities, and makes a major positive impact on people's lives. In contrast, the motivations upon which social enterprises are founded relate to a commitment to address social problems rather than maximize profits. These entities combine entrepreneurial principles with a commitment to deliver social impact and address grand challenges, creating a distinctive category within the broader enterprise and entrepreneurship landscape. The motivations for establishing a social enterprise are diverse, encompassing personal fulfillment, a genuine desire to contribute to society, and a focus on achieving impactful accomplishments.
The paper also discusses the collaboration between commercial businesses and social enterprises, which is viewed as a strategic approach to addressing grand challenges more comprehensively and effectively. Finally, this paper highlights the evolving and diverse expectations placed on all businesses to actively contribute to society beyond profit-making. We conclude that there is an unrealized and underdeveloped potential for collaboration between commercial businesses and social enterprises to produce greater and long-lasting social impacts. Overall, the aim of this research is to encourage more investigation of the complex relationship between economic and social objectives and contributions through a better understanding of how and why businesses might address social issues. Ultimately, the paper positions itself as a tool for understanding the evolving landscape of business engagement with social issues and advocates for collaborative efforts to achieve sustainable and impactful outcomes.

Keywords: business, social enterprises, collaboration, social issues, motivations

Procedia PDF Downloads 42
1067 Photovoltaic Modules Fault Diagnosis Using Low-Cost Integrated Sensors

Authors: Marjila Burhanzoi, Kenta Onohara, Tomoaki Ikegami

Abstract:

Faults in photovoltaic (PV) modules should be detected as early and as completely as possible. Conventional fault detection methods such as electrical characterization, visual inspection, infrared (IR) imaging, ultraviolet fluorescence, and electroluminescence (EL) imaging are used for this purpose, but they either fail to detect the location or category of the fault, or they require expensive equipment and are not convenient for on-site application. Hence, these methods are not suitable for monitoring small-scale PV systems, and low-cost, efficient inspection techniques with the ability of on-site application are indispensable. In this study, in order to establish such a technique, the correlation between faults and the magnetic flux density on the surface of crystalline PV modules is investigated. Magnetic flux on the surface of normal and faulted PV modules is measured under short-circuit and illuminated conditions using two different sensor devices. One device is made of small integrated sensors, namely a 9-axis motion tracking sensor with an embedded 3-axis electronic compass, an IR temperature sensor, an optical laser position sensor, and a microcontroller. This device measures the X, Y, and Z components of the magnetic flux density (Bx, By, and Bz) a few mm above the surface of a PV module and outputs the data as line graphs in a LabVIEW program. The second device is made of a laser optical sensor and two magnetic line sensor modules, each consisting of 16 magnetic sensors. This device scans the magnetic field on the surface of the PV module and outputs the data as a 3D surface plot of the magnetic flux intensity in a LabVIEW program. A PC equipped with LabVIEW software is used for data acquisition and analysis for both devices. To show the effectiveness of the method, measured results are compared to those of a normal reference module and to their EL images.
Through the experiments, it was confirmed that the magnetic field in the faulted areas has distinct profiles that can be clearly identified in the measured plots. Measurement results showed a perfect correlation with the EL images, and using the position sensors, the exact location of the faults was identified. The method was applied to different modules, and various faults were detected with it. The proposed method offers on-site measurement and real-time diagnosis. Since simple sensors are used to make the device, it is low cost and convenient for small-scale or residential PV system owners.
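The comparison against a normal reference module could, in principle, be automated by flagging grid cells of the scanned flux map that deviate from the reference beyond a tolerance. The following Python sketch is an illustrative assumption about that workflow, not the authors' LabVIEW implementation; the grid values and threshold are hypothetical.

```python
def flag_fault_cells(scan, reference, threshold):
    """Flag grid cells where the measured flux density deviates from the
    reference module's map by more than `threshold` (same units as the maps,
    e.g. microtesla). Returns (row, column) indices of suspect cells."""
    flagged = []
    for i, (row_s, row_r) in enumerate(zip(scan, reference)):
        for j, (b, b_ref) in enumerate(zip(row_s, row_r)):
            if abs(b - b_ref) > threshold:
                flagged.append((i, j))
    return flagged

# Hypothetical 3x4 Bz maps (arbitrary units): one cell deviates strongly.
reference = [[1.0, 1.1, 1.0, 0.9],
             [1.0, 1.0, 1.1, 1.0],
             [0.9, 1.0, 1.0, 1.0]]
scan      = [[1.0, 1.1, 1.0, 0.9],
             [1.0, 2.4, 1.1, 1.0],
             [0.9, 1.0, 1.0, 1.0]]

faults = flag_fault_cells(scan, reference, threshold=0.5)
```

The flagged indices would then be mapped back to physical positions on the module using the laser position sensor readings.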

Keywords: fault diagnosis, fault location, integrated sensors, PV modules

Procedia PDF Downloads 221
1066 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exists a plethora of methods in the scientific literature that tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted losses for loan providers. We add to this context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal.
A consequence of our chosen technique is that we can identify and discard client properties that do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare our approach with state-of-the-art methods, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, among them the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
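The expression-tree representation described above (operator nodes, constant/variable leaves, pre-order flattening for mutation and crossover) can be sketched in a few lines. This is a minimal illustrative Python analogue of the C# LINQ expression trees, not the authors' implementation; the example formula and property names are hypothetical.

```python
import math

class Node:
    """One tree node: an operator name, a variable name, or a constant."""
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

OPS = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b,
       "sin": lambda a: math.sin(a)}

def evaluate(node, env):
    """Recursively evaluate the tree; variables are looked up in `env`."""
    if node.value in OPS:
        args = [evaluate(c, env) for c in node.children]
        return OPS[node.value](*args)
    if isinstance(node.value, str):
        return env[node.value]       # applicant property, e.g. age
    return node.value                # numeric constant

def flatten(node):
    """Pre-order traversal: the flat form mutation/crossover operate on."""
    out = [node]
    for c in node.children:
        out.extend(flatten(c))
    return out

# score = age * 0.1 + loan_duration  (a purely illustrative formula)
tree = Node("add", [Node("mul", [Node("age"), Node(0.1)]),
                    Node("loan_duration")])
score = evaluate(tree, {"age": 30, "loan_duration": 12})
flat = flatten(tree)
```

A variable absent from every leaf of the evolved tree plays no role in the score, which is how the dimensionality reduction mentioned above falls out of the representation.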

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 112
1065 Development and Experimental Evaluation of a Semiactive Friction Damper

Authors: Juan S. Mantilla, Peter Thomson

Abstract:

Seismic events may result in discomfort for building occupants, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing construction costs and design forces. Structural control systems arise as an alternative to reduce these dynamic responses. Commonly used control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage that they are optimal only for a range of sliding forces, outside of which their efficiency decreases. This implies that each passive friction damper is designed, built, and commercialized for a specific sliding/clamping force, at which the damper shifts from a locked state to a slip state, where it dissipates energy through friction. The risk of this efficiency varying with the sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In this case, the expected forces in the building can change and thus considerably reduce the efficiency of a damper designed for a specific sliding force. It is also evident that when a seismic event occurs, the forces on each floor vary in time, which means that the damper's efficiency is not optimal at all times. Semi-active friction devices adapt their sliding force to keep the damper in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost semi-active variable friction damper (SAVFD), at reduced scale, to reduce vibrations of structures subjected to earthquakes.
The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor controlled by (3) an Arduino board, which acquires accelerations or displacements from (4) sensors on the immediately upper and lower floors, and (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed, and the SAVFD and the structure were experimentally characterized. A numerical model of the structure and the SAVFD was developed based on this dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally on a shaking table using earthquake and frequency-chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations in comparison to the uncontrolled structure.
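A decentralized semi-active friction law of the kind described, where each damper adjusts its clamping force from locally measured responses so that it tends to remain in the slipping phase, might be sketched as follows. This is a hedged illustration, not the authors' control algorithm; the gain and force limits are hypothetical.

```python
def clamping_force(rel_velocity, f_max, f_min, gain):
    """Illustrative decentralized semi-active law: command a clamping force
    proportional to the magnitude of the inter-story relative velocity,
    saturated between the actuator limits, so that the friction interface
    tends to stay at its slip threshold rather than locking up."""
    f = gain * abs(rel_velocity)
    return max(f_min, min(f_max, f))

# Hypothetical actuator limits (N) and gain (N*s/m).
F_MAX, F_MIN, GAIN = 200.0, 10.0, 500.0

low  = clamping_force(0.005, F_MAX, F_MIN, GAIN)  # small drift rate: floor force
mid  = clamping_force(0.10,  F_MAX, F_MIN, GAIN)  # proportional region
high = clamping_force(1.0,   F_MAX, F_MIN, GAIN)  # saturated at F_MAX
```

In the physical device, the commanded force would be realized by the servomotor acting on the hydraulic brake, with the Arduino updating the command at each control step.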

Keywords: earthquake response, friction damper, semiactive control, shaking table

Procedia PDF Downloads 375
1064 The Correlation between Musculoskeletal Disorders and Body Postures during Playing among Guitarists

Authors: Navah Z. Ratzon, Shlomit Cohen, Sigal Portnoy

Abstract:

This work focuses on posture and risk factors for musculoskeletal disorders in guitarists, who constitute the largest group of musicians today. The source of the problems experienced by these musicians is linked to physical, psychosocial, and personal risk factors. These muscular problems are referred to as playing-related musculoskeletal disorders (PRMD). There is not enough research that specifically studies guitar players, and to the best of our knowledge, there is almost no reference to the characteristics of their movement patterns while they play, despite the high prevalence of PRMD in this population. Kinematic research may provide a basis for the development of a prevention plan for this population and the unique characteristics of their playing patterns. The aim of the study was to investigate the correlation between risk factors for PRMD among guitar players and self-reported musculoskeletal pain, and specifically to test whether there are differences in the kinematics of the upper body while playing in a sitting or standing posture. Twenty-five guitarists, aged 18-35, participated in the study. The methods included motion analysis using a motion capture system, anthropometric measurements, and questionnaires relating to risk factors. The questionnaires used were the Standardized Nordic Questionnaire for the Analysis of Musculoskeletal Symptoms, the Demand Control Support Questionnaire, and a questionnaire of personal details. All of the study participants complained of musculoskeletal pain in the past year, the most frequent complaints being in the left wrist. Statistically significant correlations were found between biodemographic indices and reports of pain in the past year and the previous week. No significant correlations were found between physical posture while playing and reports of pain among professional guitarists.
However, differences were found in several kinematic parameters between seated and standing playing postures. In a majority of the joints, the joint angles while playing in a seated position were more extreme than those during standing. This finding may suggest a higher risk of musculoskeletal disorders while playing in a seated position. In conclusion, the results of the present research highlight the prevalence of musculoskeletal problems in guitar players and their correlation with various risk factors. The findings support the need for intervention in the form of prevention, through identifying the risk factors and addressing them. Attending to the person, their occupation, and their environment, which together form the basis of proper occupational therapy, can help meet this need.
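A basic kinematic parameter of the kind compared between postures is the angle at a joint, computed from three motion-capture marker positions. The sketch below assumes hypothetical marker coordinates and is not tied to the study's actual data.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint `b`, formed by markers a-b-c
    (e.g. the elbow angle from shoulder, elbow, and wrist markers)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(p * q for p, q in zip(v1, v2))
    n1 = math.sqrt(sum(p * p for p in v1))
    n2 = math.sqrt(sum(q * q for q in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical marker positions (metres): a right angle at the elbow.
shoulder = (0.0, 0.0, 0.0)
elbow    = (0.0, -0.3, 0.0)
wrist    = (0.25, -0.3, 0.0)

angle = joint_angle(shoulder, elbow, wrist)
```

Comparing such angles between seated and standing trials, joint by joint, is one way the "more extreme" seated angles reported above could be quantified.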

Keywords: body posture, motion tracking, PRMD, guitarists

Procedia PDF Downloads 222
1063 Non-Perturbative Vacuum Polarization Effects in One- and Two-Dimensional Supercritical Dirac-Coulomb System

Authors: Andrey Davydov, Konstantin Sveshnikov, Yulia Voronina

Abstract:

There is now considerable interest in the non-perturbative QED effects caused by the diving of discrete levels into the negative continuum in supercritical static or adiabatically slowly varying Coulomb fields, created by localized extended sources with Z > Z_cr. Such effects have attracted a considerable amount of theoretical and experimental activity, since in 3+1 D QED, for Z > Z_cr,1 ≈ 170, a non-perturbative reconstruction of the vacuum state is predicted, which should be accompanied by a number of nontrivial effects, including vacuum positron emission. Similar effects should be expected in both 2+1 D (planar graphene-based hetero-structures) and 1+1 D (the one-dimensional ‘hydrogen ion’). This report is devoted to the study of such essentially non-perturbative vacuum effects for supercritical Dirac-Coulomb systems in 1+1 D and 2+1 D, with the main attention given to the vacuum polarization energy. Although most works consider the vacuum charge density as the main polarization observable, the vacuum energy turns out to be no less informative and is in many respects complementary to the vacuum density. Moreover, the main non-perturbative effects, which appear in vacuum polarization for supercritical fields due to the levels diving into the lower continuum, show up even more clearly in the behavior of the vacuum energy, demonstrating explicitly their possible role in the supercritical region. Both in 1+1 D and 2+1 D, we first explore the renormalized vacuum density in the supercritical region using the Wichmann-Kroll method. Thereafter, taking into account the results for the vacuum density, we formulate the renormalization procedure for the vacuum energy. To evaluate the latter explicitly, an original technique, based on a special combination of analytical methods, computer algebra tools, and numerical calculations, is applied.
It is shown that, for a wide range of the external source parameters (the charge Z and size R), in the supercritical region the renormalized vacuum energy can deviate significantly from the perturbative quadratic growth, down to pronouncedly decreasing behavior with jumps of -2 mc^2, which occur each time the next discrete level dives into the negative continuum. In the considered range of variation of Z and R, the vacuum energy behaves like ~ -Z^2/R in 1+1 D and ~ -Z^3/R in 2+1 D, reaching deeply negative values. Such behavior confirms the assumption of the transmutation of the neutral vacuum into a charged one, and thereby of the spontaneous positron emission accompanying the emergence of the next vacuum shell, due to total charge conservation. Finally, we note that the methods developed for the vacuum energy evaluation in 2+1 D can, with minimal modifications, be carried over to the three-dimensional case, where the vacuum energy is expected to behave like ~ -Z^4/R and so could be competitive with the classical electrostatic energy of the Coulomb source.
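The jump structure and scaling behavior described above can be summarized compactly; the following restates the reported results in formula form rather than deriving them:

```latex
% Jump in the renormalized vacuum energy each time a discrete level
% dives into the negative continuum (by total charge conservation):
\Delta\mathcal{E}_{\mathrm{vac}} = -2\,mc^{2}.
% Leading supercritical behavior in the studied range of Z and R:
\mathcal{E}_{\mathrm{vac}} \sim -\,Z^{2}/R \quad (1{+}1\,\mathrm{D}), \qquad
\mathcal{E}_{\mathrm{vac}} \sim -\,Z^{3}/R \quad (2{+}1\,\mathrm{D}), \qquad
\mathcal{E}_{\mathrm{vac}} \sim -\,Z^{4}/R \quad (3{+}1\,\mathrm{D,\ expected}).
```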

Keywords: non-perturbative QED-effects, one- and two-dimensional Dirac-Coulomb systems, supercritical fields, vacuum polarization

Procedia PDF Downloads 197
1062 Changes in Heavy Metals Bioavailability in Manure-Derived Digestates and Subsequent Hydrochars to Be Used as Soil Amendments

Authors: Hellen L. De Castro e Silva, Ana A. Robles Aguilar, Erik Meers

Abstract:

Digestates are residual by-products, rich in nutrients and trace elements, which can be used as organic fertilisers on soils. However, due to the non-digestibility of these elements and the reduction of dry matter during the anaerobic digestion process, metal concentrations are higher in digestates than in feedstocks, which might hamper their use as fertilisers under the threshold values of some national policies. Furthermore, there is uncertainty regarding the amount of these elements assimilated by some crops, which might result in their bioaccumulation. Therefore, further processing of the digestate to obtain safe fertilising products has been recommended. This research analyses the effect of hydrothermal carbonization, applied as a thermal treatment, on the bioavailability of heavy metals in mono- and co-digestates derived from pig manure and from maize grown on contaminated land in France. The pig manure was collected from a novel stable system (VeDoWs, province of East Flanders, Belgium) that separates the collection of pig urine and feces, resulting in a solid fraction of manure with a high up-concentration of heavy metals and nutrients. Mono-digestion and co-digestion were conducted in semi-continuous reactors for 45 days under mesophilic conditions, after which the digestates were dried at 105 °C for 24 hours. Hydrothermal carbonization was then applied at a 1:10 solid/water ratio, to guarantee controlled experimental conditions, at different temperatures (180, 200, and 220 °C) and residence times (2 h and 4 h). During the process, the pressure was generated autogenously, and the reactor was cooled down after completing the treatments. The solid and liquid phases were separated through vacuum filtration, and the solid phase of each treatment, the hydrochar, was dried and ground for chemical characterization.
Different fractions (exchangeable/adsorbed fraction, F1; carbonate-bound fraction, F2; organic matter-bound fraction, F3; and residual fraction, F4) of selected heavy metals (Cd, Cr, and Ni) were determined in the digestates and derived hydrochars using the modified Community Bureau of Reference (BCR) sequential extraction procedure. The main results indicated a difference in heavy metal fractionation between the digestates and their derived hydrochars; however, the hydrothermal carbonization operating conditions did not have remarkable effects on heavy metal partitioning among the hydrochars of the proposed treatments. Based on the estimated potential ecological risk assessment, there was a one-level decrease (from considerable to moderate) when comparing the heavy metal partitioning in the digestates and the derived hydrochars.
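The BCR fractionation results can be summarized by the share of a metal residing in the non-residual (potentially more bioavailable) fractions. The sketch below assumes hypothetical concentrations; the fraction labels follow the text.

```python
def bcr_percentages(f1, f2, f3, f4):
    """Percentage of total metal in each BCR fraction:
    F1 exchangeable/adsorbed, F2 carbonate-bound,
    F3 organic matter-bound, F4 residual."""
    total = f1 + f2 + f3 + f4
    return [100.0 * f / total for f in (f1, f2, f3, f4)]

def mobile_share(f1, f2, f3, f4):
    """Potentially bioavailable share: the non-residual fractions F1+F2+F3."""
    return sum(bcr_percentages(f1, f2, f3, f4)[:3])

# Hypothetical Cd concentrations (mg/kg) per fraction,
# digestate vs. its derived hydrochar:
digestate_mobile = mobile_share(0.4, 0.3, 0.2, 0.1)
hydrochar_mobile = mobile_share(0.1, 0.2, 0.3, 0.4)
```

A shift of metal mass from F1-F3 toward the residual fraction F4 after hydrothermal carbonization, as in this toy comparison, is the kind of change that would lower the estimated ecological risk class.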

Keywords: heavy metals, bioavailability, hydrothermal treatment, bio-based fertilisers, agriculture

Procedia PDF Downloads 96
1061 Sources of Precipitation and Hydrograph Components of the Sutri Dhaka Glacier, Western Himalaya

Authors: Ajit Singh, Waliur Rahaman, Parmanand Sharma, Laluraj C. M., Lavkush Patel, Bhanu Pratap, Vinay Kumar Gaddam, Meloth Thamban

Abstract:

The Himalayan glaciers are a potential source of perennial water supply to Asia's major river systems, such as the Ganga, Brahmaputra, and Indus. In order to improve our understanding of the sources of precipitation and the hydrograph components of the interior Himalayan glaciers, it is important to decipher the sources of moisture and their contribution to the glaciers in this river system. To this end, we conducted an extensive pilot study on the Sutri Dhaka glacier, western Himalaya, during 2014-15. To determine the moisture sources, rain, surface snow, ice, and stream meltwater samples were collected and analyzed for stable oxygen (δ¹⁸O) and hydrogen (δD) isotopes. A two-component hydrograph separation was performed for the glacier stream using these isotopes, assuming that the contributions of rain, groundwater, and spring water are negligible, based on field studies and the available literature. To validate the results obtained from this hydrograph separation, snow and ice ablation were measured using a network of bamboo stakes and snow pits. The δ¹⁸O and δD in rain samples range from -5.3‰ to -20.8‰ and from -31.7‰ to -148.4‰, respectively. It is noteworthy that the rain samples showed enriched values early in the season (July-August) and progressively depleted values toward the end of the season (September), which could be due to the 'amount effect'. Similarly, old snow samples showed enriched isotopic values compared to fresh snow, which could be because of sublimation processes operating on the old surface snow. The δ¹⁸O and δD values in glacier ice samples range from -11.6‰ to -15.7‰ and from -31.7‰ to -148.4‰, whereas in the Sutri Dhaka meltwater stream they range from -12.7‰ to -16.2‰ and from -82.9‰ to -112.7‰, respectively. The mean deuterium excess (d-excess) value in all collected samples exceeds 16‰, which suggests that the predominant moisture source of precipitation is the Western Disturbances.
Our detailed estimates of the hydrograph separation of the Sutri Dhaka meltwater, using isotope hydrograph separation and glaciological field methods, agree within their uncertainties: the stream meltwater budget is dominated by glacier ice melt over snowmelt. The present study provides insights into the sources of moisture and the mechanisms controlling the isotopic characteristics of Sutri Dhaka glacier water, and helps in understanding the snow and ice melt components in the Chandra basin, western Himalaya.
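The two-component separation mentioned above follows the standard isotope mixing balance: with glacier ice melt and snowmelt as end-members, the ice-melt fraction of streamflow is f = (δ_stream - δ_snow)/(δ_ice - δ_snow), and the deuterium excess is d = δD - 8·δ¹⁸O. The values below are hypothetical, chosen only to illustrate the arithmetic, not the study's measured end-members.

```python
def ice_melt_fraction(delta_stream, delta_ice, delta_snow):
    """Two-component isotope mixing: fraction of streamflow supplied by
    glacier ice melt, with snowmelt as the second end-member.
    All delta values are d18O in per mil."""
    return (delta_stream - delta_snow) / (delta_ice - delta_snow)

def d_excess(delta_d, delta_18o):
    """Deuterium excess (per mil): d = dD - 8 * d18O."""
    return delta_d - 8.0 * delta_18o

# Hypothetical end-member values (per mil) for illustration:
f_ice = ice_melt_fraction(delta_stream=-14.5, delta_ice=-15.0, delta_snow=-12.0)
d = d_excess(delta_d=-100.0, delta_18o=-14.5)
```

A stream value sitting close to the ice end-member, as in this toy case, yields an ice-melt fraction well above one half, consistent with the ice-melt-dominated budget reported above; d-excess values above about 10‰ are the signature used in the text to point to Western Disturbance moisture.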

Keywords: D-excess, hydrograph separation, Sutri Dhaka, stable water isotope, western Himalaya

Procedia PDF Downloads 149