Search results for: balanced robot
The Usage of Negative Emotive Words in Twitter
Authors: Martina Katalin Szabó, István Üveges
Abstract:
In this paper, the usage of negative emotive words is examined on the basis of a large Hungarian Twitter database via NLP methods. The data are analysed from a gender point of view, as well as for changes in language usage over time. The term negative emotive word refers to words that, on their own, without context, have semantic content associated with negative emotion but that in particular cases may function as intensifiers (e.g. rohadt jó ’damn good’) or as sentiment expressions with positive polarity despite their negative prior polarity (e.g. brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). Based on the findings of several authors, the same phenomenon can be found in other languages, so it is probably a language-independent feature. For the present analysis, 67783 tweets were collected: 37818 tweets (19580 written by females and 18238 by males) in 2016 and 48344 (18379 written by females and 29965 by males) in 2021. The goal of the research was to compile two datasets comparable from the viewpoint of semantic changes as well as of gender specificities. An exhaustive lexicon of Hungarian negative emotive intensifiers was also compiled (containing 214 words). After basic preprocessing steps, tweets were processed by ‘magyarlanc’, a toolkit written in Java for the linguistic processing of Hungarian texts. Then, the frequency and collocation features of all these words in our corpus were automatically analyzed (via the analysis of parts of speech and sentiment values of the co-occurring words). Finally, the results of all four subcorpora were compared. Some of the main outcomes of our analyses are as follows: There are almost four times fewer cases in the male corpus than in the female corpus in which a negative emotive intensifier modified a negative polarity word in the tweet (e.g., damn bad).
At the same time, male authors used these intensifiers more frequently to modify a positive polarity or a neutral word (e.g., damn good and damn big). Results also pointed out that, in contrast to female authors, male authors used these words much more frequently as positive polarity words themselves (e.g., brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). We also observed that male authors use significantly fewer types of emotive intensifiers than female authors, and the frequency proportion of the words is more balanced in the female corpus. As for changes in language usage over time, some notable differences in the frequency and collocation features of the words examined were identified: some of the words collocate with more positive words in the second subcorpus than in the first, which points to a semantic change of these words over time.
Keywords: gender differences, negative emotive words, semantic changes over time, Twitter
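The collocation step described above — checking the polarity of the word each intensifier modifies — can be sketched as follows. This is a minimal illustration, not the study's pipeline: the lexicons here are tiny English stand-ins (the actual study used a 214-word Hungarian intensifier lexicon and the ‘magyarlanc’ toolkit), and the next-token heuristic stands in for proper dependency-based analysis.

```python
from collections import Counter

# Hypothetical mini-lexicons; stand-ins for the study's Hungarian resources.
INTENSIFIERS = {"damn", "brutally"}
POLARITY = {"good": "positive", "bad": "negative", "big": "neutral"}

def collocation_profile(tweets):
    """Count the polarity of the word immediately following an intensifier."""
    counts = Counter()
    for tweet in tweets:
        tokens = tweet.lower().split()
        for i, tok in enumerate(tokens[:-1]):
            if tok in INTENSIFIERS:
                # Unknown co-occurring words default to neutral polarity.
                counts[POLARITY.get(tokens[i + 1], "neutral")] += 1
    return counts

profile = collocation_profile([
    "that movie was damn good",
    "the traffic is damn bad today",
    "a damn big surprise",
])
print(profile)  # Counter({'positive': 1, 'negative': 1, 'neutral': 1})
```

Running this per subcorpus (female/male × 2016/2021) and comparing the resulting polarity distributions mirrors the four-way comparison reported in the abstract.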
Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder’s perceived utility. The outcome is fewer design iterations needed for design convergence, together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage carry delayed costs. Hence, a clear definition of the problem under analysis is necessary, especially in the initial definition. This can be obtained through a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals who take decisions affecting one another. Effective coordination among these decision-makers is critical, and finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the process of pushing up the mission’s concept maturity level. This push is obtained through a guided negotiation space exploration, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method infused with game theory and multi-attribute utility theory. In particular, game theory models the negotiation process to reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to search efficiently and rapidly for the Pareto equilibria among stakeholders.
Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders’ needs and guarantees the effectiveness of the selected mission concept thanks to its robustness in the face of change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
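The Pareto-filtering step at the core of the search described above can be sketched in a few lines. This is a generic illustration, not the paper's framework: each candidate design is reduced to a vector of stakeholder utilities (assumed already elicited via multi-attribute utility theory), and only non-dominated candidates survive.

```python
def dominates(a, b):
    """True if utility vector a is at least as good as b for every
    stakeholder and strictly better for at least one (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only designs whose stakeholder-utility vectors are non-dominated."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# Hypothetical candidates: (utility for stakeholder 1, utility for stakeholder 2)
candidates = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
print(pareto_front(candidates))  # [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9)]
```

In the methodology described, an evolutionary algorithm would generate and mutate candidates, with a filter like this guiding the population toward the Pareto equilibria among stakeholders.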
Kinematic Modelling and Task-Based Synthesis of a Passive Architecture for an Upper Limb Rehabilitation Exoskeleton
Authors: Sakshi Gupta, Anupam Agrawal, Ekta Singla
Abstract:
An exoskeleton design for rehabilitation purposes encounters many challenges, including ergonomically acceptable wearing technology, architectural design, human-motion compatibility, actuation type, human-robot interaction, etc. In this paper, a passive architecture for an upper limb exoskeleton is proposed for assisting in rehabilitation tasks. Kinematic modelling is detailed for the task-based kinematic synthesis of the wearable exoskeleton for self-feeding tasks. The exoskeleton architecture possesses expansion and torsional springs which are able to store and redistribute energy over the human arm joints. The elastic characteristics of the springs have been optimized to minimize the mechanical work of the human arm joints. The concept of a hybrid combination of a 4-bar parallelogram linkage and a serial linkage was chosen: the 4-bar parallelogram linkage with an expansion spring acts as a rigid structure providing the rotational degree-of-freedom (DOF) required for lowering and raising the arm, while the single linkage with a torsional spring allows for the rotational DOF required for elbow movement. The focus of the paper is the kinematic modelling, analysis and task-based synthesis framework for the proposed architecture, keeping in consideration the essential tasks of self-feeding and self-exercising during the rehabilitation of a partially healthy person. Primary functional movements (activities of daily living, ADL) are routine activities that people attend to every day, such as cleaning, dressing and feeding; we focus on the feeding process to make people independent with respect to feeding tasks. The tasks target post-surgery patients under rehabilitation with less than 40% weakness. The challenge addressed in this work is ensuring that the natural movement of the human arm is emulated. Human motion data are extracted through motion sensors for the targeted tasks of feeding and specific exercises.
A task-based synthesis procedure framework is discussed for the proposed architecture. The results include the simulation of the architectural concept for tracking human-arm movements while displaying the kinematic and static study parameters for a standard human weight. D-H parameters are used for the kinematic modelling of the hybrid mechanism, and the model is used while performing task-based optimal synthesis utilizing an evolutionary algorithm.
Keywords: passive mechanism, task-based synthesis, emulating human motion, exoskeleton
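The D-H-based kinematic modelling mentioned above can be illustrated with a standard forward-kinematics sketch. This is generic textbook machinery, not the paper's specific model: the two-link parameters below are hypothetical stand-ins for a planar shoulder-elbow chain, not the actual exoskeleton's D-H table.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_params):
    """Chain the per-joint transforms; returns the end-effector pose."""
    T = np.eye(4)
    for params in dh_params:
        T = T @ dh_transform(*params)
    return T

# Hypothetical planar 2-link arm (shoulder raised 90 deg, elbow straight),
# link lengths 0.3 m and 0.25 m.
pose = forward_kinematics([(np.pi / 2, 0.0, 0.30, 0.0),
                           (0.0,       0.0, 0.25, 0.0)])
print(np.round(pose[:3, 3], 3))  # end-effector position in the base frame
```

In a task-based synthesis loop, an evolutionary algorithm would vary the free architectural parameters and score each candidate by how well poses like this track the recorded human-arm trajectories.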
Active Victim Participation in the Criminal Justice System: The Indian Scenario
Authors: Narayani Sepaha
Abstract:
In earlier days, the sufferer bore the burden of proving the offence as well as of bringing the offender to punishment. The adversarial system of legal procedure was characterized simply by two parties: the prosecution and the defence. With the onset of this system, the judge started acting as a neutral arbitrator, and the state inadvertently started assuming the lead role, thereby relegating victims to a position of oblivion. In this process, with the increasing role of police forces and the government, victims were systematically excluded from the key stages of case proceedings and reduced to the stature of a prosecution witness. This paper emphasises how increasing control over the various stages of the trial by other stakeholders has led to the marginalization of victims in the trial process. This monopolization signalled the onset of an era of gross neglect of victims in the whole criminal justice system. This consciousness led some reformists to raise their concerns over the issue during the early part of the 20th century. They started supporting efforts that advocated giving prominence to the participation of victims in the trial process. This paved the way for the evolution of the science of victimology. Markedly, the innovativeness to work out facts, seek the opinions and statements of victims and reassure them that their voice is also heard has ensured the revival of their rightful role in the justice delivery system. Many countries, like the US, have set an example by acknowledging the advantages of the participation of victims in trials, as in the proceedings of the Ariel Castro kidnappings of Cleveland, Ohio, and by enacting laws protecting their rights within the framework of the legal system, to ensure the speedy and righteous delivery of justice in some of the most complicated cases.
An attempt has been made to flag that the accused have several rights, in contrast to the near absence of separate laws for victims of crime in India. It is sad to note that even in the initial process of registering a crime, victims are subjected to the mercy of the officers in charge, and thus begins the silent suffering of these victims, which continues throughout their trial. The paper further contends that the degree of victim participation in trials and its impact on outcomes can be debated and evaluated, but its potential to alter victims' position and make them regain their lost status cannot be ignored. Victim participation in trial proceedings will help the court perceive the facts of the case in a better manner and arrive at a balanced view of the case. This will not only serve to protect the overall interest of victims but will also reinforce faith in the criminal justice delivery system. It is pertinent to mention that there is an urgent need to review the accused-centric prosecution system and introduce appropriate amendments so that the marginalization of victims comes to an end.
Keywords: victim participation, criminal justice, India, trial, marginalised
Interlayer-Mechanical Working: Effective Strategy to Mitigate Solidification Cracking in Wire-Arc Additive Manufacturing (WAAM) of Fe-based Shape Memory Alloy
Authors: Soumyajit Koley, Kuladeep Rajamudili, Supriyo Ganguly
Abstract:
In recent years, iron-based shape-memory alloys have been emerging as an inexpensive alternative to costly Ni-Ti alloy and are thus considered suitable for many different applications in civil structures. The Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy contains 37 wt.% of total solute elements. Such a complex multi-component metallurgical system often leads to severe solute segregation and solidification cracking. Wire-arc additive manufacturing (WAAM) of the Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy was attempted using a cold-wire-fed plasma arc torch attached to a 6-axis robot. Self-standing walls were manufactured; however, multiple vertical cracks were observed after the deposition of around 15 layers. Microstructural characterization revealed open surfaces of dendrites inside the cracks, confirming them as solidification cracks. A machine hammer peening (MHP) process was adopted on each layer to cold work the newly deposited alloy. The MHP traverse speed was varied systematically to attain a window of operation in which cracking was completely stopped. Microstructural and textural analyses were carried out to correlate the peening process with the microstructure. MHP helped in several ways. Firstly, a compressive residual stress was induced on each layer, which countered the tensile residual stress evolved from the solidification process, thus reducing the net tensile stress on the wall along its length. Secondly, significant local plastic deformation from MHP, followed by the thermal cycle induced by the deposition of the next layer, resulted in a recovered and recrystallized equiaxed microstructure instead of long columnar grains along the vertical direction. This microstructural change increased the total crack propagation length and thus the overall toughness. Thirdly, the inter-layer peening significantly reduced the strong cubic {001} crystallographic texture formed along the build direction. Cubic {001} texture promotes easy separation of planes and easy crack propagation.
Thus, reducing the cubic texture alleviates the chance of cracking.
Keywords: iron-based shape-memory alloy, wire-arc additive manufacturing, solidification cracking, inter-layer cold working, machine hammer peening
A Doctrinal Research and Review of Hashtag Trademarks
Authors: Hetvi Trivedi
Abstract:
Technological escalation cannot be negated, and the same is true for the benefits of technology. However, such escalation has interfered with the traditional theories of protection under Intellectual Property Rights. Out of the many trends that have disrupted the old-school understanding of Intellectual Property Rights, one is hashtags. What began modestly in the year 2007 has now earned a remarkable status, and coupled with the unprecedented rise of social media, the hashtag culture has witnessed monstrous growth. A tiny symbol on the keypad of phones or computers is now a major trend which also serves companies as a critical investment measure in establishing their brand in the market. Due to this, one section of Intellectual Property Rights, trademarks, is undergoing a humongous transformation, with hashtags like #icebucket, #tbt or #smilewithacoke getting trademark protection. So, as the traditional theories of IP take on the modern trends, it is necessary to understand the change and challenge at a theoretical and proportional level and, where need be, question the change. Traditionally, Intellectual Property Rights serve the societal need for intellectual productions that ensure society's holistic development as well as cultural, economic, social and technological progress. In a two-pronged effort at ensuring continuity of creativity, IPRs recognize the investment of individual effort that goes into creation by way of offering protection. Commonly placed under two major theories, utilitarian and natural, IPRs aim to accord protection and recognition to an individual’s creation or invention, which serves as an incentive for further creations or inventions, thus fully protecting the creative, inventive or commercial labour invested in the same. In return, the creator, by lending the public access to the creation, reaps various benefits. In this way, Intellectual Property Rights form a ‘social contract’ between the author and society.
IPRs are similarly attached to a social function, whereby individual rights must be weighed against competing rights and, to the farthest limit possible, both sets of rights must be treated in a balanced manner. To put it differently, both the society and the creator must be put on an equal footing, with neither party’s rights subservient to the other’s. A close look, through doctrinal research, at the recent trend of trademark protection makes the social function of IPRs appear to be moving far from this basic philosophy. Thus, where technology interferes with the philosophies of law, it is important to check such growth and allow it only in moderation, for neither is superior to the other. Expansionist human nature may want everything under the sky that can be tweaked slightly to be counted and protected as Intellectual Property, like a common parlance word transformed into a hashtag; however, IP, in order to survive on its philosophies, needs to strike a balance. A unanimous global decision on the judicious use of IPR recognition and protection is the need of the hour.
Keywords: hashtag trademarks, intellectual property, social function, technology
Development of a Quick On-Site Pass/Fail Test for the Evaluation of Fresh Concrete Destined for Application as Exposed Concrete
Authors: Laura Kupers, Julie Piérard, Niki Cauberg
Abstract:
The use of exposed concrete (sometimes referred to as architectural concrete) keeps gaining popularity. Exposed concrete has the advantage of combining the structural properties of concrete with an aesthetic finish. However, for a successful aesthetic finish, much attention needs to be paid to the execution (formwork, release agent, curing, weather conditions…), the concrete composition (choice of the raw materials and mix proportions) as well as its fresh properties. For the latter, a simple on-site pass/fail test could halt the casting of concrete not suitable for architectural concrete and thus avoid expensive repairs later. When architects opt for exposed concrete, they usually want a smooth, uniform and nearly blemish-free surface. For this, a standard ‘construction’ concrete does not suffice. An aesthetic surface finish requires the concrete to contain a minimum content of fines to minimize the risk of segregation and to allow complete filling of more complex shaped formworks. Neither may the concrete be too viscous, as this makes it more difficult to compact and increases the risk of blow holes blemishing the surface. On the other hand, too much bleeding may cause color differences on the concrete surface. An easy pass/fail test, which can be performed on site just before casting, could avoid these problems. In case the fresh concrete fails the test, it can be rejected; only if it passes the test would the concrete be cast. The pass/fail tests are intended for a concrete with consistency class S4. Five tests were selected as possible on-site pass/fail tests. Two of these tests already exist: the K-slump test (ASTM C1362) and the Bauer Filter Press Test.
The remaining three tests were developed by the BBRI in order to test the segregation resistance of fresh concrete on site: the ‘dynamic sieve stability test’, the ‘inverted cone test’ and an adapted ‘visual stability index’ (VSI) for the slump and flow test. These tests were inspired by existing tests for self-compacting concrete, for which segregation resistance is of great importance. The suitability of the fresh concrete mixtures was also tested by means of a laboratory reference test (resistance to segregation) and by visual inspection (blow holes, structure…) of small test walls. More than fifteen concrete mixtures of different quality were tested. The results of the pass/fail tests were compared with the results of this laboratory reference test and the test walls. The preliminary laboratory results indicate that concrete mixtures ‘suitable’ for placing as exposed concrete (containing sufficient fines, a balanced grading curve, etc.) can be distinguished from ‘inferior’ concrete mixtures. Additional laboratory tests, as well as tests on site, will be conducted to confirm these preliminary results and to set appropriate pass/fail values.
Keywords: exposed concrete, testing fresh concrete, segregation resistance, bleeding, consistency
Investigating Secondary Students’ Attitude towards Learning English
Authors: Pinkey Yaqub
Abstract:
The aim of this study was to investigate secondary (grades IX and X) students’ attitudes towards learning the English language based on the medium of instruction of the school, the gender of the students and the grade level in which they studied. A further aim was to determine students’ proficiency in the English language according to their gender, grade level and the medium of instruction of the school. A survey was used to investigate the attitudes of secondary students towards English language learning. Simple random sampling was employed to obtain a representative sample of the target population, as comprehensive lists of established and newly established English medium schools were available. A questionnaire, ‘Attitude towards English Language Learning’ (AtELL), was adapted from a research study on Libyan secondary school students’ attitudes towards learning the English language. AtELL was reviewed by experts (n = 6) and later piloted on a representative sample of secondary students (n = 160). Subsequently, the questionnaire was modified, based on the reviewers’ feedback and lessons learnt during the piloting phase, and directly administered to students of grades 9 and 10 to gather information regarding their attitudes towards learning the English language. Data collection spanned a month and a half. As the data were not normally distributed, the researcher used Mann-Whitney tests to test the hypotheses formulated to investigate students’ attitudes towards learning English, as well as proficiency in the language, across the medium of instruction of the school, the gender of the students and the grade level of the respondents. Statistical analyses of the data showed that the students of established English medium schools exhibited a positive outlook towards English language learning in terms of the behavioural, cognitive and emotional aspects of attitude.
A significant difference was observed in the attitudes of male and female students towards learning English, with females showing a more positive attitude in terms of behavioural, cognitive and emotional aspects than their male counterparts. Moreover, grade 10 students had a more positive attitude towards learning the English language in terms of behavioural, cognitive and emotional aspects than grade 9 students. Nonetheless, students of newly established English medium schools were more proficient in English, as gauged by their examination scores in this subject, than their counterparts studying in established English medium schools. Moreover, female students were more proficient in English, while students studying in grade 9 were less proficient in English than their seniors studying in grade 10. The findings of this research provide empirical evidence to future researchers wishing to explore the relationship between attitudes towards learning a language and variables such as the medium of instruction of the school, gender and the grade level of the students. Furthermore, policymakers might revisit the English curriculum to formulate specific guidelines that promote a positive and gender-balanced outlook towards learning English for male and female students.
Keywords: attitude, behavioral aspect of attitude, cognitive aspect of attitude, emotional aspect of attitude
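The Mann-Whitney U test used throughout the analysis above can be sketched with SciPy. The scores below are entirely hypothetical stand-ins for summed AtELL responses (the study's data are not reproduced here); the point is the shape of the comparison: a non-parametric two-sided test between two independent groups when normality cannot be assumed.

```python
from scipy.stats import mannwhitneyu

# Hypothetical attitude scores for two groups (e.g., female vs. male).
female_scores = [78, 85, 90, 72, 88, 95, 81, 77, 92, 86]
male_scores   = [65, 70, 74, 60, 82, 68, 71, 66, 75, 73]

# Two-sided Mann-Whitney U test: compares the two distributions
# without assuming normality, as in the study.
stat, p_value = mannwhitneyu(female_scores, male_scores, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```

A small p-value here would indicate a significant difference between the two groups' attitude distributions, analogous to the gender and grade-level comparisons reported.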
Making the Invisible Visible: Exploring Immersion Teacher Perceptions of Online Content and Language Integrated Learning Professional Development Experiences
Authors: T. J. O Ceallaigh
Abstract:
Subject-matter-driven programs such as immersion programs are increasingly popular across the world. These programs have allowed for extensive experimentation in the realm of second language teaching and learning and have been at the centre of many research agendas since their inception. Even though immersion programs are successful, especially in terms of second language development, they remain complex to implement and are not always as successful as we would hope them to be. Among all the challenges these varied programs face, research indicates that the primary issue lies in the difficulty of creating well-balanced programs where both content instruction and language/literacy instruction can be targeted simultaneously. Initial teacher education and professional development experiences are key drivers of successful language immersion education globally. They are critical to the supply of teachers with the mandatory linguistic and cultural competencies, as well as the associated pedagogical practices, required to ensure learners’ success. However, there is a significant dearth of research on the professional development experiences of immersion teachers. We lack an understanding of the nature of their expertise and their needs in terms of professional development, as well as their perceptions of the primary challenges they face as they attempt to formulate a coherent pedagogy of integrated language and content instruction. Such an understanding is essential if their specific needs are to be addressed appropriately and the overall quality of immersion programs thus improved. This paper reports on immersion teacher perceptions of online professional development experiences that have a positive impact on their ability to facilitate language and content connections in instruction. Twenty Irish-medium immersion teachers engaged in the instructional integration of language and content in a systematic and developmental way during a year-long online professional development program.
Data were collected from a variety of sources, e.g., an extensive online questionnaire, individual interviews, reflections, assignments and focus groups. This study provides compelling evidence of the potential of online professional development experiences as a pedagogical framework for understanding the complex and interconnected knowledge demands that arise in content and language integration in immersion. Findings illustrate several points of access to classroom research and pedagogy and uncover core aspects of high-impact online experiences. Teachers identified aspects such as experimentation and risk-taking, authenticity and relevance, collegiality and collaboration, motivation and challenge, and teacher empowerment. The potential of the online experiences to foster teacher language awareness was also identified as a contributory factor to success. The paper concludes with implications for designing meaningful and effective online CLIL professional development experiences.
Keywords: content and language integrated learning, immersion pedagogy, professional development, teacher language awareness
Optimizing Weight Loss with AI (GenAISᵀᴹ): A Randomized Trial of Dietary Supplement Prescriptions in Obese Patients
Authors: Evgeny Pokushalov, Andrey Ponomarenko, John Smith, Michael Johnson, Claire Garcia, Inessa Pak, Evgenya Shrainer, Dmitry Kudlay, Sevda Bayramova, Richard Miller
Abstract:
Background: Obesity is a complex, multifactorial chronic disease that poses significant health risks. Recent advancements in artificial intelligence (AI) offer the potential for more personalized and effective dietary supplement (DS) regimens to promote weight loss. This study aimed to evaluate the efficacy of AI-guided DS prescriptions compared to standard physician-guided DS prescriptions in obese patients. Methods: This randomized, parallel-group pilot study enrolled 60 individuals aged 40 to 60 years with a body mass index (BMI) of 25 or greater. Participants were randomized to receive either AI-guided DS prescriptions (n = 30) or physician-guided DS prescriptions (n = 30) for 180 days. The primary endpoints were the percentage change in body weight and the proportion of participants achieving a ≥5% weight reduction. Secondary endpoints included changes in BMI, fat mass, visceral fat rating, systolic and diastolic blood pressure, lipid profiles, fasting plasma glucose, hsCRP levels, and postprandial appetite ratings. Adverse events were monitored throughout the study. Results: Both groups were well balanced in terms of baseline characteristics. Significant weight loss was observed in the AI-guided group, with a mean reduction of -12.3% (95% CI: -13.1 to -11.5%) compared to -7.2% (95% CI: -8.1 to -6.3%) in the physician-guided group, resulting in a treatment difference of -5.1% (95% CI: -6.4 to -3.8%; p < 0.01). At day 180, 84.7% of the AI-guided group achieved a weight reduction of ≥5%, compared to 54.5% in the physician-guided group (Odds Ratio: 4.3; 95% CI: 3.1 to 5.9; p < 0.01). Significant improvements were also observed in BMI, fat mass, and visceral fat rating in the AI-guided group (p < 0.01 for all). Postprandial appetite suppression was greater in the AI-guided group, with significant reductions in hunger and prospective food consumption, and increases in fullness and satiety (p < 0.01 for all). 
Adverse events were generally mild to moderate, with a higher incidence of gastrointestinal symptoms in the AI-guided group, but these were manageable and did not impact adherence. Conclusion: The AI-guided dietary supplement regimen was more effective in promoting weight loss, improving body composition, and suppressing appetite compared to the physician-guided regimen. These findings suggest that AI-guided, personalized supplement prescriptions could offer a more effective approach to managing obesity. Further research with larger sample sizes is warranted to confirm these results and optimize AI-based interventions for weight loss.
Keywords: obesity, AI-guided, dietary supplements, weight loss, personalized medicine, metabolic health, appetite suppression
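The odds ratio reported for the ≥5% weight-reduction endpoint can be derived from a 2×2 responder table. The sketch below is generic and the counts are hypothetical (the abstract reports percentages, not raw counts, and its percentages do not map cleanly onto 30 participants per arm); it shows the standard calculation with a Woolf-method 95% confidence interval.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table with a 95% CI (Woolf's method):
    a = responders in group 1, b = non-responders in group 1,
    c = responders in group 2, d = non-responders in group 2."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts for 30 participants per arm:
# AI-guided 25/30 responders vs. physician-guided 16/30.
or_est, ci = odds_ratio(25, 5, 16, 14)
print(f"OR = {or_est:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

Note how wide the interval is at this sample size; the abstract's very narrow reported CI would be unusual for n = 60, which underscores its own call for larger confirmatory studies.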
Antiplatelets and Anticoagulants in Rural Emergency General Surgery
Authors: Jeong-Moh John Yahng, Angelika Na
Abstract:
Introduction: Increasing numbers of general surgical patients are being prescribed antiplatelet and anticoagulant medications (APAC) for various cardiovascular and cerebrovascular conditions. Surgical patients who are on APAC present a management challenge, as bleeding risk needs to be balanced against thromboembolic risk. Although guidelines exist regarding APAC management in elective surgery, there is a lack of guidelines in the emergency surgery setting. In this study, we aim to characterise APAC usage in emergency general surgical patients admitted to a rural hospital. We also assess the impact of APAC usage on the clinical management of these patients. Methods: Prospective study of emergency general surgical admissions at Northeast Health Wangaratta (Victoria) from 2 July to 25 October 2014. A questionnaire collected demographic data, admission diagnosis, APAC usage, anaesthesia techniques, operation types, transfusion requirements and morbidity/mortality data. Results: During the 4-month study, 118 patients were classified into two groups: non-APAC (n = 96, 81%) and APAC (n = 22, 19%). Patients in the APAC group were older than the non-APAC patients (mean age 72 vs 42 years). Amongst patients younger than 60 years old, only 1% were on APAC; in contrast, 49% of patients older than 60 years old were on APAC (p < 0.001). Patients who were admitted with a bleeding problem were more likely to be on APAC (p < 0.05). The majority (91%) of APAC patients were on antiplatelet medication, with two patients on dual antiplatelet agents (aspirin + clopidogrel or ticagrelor). 15% of emergency general surgical patients requiring operations were on APAC, as were 11% of all laparotomy patients and 33% of patients undergoing gastroscopy for haematemesis/melaena. Both of the patients operated on for bleeding following surgery at another hospital were in the APAC group.
With regard to the impact on clinical management, 59% of APAC patients had their medications interrupted or ceased, on average for 3.5 days (range 1-13 days). Two out of 75 operations were delayed due to APAC usage. There was no difference in the use of central venous or arterial lines for increased monitoring (p=0.14) or in the use of a warming blanket (Bair Hugger™) (p=0.94). Overall, the transfusion rate was higher amongst APAC patients (14% vs 3%) (p=0.04). The recorded morbidity (n=2) and mortality (n=1) in this study were all in the APAC group. Discussion: Nineteen percent of emergency general surgical admissions and fifteen percent of operated patients were on APAC. The prevalence of APAC usage was higher in those aged sixty and above. General surgical patients who were admitted with a bleeding problem were more likely to be on APAC. The two patients who were operated on for bleeding following surgery at another hospital were in the APAC group; no patient in the non-APAC group was admitted for post-operative bleeding. We observed two cases in which the operation was delayed due to APAC usage. Transfusion, morbidity and mortality rates were higher in the APAC group. Conclusion: In this study, nineteen percent of emergency general surgical admissions were on APAC. The use of APAC is more prevalent in the older age group, particularly those aged sixty and above. A higher proportion of APAC than non-APAC patients was admitted and operated on for bleeding problems. There is an urgent need for clinical guidelines regarding APAC management in emergency general surgical patients. Keywords: antiplatelet, anticoagulants, emergency general surgery, rural general surgery, morbidity, mortality
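The transfusion-rate comparison above (14% vs 3%, p=0.04) is a two-by-two proportion test. As a hedged illustration, the counts below (3/22 APAC and 3/96 non-APAC patients transfused) are reconstructed from the reported percentages, so the resulting one-sided Fisher exact p-value need not match the test statistic the authors actually used:

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Returns P(at least `a` events in the first row) under the
    hypergeometric null of no association between row and column.
    """
    n1, k_tot, n = a + b, a + c, a + b + c + d
    denom = comb(n, n1)
    return sum(comb(k_tot, k) * comb(n - k_tot, n1 - k) / denom
               for k in range(a, min(n1, k_tot) + 1))

# Counts reconstructed from the abstract's percentages (illustrative only).
apac_transfused, apac_not = 3, 19          # 3/22 ~ 14%
non_apac_transfused, non_apac_not = 3, 93  # 3/96 ~ 3%
p = fisher_exact_greater(apac_transfused, apac_not,
                         non_apac_transfused, non_apac_not)
print(f"one-sided p = {p:.3f}")
```

Fisher's exact test is the usual choice here because the expected cell counts are small; the abstract does not state which test produced its p-value.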
75 Effect of Internet Addiction on Dietary Behavior and Lifestyle Characteristics among University Students
Authors: Hafsa Kamran, Asma Afreen, Zaheer Ahmed
Abstract:
Internet addiction, a mental health disorder that has emerged over the last two decades, is manifested by an inability to control internet use, leading to academic, social, physiological and/or psychological difficulties. The present study aimed to assess the levels of internet addiction among university students in Lahore and to explore the effects of internet addiction on their dietary behavior and lifestyle. It was an analytical cross-sectional study. Data were collected from October to December 2016 from students of four universities selected through a two-stage sampling method. There were 500 participants, and 13 questionnaires were rejected due to incomplete information. Levels of internet addiction (IA) were calculated using the Young Internet Addiction Test (YIAT). Data were also collected on students’ demographics, lifestyle factors and dietary behavior using a self-reported questionnaire. Data were analyzed using SPSS (version 21). The chi-square test was applied to evaluate the relationships between variables. Results of the study revealed that 10% of the population had severe internet addiction, while moderate internet addiction was present in 42%. Prevalence was higher among males (11% vs. 8%), private sector university students (p = 0.008) and engineering students (p = 0.000). The lifestyle habits of internet addicts were of significantly poorer quality than those of normal users (p = 0.05). Internet addicts were less physically active (p = 0.025), had shorter durations of physical activity (p = 0.016), had more disorganized sleep patterns (p = 0.023), slept less (p = 0.019), reported being more tired and sleepy in class (p = 0.033), and spent more time on the internet compared to normal users. Severe and moderate internet addicts were also found to be more overweight and obese than normal users (p = 0.000). The dietary behavior of internet addicts was significantly poorer than that of normal users.
Internet addicts were found to skip breakfast more often than normal users (p = 0.039). Common reasons for meal skipping were lack of time and snacking between meals (p = 0.000). They also had larger meal sizes (p = 0.05) and a habit of snacking while using the internet (p = 0.027). Fast food (p = 0.016) and fried items (p = 0.05) were the most consumed snacks, while carbonated beverages (p = 0.019) were the most consumed beverages among internet addicts. Internet addicts were found to consume fewer than the recommended daily servings of dairy (p = 0.008) and fruits (p = 0.000) and more servings of the meat group (p = 0.025) than their non-addicted counterparts. In conclusion, this study demonstrated that internet addicts have unhealthy dietary behavior and inappropriate lifestyle habits. University students should be educated regarding the importance of a balanced diet and healthy lifestyle, which are critical for effective primary prevention of numerous chronic degenerative diseases. Furthermore, it is necessary to raise awareness concerning the adverse effects of internet addiction among youth and their parents. Keywords: dietary behavior, internet addiction, lifestyle, university students
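Severity levels such as "moderate" and "severe" come from scoring the YIAT. A minimal sketch of the scoring logic, assuming the common 20-item form (each item scored 1-5) and the frequently used cutoffs (under 50 normal, 50-79 moderate, 80-100 severe); published cutoffs vary between studies and may differ from those the authors applied:

```python
def yiat_level(item_scores):
    """Classify internet addiction from Young's Internet Addiction Test.

    Assumes the common 20-item form, each item scored 1-5, and the
    frequently used cutoffs (normal < 50, moderate 50-79, severe >= 80).
    These cutoffs are an assumption for illustration, not the study's.
    """
    if len(item_scores) != 20 or not all(1 <= s <= 5 for s in item_scores):
        raise ValueError("expected 20 item scores, each between 1 and 5")
    total = sum(item_scores)
    if total >= 80:
        return "severe"
    if total >= 50:
        return "moderate"
    return "normal"

print(yiat_level([2] * 20))  # total 40 -> "normal"
print(yiat_level([4] * 20))  # total 80 -> "severe"
```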
74 Industrial Waste Multi-Metal Ion Exchange
Authors: Thomas S. Abia II
Abstract:
Intel Chandler Site has internally developed its first-of-a-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Spanning a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (average baseline ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was consequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Data generated from lab-scale studies were translated into system operating modifications following multiple trial-and-error experiments. Despite the DMW treatment system failing to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg·L⁻¹ at pre-pilot to 1.1 ± 1.2 mg·L⁻¹ post-pilot (83% baseline reduction). This milestone was achieved even though the average influent manganese to DMW increased from 1.0 ± 13.7 mg·L⁻¹ at pre-pilot to 2.1 ± 0.2 mg·L⁻¹ post-pilot (110% baseline uptick). Likewise, the pre-trial and post-trial average influent copper values to DMW were 22.4 ± 10.2 mg·L⁻¹ and 32.1 ± 39.1 mg·L⁻¹, respectively (43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg·L⁻¹ and 0.4 ± 1.2 mg·L⁻¹, respectively (300% baseline uptick).
Conclusively, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the largest single point of influence for optimizing manganese uptake during multi-metal ion exchange. However, the high variability of the influent copper-to-manganese ratio was observed to adversely impact system functionality. This paper discusses the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as influent copper-to-manganese ratio variations, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. The take-away is an analysis of the overall feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real estate restrictions, aggressive schedules, or budgetary constraints. Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese
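The "baseline reduction/uptick" percentages quoted above follow directly from the pre- and post-pilot averages. A small sketch using the figures reported in the abstract (negative values indicate a reduction):

```python
def percent_change(before, after):
    """Signed percentage change relative to the pre-pilot baseline."""
    return (after - before) / before * 100

# Averages quoted in the abstract (mg/L).
mn_out = percent_change(6.5, 1.1)    # manganese output: ~-83% (the "83% baseline reduction")
mn_in = percent_change(1.0, 2.1)     # influent manganese: +110%
cu_in = percent_change(22.4, 32.1)   # influent copper: ~+43%
cu_out = percent_change(0.1, 0.4)    # copper output: +300%
print(round(mn_out), round(mn_in), round(cu_in), round(cu_out))
```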
73 The Effectiveness of Congressional Redistricting Commissions: A Comparative Approach Investigating the Ability of Commissions to Reduce Gerrymandering with the Wilcoxon Signed-Rank Test
Authors: Arvind Salem
Abstract:
Voters across the country are transferring the power of redistricting from state legislatures to commissions to secure “fairer” districts by curbing the influence of gerrymandering on redistricting. Gerrymandering, intentionally drawing distorted districts to achieve political advantage, has become extremely prevalent, generating widespread voter dissatisfaction and leading states to adopt commissions for redistricting. However, the efficacy of these commissions is dubious, with some arguing that they constitute a panacea for gerrymandering, while others contend that commissions have relatively little effect on it. A result showing that commissions are effective would allay these fears, supplying ammunition for activists across the country to advocate for commissions in their state and reducing the influence of gerrymandering across the nation. However, a result against commissions may reaffirm doubts about commissions and pressure lawmakers to improve commissions or even abandon the commission system entirely. Additionally, these commissions are publicly funded, so voters have a financial interest in, and a responsibility to know, whether these commissions are effective. Currently, nine states place commissions in charge of redistricting: Arizona, California, Colorado, Michigan, Idaho, Montana, Washington, and New Jersey (Hawaii also has a commission but was excluded for reasons explained below). This study compares the degree of gerrymandering in the 2022 election (“after”) to that in the election in which voters decided to adopt commissions (“before”). The “before” election provides a valuable benchmark for assessing the efficacy of commissions, since voters in those elections clearly found the districts to be unfair; comparing the current election to that one is therefore a good way to determine whether commissions have improved the situation.
At the time Hawaii adopted commissions, it had merely a single at-large district, so its “before” metrics could not be calculated, and it was excluded. This study uses three metrics to quantify the degree of gerrymandering: the efficiency gap, the difference between the percentage of seats and the percentage of votes, and the mean-median difference. Each of these metrics has unique advantages and disadvantages, but together they form a balanced approach to quantifying gerrymandering. The study uses a Wilcoxon signed-rank test with a null hypothesis that the value of each metric before the adoption of commissions is less than or equal to its value after, and an alternative hypothesis that the value of each metric is greater before than after, using a 0.05 significance level and an expected difference of 0. Accepting the alternative hypothesis would constitute evidence that commissions reduce gerrymandering to a statistically significant degree. However, this study could not conclude that commissions are effective. The p-values obtained for all three metrics (p=0.42 for the efficiency gap, p=0.94 for the percentage of seats and percentage of votes difference, and p=0.47 for the mean-median difference) were extremely high, far above the threshold needed to conclude that commissions are effective. These results should temper optimism about commissions and spur serious discussion about their effectiveness and about ways to change them moving forward so that they can accomplish their goal of generating fairer districts. Keywords: commissions, elections, gerrymandering, redistricting
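The three metrics and the exact one-sided Wilcoxon signed-rank test can be sketched as follows. The before/after values fed to the test here are made-up placeholders rather than the study's data, and ties in absolute differences are given distinct ranks for simplicity:

```python
from itertools import product

def efficiency_gap(districts):
    """Efficiency gap from (party_a_votes, party_b_votes) per district.

    Wasted votes: all of the loser's votes, plus the winner's votes
    beyond the simple-majority threshold (half the district total)."""
    wasted_a = wasted_b = total = 0.0
    for a, b in districts:
        need = (a + b) / 2
        total += a + b
        if a > b:
            wasted_a += a - need
            wasted_b += b
        else:
            wasted_a += a
            wasted_b += b - need
    return (wasted_a - wasted_b) / total

def mean_median_difference(vote_shares):
    """Mean minus median of one party's district vote shares."""
    s = sorted(vote_shares)
    n = len(s)
    median = (s[n // 2] + s[(n - 1) // 2]) / 2
    return sum(s) / n - median

def wilcoxon_signed_rank_greater(before, after):
    """Exact one-sided Wilcoxon signed-rank test (H1: before > after).

    Enumerates all 2^n sign assignments, so it is only practical for
    small n (fine for eight states). Assumes no zero differences; ties
    in absolute difference get distinct ranks for simplicity."""
    diffs = [b - a for b, a in zip(before, after)]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0] * len(diffs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    w_obs = sum(r for d, r in zip(diffs, ranks) if d > 0)
    hits = sum(1 for signs in product((0, 1), repeat=len(diffs))
               if sum(r for s, r in zip(signs, ranks) if s) >= w_obs)
    return hits / 2 ** len(diffs)

# Placeholder before/after metric values for eight states (not real data).
before = [0.12, 0.08, 0.15, 0.05, 0.09, 0.11, 0.07, 0.10]
after = [0.10, 0.09, 0.05, 0.065, 0.03, 0.04, 0.095, 0.02]
print(f"p = {wilcoxon_signed_rank_greater(before, after):.3f}")
```

If SciPy is available, `scipy.stats.wilcoxon(before, after, alternative='greater')` provides the same test with proper tie handling.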
72 In-Situ Formation of Particle Reinforced Aluminium Matrix Composites by Laser Powder Bed Fusion of Fe₂O₃/AlSi12 Powder Mixture Using Consecutive Laser Melting+Remelting Strategy
Authors: Qimin Shi, Yi Sun, Constantinus Politis, Shoufeng Yang
Abstract:
In-situ preparation of particle-reinforced aluminium matrix composites (PRAMCs) by laser powder bed fusion (LPBF) additive manufacturing is a promising strategy to strengthen traditional Al-based alloys. The laser-driven thermite reaction can be a practical mechanism for in-situ synthesis of PRAMCs. However, introducing oxygen through the addition of Fe₂O₃ makes the powder mixture highly prone to forming porosity and Al₂O₃ films during LPBF, bringing challenges to producing dense Al-based materials. Therefore, this work develops a processing strategy combining consecutive high-energy laser melting scans and low-energy laser remelting scans to prepare PRAMCs from a Fe₂O₃/AlSi12 powder mixture. The powder mixture consists of 5 wt% Fe₂O₃ and the remainder AlSi12 powder; the 5 wt% Fe₂O₃ addition aims to achieve balanced strength and ductility. A high relative density (98.2 ± 0.55%) was successfully obtained by optimizing the laser melting (Emelting) and laser remelting (Eremelting) surface energy densities to Emelting = 35 J/mm² and Eremelting = 5 J/mm². Results further reveal the necessity of increasing Emelting to improve the metal liquid’s spreading/wetting by breaking up the Al₂O₃ films surrounding the molten pools; however, the high-energy laser melting produced much porosity, including H₂-, O₂- and keyhole-induced pores. The subsequent low-energy laser remelting could close the resulting internal pores, backfill open gaps and smooth solidified surfaces. As a result, the material was densified by repeating laser melting and laser remelting layer by layer. Even with two laser scans per layer, the microstructure still shows fine cellular Si networks with Al grains inside (grain size of about 370 nm) and in-situ nano-precipitates (Al₂O₃, Si, and Al-Fe(-Si) intermetallics).
Finally, the fine microstructure, nano-structured dispersion strengthening, and high level of densification strengthened the in-situ PRAMCs, which reached a yield strength of 426 ± 4 MPa and a tensile strength of 473 ± 6 MPa. Furthermore, the results can be expected to provide valuable information for processing other powder mixtures with severe porosity/oxide-film formation potential, given the demonstrated contribution of the laser melting/remelting strategy to densifying the material and obtaining good mechanical properties during LPBF. Keywords: densification, laser powder bed fusion, metal matrix composites, microstructures, mechanical properties
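The quoted Emelting and Eremelting values are surface energy densities; a common LPBF definition is E = P/(v·h), with laser power P, scan speed v and hatch spacing h. The parameter combinations below are hypothetical, chosen only to reproduce the reported energy levels, and are not the study's actual settings:

```python
def surface_energy_density(power_w, speed_mm_s, hatch_mm):
    """Surface energy density E = P / (v * h) in J/mm², a common LPBF
    process metric (assumed here; the paper may define E differently)."""
    return power_w / (speed_mm_s * hatch_mm)

# Hypothetical parameter sets that land on the reported energy levels.
e_melting = surface_energy_density(350, 100, 0.10)    # ~35 J/mm² (melting)
e_remelting = surface_energy_density(100, 200, 0.10)  # ~5 J/mm² (remelting)
print(round(e_melting, 6), round(e_remelting, 6))
```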
71 Classification of Foliar Nitrogen in Common Bean (Phaseolus vulgaris L.) Using Deep Learning Models and Images
Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso
Abstract:
Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the nutrient that most limits productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can, in isolation or cumulatively, cause soil and water contamination and plant toxicity, and increase plants’ susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. It is therefore necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha⁻¹, T2 = 25 kg N ha⁻¹, T3 = 75 kg N ha⁻¹, and T4 = 100 kg N ha⁻¹) and 12 replications. Pots of 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. Plants were watered with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. Code developed in MATLAB R2022b was used to cut the original images into smaller blocks, creating an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224×224 pixels obtained from plants cultivated under different N doses. MATLAB R2022b was also used for the implementation and performance analysis of the model.
Model performance was evaluated using a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future work is encouraged to develop mobile devices capable of handling images using deep learning for in situ classification of the nutritional status of plants. Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence
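The per-class metrics reported above (AC, F1, SP, P) can all be derived from a confusion matrix. A sketch with an invented four-class matrix, not the study's results:

```python
def per_class_metrics(cm, cls):
    """Precision, recall, F1 and specificity for one class of a confusion
    matrix cm, where cm[i][j] counts samples of true class i predicted j."""
    n = len(cm)
    tp = cm[cls][cls]
    fp = sum(cm[i][cls] for i in range(n) if i != cls)
    fn = sum(cm[cls][j] for j in range(n) if j != cls)
    tn = sum(cm[i][j] for i in range(n) for j in range(n)
             if i != cls and j != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return precision, recall, f1, specificity

def accuracy(cm):
    """Overall accuracy: trace of the matrix over the total count."""
    return sum(cm[i][i] for i in range(len(cm))) / sum(map(sum, cm))

# Hypothetical confusion matrix for classes T1..T4 (illustrative only).
cm = [[420, 40, 25, 15],
      [35, 390, 50, 25],
      [20, 45, 400, 35],
      [10, 20, 30, 440]]
print(f"accuracy = {accuracy(cm):.3f}")
for k, name in enumerate(["T1", "T2", "T3", "T4"]):
    p, r, f1, sp = per_class_metrics(cm, k)
    print(f"{name}: P={p:.2f} R={r:.2f} F1={f1:.2f} SP={sp:.2f}")
```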
70 The Strategic Gas Aggregator: A Key Legal Intervention in an Evolving Nigerian Natural Gas Sector
Authors: Olanrewaju Aladeitan, Obiageli Phina Anaghara-Uzor
Abstract:
Despite the abundance of natural gas deposits in Nigeria and the immense potential this presents for both domestic and export-oriented revenue, there exists an imbalance in the preference for export over the development and optimal utilization of natural gas for the domestic industry. Considerable amounts of gas are still being wasted by flaring in the country to this day. Although the government has set in place initiatives to harness gas at the flare and thereby reduce volumes flared, gas producers would rather direct the gas produced to the export market, whereas the domestic market is often marred by low domestic gas prices, which discourage producers. The exported fraction of gas production no doubt yields healthy revenues for the government and an encouraging return on investment for gas producers, and for this reason export sales remain enticing and preferable to the domestic sale of gas. This export pull, if left unchecked, impacts negatively on the domestic market, which is in no position to match prices in the international markets. The issue of gas pricing remains critical to the optimal development of the domestic gas industry, in that it forms the basis for producers’ investment decisions on the allocation of their scarce resources and on which projects to channel their output to in order to maximize profit. In order, then, to rebalance the domestic industry and streamline the market for gas, the Gas Aggregation Company of Nigeria, also known as the Strategic Aggregator, was proposed under the Nigerian Gas Master Plan of 2008 and then established pursuant to the National Gas Supply and Pricing Regulations of 2008 to implement the domestic gas supply obligation, which focuses on ramping up gas volumes for domestic utilization by mandatorily requiring each gas producer to dedicate a portion of its gas production to domestic utilization before having recourse to the export market.
The 2008 Regulations further stipulate penalties in the event of non-compliance. This study, in the main, assesses the adequacy of the legal framework for the Nigerian gas industry, given that the operational laws are structured more for oil than for gas; examines the legal basis for the Strategic Aggregator in the light of the Domestic Gas Supply and Pricing Policy 2008 and the National Domestic Gas Supply and Pricing Regulations 2008; and makes a case for a review of the pivotal role of the Aggregator in the Nigerian gas market. In undertaking this assessment, the doctrinal research methodology was adopted. Findings reveal the reawakening of the Federal Government to the immense potential of its gas industry as a critical sector of its economy and the need for a sustainable domestic natural gas market. A case for reviewing the ownership structure of the Aggregator, so that it comprises a balanced mix of the Federal Government, gas producers and other key stakeholders, becomes all the more imperative in order to ensure the effective implementation of the domestic supply obligations. Keywords: domestic supply obligations, natural gas, Nigerian gas sector, strategic gas aggregator
69 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandria: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability in a spatial-ecological model gives attention to urban environments in design review management to comply with the Earth system. The natural exchange patterns of ecosystems have consistent, periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex. The other analytical approach is Failure Mode and Effects Analysis (FMEA) for critical components. The plant safety parameters are identified for engineering topology as employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed and assessed in the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and chlorine gas concentration in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The predicted accident consequences are traced as risk concentration contour lines. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts the distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input–output analysis to evaluate the spillover effect, supported by Monte Carlo simulations and sensitivity analysis. Their unique structure is balanced within “equilibrium patterns”, such as the biosphere, and collectively forms a composite index of many distributed feedback flows. These dynamic structures are related through their physical and chemical properties and enable a gradual and prolonged incremental pattern.
While this spatial model structure argues from ecology, resource savings, static load design, financial and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structure for developing urban environments using optimization software, applied to an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach of systems ecology. Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
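The downwind concentration in the dispersion model above follows the standard Gaussian plume formula with ground reflection. A sketch with simple linear dispersion-coefficient growth standing in for Pasquill-Gifford stability curves; the coefficients and source parameters below are illustrative, not the study's:

```python
from math import exp, pi

def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
    """Gaussian plume concentration with ground reflection (g/m³).

    q: emission rate (g/s); u: wind speed (m/s); (x, y, z): receptor
    position downwind/crosswind/vertical (m); h: effective release
    height (m). sigma_y and sigma_z grow linearly with downwind
    distance here; the coefficients a, b are illustrative stand-ins
    for Pasquill-Gifford stability-class curves.
    """
    sigma_y = a * x
    sigma_z = b * x
    lateral = exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # image source
    return q / (2 * pi * u * sigma_y * sigma_z) * lateral * vertical

# Centreline, ground-level concentration 1 km downwind of a 10 m release.
c = gaussian_plume(q=100.0, u=3.0, x=1000.0, y=0.0, z=0.0, h=10.0)
print(f"{c:.6f} g/m^3")
```

Evaluating this on a grid of (x, y) points yields exactly the kind of concentration contour lines the abstract describes plotting with SURFER®.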
68 Land Transfer for New Township and Its Impact from Dwellers' Point of View: A Case Study of New Town Kolkata
Authors: Subhra Chattopadhyay
Abstract:
New towns are usually built at the city periphery with an eye to accommodating the overspill population and functions of the city. ‘New towns are self-sufficient planned towns having a full range of urban economic and social activities, so they can provide employment for all of their inhabitants and a balanced, self-contained social community can be maintained’. In third-world countries, new towns often emerge from scratch, i.e., on areas having no urban background, and therefore need massive land conversion from rural to urban. This paper aims to study the implications of such land title transfer for rural sustainability, with a case study at Jatragachi, New Town Kolkata. The broad objectives of this study are to understand (1) new changes in this area, such as (i) changes in land use, (ii) demographic changes, and (iii) occupational changes of the local people, and (2) residents’ views on new town planning. Major observations are stated below. The studied area was completely rural until recent years and is now at the heart of New Town Kolkata. Though this area is now under the jurisdiction of the New Town Kolkata Development Authority (NKDA), it is still administered by rural self-government. This creates administrative confusion and misuse of public capital. It is observed in this study that cultivation was the mainstay of livelihood for the majority of residents until the recent past. There was a dramatic rise in irrigated area in the 1990s, pointing to agricultural prosperity: the area achieved the highest productivity of rice in the district. The percentage of marginal workers dropped significantly. In addition, the rising women’s literacy rate found in this rural mouza indicates steady social progress. Through land conversion, this flourishing agricultural land has been transformed into an urban area with highly sophisticated uses. Such development may satisfy the educated urban elite, but the dwellers of the area suffer greatly.
They bear the cost of new town planning through the loss of their assured food and income, as well as their place identity. The number of marginal workers has increased abruptly. The growth of female literacy has dropped. The area has lost its functional linkages with its surroundings and fails to realize its actual growth potential. The physical linkages (like past roads and irrigation infrastructure) which had developed through time to support the economy have become defunct. The ecological services which were provided by the agricultural fields are denied. The historicity of the original site has been demolished. The losses of the inhabitants of the area who have been evicted are also immense and cannot be materially compensated. Therefore, the ethos of such new town planning at the stake of rural sustainability is in question. The need for an integrated approach to rural and urban development planning is evident from this study. Keywords: new town, sustainable development, growth potentiality, land transfer
67 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling
Authors: Amir Nazemi, Milad Ramezankhani, Marian Kӧrber, Abbas S. Milani
Abstract:
The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters driving their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to bring several challenges for designers coping with manufacturing defects on site. Wrinkling, for example, is a common defect occurring during the forming process and handling of semi-finished textile composites. One of the main reasons for this defect is the weak bending stiffness of fibers in the unconsolidated state, causing excessive relative motion between them. Further challenges are represented by the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composite forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles but also to determine the best processing conditions that can yield optimized positioning of the fibers upon forming (or robot handling, in the case of automated processes). However, the need for small time steps in explicit FE codes, numerical instabilities, and long computational times are among the notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique through the application of the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence the ability to run significantly faster and more efficient simulations for fabric material handling and forming processes.
Therefore, this method has the ability to enhance the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components. This enables designers to virtually develop, test, and optimize their processes based on either algorithmic or machine learning applications. As a preliminary case study, the forming of a hemispherical plain weave is shown, and the results are compared to FE simulations as well as experiments. Keywords: material point method, woven fabric composites, forming, material handling
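The core of MPM is the particle-to-grid/grid-to-particle transfer cycle. A toy one-dimensional sketch with linear shape functions and gravity as the only force; there is no fiber stress model here, so this illustrates only the transfer machinery, not the paper's fabric simulation:

```python
def linear_weights(xp, dx):
    """Linear (hat) shape-function weights for a particle at xp on a 1-D
    grid with spacing dx: the two surrounding node indices and weights."""
    i = int(xp // dx)
    frac = xp / dx - i
    return [(i, 1.0 - frac), (i + 1, frac)]

def mpm_step(xs, vs, ms, dx, n_nodes, dt, gravity=-9.81):
    """One particle-to-grid / grid-to-particle cycle of a toy 1-D MPM with
    gravity as the only force (no stress update). Returns updated particle
    positions and velocities."""
    grid_m = [0.0] * n_nodes
    grid_p = [0.0] * n_nodes  # nodal momentum
    # P2G: scatter particle mass and momentum to the grid nodes.
    for xp, vp, mp in zip(xs, vs, ms):
        for i, w in linear_weights(xp, dx):
            grid_m[i] += w * mp
            grid_p[i] += w * mp * vp
    # Grid update: nodal velocity from momentum, then apply gravity.
    grid_v = [p / m + dt * gravity if m > 0 else 0.0
              for p, m in zip(grid_p, grid_m)]
    # G2P: gather nodal velocities back and advect the particles.
    new_xs, new_vs = [], []
    for xp in xs:
        v = sum(w * grid_v[i] for i, w in linear_weights(xp, dx))
        new_vs.append(v)
        new_xs.append(xp + dt * v)
    return new_xs, new_vs

# Three particles at rest on a 20-node grid; one explicit step.
xs, vs, ms = [0.45, 0.55, 0.65], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]
xs, vs = mpm_step(xs, vs, ms, dx=0.1, n_nodes=20, dt=0.01)
print(xs, vs)
```

Because state lives on particles and the background grid is reset every step, large deformations never tangle a mesh, which is the property the paper exploits for larger stable time steps than explicit FE.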
66 Effects of Dietary Polyunsaturated Fatty Acids and Beta Glucan on Maturity, Immunity and Fry Quality of Pabdah Catfish, Ompok pabda
Authors: Zakir Hossain, Md. Saddam Hossain
Abstract:
A nutritionally balanced diet and the selection of appropriate species are important criteria in aquaculture. The present study was conducted to evaluate the effects of a diet containing polyunsaturated fatty acids (PUFAs) and beta glucan on growth performance, feed utilization, maturation, immunity, and early embryonic and larval development of the endangered Pabdah catfish, Ompok pabda. In this study, squid-extracted lipids and mushroom powder were used as the sources of PUFAs and beta glucan, respectively, and two isonitrogenous diets were formulated: a basal or control (CON) diet and a treated (PBG) diet, both maintaining a 30% protein level. During the study period, the physicochemical conditions of the water were similar in each cistern: temperature 26.5±2 °C, pH 7.4±0.2, and dissolved oxygen (DO) 6.7±0.5 ppm. The results showed that final mean body weight, final mean length gain, food conversion ratio (FCR), specific growth rate (SGR), food conversion efficiency (%), hepatosomatic index (HSI), kidney index (KI), and viscerosomatic index (VSI) were significantly (P<0.01 and P<0.05) higher in fish fed the PBG diet than in fish fed the CON diet. The length-weight relationship and relative condition factor (K) of O. pabda were significantly (P<0.05) affected by the PBG diet. The gonadosomatic index (GSI), sperm viability, blood serum calcium ion concentration (Ca²⁺), and vitellogenin level, used as indicators of fish maturation, were significantly (P<0.05) higher in fish fed the PBG diet than in fish fed the CON diet. During the spawning season, lipid granules and normal morphological structure were observed in the liver of treated fish, whereas fewer lipid granules were observed in the liver of the control group.
Immunity- and stress-resistance-related parameters, such as hematological indices, antioxidant activity, lysozyme level, respiratory burst activity, blood reactive oxygen species (ROS), complement activity (ACH50 assay), specific IgM, brain AChE, and plasma PGOT and PGPT enzyme activities, were significantly (P<0.01 and P<0.05) higher in fish fed the PBG diet than in fish fed the CON diet. The fecundity, fertilization rate (92.23±2.69%), hatching rate (87.43±2.17%) and survival (76.62±0.82%) of offspring were significantly higher (P<0.05) with the PBG diet than with the control. Consequently, early embryonic and larval development was better in the PBG-treated group than in the control. Therefore, the present study showed that the experimental diet enriched with polyunsaturated fatty acids (PUFAs) and beta glucan was more effective, achieving better growth, feed utilization, maturation, immunity, and spawning performance in O. pabda. Keywords: polyunsaturated fatty acids, beta glucan, maturity, immunity, catfish
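FCR and SGR, two of the growth indicators above, have standard aquaculture definitions. A sketch with hypothetical weights and feed amounts, not the study's measurements:

```python
from math import log

def fcr(feed_intake_g, weight_gain_g):
    """Food conversion ratio: feed given per unit of weight gained
    (lower is better)."""
    return feed_intake_g / weight_gain_g

def sgr(initial_weight_g, final_weight_g, days):
    """Specific growth rate in % body weight per day:
    100 * (ln(Wf) - ln(Wi)) / t."""
    return 100 * (log(final_weight_g) - log(initial_weight_g)) / days

# Hypothetical numbers for illustration (not the study's measurements).
print(f"FCR = {fcr(45.0, 30.0):.2f}")           # 45 g feed for 30 g gain
print(f"SGR = {sgr(10.0, 25.0, 60):.2f} %/day")  # 10 g -> 25 g over 60 days
```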
Epidemiological Data of Schistosoma haematobium Bilharzia in Rural and Urban Localities in the Republic of Congo
Authors: Jean Akiana, Digne Merveille Nganga Bouanga, Nardiouf Sjelin Nsana, Wilfrid Sapromet Ngoubili, Chyvanelle Ndous Akiridzo, Vishnou Reize Ampiri, Henri-Joseph Parra, Florence Fenollar, Didier Raoult, Oleg Mediannikov, Cheikh Sadhibou Sokhna
Abstract:
Schistosoma haematobium schistosomiasis is an endemic disease whose levels of human exposure, incidence, and attributed fatality remain, unfortunately, high worldwide. The construction of hydroelectric infrastructure is a major factor in the emergence of this disease. In the Republic of the Congo, which considers industrialization and modernization two essential pillars of development, building the Liouesso hydroelectric dam (19 MW) and conducting feasibility studies for the dams of Chollet (600 MW) in the Sangha, Sounda (1000 MW) in Kouilou, and Kouembali (150 MW) on the Lefini are necessary to increase the country's energy capacity. Likewise, the urbanization of formerly endemic localities should take into account the maintenance of contamination points. However, health impact studies on the epidemiology of schistosomiasis in general, and urinary bilharzia in particular, have never been carried out in these areas, either before or after the erection of those dams. Participants completed an investigative questionnaire and provided urine samples analyzed both by dipstick and by examination of the urine filtrate under a microscope. Assessment of the genetic diversity of Schistosoma species populations was considered, as was PCR analysis to confirm the dipstick and microscopy tests. 405 participants were registered in five localities. The sample was balanced in terms of male/female ratio, which was around 1. The prevalence rate was 45% (55/123) in Nkayi and 10.40% (11/106) in Loudima; one case, probably imported, was found in Mbomo (West Cuvette), and none in Liouesso or Kabo. The highest oviuria (number of eggs per volume of urine) was 150 S. haematobium eggs/10ml in Nkayi, apart from the Mbomo case, imported from Gabon, which had 160 S. haematobium eggs/10ml. The lowest oviuria was 2 S. haematobium eggs/10ml. Prevalence rates are still high in semi-urban areas (Nkayi).
As praziquantel treatments are available and effective, it is important to step up the mass treatment campaigns in high-risk areas already largely initiated by the National Schistosomiasis Control Program.
Keywords: Bilharzia, Schistosoma haematobium, oviuria, urbanization, Congo
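The prevalence figures quoted above are simple proportions of egg-positive participants among those examined. A quick recomputation of the reported rates:

```python
def prevalence_percent(positives, examined):
    """Prevalence as a percentage of examined participants."""
    return 100 * positives / examined

nkayi = prevalence_percent(55, 123)    # ~44.7%, reported as 45%
loudima = prevalence_percent(11, 106)  # ~10.4%
```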
The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management
Authors: Michael McCann
Abstract:
Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects to keep control of resources. In order to conceal their projects' poor performance, they may seek to engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post ‘good’ performances and safeguard their position. On the other, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers will use negative accruals to reduce the current year's earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly. Part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to the literature on the heterogeneous characteristics of boards by investigating the influence of independent director tenure, and the relative tenures of independent directors and Chief Executives, on earnings management. A balanced panel dataset comprising 51 companies across 11 annual periods from 2005 to 2015 is used for the analysis. In each annual period, firms were classified as conducting earnings management if their discretionary accruals fell in the bottom quartile (downwards) or top quartile (upwards) of the distribution of values for the sample.
Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that both absolute and relative measures of board independence and experience do not have a significant impact on the likelihood of earnings management. It is the level of free cash flow which is the major influence on the probability of earnings management: higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow. However, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company-, industry- and situation-specific factors.
Keywords: corporate governance, boards of directors, agency theory, earnings management
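The quartile-based classification described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the only detail taken from the abstract is the rule that firm-years in the bottom or top quartile of discretionary accruals are flagged:

```python
import numpy as np

def flag_earnings_management(discretionary_accruals):
    """Flag firm-years whose discretionary accruals fall in the bottom quartile
    (downward earnings management) or top quartile (upward) of the sample."""
    accruals = np.asarray(discretionary_accruals, dtype=float)
    q1, q3 = np.percentile(accruals, [25, 75])
    downward = accruals <= q1
    upward = accruals >= q3
    return downward | upward

# Hypothetical discretionary accruals for eight firm-years:
flags = flag_earnings_management([-0.08, -0.03, -0.01, 0.0, 0.01, 0.02, 0.04, 0.09])
# The two most negative and two most positive observations are flagged.
```

The flags would then serve as the binary outcome in a logistic regression on board tenure, free cash flow, and controls, as the abstract describes.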
Qualitative Characterization of Proteins in Common and Quality Protein Maize Corn by Mass Spectrometry
Authors: Benito Minjarez, Jesse Haramati, Yury Rodriguez-Yanez, Florencio Recendiz-Hurtado, Juan-Pedro Luna-Arias, Salvador Mena-Munguia
Abstract:
During the last decades, the world has experienced rapid industrialization and an expanding economy, favoring a demographic boom. As a consequence, countries around the world have focused on developing new strategies for the production of different farm products in order to meet future demands, seeking to improve the major food products for both humans and livestock. Corn, after wheat and rice, is the third most important crop globally and is the primary food source for both humans and livestock in many regions around the globe. In addition, maize (Zea mays) is an important source of protein, accounting for up to 60% of the daily human protein supply. Generally, many cereal grains have proteins with relatively low nutritional value compared with proteins from meat. In the case of corn, much of the protein is found in the endosperm (75 to 85%) and is deficient in two essential amino acids, lysine and tryptophan. This deficiency results in an imbalance of amino acids and low protein content; normal maize varieties have less than half of the recommended amino acids for human nutrition. In addition, studies have shown that this deficiency is associated with symptoms of growth impairment, anemia, hypoproteinemia, and fatty liver. Because most of the presently available maize varieties do not contain the quality and quantity of protein necessary for a balanced diet, different countries have focused research on quality protein maize (QPM). Researchers have characterized QPM, noting that these varieties may contain 70 to 100% more residues of lysine and tryptophan, the amino acids essential for animal and human nutrition, than common corn. Several countries in Africa and Latin America, as well as China, have incorporated QPM into their agricultural development plans.
Large parts of these countries have chosen a specific QPM variety based on their local needs and climate. Reviews have described the breeding methods of maize and have revealed a lack of studies on the genetic and proteomic diversity of proteins in QPM varieties and on their genetic relationships with normal maize varieties. Therefore, molecular marker identification using tools such as mass spectrometry may accelerate the selection of plants that carry the desired proteins with high lysine and tryptophan concentrations. To date, QPM maize lines have played a very important role in alleviating malnutrition, and better characterization of these lines would provide a valuable nutritional enhancement for use in the resource-poor regions of the world. Thus, the objective of this study was to identify proteins in QPM maize in comparison with a common maize line as a control.
Keywords: corn, mass spectrometry, QPM, tryptophan
Microalgae Technology for Nutraceuticals
Authors: Weixing Tan
Abstract:
Production of nutraceuticals from microalgae—a virtually untapped natural phyto-based source of which there are 200,000 to 1,000,000 species—offers a sustainable and healthy alternative to conventionally sourced nutraceuticals for the market. Microalgae can be grown organically using only natural sunlight, water, and nutrients at an extremely fast rate, e.g. 10-100 times more efficiently than crops or trees. However, the commercial success of microalgae products at scale remains limited, largely due to the lack of economically viable technologies. There are two major microalgae production systems, or technologies, currently available: 1) the open system, as represented by open pond technology, and 2) the closed system, such as photobioreactors (PBR). Each carries its own unique features and challenges. Although an open system requires a lower initial capital investment than a PBR, it has many unavoidable drawbacks: for example, much lower productivity, difficulty in contamination control and cleaning, inconsistent product quality, inconvenience in automation, restriction in location selection, and unsuitability for cold areas, all directly linked to the system's openness and flat underground design. A PBR system, on the other hand, has characteristics almost entirely opposite to those of the open system: higher initial capital investment, better productivity, better contamination and environmental control, wider suitability in different climates, ease of automation, higher and more consistent product quality, higher energy demand (particularly if using artificial lights), and variable operational expenses if not automated. Although closed systems like PBRs are not yet highly competitive in the current nutraceutical supply market, technological advances can be made, in particular for PBR technology, to narrow the gap significantly.
One example is the readily scalable P2P Microalgae PBR Technology at Grande Prairie Regional College, Canada, developed over 11 years with return on investment (ROI) considered for each key production process. The P2P PBR system is approaching economic viability at a pre-commercial stage thanks to five ROI-integrated major components: (1) optimum use of free sunlight through attenuation (patented); (2) simple, economical, and chemical-free harvesting (patent ready to file); (3) an optimum pH- and nutrient-balanced culture medium (published); (4) a reliable water and nutrient recycling system (trade secret); and (5) a low-cost automated system design (trade secret). These innovations have allowed P2P Microalgae Technology to increase daily yield to 106 g/m2/day of Chlorella vulgaris, which contains 50% protein and 2-3% omega-3. Based on current market prices and scale-up factors, the P2P PBR system is a promising microalgae technology for market-competitive nutraceutical supply.
Keywords: microalgae technology, nutraceuticals, open pond, photobioreactor PBR, return on investment ROI, technological advances
A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services
Authors: Giada Feletti, Daniela Tedesco, Paolo Trucco
Abstract:
The present study aims to develop a dashboard of Key Performance Indicators (KPIs) to enhance information and predictive capabilities in Emergency Medical Services (EMS) systems, supporting both operational and strategic decisions of different actors. The research methodology began with a review of the technical-scientific literature on the indicators currently used for performance measurement of EMS systems. From this literature analysis, it emerged that current studies focus on two distinct perspectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). The perspective proposed by this study is an integrated view of the ambulance service process and the ED process, both essential to ensure high quality of care and patient safety. Thus, the proposal focuses on the entire healthcare service process and, as such, allows consideration of the interconnection between the two EMS processes, the pre-hospital and hospital ones, connected by the assignment of the patient to a specific ED. In this way, it is possible to optimize the entire patient management. Therefore, attention is paid to dependencies between decisions that current EMS management models tend to neglect or underestimate. In particular, the integration of the two processes enables evaluation of the advantage of an ED selection decision made with visibility of EDs’ saturation status, therefore considering the distance, the available resources, and the expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the dashboard was designed: the high number of analyzed KPIs was reduced by eliminating first those not in line with the aim of the study and then those supporting similar functionality.
The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators due to the unavailability of the data required for their computation. The final dashboard, which was discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens’ real-time awareness of ED accessibility. By associating each KPI with the EMS phase it refers to, it was also possible to design a well-balanced dashboard covering both the efficiency and the effectiveness of the entire EMS process. Indeed, only the initial phases, related to the interconnection between ambulance service and patient care, are covered by traditional KPIs, compared to the subsequent phases taking place in the hospital ED. This could be taken into consideration in potential future development of the dashboard. Moreover, the research could proceed by building a multi-layer dashboard composed of a first level with a minimal set of KPIs measuring the basic performance of the EMS system at an aggregate level and further levels with KPIs that can bring additional, more detailed information.
Keywords: dashboard, decision support, emergency medical services, key performance indicators
A Failure to Strike a Balance: The Use of Parental Mediation Strategies by Foster Carers and Social Workers
Authors: Jennifer E Simpson
Abstract:
Background and purpose: The ubiquitous use of the Internet and social media by children and young people has had a dual effect. The first is to open up a world of possibilities and promise, characterized by the ability to consume and create content, connect with friends, explore and experiment. The second relates to risks such as unsolicited requests, sexual exploitation, cyberbullying, and commercial exploitation. This duality poses significant difficulties for a generation of foster carers and social workers who have no childhood experience of growing up with the Internet, social media, and digital devices to draw on. This presentation is concerned with the findings of a small qualitative study of the use of digital devices and the Internet by care-experienced young people to stay in touch with their families, and of the way this was managed by foster carers and social workers using specific parental mediation strategies. The findings highlight that restrictive strategies were used by foster carers and endorsed by social workers. An argument is made for an approach that develops a series of balanced solutions, moving foster carers from such restrictive approaches to ones grounded in co-use and interpretive in nature. Methods: Using a purposive sampling strategy, 12 triads consisting of care-experienced young people (aged 13-18 years), their foster carers, and allocated social workers were recruited. All respondents undertook a semi-structured interview; the young people detailed, via an Ecomap, which social media apps and other devices they used to contact their families. The foster carers and social workers shared details of the methods and approaches they used to manage digital devices and the Internet in general. Data analysis was performed using the Framework analytic method to explore the various attitudes, as well as the complementary and contradictory perspectives, of the young people, their foster carers, and allocated social workers.
Findings: The majority of foster carers relied on parental mediation strategies of the restrictive type: setting rules and regulations (restrictive), ad-hoc checking of a young person’s behavior and device (monitoring), and software used to limit or block access to inappropriate websites (technical). Foster carers made minimal use of parental mediation strategies involving talking about content (active/interpretive) or sharing Internet activities (co-use). The majority of social workers likewise had a strong preference for restrictive approaches. Conclusions and implications: Trepidation on the part of both foster carers and social workers about the use of digital devices and the Internet meant that the parental strategies used were weighted towards restriction, with little use made of approaches such as co-use and interpretation. This lack of balance calls for solutions that are grounded in co-use and an interpretive approach, both of which can be achieved through training and support, as well as wider policy change.
Keywords: parental mediation strategies, risk, children in state care, online safety
Malaysia as a Case Study for Climate Policy Integration into Energy Policy
Authors: Marcus Lee
Abstract:
The energy sector is the largest contributor of greenhouse gas emissions in Malaysia and thereby induces climate change. The climate change problem is therefore an energy sector problem, and tackling climate change issues successfully is contingent on actions taken in the energy sector. The researcher propounds that ‘Climate Policy Integration’ (CPI) into energy policy is a viable and insufficiently developed strategy in Malaysia that promotes the synergies between climate change and energy objectives, in order to achieve the targets found in both climate change and energy policies. In exploring this hypothesis, this paper presentation will focus on two particular aspects. Firstly, the meaning of CPI as an approach and as a concept will be explored. As an approach, CPI into energy policy means the integration of climate change objectives into the energy policy area. Its subject matter focuses on establishing the functional interrelations between climate change and energy objectives by promoting their synergies and minimising their contradictions. However, its conceptual underpinnings are less than straightforward. Drawing on the ‘principle of integration’ found in international treaties and declarations such as the Stockholm Declaration 1972, the Rio Declaration 1992 and the United Nations Framework Convention on Climate Change 1992 (‘UNFCCC’), this paper presentation will explore the contradictions in international standards on how the sustainable development tenets of environmental sustainability, social development and economic development are to be balanced, and the relevance of these standards to CPI. Further, the researcher will consider whether authority may be derived from international treaties and declarations to argue for the prioritisation of environmental sustainability over the other sustainable development tenets through CPI. Secondly, this paper presentation will explore the degree to which CPI into energy policy has been achieved and pursued in Malaysia.
In particular, the strength of the conceptual framework with regard to CPI in Malaysian governance will be considered by assessing Malaysia’s National Policy on Climate Change (2009) (‘NPCC 2009’). The development (or lack thereof) of CPI as an approach since the publication of the NPCC 2009 will also be assessed based on official government documents and policies that may have a climate change and/or energy agenda. Malaysia’s National Renewable Energy Policy and Action Plan (2010), draft National Energy Efficiency Action Plan (2014), Intended Nationally Determined Contributions (2015) in relation to the Paris Agreement, 11th Malaysia Plan (2015), and Biennial Update Report to the UNFCCC (2015) will be discussed. These documents will be assessed for the presence of CPI based on their language and drafting, as well as the extent of CPI-related subject matter they contain. Based on the analysis, the researcher will propose solutions on how to improve Malaysia’s climate change and energy governance. The theory of reflexive governance will be applied to CPI. The concluding remarks will consider whether CPI reflects reflexive governance by demonstrating how the governance process can be the object of shaping outcomes.
Keywords: climate policy integration, mainstreaming, policy coherence, Malaysian energy governance
Television Sports Exposure and Rape Myth Acceptance: The Mediating Role of Sexual Objectification of Women
Authors: Sofia Mariani, Irene Leo
Abstract:
The objective of the present study is to define the mediating role of attitudes that objectify and devalue women (hostile sexism, benevolent sexism, and sexual objectification of women) in the indirect correlation between exposure to televised sports and acceptance of rape myths. A second goal is to contribute to research on the topic by defining the role of these mediators for exposure to different types of sports, following the traditional gender classification of sports. Data collection was carried out by means of an online questionnaire measuring television sports exposure, sport type, hostile sexism, benevolent sexism, and sexual objectification of women. Data analysis was carried out using IBM SPSS software; the model was estimated using Ordinary Least Squares (OLS) regression path analysis. The predictor variable in the model was television sports exposure, the outcome was rape myth acceptance, and the mediators were (1) hostile sexism, (2) benevolent sexism, and (3) sexual objectification of women. Correlation analyses were carried out separately by sport type and controlling for the participants’ gender. As seen in the existing literature, television sports exposure was found to be indirectly and positively related to rape myth acceptance through the mediating roles of (1) hostile sexism, (2) benevolent sexism, and (3) sexual objectification of women. The type of sport watched influenced the role of the mediators: hostile sexism was the mediator common to all sport types, while exposure to traditionally feminine or neutral sports showed the additional mediation effect of sexual objectification of women. In line with existing literature, controlling for gender showed that the only significant mediators were hostile sexism for male participants and benevolent sexism for female participants.
Given the prevalence of men among viewers of traditionally masculine sports, the correlation between television sports exposure and rape myth acceptance through the mediation of hostile sexism is likely due to the gender of the participants. However, this does not apply to viewers of traditionally feminine and neutral sports, as this group is balanced in terms of gender and shows a unique mediation: the correlation between television sports exposure and rape myth acceptance is mediated by both hostile sexism and sexual objectification. Given that hostile sexism is defined as hostility towards women who oppose or fail to conform to traditional gender roles, these findings confirm that sport is perceived as a non-traditional activity for women. Additionally, these results imply that the portrayal of women in traditionally feminine and neutral sports, which are classified as such because of their aesthetic characteristics, may have a strong component of sexual objectification. The present research contributes to defining the association between sports exposure and rape myth acceptance through the mediating effects of sexist attitudes and sexual objectification of women. The results of this study have practical implications, such as supporting women’s sports teams that ask for more practical, less revealing uniforms, more similar to those of their male colleagues and therefore less objectifying.
Keywords: television exposure, sport, rape myths, objectification, sexism
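The product-of-coefficients logic behind the OLS path analysis described above can be illustrated with a minimal sketch. The variable names and synthetic data are ours, not the study's; the actual model was estimated in SPSS with additional mediators and controls:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b of the indirect effect:
    a is the slope of x in the regression m ~ x;
    b is the slope of m in the regression y ~ x + m."""
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
    return a * b

# Synthetic illustration: exposure -> hostile sexism -> rape myth acceptance
rng = np.random.default_rng(42)
exposure = rng.normal(size=1000)
sexism = 0.5 * exposure + rng.normal(size=1000)
acceptance = 0.4 * sexism + rng.normal(size=1000)
effect = indirect_effect(exposure, sexism, acceptance)  # close to 0.5 * 0.4 = 0.2
```

In practice, significance of the indirect effect is usually assessed with bootstrapped confidence intervals rather than the point estimate alone.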
Speech Acts of Selected Classroom Encounters: Analyzing the Speech Acts of a Career Technology Lesson
Authors: Michael Amankwaa Adu
Abstract:
Effective communication in the classroom plays a vital role in ensuring successful teaching and learning. In particular, the types of language and speech acts teachers use shape classroom interactions and influence student engagement. This study aims to analyze the speech acts employed by a Career Technology teacher in a junior high school. While much research has focused on speech acts in language classrooms, less attention has been given to how these acts operate in non-language subject areas like technical education. The study explores how different types of speech acts—directives, assertives, expressives, and commissives—are used during three classroom encounters: lesson introduction, content delivery, and classroom management. This research seeks to fill the gap in understanding how teachers of non-language subjects use speech acts to manage classroom dynamics and facilitate learning. The study employs a mixed-methods design, combining qualitative and quantitative approaches. Data was collected through direct classroom observation and audio recordings of a one-hour Career Technology lesson. The transcriptions of the lesson were analyzed using John Searle’s taxonomy of speech acts, classifying the teacher’s utterances into directives, assertives, expressives, and commissives. Results show that directives were the most frequently used speech act, accounting for 59.3% of the teacher's utterances. These speech acts were essential in guiding student behavior, giving instructions, and maintaining classroom control. Assertives made up 20.4% of the speech acts, primarily used for stating facts and reinforcing content. Expressives, at 14.2%, expressed emotions such as approval or frustration, helping to manage the emotional atmosphere of the classroom. Commissives were the least used, representing 6.2% of the speech acts, often used to set expectations or outline future actions. No declarations were observed during the lesson. 
The findings of this study reveal the critical role that speech acts play in managing classroom behavior and delivering content in technical subjects. Directives were crucial for ensuring students followed instructions and completed tasks, while assertives helped in reinforcing lesson objectives. Expressives contributed to motivating or disciplining students, and commissives, though less frequent, helped set clear expectations for students’ future actions. The absence of declarations suggests that the teacher prioritized guiding students over making formal pronouncements. These insights can inform teaching strategies across various subject areas, demonstrating that a diverse use of speech acts can create a balanced and interactive learning environment. This study contributes to the growing field of pragmatics in education and offers practical recommendations for educators, particularly in non-language classrooms, on how to utilize speech acts to enhance both classroom management and student engagement.
Keywords: classroom interaction, pragmatics, speech acts, teacher communication, career technology
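The category shares reported in this abstract (59.3% directives, 20.4% assertives, and so on) are frequency percentages over the coded utterances. A minimal sketch with a hypothetical coding, where the sample labels are ours, following Searle's categories:

```python
from collections import Counter

def speech_act_shares(coded_utterances):
    """Percentage share of each speech-act category in a coded transcript."""
    counts = Counter(coded_utterances)
    total = sum(counts.values())
    return {act: round(100 * n / total, 1) for act, n in counts.items()}

# Hypothetical coding of ten utterances from a lesson (not the study's data):
sample = ["directive"] * 6 + ["assertive"] * 2 + ["expressive"] + ["commissive"]
shares = speech_act_shares(sample)  # directives dominate, as in the lesson analyzed
```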