Search results for: comprehensive care
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6173

1043 Artificial Law: Legal AI Systems and the Need to Satisfy Principles of Justice, Equality and the Protection of Human Rights

Authors: Begum Koru, Isik Aybay, Demet Celik Ulusoy

Abstract:

The discipline of law is quite complex and has its own terminology. Apart from written legal rules, there is also living law, which refers to legal practice. Basic legal rules aim at the happiness of individuals in social life and have different characteristics in different branches, such as public or private law. On the other hand, law is a national phenomenon: the law of one nation and the legal system applied on the territory of another nation may be completely different, and people who are experts in a particular field of law in one country may have insufficient expertise in the law of another. Today, in addition to the local nature of law, international and even supranational rules are applied in order to protect basic human values and ensure the protection of human rights around the world. Systems that offer algorithmic solutions to legal problems using artificial intelligence (AI) tools may therefore produce very meaningful results in terms of human rights. However, such algorithms should not be developed by computer experts alone; they also need the contribution of people who are familiar with the law, its values, judicial decisions, and even the social and political culture of the society for which they will provide solutions. Otherwise, even if an algorithm works perfectly, it may not be compatible with the values of the society in which it is applied. The latest developments involving the use of AI techniques in legal systems indicate that artificial law will emerge as a new field in the discipline of law. AI systems are already being applied in the field of law, with examples such as predicting judicial decisions, text summarization, decision support systems, and classification of documents. Algorithms for legal systems employing AI tools, especially in the prediction of judicial decisions and in decision support systems, have the capacity to produce decisions automatically in place of judges. 
When the judge is removed from this equation, artificial intelligence-made law, created by an intelligent algorithm on its own, emerges, whether the domain is national or international law. The aim of this work is a general analysis of this new topic. Such an analysis needs both a literature survey and a perspective from computer experts' and lawyers' points of view. In some societies, the use of prediction or decision support systems may help integrate international human rights safeguards; in such cases, artificial law can produce more comprehensive and human rights-protective results than written or living law. In non-democratic countries, it may even be argued that direct decisions and artificial intelligence-made law would be more protective than a mere decision "support" system. Since the values of law are directed towards human happiness and well-being, AI algorithms must always be capable of serving this purpose and must be based on the rule of law, the principle of justice and equality, and the protection of human rights.

Keywords: AI and law, artificial law, protection of human rights, AI tools for legal systems

Procedia PDF Downloads 60
1042 Integration and Translation: The Comparison of Religious Rituals of Caodaism in Vietnam and Yi-Kuan-Tao

Authors: Lim Pey Huan

Abstract:

Vietnam has long been influenced by Han culture, so its religion and folk beliefs share many similarities with China's; even its reception of the Catholic Church, introduced from Europe, followed a quite similar process. In the spiritual life of Vietnamese civil society, Confucianism, Buddhism, Taoism, Christianity, Islam, and folk beliefs have therefore been the mainstream. From the second half of the 19th century into the twentieth century, however, two indigenous new religions were born: Caodaism and Hoa Hao. Both emerged and developed in the south, each gained millions of believers, and both became important Vietnamese religions whose political participation had a major impact on the development of the Republic of Vietnam; their fortunes in the north and the south changed significantly after reunification. Caodaism was later approved by the colonial authorities and became the third largest religion in Vietnam. Its teachings draw on the ideas of the world's major religions, and the classics used in its teaching contain important theories of various religions, with particular emphasis on the synthesis of the three traditions of Confucianism, Buddhism, and Taoism. This is most evident in its interpretation of the important proposition of 'opening the three religions and returning to the five branches.' The full name of Caodaism is 'Da Dao San Qi Pu Du Gao Tai Jiao', a name that coincides with the 'Longhua Assembly' and the 'San Qi Mo Jie' idea central to Yi-Kuan-Tao. The emerging path of Caodaism advocates leading sentient beings back to their original mission, centered on people. There are many opinions about how Caodaism was introduced into southern Vietnam; Caodai believers hold that Caodaism is a new religion native to Vietnam. 
If we further explore the teachings and religious rituals of Caodaism, it is not difficult to find traces of Chinese sects that were introduced to Vietnam; some of these features can be discussed in light of the spread and influence of Xiantian Dao (the Way of Former Heaven) in Vietnam. This article presents the author's analysis of the actual practice of religious instruction in Vietnam's Caodaism and then compares it with the religious experience of Yi-Kuan-Tao, offering a comparative study of the two traditions' rituals, religious organization, religious teachings, religious care of daily life, and funeral rites.

Keywords: Vietnam, Caodaism, Yi-Kuan-Tao, religious rituals

Procedia PDF Downloads 109
1041 Strategies for Incorporating Intercultural Intelligence into Higher Education

Authors: Hyoshin Kim

Abstract:

Most post-secondary educational institutions have offered a wide variety of professional development programs and resources in order to advance the quality of education. Such programs are designed to support faculty members by focusing on topics such as course design, behavioral learning objectives, class discussion, and evaluation methods. These are based on good intentions and might help both new and experienced educators. However, the fundamental flaw is that these ‘effective methods’ are assumed to work regardless of what we teach and whom we teach. This paper is focused on intercultural intelligence and its application to education. It presents a comprehensive literature review on context and cultural diversity in terms of beliefs, values, and worldviews. What has worked well with a group of homogeneous local students may not work well with more diverse and international students, because students hold different notions of what it means to learn or know something. It is necessary for educators to move away from certain sets of generic teaching skills, which are based on a limited, particular view of teaching and learning. The main objective of the research is to expand our teaching strategies by incorporating what students bring to the course. There has been a growing number of resources and texts on teaching international students. Unfortunately, they tend to be based on the deficiency model, which treats diversity not as a strength but as a problem to be solved. This view is evidenced by the heavy emphasis on assimilationist approaches: cultural difference is negatively evaluated, either implicitly or explicitly, and the pressure therefore falls on culturally diverse students. The following questions reflect this underlying assumption of deficiency: How can we make them learn better? How can we bring them into the mainstream academic culture? How can they adapt to Western educational systems? 
Even though these questions may be well-intended, there seems to be something fundamentally wrong as the assumption of cultural superiority is embedded in this kind of thinking. This paper examines how educators can incorporate intercultural intelligence into the course design by utilizing a variety of tools such as pre-course activities, peer learning and reflective learning journals. The main goal is to explore ways to engage diverse learners in all aspects of learning. This can be achieved by activities designed to understand their prior knowledge, life experiences, and relevant cultural identities. It is crucial to link course material to students’ diverse interests thereby enhancing the relevance of course content and making learning more inclusive. Internationalization of higher education can be successful only when cultural differences are respected and celebrated as essential and positive aspects of teaching and learning.

Keywords: intercultural competence, intercultural intelligence, teaching and learning, post-secondary education

Procedia PDF Downloads 199
1040 Fight the Burnout: Phase Two of a NICU Nurse Wellness Bundle

Authors: Megan Weisbart

Abstract:

Background/Significance: The Intensive Care Unit (ICU) environment contributes to nurse burnout. The costs of burnout include decreased employee compassion, missed workdays, worse patient outcomes, diminished job performance, high turnover, and higher organizational cost. Meaningful recognition, nurturing of interpersonal connections, and mindfulness-based interventions are associated with decreased burnout. The purpose of this quality improvement project was to decrease Neonatal ICU (NICU) nurse burnout using a Wellness Bundle that fosters meaningful recognition and interpersonal connections and includes mindfulness-based interventions. Methods: The Professional Quality of Life Scale Version 5 (ProQOL5) was used to measure burnout before Wellness Bundle implementation and after six months, and will be given yearly for three years. Meaningful recognition bundle items include online submission and posting of staff shout-outs, recognition events, Nurses Week and Unit Practice Council member gifts, and an employee recognition program. Bundle items fostering interpersonal connections include monthly staff games with prizes, social events, raffle fundraisers, a unit blog, a unit wellness basket, and a wellness resource sheet. Quick coherence techniques were implemented at staff meetings and huddles as a mindfulness-based intervention. Findings: The mean baseline burnout score of 14 NICU nurses was 20.71 (low burnout). The baseline range was 13-28, with 11 nurses experiencing low burnout, three experiencing moderate burnout, and none experiencing high burnout. After six months of Wellness Bundle implementation, the mean burnout score of 39 NICU nurses was 22.28 (low burnout); the range was 14-31, with 22 nurses experiencing low burnout, 17 experiencing moderate burnout, and none experiencing high burnout. 
Conclusion: A NICU Wellness Bundle that incorporated meaningful recognition, fostering of interpersonal connections, and mindfulness-based activities was implemented to improve work environments and decrease nurse burnout. Participation bias and low baseline response rate may have affected the reliability of the data and necessitate another comparative measure of burnout in one year.
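As an aside on how such scores are banded, the sketch below classifies hypothetical individual scores using the cut scores commonly cited for the ProQOL version 5 burnout subscale (22 or less low, 23-41 moderate, 42 or more high). Both the cut-offs and the individual scores here are assumptions for illustration and should be checked against the scale manual; they are not data from this project.

```python
from collections import Counter

# Commonly cited ProQOL-5 burnout cut scores (an assumption here;
# verify against the ProQOL manual): <=22 low, 23-41 moderate, >=42 high.
def burnout_band(score):
    if score <= 22:
        return "low"
    if score <= 41:
        return "moderate"
    return "high"

# Hypothetical individual scores spanning the ranges reported above.
scores = [14, 17, 19, 21, 22, 23, 26, 28, 31]

counts = Counter(burnout_band(s) for s in scores)
mean_score = sum(scores) / len(scores)
print(dict(counts), round(mean_score, 2))
```

Banding individual scores this way is what lets a mean in the "low" range coexist with a substantial minority of nurses in the "moderate" band, as the findings above report.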

Keywords: burnout, NICU, nurse, wellness

Procedia PDF Downloads 73
1039 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times can be studied; this is the basis of the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment, and its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics and at which the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion, yet highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. 
This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level referred to as the "base energy." The governing principles of base energy are discussed in detail in the second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The proposed concept in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 80
1038 Neonatal Subcutaneous Fat Necrosis with Severe Hypercalcemia: Case Report

Authors: Atitallah Sofien, Bouyahia Olfa, Krifi Farah, Missaoui Nada, Ben Rabeh Rania, Yahyaoui Salem, Mazigh Sonia, Boukthir Samir

Abstract:

Introduction: Subcutaneous fat necrosis of the newborn (SCFN) is a rare acute hypodermatitis characterized by skin lesions in the form of infiltrated, hard plaques and subcutaneous nodules with a purplish-red color, occurring between the first and sixth week of life. SCFN is generally a benign condition that regresses spontaneously without sequelae, but it can be complicated by severe hypercalcemia. Methodology: This is a retrospective case report of neonatal subcutaneous fat necrosis complicated by severe hypercalcemia and nephrocalcinosis. Results: The patient was a female newborn, born to non-consanguineous parents from a well-monitored pregnancy, with a family history of maternal hypothyroidism treated with Levothyrox. She was delivered by cesarean section at 39 weeks' gestation due to severe preeclampsia and was admitted to the Neonatal Intensive Care Unit at 1 hour of life for the management of grade 1 perinatal asphyxia and immediate, transient neonatal respiratory distress. Hospitalization was complicated by a healthcare-associated infection requiring intravenous antibiotics for ten days, with a good clinical and biological response. On the 20th day of life, she developed skin lesions in the form of indurated purplish-red nodules on the back and both arms, and SCFN was suspected. Serum calcium was measured at 3 mmol/L; the rest of the phosphocalcic assessment was normal, with early signs of nephrocalcinosis on renal ultrasound. The diagnosis of SCFN complicated by nephrocalcinosis and severe hypercalcemia was made, and the condition improved with intravenous hydration and corticosteroid therapy. Conclusion: SCFN is a rare and generally benign hypodermatitis of the newborn with an etiology that is still poorly understood. Despite its benign nature, SCFN can be complicated by hypercalcemia, which can sometimes be life-threatening. 
Therefore, it is important to conduct a thorough skin examination of newborns, especially those with risk factors, to detect and correct any potential hypercalcemia.

Keywords: subcutaneous fat necrosis, newborn, hypercalcemia, nephrocalcinosis

Procedia PDF Downloads 47
1037 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss

Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin

Abstract:

Urban rail transit (URT) plays a significant role in dealing with traffic congestion and environmental problems in cities. However, equipment failure and obstruction of links often lead to loss of links' capacities in daily operation, which seriously affects the reliability and transport service quality of the URT network. In order to measure this influence, passengers are divided into three categories in case of links' capacities loss. Passengers in category 1 are less affected: their travel is reliable, since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected: their travel is not reliable, since their travel quality is seriously reduced, although they can still travel on the URT. Passengers in category 3 cannot travel on the URT because the passenger flow on their travel paths exceeds capacity; their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of the URT network is related to passengers' travel time, transfer times, and whether seats are available. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort; therefore, passengers' average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of links' capacities loss on transport service quality is measured by passengers' relative average generalized travel cost with and without links' capacities loss. The proportion of passengers affected by links and the betweenness of links are used to determine the important links in the URT network. 
The stochastic user equilibrium distribution model, based on an improved logit model, is used to determine passengers' categories and calculate passengers' generalized travel cost in case of links' capacities loss; it is solved with the method of successive weighted averages. The reliability and transport service quality indicators of the URT network are then calculated from the solution. Taking Wuhan Metro as a case, the reliability and transport service quality of the Wuhan Metro network are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by links can effectively identify important links, which have great influence on the reliability and transport service quality of the URT network; the important links are mostly connected to transfer stations, and their passenger flow is high. With an increasing number of failed links and proportion of capacity loss, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 increases at first and then decreases. When the number of failed links and the proportion of capacity loss increase to a certain level, the decline in transport service quality is weakened.
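The assignment step described above can be illustrated with a much simplified sketch: a logit route-choice model solved by the plain method of successive averages on a toy two-route network. All numbers (demand, costs, the dispersion parameter) are invented for illustration; the paper's actual model uses an improved logit formulation and successive weighted averages on a full metro network.

```python
import math

# Toy two-route network between one origin-destination pair.
# All numbers are invented for illustration.
demand = 1000.0            # passengers travelling between the OD pair
free_cost = [10.0, 12.0]   # free-flow generalized cost of each route
slope = [0.010, 0.005]     # cost increase per unit of flow (congestion)
theta = 0.5                # logit dispersion parameter

def route_costs(flows):
    """Generalized cost of each route under the current flows."""
    return [free_cost[i] + slope[i] * flows[i] for i in range(2)]

def logit_split(costs):
    """Split total demand between routes with a logit choice model."""
    weights = [math.exp(-theta * c) for c in costs]
    total = sum(weights)
    return [demand * w / total for w in weights]

# Method of successive averages: blend the auxiliary (all-logit) flows
# into the current flows with a shrinking step size 1/n.
flows = [demand / 2, demand / 2]
for n in range(1, 200):
    aux = logit_split(route_costs(flows))
    flows = [(1 - 1 / n) * f + (1 / n) * a for f, a in zip(flows, aux)]

costs = route_costs(flows)
avg_cost = sum(f * c for f, c in zip(flows, costs)) / demand
print(flows, costs, avg_cost)
```

At the fixed point, the flows reproduce their own logit split, which is the stochastic user equilibrium condition; `avg_cost` corresponds to the average generalized travel cost used above as the service quality indicator.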

Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links

Procedia PDF Downloads 118
1036 Efficacy of Tranexamic Acid on Blood Loss after Primary Total Hip Replacement: A Case-Control Study in 154 Patients

Authors: Fedili Benamar, Belloulou Mohamed Lamine, Ouahes Hassane, Ghattas Samir

Abstract:

Introduction: Perioperative blood loss is a frequent cause of complications in total hip replacement (THR). The present prospective study assessed the efficacy of tranexamic acid (Exacyl(®)) in reducing blood loss in primary THR. Hypothesis: Tranexamic acid reduces blood loss in THR. Material and method: This prospective randomized study on the effectiveness of Exacyl (tranexamic acid) covered total hip replacement surgery performed with a standardized technique between 2019 and September 2022. It involved 154 patients, of whom 84 received a single perioperative injection of Exacyl (group 1) at a dosage of 10 mg/kg over 20 minutes. All patients received postoperative thromboprophylaxis with enoxaparin 0.4 ml subcutaneously, and all were admitted to the post-interventional intensive care unit for 24 hours of monitoring and pain management as per the service protocol. Results: Of the 154 patients, 84 received a single injection of Exacyl (group 1) and 70 did not receive Exacyl perioperatively (group 2). The average age was 57 +/- 15 years, and the gender distribution was nearly equal, with 56% male and 44% female. The distribution according to the ASA score was 20.2% ASA1, 82.3% ASA2, and 17.5% ASA3. There was a significant difference in the average volume of intraoperative and postoperative bleeding over 48 hours: the average bleeding volume was 614 ml +/- 228 in group 1 (Exacyl) versus 729 ml +/- 300 in group 2, with a chi-square of 6.35 and a p-value < 0.01, which is highly significant. The ANOVA test showed an F-statistic of 7.11 and a p-value of 0.008, and a Bartlett test revealed a chi-square of 6.35 with a p-value < 0.01. In group 1 (patients who received Exacyl), 73% had bleeding of less than 750 ml (group A) and 26% had bleeding exceeding 750 ml (group B). 
In group 2 (patients who did not receive Exacyl perioperatively), 52% had bleeding of less than 750 ml (group A) and 47% had bleeding exceeding 750 ml (group B). Thus, the use of Exacyl reduced perioperative bleeding and specifically decreased the risk of severe bleeding exceeding 750 ml by 43%, with a relative risk (RR) of 1.37 and a p-value < 0.01. The transfusion rate was 1.19% in group 1 (Exacyl) versus 10% in group 2 (no Exacyl); the use of Exacyl thus reduced perioperative blood transfusion, with an RR of 0.1 and a p-value of 0.02. Conclusions: The use of Exacyl significantly reduced perioperative bleeding in this type of surgery.
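For readers unfamiliar with the effect measures quoted above, the sketch below shows how a relative risk and related quantities are derived from a 2x2 table. The counts are back-calculated here from the reported percentages (roughly 26% of 84 vs. 47% of 70 patients with bleeding over 750 ml), so rounding means these results will not exactly match the abstract's own figures.

```python
# Back-calculated 2x2 counts from the abstract's percentages
# (approximate: ~26% of 84 and ~47% of 70 with bleeding > 750 ml).
exacyl_n, exacyl_severe = 84, 22     # group 1: received tranexamic acid
control_n, control_severe = 70, 33   # group 2: no tranexamic acid

risk_exacyl = exacyl_severe / exacyl_n     # ~0.26
risk_control = control_severe / control_n  # ~0.47

rr = risk_exacyl / risk_control    # relative risk, Exacyl vs. control
rrr = 1 - rr                       # relative risk reduction
arr = risk_control - risk_exacyl   # absolute risk reduction
nnt = 1 / arr                      # number needed to treat

print(f"RR={rr:.2f}  RRR={rrr:.0%}  ARR={arr:.0%}  NNT={nnt:.1f}")
```

With these approximate counts, the relative risk of severe bleeding with Exacyl is well below 1 and the relative risk reduction is in the 43-45% range quoted above, illustrating where such a figure comes from.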

Keywords: acid tranexamic, blood loss, anesthesia, total hip replacement, surgery

Procedia PDF Downloads 59
1035 Management of Acute Biliary Pathology at Gozo General Hospital

Authors: Kristian Bugeja, Upeshala A. Jayawardena, Clarissa Fenech, Mark Zammit Vincenti

Abstract:

Introduction: Biliary colic, acute cholecystitis, and gallstone pancreatitis are some of the most common surgical presentations at Gozo General Hospital (GGH). National Institute for Health and Care Excellence (NICE) guidelines advise that suitable patients with acute biliary problems should be offered a laparoscopic cholecystectomy within one week of diagnosis. There has traditionally been difficulty in achieving this, mainly due to the reluctance of some surgeons to operate in the acute setting, limited timely access to MRCP and ERCP, and organizational issues. Methodology: A retrospective study was performed involving all biliary pathology-related admissions to GGH during the two-year period of 2019 and 2020. Patients’ files and the electronic case summary (ECS) were used for data collection, which included demographic data, primary diagnosis, co-morbidities, management, waiting time to surgery, length of stay, readmissions, and reasons for readmission. NICE clinical guideline 188 (gallstone disease) was used as the standard. Results: 51 patients were included in the study. The mean age was 58 years, and 35 (68.6%) were female. The main diagnoses on admission were biliary colic in 31 (60.8%) and acute cholecystitis in 10 (19.6%); others included gallstone pancreatitis in 3 (5.89%), chronic cholecystitis in 2 (3.92%), gall bladder malignancy in 4 (7.84%), and ascending cholangitis in 1 (1.97%). Management included laparoscopic cholecystectomy in 34 (66.7%), conservative management in 8 (15.7%), and ERCP in 6 (11.7%). The mean waiting time for laparoscopic cholecystectomy in patients with acute cholecystitis was 74 days, the range being between 3 and 146 days from the date of diagnosis. Only one patient diagnosed with acute cholecystitis and managed with laparoscopic cholecystectomy underwent surgery within the 7-day time frame. Hospital re-admissions were reported in 5 patients (9.8%), due to vomiting (1), ascending cholangitis (1), and gallstone pancreatitis (3). 
Discussion: Guidelines were not met for patients presenting to Gozo General Hospital with acute biliary pathology. This resulted in 5 patients being re-admitted to hospital while waiting for definitive surgery. The local issues causing the delay to surgery need to be identified and steps taken to facilitate the provision of urgent cholecystectomy for suitable patients.

Keywords: biliary colic, acute cholecystitis, laparoscopic cholecystectomy, conservative management

Procedia PDF Downloads 146
1034 Dynamic Ambulance Deployment to Reduce Ambulance Response Times Using Geographic Information Systems

Authors: Masoud Swalehe, Semra Günay

Abstract:

Developed countries are losing many lives to non-communicable diseases as compared to their developing counterparts. The effects of these diseases are mostly sudden and manifest only a short time before death or a dangerous attack, which has consolidated the significance of the emergency medical system (EMS) as one of the vital areas of healthcare service delivery. The primary objective of this research is to reduce the ambulance response times (RT) of the Eskişehir province EMS, since a number of studies have established a relationship between ambulance response times and the survival chances of patients, especially out-of-hospital cardiac arrest (OHCA) victims. It has been found that patients who receive out-of-hospital medical attention within a few (four) minutes after cardiac arrest, thanks to low ambulance response times, stand higher chances of survival than those who wait longer (more than 12 minutes) for out-of-hospital medical care because of higher ambulance response times. The study will make use of geographic information systems (GIS) technology to dynamically reallocate ambulance resources according to demand and time so as to reduce ambulance response times. The geospatial-time distribution of ambulance calls (demand) will be used as a basis for optimal ambulance deployment using the system status management (SSM) strategy, achieving greater demand coverage with the same number of ambulance resources and thereby reducing response times. Drive-time polygons will be used to delineate time-specific facility coverage areas and to suggest additional candidate facility sites to which ambulance resources can be moved to serve higher demand, making use of network analysis techniques. Emergency ambulance call data from 1st January 2014 to 31st December 2014, obtained from the Eskişehir province health directorate, will be used in this study. 
This study will focus on the reduction of ambulance response times which is a key Emergency Medical Services performance indicator.
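The deployment logic described above, covering as much demand as possible within a response-time threshold using a fixed number of units, can be sketched as a greedy maximal-covering heuristic. The sites, call volumes, drive times, and threshold below are all invented; the study itself derives coverage areas from GIS drive-time polygons and network analysis rather than from a toy matrix like this.

```python
# Invented demand points (call volumes) and candidate sites with
# drive times in minutes; the real study derives these from GIS data.
calls = {"A": 120, "B": 80, "C": 60, "D": 40, "E": 30}
drive_time = {
    "S1": {"A": 3, "B": 9, "C": 12, "D": 4, "E": 15},
    "S2": {"A": 10, "B": 2, "C": 5, "D": 11, "E": 9},
    "S3": {"A": 14, "B": 7, "C": 3, "D": 13, "E": 6},
}
THRESHOLD = 8    # target response time in minutes
AMBULANCES = 2   # number of units available for deployment

def covered(site):
    """Demand points reachable from a site within the threshold."""
    return {p for p, t in drive_time[site].items() if t <= THRESHOLD}

# Greedy maximal covering: repeatedly choose the site that adds the
# most not-yet-covered call volume.
chosen, served = [], set()
for _ in range(AMBULANCES):
    best = max(drive_time,
               key=lambda s: sum(calls[p] for p in covered(s) - served))
    chosen.append(best)
    served |= covered(best)

coverage = sum(calls[p] for p in served) / sum(calls.values())
print(chosen, sorted(served), f"{coverage:.0%}")
```

Re-running such a heuristic for each time slice of the geospatial-time call distribution is one simple way to express the dynamic, demand-driven reallocation idea behind system status management.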

Keywords: emergency medical services, system status management, ambulance response times, geographic information system, geospatial-time distribution, out of hospital cardiac arrest

Procedia PDF Downloads 289
1033 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach under real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite-difference time-domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section. 
The study anticipates further exploration of temporal distributions of protein translocation dwell-times and the impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
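As a rough intuition for how a classifier can assign a protein identity to a tri-color tag trace, the sketch below uses simple template correlation in place of the paper's convolutional neural network. The proteins, reference traces, and noise level are all invented; this is a stand-in only, since the actual method trains a CNN on simulated nanopore signals.

```python
import random

# Idealized per-protein reference traces (invented): each number is a
# tag-intensity reading at one position along the read.
templates = {
    "proteinX": [1, 0, 0, 2, 0, 1, 0, 0],
    "proteinY": [0, 2, 1, 0, 0, 0, 2, 1],
}

def correlate(a, b):
    """Unnormalized dot-product similarity between two traces."""
    return sum(x * y for x, y in zip(a, b))

def classify(trace):
    """Assign the trace to the best-matching reference template."""
    return max(templates, key=lambda name: correlate(trace, templates[name]))

# Simulate one noisy read of proteinX and classify it.
random.seed(0)
noisy = [v + random.gauss(0, 0.3) for v in templates["proteinX"]]
print(classify(noisy))
```

A CNN replaces the fixed templates with learned filters, which is what lets the real method tolerate partial labeling and still separate functionally similar proteins, but the underlying match-a-trace-to-a-protein structure is the same.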

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 61
1032 A Research on the Improvement of Small and Medium-Sized City in Early-Modern China (1895-1927): Taking Southern Jiangsu as an Example

Authors: Xiaoqiang Fu, Baihao Li

Abstract:

In 1895, China's defeat in the Sino-Japanese War prompted a trend of comprehensive and systematic study of Western models in China. In urban planning and construction, an urban reform movement slowly sprang up, aimed at renovating and reconstructing traditional cities into modern cities similar to the concessions. During the movement, Chinese traditional cities initiated a process of modern urban planning for their modernization; meanwhile, the traditional planning morphology and system started to disintegrate, and Western forms and techniques became the paradigm. The improvement of existing cities therefore became the prototype of urban planning in early modern China. Current research on the movement mainly concentrates on large cities, concessions, railway hub cities, and similar special cities; systematic research on the large number of traditional small and medium-sized cities has so far remained blank. This paper takes the improvement constructions of small and medium-sized cities in the southern region of Jiangsu Province as its research object. First, the criteria for small and medium-sized cities are based on the administrative levels of general offices and county-level cities. Secondly, Southern Jiangsu is a suitable research object: the southern area of Jiangsu Province, called Southern Jiangsu for short, was the most economically developed region in Jiangsu and one of the most economically developed and most urbanized regions in China. As one of the most developed agricultural areas in ancient China, Southern Jiangsu formed a large number of traditional small and medium-sized cities. In early modern times, with the help of Shanghai's economic radiation, its geographical advantages, and a powerful economic foundation, Southern Jiangsu became an important birthplace of Chinese national industry. 
Furthermore, the strong commercial atmosphere promoted widespread urban improvement practices unmatched in other regions. Meanwhile, the example set by Shanghai, Zhenjiang, Suzhou, and other port cities became the improvement pattern for small and medium-sized cities in Southern Jiangsu. This paper analyzes the reform movement of the small and medium-sized cities in Southern Jiangsu (1895-1927), including its subjects, objects, laws, technologies, and the influencing political and social factors. Finally, the paper reveals the formation mechanism and characteristics of the urban improvement movement in early modern China. According to the paper, the improvement of small and medium-sized cities was a kind of gestation of local city planning culture in early modern China, with a fusion of imported and endogenous elements.

Keywords: early modern China, improvement of small-medium city, southern region of Jiangsu province, urban planning history of China

Procedia PDF Downloads 244
1031 Comparison of the Toxicity of Silver and Gold Nanoparticles in Murine Fibroblasts

Authors: Šárka Hradilová, Aleš Panáček, Radek Zbořil

Abstract:

Nanotechnology is considered one of the most promising fields with high added value, bringing new possibilities to various sectors from industry to medicine. With growing interest in nanomaterials and their applications, increasing nanoparticle production leads to increased exposure of people and the environment to man-made nanoparticles. Nanoparticles (NPs) are clusters of atoms in the size range of 1–100 nm. Metal nanoparticles represent one of the most important and frequently used types of NPs due to their unique physical, chemical, and biological properties, which differ significantly from those of the bulk material. Biological properties, including the toxicity of metal nanoparticles, are generally determined by their size, size distribution, shape, surface area, surface charge, surface chemistry, stability in the environment, and ability to release metal ions. Therefore, the biological behavior of NPs and their possible adverse effects cannot be derived from the bulk form of the material, because nanoparticles show unique properties and interactions with biological systems precisely due to their nanodimensions. Silver and gold NPs are intensively studied and used. Both can be used, for instance, in surface-enhanced Raman spectroscopy; a considerable number of applications of silver NPs are associated with antibacterial effects, while gold NPs are associated with cancer treatment and bioimaging. The antibacterial effects of silver ions have been known for centuries: silver ions and silver-based compounds are highly toxic to microorganisms. The toxic properties of silver NPs are intensively studied, but the mechanism of cytotoxicity is not fully understood. While silver NPs are considered toxic, gold NPs are variously reported as toxic but also as innocuous to eukaryotic cells; accordingly, gold NPs are used in various biological applications without a perceived risk of cell damage, even when the aim is to suppress the growth of cancer cells. The literature is thus contradictory on whether gold NPs are toxic or harmless.
Because most studies compare particles of various sizes prepared in various ways, with testing performed on different cell lines, it is difficult to generalize. The novelty and significance of our research lie in the complex biological effects of silver and gold NPs prepared by the same method, with the same parameters and the same stabilizer. We can therefore compare the biological effects of the pure nanometals themselves, based on their chemical nature, without the influence of other variables. The aim of our study is thus to compare the cytotoxic effects of two types of noble metal NPs, focusing on the mechanisms that contribute to cytotoxicity. The study was conducted on murine fibroblasts using selected commonly used assays. Each of these assays monitors a selected area related to toxicity, and together they provide a comprehensive view of the interactions of nanoparticles and living cells.

Keywords: cytotoxicity, gold nanoparticles, mechanism of cytotoxicity, silver nanoparticles

Procedia PDF Downloads 237
1030 Double Functionalization of Magnetic Colloids with Electroactive Molecules and Antibody for Platelet Detection and Separation

Authors: Feixiong Chen, Naoufel Haddour, Marie Frenea-Robin, Yves MéRieux, Yann Chevolot, Virginie Monnier

Abstract:

Neonatal thrombopenia occurs when the mother generates antibodies against her baby’s platelet antigens. It is particularly critical for newborns because it can cause coagulation problems leading to intracranial hemorrhage. In this case, diagnosis must be made quickly so that platelet transfusion can take place immediately after birth. Before transfusion, platelet antigens must be tested carefully to avoid rejection. The majority of thrombopenia cases (95%) are caused by antibodies directed against Human Platelet Antigen 1a (HPA-1a) or 5b (HPA-5b). The common method for platelet antigen detection is polymerase chain reaction, which allows identification of the gene sequence. However, it is expensive, time-consuming, and requires a significant blood volume, which is not suitable for newborns. We propose to develop a point-of-care device based on magnetic colloids doubly functionalized with 1) antibodies specific to platelet antigens and 2) highly sensitive electroactive molecules detectable by an electrochemical microsensor. These magnetic colloids will be used first to isolate platelets from other blood components, then to capture specifically platelets bearing HPA-1a and HPA-5b antigens, and finally to attract them close to the sensor’s working electrode for an improved electrochemical signal. The expected advantages are an assay time under 20 min starting from a blood volume smaller than 100 µL. Our functionalization procedure, based on amine dendrimers and NHS-ester modification of the initial carboxyl colloids, will be presented. Functionalization efficiency was evaluated by colorimetric titration of surface chemical groups, zeta potential measurements, infrared spectroscopy, fluorescence scanning, and cyclic voltammetry. Our results showed that electroactive molecules and antibodies can be immobilized successfully onto magnetic colloids. Application of a magnetic field to the working electrode increased the detected electrochemical signal.
Magnetic colloids were able to capture specific purified antigens extracted from platelets.

Keywords: magnetic nanoparticles, electroactive molecules, antibody, platelet

Procedia PDF Downloads 253
1029 Crime Prevention with Artificial Intelligence

Authors: Mehrnoosh Abouzari, Shahrokh Sahraei

Abstract:

Today, with the increase in the quantity, severity, and variety of crimes, crime prevention faces a serious challenge: human resources alone, using traditional methods, will not be effective. One of the developments of the modern world is the presence of artificial intelligence in various fields, including criminal law. In fact, the use of artificial intelligence in criminal investigations and the fight against crime is a necessity in today's world. The application of artificial intelligence in criminal science goes beyond prevention and extends to the prediction of crime. Crime prevention addresses the three factors of the offender, the environment, and the victim: changing the conditions of these factors, on the assumption that the offender is rational, increases the cost and risk of crime so that the offender desists from delinquency, or makes the victim aware of self-care and of the possibility of exposure to danger, or makes crimes more difficult to commit. Artificial intelligence in the field of combating crime and social harm acts like an all-seeing eye, regardless of time and place: it looks into the future, predicts the occurrence of a possible crime, and thus prevents crimes from occurring. The purpose of this article is to collect and analyze the studies conducted on the use of artificial intelligence in predicting and preventing crime. How capable is this technology of predicting and preventing crime? The results show that the artificial intelligence technologies in use are capable of predicting and preventing crime and can find patterns in large data sets far more efficiently than humans.
In crime prediction and prevention, the term artificial intelligence refers to the increasing use of technologies that apply algorithms to large data sets to assist or replace police. Our discussion concerns the use of artificial intelligence in predicting and preventing crime, including predicting the time and place of future criminal activities, effective identification of patterns, and accurate prediction of future behavior through data mining, machine learning, deep learning, data analysis, and neural networks. Because the knowledge of criminologists can provide insight into risk factors for criminal behavior, among other issues, computer scientists can match this knowledge with the data sets that artificial intelligence uses.

Keywords: artificial intelligence, criminology, crime, prevention, prediction

Procedia PDF Downloads 64
1028 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression

Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin

Abstract:

This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. Opponents, on the other hand, argue that official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data for 2013 to 2016 are used in this study, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential, indicating that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. Classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model.
This study addresses the endogeneity and heterogeneity problems together by combining a spatial fixed-effect model with geographically weighted regression (GWR). A series of studies applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, the omitted spatially related variables might bias the results of those models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation. The main policy implication of this result is that it is improper to estimate the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses; the effect of flood prevention may vary dramatically by location.
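The GWR component can be illustrated with a minimal, self-contained sketch on synthetic data (hypothetical variable names; the authors' actual model additionally includes spatial fixed effects and a full set of hedonic controls): at each location, a weighted least-squares regression is fitted with weights that decay with distance under a Gaussian kernel, so the coefficient on flood risk is allowed to vary over space.

```python
import numpy as np

def gwr_local_coefs(coords, X, y, bandwidth):
    """Geographically weighted regression: fit a separate weighted
    least-squares model at each observation location, with weights
    decaying by distance (Gaussian kernel)."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add intercept
    betas = np.empty((n, k + 1))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)    # Gaussian kernel weights
        Xw = Xd * w[:, None]                       # row-weighted design matrix
        betas[i] = np.linalg.solve(Xw.T @ Xd, Xw.T @ y)
    return betas

# toy example: price depends on flood risk with a spatially varying slope
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
flood = rng.uniform(0, 1, 200)
slope = -1.0 - 0.3 * coords[:, 0]                  # effect strengthens eastward
price = 10 + slope * flood + rng.normal(0, 0.05, 200)
B = gwr_local_coefs(coords, flood[:, None], price, bandwidth=2.0)
```

The local slope estimates in `B[:, 1]` track the spatial variation of the true effect, which is exactly what a single global OLS coefficient would average away.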

Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression

Procedia PDF Downloads 281
1027 On the Survival of Individuals with Type 2 Diabetes Mellitus in the United Kingdom: A Retrospective Case-Control Study

Authors: Njabulo Ncube, Elena Kulinskaya, Nicholas Steel, Dmitry Pshezhetskiy

Abstract:

Life expectancy in the United Kingdom (UK) has been nearly constant since 2010, particularly for individuals aged 65 years and older. This trend has also been noted in several other countries. The slowdown in the growth of life expectancy was concurrent with an increase in the number of deaths caused by non-communicable diseases. Of particular concern is the worldwide exponential increase in the number of diabetes-related deaths. Previous studies have reported increased mortality hazards among diabetics compared to non-diabetics, and differing effects of antidiabetic drugs on mortality hazards. This study aimed to estimate all-cause mortality hazards and related life expectancies among type 2 diabetes mellitus (T2DM) patients in the UK using a time-variant Gompertz-Cox model with frailty. The study also aimed to understand the major causes of the change in life expectancy growth in the last decade. A total of 221,182 individuals (30.8% T2DM, 57.6% male) aged 50 years and above, born between 1930 and 1960 inclusive and diagnosed between 2000 and 2016, were selected from The Health Improvement Network (THIN) database of UK primary care data and followed up to 31 December 2016. About 13.4% of participants died during the follow-up period. The overall all-cause mortality hazard ratio of T2DM compared to non-diabetic controls was 1.467 (1.381-1.558) when diagnosed between 50 and 59 years and 1.38 (1.307-1.457) when diagnosed between 60 and 74 years. The estimated life expectancies of T2DM individuals without further comorbidities diagnosed at the age of 60 years were 2.43 (1930-1939 birth cohort), 2.53 (1940-1949 birth cohort), and 3.28 (1950-1960 birth cohort) years less than those of non-diabetic controls. However, the 1950-1960 birth cohort had a steeper hazard function than the 1940-1949 birth cohort for both T2DM and non-diabetic individuals. In conclusion, mortality hazards for people with T2DM continue to be higher than for non-diabetics.
The steeper mortality hazard slope for the 1950-1960 birth cohort may identify the sub-population contributing to the slowdown in life expectancy growth.
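As a back-of-the-envelope illustration of how a proportional hazard ratio of the size reported above translates into a life-expectancy gap, the sketch below integrates a Gompertz survival curve with and without the hazard ratio. The baseline parameters are illustrative only, not fitted to the THIN data, and no frailty term is included.

```python
import numpy as np

def gompertz_survival(t, a, b, hr=1.0):
    """Survival under a Gompertz baseline hazard a*exp(b*t), scaled by a
    Cox-style proportional hazard ratio hr."""
    return np.exp(-hr * (a / b) * (np.exp(b * t) - 1.0))

def residual_life_expectancy(a, b, hr=1.0, horizon=60.0, step=0.01):
    """Remaining life expectancy = area under the survival curve
    (rectangle-rule numeric integration over the horizon)."""
    t = np.arange(0.0, horizon, step)
    return gompertz_survival(t, a, b, hr).sum() * step

# assumed baseline: hazard at age 60 and ageing rate (not fitted values)
a, b = 0.005, 0.09
le_control = residual_life_expectancy(a, b, hr=1.0)
le_t2dm = residual_life_expectancy(a, b, hr=1.45)   # HR of roughly 1.4-1.47
gap = le_control - le_t2dm                          # years of life lost
```

Under these assumed parameters, a hazard ratio of ~1.45 applied from age 60 yields a gap of a few years of remaining life expectancy, the same order of magnitude as the cohort-specific gaps reported above.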

Keywords: T2DM, Gompertz-Cox model with frailty, all-cause mortality, life expectancy

Procedia PDF Downloads 108
1026 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research

Authors: Christoph Opperer

Abstract:

Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, and texture is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience of physical space, with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and most responsible for drawing attention to elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency maps, which serve as visual representations of the areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view.
This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflect the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. Therefore, this paper emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.
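The weighting idea can be sketched on toy data (the paper's actual pipeline derives saliency from ray-traced spherical panoramas): classic visibility graph analysis counts the cells each cell can see, whereas the weighted variant sums the saliency of those visible cells, so attention-grabbing surfaces contribute more.

```python
import numpy as np

def weighted_connectivity(visibility, saliency):
    """Classic VGA connectivity counts how many cells each cell can see
    (row sums of the boolean visibility matrix). The weighted variant
    sums the saliency of the visible cells instead, so highly salient
    surfaces count for more than visually blank ones."""
    visibility = np.asarray(visibility, dtype=float)
    saliency = np.asarray(saliency, dtype=float)
    plain = visibility.sum(axis=1)      # what is possible to see
    weighted = visibility @ saliency    # what is likely to be seen
    return plain, weighted

# toy 4-cell layout: cell 3 holds a brightly lit, high-saliency surface
vis = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
sal = np.array([0.1, 0.1, 0.1, 0.9])
plain, weighted = weighted_connectivity(vis, sal)
```

In this toy layout, cells 1 and 2 are indistinguishable by plain connectivity, and only the saliency weighting separates cells that see the salient surface from those that do not.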

Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, raytracing, spherical images

Procedia PDF Downloads 56
1025 Economic Analysis of Post-Harvest Losses in Plantain (and Banana): A Case Study of South Western Nigeria

Authors: O. R. Adeniyi, A. Ayandiji

Abstract:

Losses are common in most perishable crops, and plantain products, because the fruit ripens rapidly, can only be stored for a few days, thereby limiting their utilization. Plantain (and banana) is highly perishable at the ambient temperatures prevalent in the tropics. The specific objectives of this study are to identify the socioeconomic characteristics of banana/plantain dealers and to determine the perceived effect of the losses incurred in the marketing of banana/plantain. The study was carried out in the Ondo and Lagos states of south-western Nigeria. A purposive sampling technique was used to collect information from “Kolawole plantain depot”, the point of purchase in Ondo State, and “Alamutu plantain market” in Mushin, the point of sale in Lagos State. A preliminary study was conducted using primary data collected through well-structured questionnaires administered to 60 respondents, of which the 55 fully completed ones were analysed. Budgeting, gross margin analysis, and multiple linear regression were used for the analyses. Most merchants were found to be in the middle age class (30-50 years); the majority were female, had completed secondary school education, and eighty percent had more than 5 years’ experience in banana/plantain marketing. The highest losses were incurred during transportation, and these losses constitute about 5.62 percent of the potential total revenue. On average, the loss in gross margin is about ₦6,000.00 per merchant. The impact of these losses is reflected in the continuously declining level of their income. The age of the respondents played a major role in determining the level of care in handling the fruit, with the middle age class tending to handle it more carefully. In conclusion, the merchants need adequate and sustainable transportation and storage facilities as a matter of utmost urgency.
There is a need for government to encourage producers (farmers) with motivating incentives and to make the environment conducive for dealers by providing adequate storage facilities and ready markets, both locally and, possibly, for export.

Keywords: post-harvest, losses, plantain, banana, simple regression

Procedia PDF Downloads 302
1024 Inertial Particle Focusing Dynamics in Trapezoid Straight Microchannels: Application to Continuous Particle Filtration

Authors: Reza Moloudi, Steve Oh, Charles Chun Yang, Majid Ebrahimi Warkiani, May Win Naing

Abstract:

Inertial microfluidics has recently emerged as a promising tool for high-throughput manipulation of particles and cells for a wide range of flow cytometric tasks, including cell separation/filtration, cell counting, and mechanical phenotyping. Inertial focusing depends profoundly on the cross-sectional shape of the channel, which affects not only the shear field but also the wall-effect lift force near the wall region. Despite comprehensive experiments and numerical analyses of the lift forces in rectangular and non-rectangular microchannels (half-circular and triangular cross-sections), all of which possess planes of symmetry, less effort has been devoted to the flow field structure of trapezoidal straight microchannels and its effects on inertial focusing; a rectilinear channel with a trapezoidal cross-section breaks all planes of symmetry. In this study, particle focusing dynamics inside trapezoidal straight microchannels were first studied systematically over a broad range of channel Reynolds numbers (20 < Re < 800). The altered axial velocity profile, and consequently the new shear force arrangement, led to a cross-lateral movement of the equilibrium positions toward the longer side wall when the rectangular straight channel was changed to a trapezoid; however, as the channel Reynolds number increased further (Re > 50), the main lateral focusing position started to move back toward the middle and the shorter side wall, depending on the particle clogging ratio (K = a/Hmin, where a is the particle size), the channel aspect ratio (AR = W/Hmin, where W is the channel width and Hmin the smaller channel height), and the slope of the slanted wall. Increasing the channel aspect ratio (AR) from 2 to 4 and the slope of the slanted wall up to Tan(α) ≈ 0.4 (Tan(α) = (Hlonger-sidewall-Hshorter-sidewall)/W) shifted the off-center lateral focusing position from the middle of the channel cross-section by up to ~20 percent of the channel width.
It was found that the focusing point degraded near the slanted wall due to the asymmetry; particles mainly focused near the bottom wall or fluctuated between the channel center and the bottom wall, depending on the slanted wall and Re (Re < 100, channel aspect ratio 4:1). Finally, as a proof of principle, a trapezoidal straight microchannel with a bifurcation was designed and used for continuous filtration of a broader range of particle clogging ratios (0.3 < K < 1), exiting through the longer-wall outlet with ~99% efficiency (Re < 100), in comparison to rectangular straight microchannels (W > H, 0.3 ≤ K < 0.5).
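The dimensionless groups quoted above can be collected in a small helper (definitions as given in the abstract; the numbers below are a hypothetical example, not the authors' device dimensions):

```python
def trapezoid_params(particle_d, width, h_short, h_long):
    """Dimensionless groups for a trapezoidal straight microchannel:
    clogging ratio K = a / Hmin, aspect ratio AR = W / Hmin,
    slant slope Tan(alpha) = (Hlong - Hshort) / W."""
    h_min = min(h_short, h_long)
    K = particle_d / h_min              # particle size over smaller height
    AR = width / h_min                  # channel width over smaller height
    tan_alpha = (h_long - h_short) / width
    return K, AR, tan_alpha

# hypothetical example: 20 um beads, 320 um wide channel, 80/160 um side walls
K, AR, tan_a = trapezoid_params(20.0, 320.0, 80.0, 160.0)
```

This example geometry gives K = 0.25, AR = 4, and Tan(α) = 0.25, i.e. inside the ranges the abstract identifies as effective for the longer-wall outlet filtration.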

Keywords: cell/particle sorting, filtration, inertial microfluidics, straight microchannel, trapezoid

Procedia PDF Downloads 205
1023 Assessment of Post-surgical Donor-Site Morbidity in Vastus lateralis Free Flap for Head and Neck Reconstructive Surgery: An Observational Study

Authors: Ishith Seth, Lyndel Hewitt, Takako Yabe, James Wykes, Jonathan Clark, Bruce Ashford

Abstract:

Background: Vastus lateralis (VL) can be used to reconstruct defects of the head and neck. Whilst the advantages are documented, donor-site morbidity is not well described. This study aimed to assess donor-site morbidity after VL flap harvest; the results will inform future directions for preventative and post-operative care to improve patient health outcomes. Methods: Ten participants (mean age 55 years) were assessed for the presence of donor-site morbidity after VL harvest. Musculoskeletal outcomes (pain, muscle strength, muscle length, tactile sensation), quality of life (SF-12), and lower limb function (lower extremity function, gait function and speed, and sit-to-stand) were assessed using validated and standardized procedures. Outcomes were compared to age-matched healthy reference values or to the non-operated side. Analyses were conducted using descriptive statistics and non-parametric tests. Results: There was no difference in muscle strength (knee extension), muscle length, ability to sit-to-stand, or gait function (all P > 0.05). Knee flexor muscle strength was significantly less in the operated leg compared to the non-operated leg (P = 0.02), and walking speed was slower than age-matched healthy values (P < 0.001). Thigh tactile sensation was impaired in 89% of participants. Quality of life was significantly lower on the physical health component of the SF-12 (P < 0.001); the mental health component was similar to healthy controls (P = 0.26). Conclusion: There was no effect on donor-site morbidity with regard to knee extensor strength, pain, walking function, ability to sit-to-stand, or muscle length. VL harvest affected donor-site knee flexion strength, walking speed, tactile sensation, and physical health-related quality of life.

Keywords: vastus lateralis, morbidity, head and neck, surgery, donor-site morbidity

Procedia PDF Downloads 227
1022 An Assessment of Digital Platforms, Student Online Learning, Teaching Pedagogies, Research and Training at Kenya College of Accounting University

Authors: Jasmine Renner, Alice Njuguna

Abstract:

The booming technological revolution is driving a change in delivery systems, especially for e-learning and distance learning in higher education. The report and findings of the study, an assessment of digital platforms, student online learning, teaching pedagogies, research and training at Kenya College of Accounting University (hereinafter 'KCA'), were produced as a joint collaboration between the Carnegie African Diaspora Fellowship and the staff, students, and faculty at KCA University. The participants in this assessment/research met on selected days over a six-week period, during which one-on-one consultations, surveys, questionnaires, focus groups, training, and seminars were conducted to ascertain online learning and teaching, curriculum development, research, and training at KCA. The project was organized into an eight-week workflow, with each week culminating in project activities designed to assess digital online teaching and learning at KCA. The project also included the training of distance learning instructors at KCA and the evaluation of KCA’s distance learning platforms and programs. Additionally, through a curriculum audit and redesign, the project sought to enhance the curriculum development activities related to distance learning at KCA. The findings of this assessment/research represent a systematic, deliberate process of gathering, analyzing, and using data collected from DL students, DL staff and lecturers, and the library personnel in charge of online learning resources and access at KCA. We engaged in one-on-one interviews and discussions with staff, students, and faculty, and collated the findings to inform practices that are effective in the ongoing design and development of eLearning at KCA University. The overall findings of the project led to the following recommendations.
First, there is a need to address the infrastructural challenges that led to poor internet connectivity for online learning, as well as training needs and content development for faculty and staff. Second, there is a need to manage cultural impediments within KCA, for example, fears of change from one platform to another, and to secure institutional goodwill as a precondition of effective online learning. Third, at a practical and short-term level, the systematic findings of the research support the adoption of the following at KCA University to promote effective online learning: a) an eLearning-compatible faculty lab, b) revision of policy to include an eLearning strategy and its strategic management, c) recognition for faculty and staff engaged in training for the adoption and implementation of eLearning, and d) adequate website resources on eLearning. The report and findings represent a comprehensive approach to a systematic assessment of online teaching and learning, research, and training at KCA.

Keywords: e-learning, digital platforms, student online learning, online teaching pedagogies

Procedia PDF Downloads 175
1021 Early Onset Neonatal Sepsis Pathogens in Malaysian Hospitals: Determining Empiric Antibiotic

Authors: Nazedah Ain Ibrahim, Mohamed Mansor Manan

Abstract:

Treatment of suspected early onset neonatal sepsis (EONS) in the Neonatal Intensive Care Unit (NICU) is essential. However, information regarding EONS pathogens may vary between regions. Global perspectives show Group B Streptococcus (GBS) as the most common causative pathogen, but the widespread use of intrapartum antibiotics has shifted the pathogen pattern towards gram-negative microorganisms, especially E. coli. The objectives of this study are to describe the pathogens isolated, to assess current treatment, and to assess the risk of EONS. Records of 899 neonates born in three general hospitals between 2009 and 2012 were retrospectively reviewed. The inclusion criteria were neonates with blood culture taken prior to empiric antibiotic administration and within 72 hours of life. Of the study group, a total of 734 (82%) cases had a documented blood culture that met the inclusion criteria. Proven EONS (confirmed by positive blood culture) was found in 22 (3%) neonates. The majority were isolated with gram-positive organisms, 17 (2.3%): coagulase-negative staphylococci (7), followed by Bacillus sp. (5) and Streptococcus pneumoniae (2), with one case each of GBS, Streptococcus spp., and Enterococcus sp. Meanwhile, only five cases of gram-negative organisms [Stenotrophomonas (Xanthomonas) maltophilia (1), Haemophilus influenzae (1), Sphingomonas paucimobilis (1), Enterobacter gergoviae (1) and E. coli (1)] were isolated. A total of 286 (39%) cases were exposed to intrapartum antibiotics and, of those, 157 (21.4%) were administered prior to delivery. All gram-positive and most gram-negative organisms showed sensitivity to the tested antibiotics; only two rare gram-negative organisms showed total resistance. Male sex, surfactant use, caesarean delivery, and prolonged rupture of membranes (>18 hours) were possible risk factors for proven EONS.
Although proven EONS remains uncommon in Malaysia, the effect of intrapartum antibiotics still requires continuous surveillance. Nonetheless, analysis of the isolated pathogens can be used as treatment guidance in managing suspected EONS.

Keywords: early onset neonatal sepsis, neonates, pathogens, gram positive, gram negative

Procedia PDF Downloads 304
1020 Investigating the Influences of Long-Term, as Compared to Short-Term, Phonological Memory on the Word Recognition Abilities of Arabic Readers vs. Arabic Native Speakers: A Word-Recognition Study

Authors: Insiya Bhalloo

Abstract:

It is quite common in the Muslim faith for non-Arabic speakers to be able to convert written Arabic, especially Quranic Arabic, into a phonological code without significant semantic or syntactic knowledge. This is due to prior experience learning to read the Quran (a religious text written in Classical Arabic) from a very young age, such as through enrolment in Quranic Arabic classes. Compared to native speakers of Arabic, these Arabic readers do not have comprehensive morpho-syntactic knowledge of the Arabic language, nor can they understand or engage in Arabic conversation. The study investigates whether mere phonological experience (as indicated by the Arabic readers’ experience with Arabic phonology and its sound system) is sufficient to cause phonological interference during word recognition of previously heard words, despite the participants’ non-native status. Both native speakers of Arabic and non-native speakers of Arabic, i.e., individuals who learned to read the Quran from a young age, will be recruited. Each experimental session will include two phases: an exposure phase and a test phase. During the exposure phase, participants will be presented with Arabic words (n = 40) on a computer screen. Half of these words will be common words found in the Quran, while the other half will be words common in Modern Standard Arabic (MSA) but either non-existent in the Quran or present there at a significantly lower frequency. During the test phase, participants will be presented with both familiar (n = 20; i.e., words presented during the exposure phase) and novel Arabic words (n = 20; i.e., words not presented during the exposure phase). Half of the presented words will be common Quranic Arabic words and the other half will be common MSA words that are not Quranic words.
Moreover, half of the Quranic Arabic and MSA words presented will be nouns and half will be verbs, thereby controlling for word-processing effects of lexical category. Participants will then judge whether they saw each word during the exposure phase. This study investigates whether long-term phonological memory, such as childhood exposure to Quranic Arabic orthography, has a differential effect on the word-recognition capacities of native Arabic speakers and Arabic readers; we compare the effects of long-term phonological memory with those of short-term phonological exposure (as indexed by the familiar words from the exposure phase). The researcher’s hypothesis is that, despite their lack of lexical knowledge, early experience converting written Quranic Arabic text into a phonological code will help participants recall the familiar Quranic words from the exposure phase more accurately than the words that were not presented. Moreover, it is anticipated that the non-native Arabic readers will report more false alarms to the unfamiliar Quranic words, because early childhood phonological exposure to Quranic Arabic script produces false phonological facilitatory effects.
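The balanced stimulus design described above (40 exposure words split evenly across source and lexical category, then a test phase of 20 familiar and 20 novel items) can be sketched in Python. This is only an illustrative reconstruction: the word pools, function name, and per-cell counts are assumptions, not the study's actual materials.

```python
import random

def build_stimuli(word_pool, seed=0):
    """Sketch of the balanced exposure/test lists described in the abstract.

    word_pool maps (source, category) -> candidate words, where source is
    "quranic" or "msa" and category is "noun" or "verb". Returns
    (exposure, test) lists of (word, source, category, status) tuples.
    """
    rng = random.Random(seed)
    exposure, test = [], []
    for key, words in sorted(word_pool.items()):
        words = list(words)
        rng.shuffle(words)
        shown = words[:10]  # 10 words per cell -> 40 exposure words in total
        exposure += [(w, *key, "familiar") for w in shown]
        # 5 of the shown words per cell reappear at test (20 familiar)...
        test += [(w, *key, "familiar") for w in shown[:5]]
        # ...alongside 5 unseen words per cell (20 novel).
        test += [(w, *key, "novel") for w in words[10:15]]
    rng.shuffle(test)
    return exposure, test
```

Balancing each {source} × {category} cell equally keeps familiarity, word source, and lexical category fully crossed, which is what lets the analysis separate phonological familiarity from lexical-category effects.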

Keywords: Modern Standard Arabic, phonological facilitation, phonological memory, Quranic Arabic, word recognition

Procedia PDF Downloads 339
1019 To Allow or to Forbid: Investigating How Europeans Reason about Endorsing Rights to Minorities: A Vignette Methodology Based Cross-Cultural Study

Authors: Silvia Miele, Patrice Rusconi, Harriet Tenenbaum

Abstract:

An increasingly multi-ethnic Europe has been pushing citizens’ boundaries on who should be entitled, and to what extent, to practise their own diversity. Indeed, according to a Standard Eurobarometer survey conducted in 2017, immigration is seen by Europeans as the most serious issue facing the EU, and a third of respondents reported that they do not feel comfortable interacting with migrants from outside the EU. Many of these migrants come from Muslim countries; Muslims accounted for 4.9% of Europe’s population in 2016, a figure projected to rise to 14% by 2050. Additionally, political debates have increasingly focused on Muslim immigrants, who are frequently portrayed as difficult to integrate, while nationalist parties across Europe have fostered the idea of insuperable cultural differences, creating an atmosphere of hostility. Using a 3 × 3 × 2 between-subjects design, we investigated how people reason about endorsing religious and non-religious rights to minorities. An online survey was administered via Qualtrics to university students in three countries (Italy, Spain and the UK), presenting hypothetical scenarios through a vignette methodology. Each respondent was randomly allocated to one of three conditions: a Christian, Muslim or non-religious (vegan) target. Each condition entailed three questions about children’s self-determination rights to exercise some control over their own lives and three questions about children’s nurturance rights of care and protection. Moreover, participants were required to elaborate on their answers via free-text entries, were asked about the amount and quality of their contact with the three targets, and self-reported their religious, national and ethnic identification. Answers were recorded on a Likert scale of 1-5, 1 being "not at all" and 5 being "very much".
A two-way ANCOVA will be used to analyse answers to closed-ended questions, while free-text answers will be coded and dichotomised based on Social Cognitive Domain Theory into moral, social-conventional and psychological reasons, and analysed via ANCOVAs. The findings aim to contribute to the implementation of educational interventions and to inform governmental policies on human rights.
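The between-subjects structure above can be sketched as a condition grid in Python. The three target labels come from the abstract; the country factor follows from the three national samples, while the remaining binary factor is an unnamed placeholder, since the abstract does not label it. Function and variable names are illustrative assumptions.

```python
import itertools
import random

# Hypothetical reconstruction of the 3 x 3 x 2 between-subjects grid. Only
# the target factor is named in the abstract; the third (binary) factor is
# deliberately left as a placeholder.
targets = ["Christian", "Muslim", "non-religious (vegan)"]
countries = ["Italy", "Spain", "UK"]
third_factor = ["level A", "level B"]  # assumption: unnamed binary factor

conditions = list(itertools.product(countries, targets, third_factor))

def allocate(participant_id, seed=0):
    """Deterministically but pseudo-randomly allocate a respondent to one
    of the three target conditions, mirroring the random allocation
    described in the abstract."""
    rng = random.Random(f"{seed}-{participant_id}")
    return rng.choice(targets)
```

Seeding the generator per participant makes the allocation reproducible, which is useful when the same survey logic must be re-run or audited later.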

Keywords: children's rights, Europe, migration, minority

Procedia PDF Downloads 119
1018 A Focused, High-Intensity Spread-Spectrum Ultrasound Solution to Prevent Biofouling

Authors: Alan T. Sassler

Abstract:

Biofouling is a significant issue for ships, especially those based in warm-water ports. Biofouling damages hull coatings, degrades platform hydrodynamics, blocks cooling water intakes and returns, reduces platform range and speed, and increases fuel consumption. Although platforms are protected to some degree by antifouling paints, these paints are much less effective on stationary platforms, and problematic biofouling can occur on antifouling-paint-protected stationary platforms in some environments in as little as a few weeks. Remedial hull-cleaning operations are possible, but they are very expensive, sometimes damage the vessel’s paint or hull, and are generally not completely effective. Ultrasound of sufficient intensity, focused on specific frequency ranges, can be used to prevent the growth of biofouling organisms. The use of ultrasound to prevent biofouling is not new, but systems to date have focused on protecting platforms by shaking the hull with internally mounted transducers similar to those used in ultrasonic cleaning machines. While potentially effective, this methodology does not scale well to large platforms, and the significant costs of installing and maintaining these systems dwarf the initial purchase price. An alternative approach has been developed that uses highly directional pier-mounted transducers to project high-intensity spread-spectrum ultrasonic energy into the water column, focused near the surface. This focused energy has been shown to prevent biofouling at ranges of up to 50 meters from the source. Spreading the energy over a multi-kilohertz band makes the system both more effective and more environmentally friendly. The system has been shown to be both effective and inexpensive in small-scale testing and is now being characterized at larger scale in selected marinas.
To date, test results collected in Florida marinas suggest that this approach can keep ensonified areas of thousands of square meters free from biofouling, although care must be taken to minimize shaded areas.
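The idea of spreading acoustic energy over a multi-kilohertz band, rather than dwelling on a single tone, can be illustrated with a minimal linear-chirp sketch. All parameter values (band edges, duration, sample rate) are hypothetical illustrations, not the system's actual operating figures.

```python
import math

def chirp_samples(f_start, f_end, duration, sample_rate):
    """Generate a linear frequency sweep (chirp) whose instantaneous
    frequency rises from f_start to f_end Hz over `duration` seconds,
    spreading energy across the whole band instead of one tone."""
    n = int(duration * sample_rate)
    k = (f_end - f_start) / duration  # sweep rate in Hz per second
    samples = []
    for i in range(n):
        t = i / sample_rate
        # Instantaneous phase of a linear chirp: 2*pi*(f0*t + (k/2)*t^2)
        phase = 2 * math.pi * (f_start * t + 0.5 * k * t * t)
        samples.append(math.sin(phase))
    return samples

# e.g. a sweep across an assumed 5 kHz-wide ultrasonic band
sweep = chirp_samples(20_000, 25_000, duration=0.01, sample_rate=96_000)
```

Because the energy is distributed across the band, no single frequency carries the full intensity, which is one plausible reason a spread-spectrum signal can be less disruptive to marine life than a continuous single-frequency tone.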

Keywords: biofouling, ultrasonic, environmentally friendly antifoulant, marine protection, antifouling

Procedia PDF Downloads 48
1017 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species

Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel

Abstract:

Populus spp. (poplar) are the fastest-growing trees in North America, making them ideal for a range of applications, as they can achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar readily undergoes biochemical conversion to fuels, making it one of the most promising purpose-grown, woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, the preservation of robust forest ecosystems, and economic prospects for rural communities. In order to gain a better understanding of the potential use of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA). This assessment is an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA specifically focused on a short-rotation coppice system employing a single-pass cut-and-chip harvesting method for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how different costs affect the economic performance of the poplar cropping system. This analysis aimed to determine the minimum average delivered selling price for one metric ton of biomass necessary to achieve a desired rate of return over the cropping period. To inform the TEA, data on establishment, crop-care activities, and crop yields were derived from a field study conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
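The "minimum average delivered selling price for a desired rate of return" described above is a standard breakeven calculation: the price per ton at which the discounted revenues over the cropping period equal the discounted costs (NPV = 0 at the target rate). A minimal sketch follows; the cost and yield figures are purely hypothetical illustrations, not the study's data.

```python
def minimum_selling_price(costs, yields, rate):
    """Breakeven price per metric ton of biomass: the price at which the
    present value of revenues equals the present value of costs, i.e.
    NPV = 0 at the target rate of return.

    costs[t]  -> cash cost in year t (land rent, site prep, planting, ...)
    yields[t] -> harvested biomass in year t (metric tons)
    """
    pv_costs = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    pv_yield = sum(y / (1 + rate) ** t for t, y in enumerate(yields))
    return pv_costs / pv_yield

# Illustrative (hypothetical) cash flows: heavy establishment cost in year 0,
# coppice harvests every three years over a nine-year cropping period.
costs = [1200, 150, 150, 300, 150, 150, 300, 150, 150, 300]
yields = [0, 0, 0, 40, 0, 0, 45, 0, 0, 45]
price = minimum_selling_price(costs, yields, rate=0.06)
```

Because the establishment cost falls in year 0 while the harvests arrive years later, a higher target rate of return discounts the yields more heavily than the costs and so pushes the breakeven price up, which is exactly the behaviour a sensitivity analysis over cost assumptions probes.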

Keywords: biomass, populus species, sensitivity analysis, techno-economic analysis

Procedia PDF Downloads 68
1016 The Effect of Empathy Training Given to Midwives on Mothers’ Satisfaction with Midwives and Their Birth Perception

Authors: Songul Aktas, Turkan Pasinlioglu, Kiymet Yesilcicek Calik

Abstract:

Introduction: An empathic approach during labor increases both the quality of care and mothers’ birth satisfaction. Besides, mothers’ statements of satisfaction with the midwives who assist labor contribute to a positive birth perception and a wish to give vaginal birth again. Aim: The study investigated the effect of empathy training given to midwives on mothers’ satisfaction with midwives and their birth perception. Material/Method: This experimental study was undertaken between February 2013 and January 2014 at a public hospital in Trabzon Province. The study population was composed of mothers who gave vaginal birth, and the sample comprised 222 mothers, as determined by power analysis. Ethical approval and written informed consent were obtained. Mothers who were assisted by midwives during the first, second and third phases of delivery and the first two postpartum hours were included. The empathy training given to midwives included didactic narration, creative drama and psychodrama techniques and lasted 32 hours. Data were collected before the empathy training (BET), right after the empathy training (RAET) and eight weeks later after birth (8WLAB). Mothers were homogeneous in terms of socio-demographic and obstetric characteristics. Data were collected with a questionnaire and analyzed with chi-square tests. Findings: The rate of mothers’ satisfaction with midwives was 36.5% in BET, 81.1% in RAET and 75.7% in 8WLAB.
Key statements of mothers’ satisfaction with midwives were as follows: 27.6% of mothers described midwives as “smiling-kind” in BET, 39.6% in RAET and 33.7% in 8WLAB; 31% described midwives as “understanding” in BET, 38.2% in RAET and 33.7% in 8WLAB; 15.7% described midwives as “reassuring” in BET, 44.9% in RAET and 39.3% in 8WLAB; 19.5% described midwives as “encouraging and motivating” in BET and 39.8% in RAET; and 19.8% described midwives as “informative” in BET, 45.6% in RAET and 35.1% in 8WLAB (p<0.05). Key statements of mothers’ dissatisfaction with midwives were as follows: 55.3% of mothers described midwives as “poorly informed” in BET, 17% in RAET and 27.7% in 8WLAB; 56.9% described midwives as “listening poorly” in BET, 17.6% in RAET and 25.5% in 8WLAB; 53.2% described midwives as “judgmental-embarrassing” in BET, 17% in RAET and 29.8% in 8WLAB; and 56.2% said midwives had “fierce facial expressions” in BET, 15.6% in RAET and 28.1% in 8WLAB. The rates of mothers perceiving labor as “easy” were 8.1% in BET, 21.6% in RAET and 13.5% in 8WLAB, and the rates of mothers perceiving labor as “very difficult and tiring” were 41.9% in BET, 5.4% in RAET and 13.5% in 8WLAB (p<0.05). Conclusion: The empathy training given to midwives had a positive effect on mothers’ statements of satisfaction with midwives and on their birth perception. Note: This study was financially funded by TUBİTAK project number 113S672.
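The chi-square comparisons reported above can be illustrated with a minimal pure-Python sketch. The cell counts below are approximate reconstructions from the reported satisfaction percentages (36.5%, 81.1%, 75.7%) and the sample size of 222; they are for illustration only, not the study's raw data.

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table
    (list of rows of observed counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column factors
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Satisfied vs. not satisfied at the three time points, counts approximated
# from the reported percentages of n = 222 mothers.
table = [
    [81, 141],   # BET:   ~36.5% satisfied
    [180, 42],   # RAET:  ~81.1% satisfied
    [168, 54],   # 8WLAB: ~75.7% satisfied
]
stat = chi_square(table)  # compare against the critical value for df = 2
```

With df = (3 − 1)(2 − 1) = 2, the statistic is compared against the 5% critical value of 5.991; counts this far apart produce a statistic well above it, consistent with the significant differences the abstract reports.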

Keywords: empathy training, labor perception, mother’s satisfaction with midwife, vaginal delivery

Procedia PDF Downloads 351
1015 Systems Lens: Towards Sustainable Management of Maintenance and Renewal of Wire-Based Infrastructure: The Case of Water Network in the City of Linköping, Sweden

Authors: E. Hegazy, S. Anderberg, J. Krook

Abstract:

A city's wire-based infrastructure systems (WBIS) are responsible for the delivery of electricity, telecommunications, sanitation, drainage, and district heating and are a necessity for sustainable modern urban life. Maintaining the functionality of these structures involves high costs, brings disturbance to the local community, and affects the environment. One key reason is that the cables and pipes are placed under streets, so system parts wear easily, their service lifetimes are reduced, and all maintenance and renewal rely on recurrent excavation. In Sweden, a significant part of the wire-based infrastructure is already outdated and will need to be replaced in the coming decades. The replacement of these systems will entail massive costs as well as significant traffic disruption and environmental disturbance. However, this challenge may also open a unique opportunity to introduce new, more sustainable technologies and management practices. There is currently no comprehensive approach to transforming WBIS management for long-term sustainability while meeting maintenance and renewal needs. A systemic approach, which considers both technical and non-technical aspects as well as time-related factors, may inform WBIS management. Nevertheless, there is limited systemic knowledge of how different factors influence current management practices. The aim of this study is to address this knowledge gap and contribute to the understanding of which factors influence the current practice of WBIS management. A case study approach is used to identify current management practices, the underlying factors that influence them, and their implications for sustainability outcomes. The case study is based both on quantitative data on the local system and on data from interviews and workshops with local practitioners and other stakeholders.
Linköping was selected as a case because it provided good access to the water administration and relevant data for analyzing water infrastructure management strategies. It is a sufficiently important city in Sweden to exhibit challenges that are, to some extent, common to all Swedish cities. By uncovering current practices and the factors influencing them in Linköping, the study highlights knowledge gaps and uncertainties related to sustainability consequences. The findings show that the goals, priorities, and policies controlling management are short-term, and decisions on maintenance and renewal are often restricted to finding solutions to the most urgent issues. Sustainability transformation in the infrastructure area will not be possible through individual efforts without coordinated technical, organizational, business, and regulatory changes.

Keywords: case study, infrastructure, management, practice, Sweden

Procedia PDF Downloads 69
1014 Fly Ash Contamination in Groundwater and Its Implications on Local Climate Change

Authors: Rajkumar Ghosh

Abstract:

Fly ash, a byproduct of coal combustion, has become a prevalent environmental concern due to its potential impact on both groundwater quality and local climate change. This study provides an in-depth analysis of the various mechanisms through which fly ash contaminates groundwater, as well as the possible consequences of this contamination for local climate change. The presence of fly ash in groundwater not only poses a risk to human health but also has the potential to influence local climate change through complex interactions. Although fly ash has various applications in construction and other industries, improper disposal and a lack of containment measures have led to its infiltration into groundwater systems. Through a comprehensive review of existing literature and case studies, we examine the interactions between fly ash and groundwater systems, assess the effects on hydrology, and discuss the implications for the broader climate. We first review the pathways through which fly ash enters groundwater, including leaching from disposal sites, infiltration through soil, and migration from surface water bodies, together with the physical and chemical characteristics of fly ash that contribute to its mobility and persistence in groundwater. The introduction of fly ash into groundwater can alter its chemical composition, increasing the concentrations of heavy metals, metalloids, and other potentially toxic elements. We describe the mechanisms of contaminant transport and highlight the potential risks to human health and ecosystems. Fly ash contamination in groundwater may also influence the hydrological cycle through changes in groundwater recharge, discharge, and flow dynamics; we examine the implications of altered hydrology for local water availability, aquatic habitats, and overall ecosystem health. Finally, the presence of fly ash in groundwater may have direct and indirect effects on local climate change.
We discuss the role of fly ash as a potent greenhouse gas absorber and its contribution to radiative forcing. Additionally, we investigate possible feedback mechanisms between groundwater contamination and climate change, such as altered vegetation patterns and changes in local temperature and precipitation patterns. We then analyze potential mitigation and remediation techniques to minimize fly ash contamination in groundwater, which may include improved waste management practices, engineered barriers, groundwater remediation technologies, and sustainable fly ash utilization. This paper highlights the critical link between fly ash contamination in groundwater and its potential contribution to local climate change. It emphasizes the importance of addressing this issue promptly through a combination of preventive measures, effective management strategies, and continuous monitoring. Understanding the interconnections between fly ash contamination, groundwater quality, and local climate is a step towards creating a more resilient and sustainable environment for future generations. The findings of this research can assist policymakers and environmental managers in formulating sustainable strategies to mitigate fly ash contamination and minimize its contribution to climate change.

Keywords: groundwater, climate, sustainable environment, fly ash contamination

Procedia PDF Downloads 69