Search results for: test case generation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22000

640 The in Vitro and in Vivo Antifungal Activity of Terminalia Mantaly on Aspergillus Species Using Drosophila melanogaster (UAS-Diptericin) As a Model

Authors: Ponchang Apollos Wuyep, Alice Njolke Mafe, Longchi Satkat Zacheaus, Dogun Ojochogu, Dabot Ayuba Yakubu

Abstract:

Fungi cause huge losses when infections occur in both plants and animals. Synthetic antifungal drugs are mostly very expensive and highly cytotoxic. This study aimed to determine the in vitro and in vivo antifungal activities of the leaf and stem extracts of Terminalia mantaly (umbrella tree) H. Perrier on Aspergillus species, in a bid to identify potential sources of cheap starting materials for the synthesis of new drugs to address growing antimicrobial resistance. Powdered T. mantaly leaves and stems were extracted by fractionation using the solvent partition coefficient method in graded order (n-hexane, ethyl acetate, methanol and distilled water), and phytochemical screening of each fraction revealed the presence of alkaloids, saponins, tannins, flavonoids, carbohydrates, steroids, anthraquinones, cardiac glycosides and terpenoids in varying degrees. The agar well diffusion technique was used to screen the fractions for antifungal activity on clinical isolates of Aspergillus species (Aspergillus flavus and Aspergillus fumigatus). The minimum inhibitory concentration (MIC50) of the most active extracts was determined by the broth dilution method. The fractions showed high antifungal activity, with zones of inhibition ranging from 6 to 26 mm and 8 to 30 mm (leaf fractions) and 10 to 34 mm and 14 to 36 mm (stem fractions) on A. flavus and A. fumigatus, respectively. All the fractions showed antifungal activity in a dose-response relationship at concentrations of 62.5 mg/ml, 125 mg/ml, 250 mg/ml and 500 mg/ml. The ethyl acetate, hexane and methanol fractions showed the better antifungal efficacy in vitro, being the most potent fractions with MICs ranging from 62.5 to 125 mg/ml. There was no statistically significant difference (P>0.05) in potency among the eight leaf and stem fractions (n-hexane, ethyl acetate, methanol and distilled water), the antifungal fluconazole, which served as positive control, and 10% DMSO (dimethyl sulfoxide), which served as negative control. In the in vivo investigations, the ingestion technique was used for the infection studies. Female Drosophila melanogaster (UAS-Diptericin) flies were grouped into normal flies (positive control), infected and untreated flies (negative control), and flies infected with A. fumigatus and placed on a normal diet, on diets containing fractions (MSM and HSM, each at concentrations of 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 mg/ml), or on a diet containing the control drug (fluconazole as positive control). The flies were observed for fifteen (15) days, and the total mortality of flies was recorded each day. The results of the study reveal that the flies were susceptible to infection with A. fumigatus and responded to treatment most effectively at 50, 60 and 70 mg/ml for both the methanol and hexane stem fractions. Therefore, the methanol and hexane stem fractions of T. mantaly contain therapeutically useful compounds, justifying the traditional use of this plant for the treatment of fungal infections.

Keywords: Terminalia mantaly, Aspergillus fumigatus, cytotoxic, Drosophila melanogaster, antifungal

Procedia PDF Downloads 68
639 Medium-Scale Multi-Juice Extractor for Food Processing

Authors: Flordeliza L. Mercado, Teresito G. Aguinaldo, Helen F. Gavino, Victorino T. Taylan

Abstract:

Most fruits and vegetables are available in large quantities during peak season, when they are oftentimes marketed at low prices and left to rot or fed to farm animals. The lack of efficient storage facilities, and the additional cost and unavailability of small machinery for food processing, result in low prices and wastage. Incidentally, processed fresh fruits and vegetables are gaining importance nowadays, and health-conscious people are also into ‘juicing’. One way to reduce wastage and ensure all-season availability of crop juices at reasonable cost is to develop equipment for effective juice extraction. The study was conducted to design, fabricate and evaluate a multi-juice extractor using locally available materials, making it relatively cheaper and affordable for medium-scale enterprises. The study also formulated juice blends using the extracted juices and calamansi juice at different blending percentages and evaluated their chemical properties and sensory attributes. Furthermore, the chemical properties of the extracted meals were evaluated for future applications. The multi-juice extractor has an overall dimension of 963 mm x 300 mm x 995 mm, a gross weight of 82 kg and 5 major components, namely the feeding hopper, extracting chamber, juice and meal outlet, transmission assembly, and frame. Machine performance was evaluated based on juice recovery, extraction efficiency, extraction rate, extraction recovery, and extraction loss, considering apple and carrot as test crops with three replications each, and was analyzed using a t-test. The formulated juice blends were subjected to sensory evaluation, and the data gathered were analyzed using analysis of variance appropriate for a completely randomized design. Results showed that the machine’s juice recovery (73.39%), extraction rate (16.40 li/hr), and extraction efficiency (88.11%) for apple were significantly higher than for carrot, while extraction recovery (99.88%) was higher for apple than for carrot. Extraction loss (0.12%) was lower for apple than for carrot but was not significantly affected by crop. Based on adding a percentage mark-up to the extraction cost (Php 2.75/kg), the break-even weight and payback period are 4,710.69 kg and 1.22 years for a 35% mark-up, and 3,492.41 kg and 0.86 year (10.32 months) for a 50% mark-up. Results of the sensory evaluation of the juice blends showed that the type of juice significantly influenced all the sensory parameters, while the blending percentage and their interaction had no significant effect, making the apple-calamansi juice blend more preferred than the carrot-calamansi blend in terms of all the sensory parameters. The machine’s performance is higher for apple than for carrot, and the cost analysis revealed that the machine is financially viable, with a payback period of 1.22 years (35% mark-up) or 0.86 year (50% mark-up) and an income of Php 23,961.60 or Php 34,444.80 per year using the 35% and 50% mark-ups, respectively. The juice blends were of good quality based on the values obtained in the chemical analysis, and the extracted meal could also be used to produce another product based on the values obtained from the proximate analysis.
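
A minimal sketch of the payback arithmetic quoted above, assuming a hypothetical machine cost: the annual incomes are the figures reported in the abstract, while the investment value is a placeholder chosen only to illustrate the calculation.

```python
# Hedged sketch: payback period = machine investment / annual income from the mark-up.
# The machine cost below is an assumed placeholder, not a figure from the study.
def payback_years(machine_cost_php: float, annual_income_php: float) -> float:
    return machine_cost_php / annual_income_php

assumed_machine_cost = 29_000.0  # hypothetical investment (Php)
for markup, annual_income in ((0.35, 23_961.60), (0.50, 34_444.80)):
    print(f"{markup:.0%} mark-up: payback ≈ "
          f"{payback_years(assumed_machine_cost, annual_income):.2f} years")
```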

Keywords: food processing, fruits and vegetables, juice extraction, multi-juice extractor

Procedia PDF Downloads 290
638 Accidental U.S. Taxpayers Residing Abroad: Choosing between U.S. Citizenship or Keeping Their Local Investment Accounts

Authors: Marco Sewald

Abstract:

Due to the current enforcement of exterritorial U.S. legislation, up to 9 million U.S. (dual) citizens residing abroad are subject to U.S. double and surcharge taxation and at risk of losing access to otherwise basic financial services and investment opportunities abroad. The United States is the only OECD country that taxes non-resident citizens, lawful permanent residents and other non-resident aliens on their worldwide income, based on local U.S. tax laws. To enforce these policies, the U.S. has implemented ‘saving clauses’ in all tax treaties and introduced several compliance provisions, including the Foreign Account Tax Compliance Act (FATCA), Qualified Intermediary Agreements (QI) and Intergovernmental Agreements (IGA), requiring Foreign Financial Institutions (FFIs) to implement these provisions in foreign jurisdictions. This policy creates systematic cases of double and surcharge taxation. The increased enforcement of compliance rules is creating additional reporting burdens for U.S. persons abroad and for FFIs accepting such U.S. persons as customers. FFIs in Europe react with a growing denial of specific financial services to this population. The number of U.S. citizens renouncing citizenship has increased dramatically in recent years. A case study is chosen as an appropriate methodology and research method, being an empirical inquiry that investigates a contemporary phenomenon within its real-life context, where the boundaries between phenomenon and context are not clearly evident and multiple sources of evidence are used. This evaluative approach tests whether the combination of policies works in practice, whether the policies are in accordance with desirable moral, political and economic aims, or whether they may serve other causes. The research critically evaluates the financial and non-financial consequences and develops suitable strategies. It further discusses these strategies to avoid the undesired consequences of exterritorial U.S. legislation. Three possible strategies result from the use cases: (1) duck and cover, (2) pay U.S. double/surcharge taxes and tax preparation fees and accept the imposed product limitations, and (3) renounce U.S. citizenship and pay possible exit taxes, tax preparation fees and the requested $2,350 fee to renounce. While the first strategy is unlawful and therefore unsuitable, the second strategy is only suitable if the U.S. citizen residing abroad is planning to move to the U.S. in the future. The last strategy is the only reasonable and lawful way provided by the U.S. to limit the exposure to U.S. double and surcharge taxation and the limitations on financial products. The results are believed to add a perspective to the current academic discourse regarding U.S. citizenship-based taxation, currently dominated by U.S. scholars, while providing suitable strategies for the affected population at the same time.

Keywords: citizenship based taxation, FATCA, FBAR, qualified intermediaries agreements, renounce U.S. citizenship

Procedia PDF Downloads 193
637 Study of Silent Myocardial Ischemia in Type 2 Diabetic Males: Egyptian Experience

Authors: Ali Kassem, Yhea Kishik, Ali Hassan, Mohamed Abdelwahab

Abstract:

Introduction: Accelerated coronary and peripheral vascular atherosclerosis is one of the most common chronic complications of diabetes mellitus. An important aspect of coronary artery disease in this condition is its silent nature. Aim of the work: To determine the prevalence of silent myocardial ischemia (SMI) in type 2 diabetic males in Upper Egypt and to define the male diabetic population who should be screened for SMI. Patients and methods: 100 type 2 diabetic male patients with a negative history of angina or anginal equivalent symptoms and 30 healthy controls were included. Full medical history and thorough clinical examination were done for all participants. Fasting and postprandial blood glucose levels, lipid profile, HbA1c, microalbuminuria, and C-reactive protein were measured for all participants. Resting ECG, trans-thoracic echocardiography, treadmill exercise ECG, and myocardial perfusion imaging were done for all participants, and patients positive for one or more non-invasive tests (NITs) underwent coronary angiography. Results: Twenty-nine patients (29%) were positive for one or more NITs in the patient group, compared to only one case (3.3%) in the controls. After coronary angiography, 20 patients in the patient group were positive for significant coronary artery stenosis, while the single positive control subject declined the procedure. There were statistically significant differences between the two groups regarding hypertension, dyslipidemia, obesity, and family history of DM and IHD, with higher levels of microalbuminuria, C-reactive protein and total lipids in the patient group versus controls. According to coronary angiography, patients were subdivided into two subgroups: 20 positive for SMI (positive coronary angiography) and 80 negative for SMI (negative coronary angiography). No statistical difference regarding family history of DM and type of diabetic therapy was found between the two subgroups. Yet, smoking, hypertension, obesity, dyslipidemia and family history of IHD were significantly higher in diabetics positive versus those negative for SMI. 90% of patients in the SMI-positive subgroup had two or more cardiac risk factors, while only two patients (10%) had one cardiac risk factor. Uncontrolled DM was detected more often in patients positive for SMI. Diabetic complications were more prevalent in patients positive for SMI versus those negative for SMI. Most of the patients positive for SMI had DM of more than 5 years’ duration. Resting ECG and resting echocardiography detected only 6 and 11, respectively, of the 20 positive cases in the SMI-positive group, compared to treadmill exercise ECG and myocardial perfusion imaging, which detected 16 and 18 cases, respectively. Conclusion: Type 2 diabetic male patients should be screened for SMI when they are above 50 years old, the diabetes duration is more than 5 years, two or more cardiac risk factors are present, and/or the patients suffer from one or more of the chronic diabetic complications. CRP is an important parameter for selecting type 2 diabetic male patients who should be screened for SMI. Non-invasive cardiac tests are reliable for screening for SMI in these patients in our locality.

Keywords: C-reactive protein, silent myocardial ischemia, stress tests, type 2 DM

Procedia PDF Downloads 375
636 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti

Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms

Abstract:

Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti Earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential for the disaster response community to conduct effective search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. Among these organizations, OpenStreetMap (OSM), a collaborative project to create a free editable map of the world, used the imagery to support volunteers in digitizing roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM. There is an increasing need for a tool to automatically identify which areas in Haiti, as well as in other countries vulnerable to disasters, are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, in order to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements), spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), urban index (UI)), surface texture (based on Sentinel-1 SAR measurements), elevation and slope. Additional remote sensing derived products, such as Hansen Global Forest Change, DLR's Global Urban Footprint (GUF), and the World Settlement Footprint (WSF), were also evaluated as predictors, as well as the OSM street and road network (including junctions). A supervised random forest model predicted 89% of the variation of OSM building footprint area in a given cell. These predictions allowed for the identification of cells that are predicted to be covered but are actually not mapped yet. With these results, this methodology could be adapted to any location to assist with preparing for future disastrous events and to ensure that essential geospatial information is available to support response and recovery efforts during and following major disasters.
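
A minimal sketch of the kind of supervised random forest prediction described above. The abstract mentions a random forest classifier, but since the reported result is variance explained, a regressor variant is shown here; the file name, feature column names and the flagging threshold are illustrative assumptions, not the authors' dataset.

```python
# Hedged sketch: predict OSM building-footprint area per grid cell from
# remote-sensing covariates with a random forest, then flag likely under-mapped cells.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

cells = pd.read_csv("haiti_grid_cells.csv")          # hypothetical per-cell table
features = ["viirs_ntl", "ndvi", "ndbi", "savi", "ui",
            "sar_texture", "elevation", "slope", "osm_road_junctions"]
X, y = cells[features], cells["osm_building_area_m2"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
print("explained variance (R^2):", r2_score(y_te, rf.predict(X_te)))

# Cells whose predicted footprint area greatly exceeds the mapped OSM area
# are flagged as likely under-mapped (threshold chosen for illustration).
residual = rf.predict(X) - y
under_mapped = cells.loc[residual > residual.quantile(0.9)]
```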

Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing

Procedia PDF Downloads 111
635 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity

Authors: Yuri Laevsky, Tatyana Nosova

Abstract:

The phenomenon of filtration gas combustion (FGC) was discovered experimentally at the beginning of the 1980s. It has a number of important applications in areas such as chemical technology, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation. Computation of this characteristic encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and by the mass conservation law for the relative concentration of the reacting component of the gas mixture. The homogenization of the model is performed with the two-temperature approach, in which at each point of the continuous medium we specify the solid and gas phases with Newtonian heat exchange between them. The construction of the computational scheme is based on the principles of the mixed finite element method on a regular mesh. The approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to the determination of the combustion front propagation velocity. Straightforward computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term ‘front propagation velocity’ makes sense for settled motion, for which analytical formulae linking velocity and equilibrium temperature hold. A numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm has been applied in a subsequent numerical investigation of the FGC process. In this way, the dependence of the main characteristics of the process on various physical parameters has been studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow combustion front propagation to rapid propagation. Approximate boundaries of this interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave allows considering the semi-Lagrangian approach to the solution of the problem.
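
The abstract does not give the analytical velocity formula used by the authors, so the sketch below illustrates a generic alternative only: tracking a fixed isotherm of a 1-D temperature field by linear interpolation and fitting its recent trajectory, which is typically more stable than differencing the grid position directly. The function names, isotherm choice and window length are assumptions.

```python
# Hedged sketch (not the authors' velocity formula): a generic stable way to
# estimate an instantaneous front velocity from a 1-D temperature profile.
import numpy as np

def front_position(x, T, T_iso):
    """Position where T drops through the isotherm T_iso (linear interpolation)."""
    idx = np.where(T >= T_iso)[0]
    if idx.size == 0 or idx[-1] == len(x) - 1:
        return np.nan
    i = idx[-1]                               # last node still above the isotherm
    w = (T[i] - T_iso) / (T[i] - T[i + 1])
    return x[i] + w * (x[i + 1] - x[i])

def front_velocity(times, positions, window=10):
    """Least-squares slope of the recent front trajectory, smoother than a
    raw finite difference of two successive positions."""
    t = np.asarray(times[-window:])
    s = np.asarray(positions[-window:])
    A = np.vstack([t, np.ones_like(t)]).T
    slope, _intercept = np.linalg.lstsq(A, s, rcond=None)[0]
    return slope
```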

Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation

Procedia PDF Downloads 290
634 Provide Adequate Protection to Avoid Secondary Victimization: Ensuring the Rights of the Child Victims in the Criminal Justice System

Authors: Muthukuda Arachchige Dona Shiroma Jeeva Shirajanie Niriella

Abstract:

The necessity of protecting the rights of victims of crime is a matter of concern today. In the criminal justice system, child victims who are subjected to sexual abuse/violence are more vulnerable than other crime victims. From the moment they go to the police to lodge a complaint until the end of the court proceedings, these victims are re-victimized in the criminal justice system. The rights of suspects, accused and convicts are recognized and guaranteed by the constitution under the fair trial norm, by contemporary penal laws in which crime is viewed as an offence against the State, and by the existing criminal justice systems in many jurisdictions, including Sri Lanka. Against this backdrop, a reasonable question arises as to whether the existing criminal justice system, especially one that follows the adversarial mode of judicial trial, protects the fair trial norm in the criminal justice process. Therefore, this paper discusses the rights of sexually abused child victims in the criminal justice system in order to restore the imbalance between the rights of the wrongdoer and the victim, and suggests legal reforms to strengthen their rights in the criminal justice system, which is essential to ending secondary victimization. The paper considers Sri Lanka as a sample to discuss this issue. The paper looks at how child victims are marginalized in the traditional adversarial model of the justice process, whether contemporary penal laws adequately protect the rights of these victims, and whether the current laws set out provisions to provide sufficient assistance and protection to them. The study further deals with the important principles adopted in international human rights law relating to the protection of the rights of child victims in sexual offence cases. In this research paper, the rights of child victims in the investigation, trial and post-trial stages of the criminal justice process are assessed. This research contains an extensive scrutiny of relevant international standards and local statutory provisions. Case law, books, journal articles, and government publications such as commission reports on this topic are rigorously reviewed as secondary resources. Further, 25 randomly selected child victims of sexual offences from cases decided in the last two years, police officers from the 5 police divisions where the highest numbers of sexual offences were reported in the last two years, and judicial officers (both Magistrates and High Court Judges) from the same judicial zones are interviewed. These data are analyzed in order to find out the reasons for this specific sexual victimization, the needs of these victims at various stages of the criminal justice system, the relationship between victimization and offending, and the difficulties and problems that these victims encounter in the criminal justice system. The author argues that child victims are considerably neglected and that their rights are not adequately protected in the adversarial model of the criminal justice process.

Keywords: child victims of sexual violence, criminal justice system, international standards, rights of child victims, Sri Lanka

Procedia PDF Downloads 357
633 Foreseeing the Future: Human Factors Integration in European Horizon Projects

Authors: José Manuel Palma, Paula Pereira, Margarida Tomás

Abstract:

The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics or intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for Industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies to autonomously sort, handle, and package soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. Both projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future work conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND we will identify emergent safety risks and challenges, their causes and how to overcome them by resorting to interviews, questionnaires, literature review and case studies. Findings and results will be presented in the “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement” handbook. The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance to comply with European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation programme under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS).

Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0

Procedia PDF Downloads 41
632 Emphasis on Difference: Ethnic and National Cultural Heritage Identities and Issues in East Asia Focusing on Korea Cases

Authors: Hyuk-Jin Lee

Abstract:

Even though 23 years of the 21st century have passed, nation-state and nationality-centered cultural identities are still the sentiments and ideologies that dominate the world. Nevertheless, as seen in many cases in Europe, a new perspective is needed to recognize mutual exchanges and influences and to view them as natural cultural exchanges between countries. The situation in East Asia is completely different from Europe. This is presumed to stem from the long tradition of an ethnocentric state concept held for at least several hundred years, quite different from Europe, where the concept of a nation-state was established relatively recently. In other words, unlike Europe, where active exchanges took place, the problem stems from the unique characteristics of East Asia, which has a strong tradition of finding its identity in 'difference'. Thus, it is not hard to find cultural studies or news from the three East Asian countries emphasizing differences among one another. This applies to all cultural areas, including traditional architecture. For example, in the field of Korean traditional architecture, buildings showing influences from neighboring countries tend to be ignored, even if they are traditional Korean architecture. In addition, in the case of Korea, there seems to be one more harmful cultural aftereffect caused by the 36 years of Japanese colonial rule in the early 20th century: the obsessive filtering concept of 'it must be different from Japan'. In other words, the implicit ideological coercion that the definition of 'Korean cultural heritage' should not be influenced by exchanges with Japan may be found throughout Korean studies. The architectural and cultural aspects of the vast period from the Three Kingdoms era to the beginning of Joseon, a period in which cultural exchanges with neighboring countries were relatively strong compared to the late Joseon Dynasty, also reflect the 'distorted filtering' caused by seeking an identity defined in opposition to the Japanese colonial period. It is important to look at cultural heritage and traditions as they are, inductively rather than deductively. If not, we may often ignore or limit our own precious cultural heritage. Conversely, if Baekje, the ancient Korean kingdom, helped Japan in construction and its craftsmen played a big role in building ancient temples, it would be a healthier perspective to view this as cultural exchange rather than proudly viewing it from a cultural owner's perspective, because such a view is a proper reconstruction of the ancient and medieval culture common to East Asia at the time. In particular, this study examines this topic through specific examples from each field of Korean cultural studies. In the search for cultural identity, minimizing excessive emphasis on originality and difference would be more helpful for healthy relations between countries and for collaborative research on the sensitive interpretation of historical facts as well as in cultural circles.

Keywords: cultural heritage identity, cultural ideology, East Asia, Korea

Procedia PDF Downloads 65
631 A Sustainability Benchmarking Framework Based on the Life Cycle Sustainability Assessment: The Case of the Italian Ceramic District

Authors: A. M. Ferrari, L. Volpi, M. Pini, C. Siligardi, F. E. Garcia Muina, D. Settembre Blundo

Abstract:

A long tradition in ceramic manufacturing since the 18th century, supported by the availability of raw materials and an efficient transport system, led to the birth and development of the Italian ceramic tiles district, which nowadays represents a reference point for this sector even at the global level. This economic growth has been coupled with attention to environmental sustainability issues through various initiatives undertaken over the years at the level of the production sector, such as certification activities and sustainability policies. Starting from an evaluation of sustainability in all its aspects, the present work aims to develop a benchmark helping both producers and consumers. In the present study, through the Life Cycle Sustainability Assessment (LCSA) framework, sustainability has been assessed in all its dimensions: environmental with the Life Cycle Assessment (LCA), economic with the Life Cycle Costing (LCC) and social with the Social Life Cycle Assessment (S-LCA). The annual district production of stoneware tiles during the 2016 reference year has been taken as the reference flow for all three assessments, and the system boundaries cover the entire life cycle of the tiles, except for the LCC, for which only the production costs have been considered at the moment. In addition, a preliminary method for the evaluation of local and indoor emissions has been introduced in order to assess the impact of atmospheric emissions on both people living in the area surrounding the factories and workers. The Life Cycle Assessment results, obtained with a modified IMPACT 2002+ assessment method, highlight that the manufacturing process is responsible for the main impact, especially because of atmospheric emissions at the local scale, followed by the distribution to end users, the installation and the ordinary maintenance of the tiles. With regard to the economic evaluation, both internal and external costs have been considered. For the LCC, primary data from the analysis of the financial statements of Italian ceramic companies show that the highest cost items refer to expenses for goods and services and the costs of human resources. The analysis of externalities with the EPS 2015dx method attributes the main damages to the distribution and installation of the tiles. The social dimension has been investigated with a preliminary approach using the Social Hotspots Database, and the results indicate that the most affected damage categories are health and safety, and labor rights and decent work. This study shows the potential of the LCSA framework applied to an industrial sector; in particular, it can be a useful tool for building a comprehensive benchmark for the sustainability of the ceramic industry, and it can help companies to actively integrate sustainability principles into their business models.

Keywords: benchmarking, Italian ceramic industry, life cycle sustainability assessment, porcelain stoneware tiles

Procedia PDF Downloads 113
630 The Participation of Experts in the Criminal Policy on Drugs: The Proposal of a Cannabis Regulation Model in Spain by the Cannabis Policy Studies Group

Authors: Antonio Martín-Pardo

Abstract:

With regard to the context in which this paper is inserted, it is noteworthy that the current criminal policy model in which we find ourselves immersed, denominated by some doctrinal sectors as the citizen security model, is characterized by a marked tendency towards the discrediting of expert knowledge. This type of technical knowledge has been displaced by common sense and by the daily experience of the people at the time of legislative drafting, as well as by excessive attention to the short-term political effects of the law. Despite this adverse criminal-political scene, we still find valuable efforts on the side of experts to bring some rationality to legislative development. This is the case of the proposal for a new cannabis regulation model in Spain carried out by the Cannabis Policy Studies Group (hereinafter referred to as 'GEPCA'). GEPCA is a multidisciplinary group composed of authors with different orientations, trajectories and interests, but with a common minimum objective: the conviction that the current situation regarding cannabis is unsustainable and that a rational legislative solution must be given to the growing social pressure for the regulation of its consumption and production. This paper details the main lines through which this technical proposal is developed, with the purpose of its dissemination and discussion at the Congress. The basic methodology of the proposal is inductive-expository. Firstly, we offer a brief but solid contextualization of the situation of cannabis in Spain. This contextualization touches on issues such as the national regulatory situation and its relationship with the international context; the criminal, judicial and penitentiary impact of the supply and consumption of cannabis; and the therapeutic use of the substance, among others. Secondly, we get down to business properly by detailing the minutiae of the three main cannabis access channels that are proposed, namely: the regulated market, associations of cannabis users, and personal self-cultivation. In each of these options, especially the first two, special attention is paid to both the production and processing of the substance and the necessary administrative control of the activity. Finally, in a third block, some notes are given on a series of subjects that surround the different access options mentioned above and that give fullness and coherence to the proposal. Among these related issues are consumption and possession of the substance; advertising and promotion of cannabis; consumption in areas of special risk (e.g., work or driving); the tax regime; and the need to articulate evaluation instruments for the entire process. The main conclusion drawn from the analysis of the proposal is the unsustainability of the current repressive system, clearly unsuccessful, and the need to develop new access routes to cannabis that guarantee both public health and the rights of people who have freely chosen to consume it.

Keywords: cannabis regulation proposal, cannabis policies studies group, criminal policy, expertise participation

Procedia PDF Downloads 108
629 The Effect of the Performance Evaluation System on the Productivity of Administration and a Case Study

Authors: Ertuğrul Ferhat Yilmaz, Ali Riza Perçin

Abstract:

In business enterprises that have implemented modern management principles, the most important issues are increasing the performance of workers and maximizing income. Through the twentieth century, the rapid development of the data processing and communication sectors and the emergence of multinational enterprises arising from free trade policies have cancelled economic borders and turned local rivalry into global rivalry. Under these competitive conditions, business enterprises have to work actively and productively in order to continue their existence. The employees of business enterprises form the most important factor of production. Therefore, business enterprises, recognizing the importance of the human factor for increasing profit, have used the performance evaluation system to increase the success and development of their employees. Performance evaluation aims to increase manpower productivity by using employees in an active way. Furthermore, this system assists the wage policies implemented in the enterprise, the determination of strategic plans over the short and long term, decisions on promotion, the determination of the educational needs of employees, and decisions such as dismissal and job rotation. It requires a great deal of effort to catch the pace of change in the working realm and to keep ourselves up to date. Getting quality in people and having an effect in the workplace depend largely on the knowledge and competence of managers and prospective managers. Therefore, managers need to use performance evaluation systems in order to base their managerial decisions on sound data. This study aims at finding out whether organizations effectively use performance evaluation systems, how much importance is placed on this issue, and how much the results of the evaluations affect employees. Whether organizations gain a competitive advantage and can continue their activities depends to a large extent on how effectively and efficiently they use their employees. Therefore, it is of vital importance to evaluate employees' performance and to improve it according to the results of that evaluation. The performance evaluation system, which evaluates employees according to criteria related to the organization, has become one of the most important topics for management. By means of the important ends mentioned above, the performance evaluation system appears to be a tool that can be used to improve the efficiency and effectiveness of an organization. Because of its contribution to organizational success, considering performance evaluation on the axis of efficiency shows the importance of this study from a different angle. In this study, we explain the performance evaluation system, efficiency, and the relation between the two concepts. We also analyze the results of questionnaires conducted with textile workers in the city of Edirne. We obtained positive answers to the questions about the effects of performance evaluation on efficiency. After factor analysis, efficiency and motivation, which are determined as factors of the performance evaluation system, have the biggest variance (19.703%) in our sample. Thus, this study shows that objective performance evaluation increases the efficiency and motivation of employees.

Keywords: performance, performance evaluation system, productivity, Edirne region

Procedia PDF Downloads 292
628 Study of Elastic-Plastic Fatigue Crack in Functionally Graded Materials

Authors: Somnath Bhattacharya, Kamal Sharma, Vaibhav Sonkar

Abstract:

Composite materials emerged in the middle of the 20th century as a promising class of engineering materials providing new prospects for modern technology. Recently, a new class of composite materials known as functionally graded materials (FGMs) has drawn considerable attention from the scientific community. In general, FGMs are defined as composite materials in which the composition or microstructure, or both, are locally varied so that a certain variation of the local material properties is achieved. This gradual change in the composition and microstructure of the material is suitable for obtaining a gradient of properties and performance. FGMs are synthesized in such a way that they possess continuous spatial variations in the volume fractions of their constituents to yield a predetermined composition. These variations lead to the formation of a non-homogeneous macrostructure with continuously varying mechanical and/or thermal properties in one or more directions. Lightweight functionally graded composites with high strength-to-weight and stiffness-to-weight ratios have been used successfully in the aircraft industry and in other engineering applications, such as the electronics industry and thermal barrier coatings. In the present work, elastic-plastic crack growth problems (using the Ramberg-Osgood model) in an FGM plate under cyclic load have been explored by the extended finite element method. Both edge and centre crack problems have been solved, additionally considering holes, inclusions and minor cracks under plane stress conditions. Both soft and hard inclusions have been implemented in the problems. The validity of linear elastic fracture mechanics theory is limited to brittle materials. A rectangular plate of functionally graded material of length 100 mm and height 200 mm, with 100% copper-nickel alloy on the left side and 100% ceramic (alumina) on the right side, is considered in the problem. Exponential gradation of properties is imparted in the x-direction. A uniform traction of 100 MPa is applied to the top edge of the rectangular domain along the y-direction. In some problems, the domain contains a major crack along with minor cracks or/and holes or/and inclusions. The major crack is located at the centre of the left edge or at the centre of the domain. The discontinuities, such as minor cracks, holes, and inclusions, are added either singly or in combination with each other. On the basis of this study, it is found that the effect of minor cracks on the domain's failure crack length is minimal, whereas soft inclusions have a moderate effect and holes have the greatest effect. It is observed that, in each case, the crack growth before failure is greater when hard inclusions are present in place of soft inclusions.
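
For reference, a hedged sketch of one common form of the Ramberg-Osgood relation and of an exponential modulus gradation of the kind described above; the material constants are illustrative assumptions, not the values used in the study.

```python
# Hedged sketch: Ramberg-Osgood strain in one common form,
#   epsilon = sigma/E + alpha*(sigma/E)*(|sigma|/sigma_y)**(n-1),
# plus an exponential property gradation E(x) for the FGM plate.
import numpy as np

def ramberg_osgood_strain(sigma, E, sigma_y, alpha, n):
    """Total strain = elastic part + plastic (power-law) part."""
    sigma = np.asarray(sigma, dtype=float)
    return sigma / E + alpha * (sigma / E) * (np.abs(sigma) / sigma_y) ** (n - 1)

def graded_modulus(x, E_left, E_right, length):
    """Exponential gradation E(x) = E_left * exp(beta*x), with beta chosen so
    that E(length) = E_right, as assumed for a plate graded in x."""
    beta = np.log(E_right / E_left) / length
    return E_left * np.exp(beta * x)

# Example with placeholder constants (Pa): stress-strain curve at one graded point.
E_x = graded_modulus(x=0.05, E_left=150e9, E_right=380e9, length=0.1)
stress = np.linspace(0.0, 400e6, 200)
strain = ramberg_osgood_strain(stress, E=E_x, sigma_y=250e6, alpha=0.3, n=5)
```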

Keywords: elastic-plastic, fatigue crack, functionally graded materials, extended finite element method (XFEM)

Procedia PDF Downloads 377
627 Application of 2D Electrical Resistivity Tomographic Imaging Technique to Study Climate Induced Landslide and Slope Stability through the Analysis of Factor of Safety: A Case Study in Ooty Area, Tamil Nadu, India

Authors: S. Maniruzzaman, N. Ramanujam, Qazi Akhter Rasool, Swapan Kumar Biswas, P. Prasad, Chandrakanta Ojha

Abstract:

Landslides are among the major natural disasters in South Asian countries. By applying 2D Electrical Resistivity Tomographic Imaging, the geometry, thickness, and depth of the failure zone of a landslide can be estimated. Landslides are a pertinent problem in the Nilgiris plateau, next only to the Himalaya. The Nilgiris range consists of hard Archean metamorphic rocks. Intense weathering that prevailed during Precambrian time deformed the rocks to a depth of up to 45 m. Landslides are dominant in the southern and eastern parts of the plateau, where the drainage basins are comparatively smaller than the northern ones; their low drainage density and coarse texture permit more infiltration of rainwater, whereas the northern part of the plateau, with its high-density drainage pattern and fine texture, shows less infiltration than runoff and is less susceptible to landslides. To obtain comprehensive information about the landslide zone, a 2D Electrical Resistivity Tomographic imaging study with a CRM 500 resistivity meter was carried out in the Coonoor-Mettupalayam sector of the Nilgiris plateau. To calculate the Factor of Safety, the infinite slope model of Brunsden and Prior is used. The Factor of Safety (FS) can be expressed as the ratio of resisting forces to disturbing forces; if FS < 1, the disturbing forces are larger than the resisting forces and failure may occur. The geotechnical parameters of soil samples are calculated on the basis of the apparent resistivity values for the litho units measured from the 2D ERT image of the landslide zone. The relationship between friction angle and various soil properties is established by simple regression analysis of the apparent resistivity data. An increase in water content in the slide zone reduces the effectiveness of the shearing resistance and increases the sliding movement. The time-lapse resistivity changes leading to slope failure are assessed through a geophysical Factor of Safety that depends on resistivity and site topography. The ERT technique infers soil properties at variable depths over wider areas. This approach retrieves the soil properties while overcoming the limitation of the point information provided by rain gauges and porous probes. Monitoring of slope stability through the ERT technique is non-invasive and low-cost and does not alter the soil structure. In landslide-prone areas, an automated Electrical Resistivity Tomographic Imaging system should be installed permanently, with electrode networks to monitor the hydraulic precursors of landslide movement.
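
A minimal sketch of an infinite-slope factor-of-safety calculation in the spirit of the Brunsden and Prior model cited above; the resistivity-to-friction-angle regression coefficients and all numerical inputs are hypothetical placeholders, not values derived from the survey.

```python
# Hedged sketch: FS = resisting forces / disturbing forces for an infinite slope,
# with the friction angle estimated from apparent resistivity by a simple
# (illustrative) linear regression.
import numpy as np

def friction_angle_from_resistivity(rho_ohm_m, a=18.0, b=0.05):
    """Illustrative regression phi = a + b*rho (degrees); coefficients are assumed."""
    return a + b * rho_ohm_m

def factor_of_safety(c, phi_deg, gamma, gamma_w, z, m, beta_deg):
    """Infinite-slope FS.
    c: effective cohesion (kPa), gamma: soil unit weight (kN/m^3),
    gamma_w: unit weight of water, z: slip-surface depth (m),
    m: saturated fraction of z, beta: slope angle (degrees)."""
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * np.cos(beta) ** 2 * np.tan(phi)
    disturbing = gamma * z * np.sin(beta) * np.cos(beta)
    return resisting / disturbing

fs = factor_of_safety(c=10.0, phi_deg=friction_angle_from_resistivity(250.0),
                      gamma=18.5, gamma_w=9.81, z=4.0, m=0.6, beta_deg=28.0)
print("FS < 1 indicates possible failure:", round(fs, 2))
```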

Keywords: 2D ERT, landslide, safety factor, slope stability

Procedia PDF Downloads 298
626 Design of Experiment for Optimizing Immunoassay Microarray Printing

Authors: Alex J. Summers, Jasmine P. Devadhasan, Douglas Montgomery, Brittany Fischer, Jian Gu, Frederic Zenhausern

Abstract:

Immunoassays have been utilized for several applications, including the detection of pathogens. Our laboratory is developing a tier 1 biothreat panel utilizing Vertical Flow Assay (VFA) technology for the simultaneous detection of pathogens and toxins. One method of manufacturing VFA membranes is non-contact piezoelectric dispensing, which provides advantages such as low-volume and rapid dispensing without compromising the structural integrity of antibody or substrate. Challenges of this process include premature discontinuation of dispensing and misaligned spotting. Preliminary data revealed that the Yp 11C7 mAb (11C7) reagent exhibits a large angle of failure during printing, which may have contributed to variable printing outputs. A Design of Experiment (DOE) was executed using this reagent to investigate the effects of hydrostatic pressure and reagent concentration on microarray printing outputs. A Nano-plotter 2.1 (GeSIM, Germany) was used for printing antibody reagents onto nitrocellulose membrane sheets in a clean room environment. A spotting plan was executed using Spot-Front-End software to dispense volumes of 11C7 reagent (20-50 droplets; 1.5-5 mg/mL) in a 6-test-spot array at 50 target membrane locations. Hydrostatic pressure was controlled by raising the Pressure Compensation Vessel (PCV) above or lowering it below our current working level. It was hypothesized that raising or lowering the PCV 6 inches would be sufficient to cause either liquid accumulation at the tip or discontinued droplet formation. After aspirating 11C7 reagent, we tested this hypothesis under a stroboscope. 75% of the effective raised PCV height and of our hypothesized lowered PCV height were used. Humidity (55%) was maintained using an Airwin BO-CT1 humidifier. The number and quality of membranes were assessed after staining printed membranes with dye. The droplet angle of failure was recorded before and after printing to determine a "stroboscope score" for each run. The DOE set was analyzed using JMP software. Hydrostatic pressure and reagent concentration had a significant effect on the number of membranes output. As hydrostatic pressure was increased by raising the PCV 3.75 inches or decreased by lowering the PCV 4.5 inches, membrane output decreased. However, with the hydrostatic pressure closest to equilibrium (our current working level), membrane output reached the 50-membrane target. As the reagent concentration increased from 1.5 to 5 mg/mL, the membrane output also increased. Reagent concentration likely affected membrane output because of the dispensing volume needed to saturate the membranes. However, only hydrostatic pressure had a significant effect on the stroboscope score, which could be due to discontinuation of dispensing, in which case the stroboscope check could not find a droplet to record. Our JMP predictive model had a high degree of agreement with our observed results. The JMP model predicted that dispensing the highest concentration of 11C7 at our current PCV working level would yield the highest number of quality membranes, which correlated with our results. Acknowledgements: This work was supported by the Chemical Biological Technologies Directorate (Contract # HDTRA1-16-C-0026) and Advanced Technology International (Contract # MCDC-18-04-09-002) from the Department of Defense Chemical and Biological Defense program through the Defense Threat Reduction Agency (DTRA).
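
A hedged sketch of the kind of two-factor model fitted in the JMP analysis described above, with factor effects, their interaction and a curvature term on the membrane-count response; the run table below is an illustrative placeholder, not the experimental data.

```python
# Hedged sketch: ordinary least squares fit of membrane output against PCV
# height offset and reagent concentration (interaction + quadratic term),
# analogous to a JMP factor-effects analysis. Data values are made up.
import pandas as pd
import statsmodels.formula.api as smf

runs = pd.DataFrame({
    "pcv_offset_in": [-4.5, -4.5, 0.0, 0.0, 3.75, 3.75, 0.0, -4.5, 3.75],
    "conc_mg_ml":    [1.5,  5.0,  1.5, 5.0, 1.5,  5.0,  3.25, 3.25, 3.25],
    "membranes_out": [28,   35,   41,  50,  30,   36,   47,   33,   34],
})

model = smf.ols(
    "membranes_out ~ pcv_offset_in * conc_mg_ml + I(pcv_offset_in**2)",
    data=runs,
).fit()
print(model.summary())  # coefficients and p-values for the factor effects
```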

Keywords: immunoassay, microarray, design of experiment, piezoelectric dispensing

Procedia PDF Downloads 167
625 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today’s competitive market, there are many alternative products that can be used in a similar manner and for a similar purpose. Therefore, the utility of the product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability and reliability, all of which are affected by the system capabilities. Reliability is an important system design criterion for manufacturers to be able to achieve high availability. Availability is the probability that a system (or a component) is operating properly according to its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate needed for the company to realize the production plan. When considering only the corrective maintenance downtime of the system, the mean time between failures (MTBF) and the mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners working on a system to improve system performance by adopting suitable maintenance strategies. The failure and repair time probability distributions of each component in the system should be known for a conventional availability analysis. However, companies generally do not have statistics or quality control departments to store such a large amount of data; real events or situations are then defined deterministically instead of using stochastic data for the complete description of real systems. Fuzzy set theory is an alternative theory used to analyze uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to computing system availability using a representation of MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR, namely 15%, 20% and 25%, were chosen to obtain the lower and upper limits of the fuzzy numbers. To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. This method is easy to apply in any repairable production system by practitioners working in industry. It enables reliability engineers, managers and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey, focusing on the wet-charging battery process, which has a higher production level than the other types of battery. In this system, system components can exist in only two states, working or failed, and it is assumed that when a component in the system fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory and obtaining intervals for these measures is very useful for system managers and practitioners to analyze system qualifications and find better results for their working conditions. Thus, much more detailed information about system characteristics is obtained.
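
A minimal sketch of the proposed fuzzy availability computation, A = MTBF / (MTBF + MTTR), with MTBF and MTTR represented as triangular fuzzy numbers built from the three spreads used in the study; the crisp MTBF and MTTR values are illustrative assumptions, not data from the battery line.

```python
# Hedged sketch: fuzzy availability from triangular fuzzy MTBF and MTTR,
# using simple interval arithmetic on the (lower, mode, upper) triples.
def tfn(center: float, spread: float) -> tuple:
    """Triangular fuzzy number (lower, mode, upper) from a relative spread."""
    return (center * (1 - spread), center, center * (1 + spread))

def fuzzy_availability(mtbf_tfn: tuple, mttr_tfn: tuple) -> tuple:
    ml, m, mu = mtbf_tfn
    rl, r, ru = mttr_tfn
    # Lowest availability pairs the shortest MTBF with the longest MTTR, and vice versa.
    return (ml / (ml + ru), m / (m + r), mu / (mu + rl))

for spread in (0.15, 0.20, 0.25):            # the three spreads used in the study
    A = fuzzy_availability(tfn(120.0, spread), tfn(4.0, spread))   # hours (assumed)
    print(f"spread {spread:.0%}: availability ≈ "
          f"({A[0]:.4f}, {A[1]:.4f}, {A[2]:.4f})")
```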

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 212
624 Rapid, Direct, Real-Time Method for Bacteria Detection on Surfaces

Authors: Evgenia Iakovleva, Juha Koivisto, Pasi Karppinen, J. Inkinen, Mikko Alava

Abstract:

Preventing the spread of infectious diseases throughout the world is one of the most important tasks of modern health care. Infectious diseases not only account for one fifth of deaths in the world but also cause many pathological complications for human health. Touch surfaces pose an important vector for the spread of infections by various microorganisms, including antimicrobial-resistant organisms. Further, antimicrobial resistance is the response of bacteria to the overuse or inappropriate use of antibiotics everywhere. The biggest challenges in bacterial detection by existing methods are non-direct determination, the long time of analysis, sample preparation, the use of chemicals and expensive equipment, and the need for qualified specialists. Therefore, high-performance, rapid, real-time detection is demanded for practical bacterial detection and to control the epidemiological hazard. Among the known methods for determining bacteria on surfaces, hyperspectral methods can be used as direct and rapid methods for microorganism detection on different kinds of surfaces based on fluorescence, without sampling, sample preparation or chemicals. The aim of this study was to assess the relevance of such systems to the remote sensing of surfaces for microorganism detection in order to prevent the global spread of infectious diseases. Bacillus subtilis and Escherichia coli at different concentrations (from 0 to 10^8 cells/100 µL) were detected with a hyperspectral camera using different filters for the visualization of bacteria and background spots on a steel plate. A method of internal standards was applied to monitor the correctness of the analysis results. The distances from the sample to the hyperspectral camera and to the light source are 25 cm and 40 cm, respectively. Each sample is optically imaged from the surface by the hyperspectral imaging system, utilizing a JAI CM-140GE-UV camera. The light source is a BeamZ FLATPAR DMX Tri-light with 3 W tri-colour LEDs (red, blue and green). Light colors are changed through a DMX USB Pro interface. The developed system was calibrated following a standard procedure of setting the exposure and focusing for light with λ=525 nm. The filter is a Thorlabs Kurios hyperspectral filter controller with wavelengths from 420 to 720 nm. All data collection, pre-processing and multivariate analysis were performed using LabVIEW and Python software. The studied bacterial stains, both visible and invisible to the human eye, clustered apart from the reference steel material in a clustering analysis using different light sources and filter wavelengths. The calculation of the random and systematic errors of the analysis results proved the applicability of the method in real conditions. Validation experiments have been carried out with photometry and an ATP swab test. The lower detection limit of the developed method is several orders of magnitude lower than for both validation methods. All parameters of the experiments were the same, except for the light. The hyperspectral imaging method allows separating not only bacteria and surfaces, but also different types of bacteria, such as Gram-negative Escherichia coli and Gram-positive Bacillus subtilis. The developed method allows skipping sample preparation and the use of chemicals, unlike other microbiological methods. The time of analysis with the novel hyperspectral system is a few seconds, which is innovative in the field of microbiological tests.
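
A hedged sketch of the kind of clustering analysis mentioned above, separating bacterial spots from the steel background by unsupervised clustering of per-pixel spectra (PCA followed by k-means); the cube file name, array shapes and cluster count are assumptions, not the authors' pipeline.

```python
# Hedged sketch: cluster per-pixel spectra of a hyperspectral cube to separate
# bacterial stains from the steel background.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

cube = np.load("hyperspectral_cube.npy")        # hypothetical (rows, cols, bands) array
rows, cols, bands = cube.shape
spectra = cube.reshape(-1, bands).astype(float)

scores = PCA(n_components=5).fit_transform(spectra)     # compress the 420-720 nm bands
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
label_map = labels.reshape(rows, cols)          # e.g. background vs. two bacterial classes
```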

Keywords: Escherichia coli, Bacillus subtilis, hyperspectral imaging, microorganism detection

Procedia PDF Downloads 208
623 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors of the energy efficiency policies of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of dealing with the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined through an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, most commercial tools today provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying and adapting the predefined profiles, and the design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in the interpretation of calibration results. This is mainly due to the adoption of discrete, standardized and conventional schedules, with important consequences for the prediction of energy consumption. The problem is difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC plant and thus the occupants cannot interact with it. More in detail, starting from the adopted schedules, created according to questionnaire responses and which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas demand; then, the different consumption entries are analyzed and, for the most interesting cases, the calibration indexes are also compared. Moreover, the same simulations are performed for the optimal refurbishment solution, and the variation in the predicted energy savings and global cost reduction is highlighted. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.
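The scenario comparison described above amounts to computing, for each energy carrier, the percentage difference between the projected demand of the calibrated reference model and that of each schedule variant. The sketch below illustrates that calculation; the scenario names and figures are placeholders, not results from the study.

```python
# A minimal sketch of the scenario comparison: percentage difference in
# projected electricity and natural gas demand between the calibrated
# reference model and each schedule scenario. All figures are placeholders.
reference = {"electricity_kWh": 410_000.0, "natural_gas_m3": 52_000.0}

scenarios = {
    "occupancy_schedule_A": {"electricity_kWh": 432_500.0, "natural_gas_m3": 55_100.0},
    "lighting_schedule_B":  {"electricity_kWh": 388_700.0, "natural_gas_m3": 52_300.0},
}

def percent_difference(scenario: dict, ref: dict) -> dict:
    """Percentage difference of each energy carrier with respect to the reference."""
    return {k: 100.0 * (scenario[k] - ref[k]) / ref[k] for k in ref}

for name, results in scenarios.items():
    diffs = percent_difference(results, reference)
    print(name, {k: f"{v:+.1f}%" for k, v in diffs.items()})
```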

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 132
622 Monitoring of Indoor Air Quality in Museums

Authors: Olympia Nisiforou

Abstract:

The cultural heritage of each country represents a unique and irreplaceable witness of the past. Nevertheless, on many occasions, such heritage is extremely vulnerable to natural disasters and reckless behavior. Even when such exhibits are located in museums, they still receive insufficient protection due to improper environmental conditions. These external changes can negatively affect the condition of the exhibits and contribute to inefficient maintenance over time. Hence, it is imperative to develop an innovative, low-cost system to monitor indoor air quality systematically, since conventional methods are quite expensive and time-consuming. The present study gives an insight into the indoor air quality of the National Byzantine Museum of Cyprus. In particular, systematic measurements of particulate matter, bio-aerosols, the concentrations of targeted chemical pollutants (including volatile organic compounds (VOCs)), temperature, relative humidity and lighting conditions, as well as microbial counts, were performed using conventional techniques. The measurements showed that most of the monitored physicochemical parameters did not vary significantly between the various sampling locations. Seasonal fluctuations of ammonia were observed, with higher concentrations in summer and lower concentrations in winter. It was found that the outdoor environment does not significantly affect indoor air quality in terms of VOCs and nitrogen oxides (NOx). A cutting-edge portable gas chromatography–mass spectrometry (GC-MS) system (TORION T-9) was used to identify and measure the concentrations of specific volatile and semi-volatile organic compounds. A large number of different VOCs and SVOCs were found, such as benzene, toluene, xylene, ethanol, hexadecane and acetic acid, as well as some more complex compounds such as 3-ethyl-2,4-dimethyl-isopropyl alcohol, 4,4'-biphenylene-bis-(3-aminobenzoate) and trifluoro-2,2-dimethylpropyl ester. Apart from the permanent indoor and outdoor sources of these organic compounds (i.e., wooden frames, painted exhibits, carpets, the ventilation system and outdoor air), the concentrations of some of them in certain areas of the museum were found to increase when large groups of visitors were simultaneously present at a specific place within the museum. A high presence of particulate matter (PM), fungi and bacteria was found in museum areas where carpets were present, but low colony counts were found in rooms where artworks are exhibited. The measurements mentioned above were used to validate an innovative low-cost air-quality monitoring system that was developed within the present work. The developed system is able to monitor the average concentrations (on a bi-daily basis) of several pollutants and offers several innovative features, including prompt alerting when the average concentration of a monitored pollutant exceeds the limit value defined by the user.
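The alerting feature described at the end of the abstract reduces to averaging each pollutant over a sampling window and comparing that average against a user-defined limit. The sketch below illustrates the idea; the pollutant names, limits and readings are assumptions for illustration, not the museum's actual configuration.

```python
# A minimal sketch of the alerting logic: average each pollutant over the
# sampling window and flag it when a user-defined limit is exceeded.
# Pollutant names, limits and readings are illustrative assumptions.
from statistics import mean

user_limits = {"VOC_ppb": 300.0, "PM2.5_ug_m3": 25.0, "NH3_ppb": 50.0}

window = {  # readings collected over one averaging period
    "VOC_ppb": [280.0, 310.0, 340.0, 295.0],
    "PM2.5_ug_m3": [18.0, 22.0, 21.0, 19.5],
    "NH3_ppb": [30.0, 28.0, 33.0, 31.0],
}

def check_alerts(readings: dict, limits: dict) -> list:
    """Return the pollutants whose window average exceeds the configured limit."""
    alerts = []
    for pollutant, values in readings.items():
        avg = mean(values)
        if avg > limits.get(pollutant, float("inf")):
            alerts.append((pollutant, round(avg, 1), limits[pollutant]))
    return alerts

print(check_alerts(window, user_limits))  # only the VOC average exceeds its limit here
```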

Keywords: exhibitions, indoor air quality, VOCs, pollution

Procedia PDF Downloads 112
621 Developing and Testing a Questionnaire of Music Memorization and Practice

Authors: Diana Santiago, Tania Lisboa, Sophie Lee, Alexander P. Demos, Monica C. S. Vasconcelos

Abstract:

Memorization has long been recognized as an arduous and anxiety-evoking task for musicians, and yet, it is an essential aspect of performance. Research shows that musicians are often not taught how to memorize. While the memorization and practice strategies of professionals have been studied, little research has examined how student musicians learn to practice and memorize music in different cultural settings. We present the process of developing and testing a questionnaire on music memorization and musical practice for student musicians in the UK and Brazil. The survey was developed for a cross-cultural research project aiming to examine how young orchestral musicians (aged 7–18 years) in different learning environments and cultures engage in instrumental practice and memorization. The questionnaire was developed by a UK/US/Brazil research team of music educators and performance science researchers. A pool of items was developed for each identified aspect of practice and memorization, based on the literature and personal experience, and adapted from existing questionnaires. Item development took into consideration the varying levels of cognitive and social development of the target populations, as well as the diverse target learning environments. Items were initially grouped in accordance with a single underlying construct/behavior. The questionnaire comprised three sections: a demographics section, a section on practice (containing 29 items), and a section on memorization (containing 40 items). Next, the response process was considered, and a 5-point Likert scale ranging from ‘always’ to ‘never’, with a verbal label and an image assigned to each response option, was selected, following effective questionnaire design for children and youths. Finally, a pilot study was conducted with young orchestral musicians from diverse learning environments in Brazil and the United Kingdom. Data collection took place in either one-to-one or group settings to accommodate the participants. Cognitive interviews were used to establish response-process validity by confirming the readability and accurate comprehension of the questionnaire items or highlighting the need for item revision. Internal reliability was investigated by measuring the consistency of the item groups using Cronbach’s alpha. The pilot study successfully relied on the questionnaire to generate data about the engagement of young musicians of different levels and instruments, across different learning and cultural environments, in instrumental practice and memorization. Interaction analysis of the cognitive interviews undertaken with these participants, however, exposed the fact that certain items, and the response scale, could be interpreted in multiple ways. The questionnaire text was, therefore, revised accordingly. The low Cronbach’s alpha scores of many item groups indicated another issue with the original questionnaire: its low level of internal reliability. Several reasons for this poor reliability can be suggested, including the issues with item interpretation revealed through interaction analysis of the cognitive interviews, the small number of participants (34), and the elusive nature of the construct in question. The revised questionnaire measures 78 specific behaviors or opinions. It provides an efficient means of gathering information about the engagement of young musicians in practice and memorization on a large scale.
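Internal consistency of each item group was checked with Cronbach's alpha. The sketch below shows one way that calculation can be carried out, here for a hypothetical 6-item group answered by the 34 pilot participants; the response matrix is synthetic and only illustrates the formula.

```python
# A minimal sketch of the internal-reliability check: Cronbach's alpha for one
# group of Likert-scale items (responses coded 1-5). The data are synthetic.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_respondents, n_items) with numeric responses."""
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(34, 6))  # 34 respondents, one hypothetical 6-item group
print(round(cronbach_alpha(responses), 2))
```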

Keywords: cross-cultural, memorization, practice, questionnaire, young musicians

Procedia PDF Downloads 115
620 De-Densifying Congested Cores of Cities and Their Emerging Design Opportunities

Authors: Faith Abdul Rasak Asharaf

Abstract:

Every city has a threshold known as its urban carrying capacity, up to which it can withstand a particular density of people; above this threshold the city may need to resort to measures such as expanding its boundaries or growing vertically. As a result of this circumstance, the number of squatter communities is growing, as is the claustrophobic feeling of being confined inside a "concrete jungle." The expansion of suburbs, commercial areas, and industrial real estate in the areas surrounding medium-sized cities has resulted in changes to their landscapes and urban forms, as well as a systematic shift in their role in the urban hierarchy when functional endowment and connections to other territories are considered. The urban carrying capacity idea provides crucial guidance for city administrators and planners in better managing, designing, planning, constructing, and distributing urban resources to satisfy the huge demands of an ever-growing urban population. The ecological footprint is one criterion of urban carrying capacity: the amount of land required to provide humanity with renewable resources and to absorb its waste. However, as each piece of land has its unique carrying capacity, including ecological, social, and economic considerations, these metropolitan areas begin to reach a saturation point over time. Various city models have been tried over the years to accommodate increasing urban population density by rearranging the zones of work, life, and leisure to achieve maximum sustainable growth. The current scenario is that of the vertical city and compact city concepts, in which an attempt is made to fit the maximum density of people into a definite area through efficient land use and a variety of other strategies, but this has proven to be a very unsustainable method of growth, as evidenced during the COVID-19 period. Due to a shortage of housing and basic infrastructure, densely populated cities were unable to accommodate the overflowing migrants and gave rise to massive squatter communities. To achieve an optimum carrying capacity, planning measures such as the polycentric city and diffuse city concepts can be implemented; these help to relieve the congested city core by relocating certain sectors of the town to the city periphery, thereby creating new spaces for design in terms of public space, transportation, and housing, which is a major concern in the current scenario. The study focuses on suggesting design options and solutions, in terms of placemaking, for better urban quality and urban life for citizens once city centres have been de-densified on the basis of urban carrying capacity and ecological footprint, taking Kochi as an apt example of a highly densified city core and focusing on Edappally, which is an agglomeration of many urban factors.

Keywords: urban carrying capacity, urbanization, urban sprawl, ecological footprint

Procedia PDF Downloads 68
619 Sexual and Reproductive Rights After the Signing of the Peace Process: A Territorial Commitment

Authors: Rocio Murad, Juan Carlos Rivillas, Nury Alejandra Rodriguez, Daniela Roldán

Abstract:

In Colombia, around 5 million women have suffered forced displacement and all forms of gender-based violence; most of them are adolescents and young women, single mothers, or widows with children affected by the war. After the signing of the peace agreements, the department of Antioquia has remained one of the departments most affected by the armed conflict. The objective of the research was to analyze the situation of sexual and reproductive rights in the department of Antioquia from a territorial and gender perspective in the period after the signing of the Peace Agreement. A mixed methodology was developed. The quantitative component consisted of a cross-sectional descriptive study of barriers to access to contraceptive methods, safe abortion, and gender-based violence, based on microdata from the 2015 National Demographic and Health Survey. In the qualitative component, a case study was developed in Dabeiba, a prioritized municipality of Antioquia, in order to explore in depth experiences related to sexual and reproductive rights before, during, and after the armed conflict, using three research techniques: focused observation, semi-structured interviews, and documentary review. The results showed a gradient of greater vulnerability where the effects of the conflict were greater, and that the Urabá Antioqueño subregion, to which Dabeiba belongs, has the highest levels of vulnerability relative to departmental data. In this subregion, the percentages of women with an unmet need for contraceptive methods (9%), of women with unintended pregnancies (31%), of women between 15 and 19 years of age who are already mothers or are pregnant with their first child (32%), and of women who are victims of physical violence (42%) and sexual violence (13%) by their partners are significantly higher. Women, particularly rural and indigenous women, were doubly affected due to the existence of violence that is specifically directed at them or that has a greater impact on their life projects. There was evidence of insufficient, fragmented, and disjointed social and institutional action in relation to women's rights, and of the existence of androcentric and patriarchal social imaginaries through which women and the feminine are undervalued. These results provide evidence of violations of sexual and reproductive rights in contexts of armed conflict and make it possible to identify mechanisms to guarantee the re-establishment of the rights of the victims, particularly women and girls. Among the mechanisms evidenced are: working for the elimination of gender stereotypes; supporting the formation and strengthening of women's social organizations; working for the concerted definition and articulated implementation of the actions necessary to respond to sexual and reproductive health needs; and working for the recognition of reproductive violence as specific and distinct from sexual violence in the context of armed conflict. The results also showed that it is necessary to implement prevention, care, and reparation actions.

Keywords: sexual and reproductive rights, Colombia, armed conflict, violence against women

Procedia PDF Downloads 79
618 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions

Authors: Joel Niklaus, Matthias Sturmer

Abstract:

The publication of judicial proceedings is a cornerstone of many democracies. It enables the court system to be held accountable by ensuring that justice is administered in accordance with the law. Equally important is privacy, as a fundamental human right (Article 12 of the Universal Declaration of Human Rights). Therefore, it is important that the parties (especially minors, victims, or witnesses) involved in these court decisions be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions related to pharmaceutical companies, the pharmaceuticals and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test for the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to one estimate). Consequently, many Swiss courts only publish a fraction of their decisions. An automated anonymization system reduces these costs substantially, further leading to more capacity for publishing court decisions much more comprehensively. For the re-identification system, topic modeling with latent Dirichlet allocation is used to cluster over 500K Swiss court decisions into meaningful related categories. A comprehensive knowledge base with publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identifications. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model and an identifier to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium. Both a legal analysis and the implementation of the proposed system design, ESRA, will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the costs of manual anonymization of court decisions and thus enables a more comprehensive publication practice.
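As a rough illustration of the clustering step, the sketch below applies latent Dirichlet allocation to a handful of placeholder decision snippets and assigns each one to its dominant topic; the real pipeline operates on over 500K decisions and feeds the resulting categories into the category-specific fine-tuning and NER stages.

```python
# A minimal sketch of the LDA clustering step. The three "decisions" are
# invented snippets; the real system clusters over 500K documents.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

decisions = [
    "The insurer appeals the decision on invalidity benefits ...",
    "The accused was convicted of fraud and money laundering ...",
    "The tax authority reassessed the value added tax owed ...",
]

counts = CountVectorizer(stop_words="english").fit_transform(decisions)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
topic_distributions = lda.transform(counts)   # one topic distribution per decision
categories = topic_distributions.argmax(axis=1)
print(categories)  # category index used to select the fine-tuned model for that decision
```

In the anonymization stage, tokens recognized by the NER model would then be replaced by their category plus an identifier (for example, a hypothetical placeholder such as "person_1") so that context is preserved within each decision.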

Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling

Procedia PDF Downloads 139
617 Bone Mineralization in Children with Wilson’s Disease

Authors: Shiamaa Eltantawy, Gihan Sobhy, Alif Alaam

Abstract:

Wilson disease, or hepatolenticular degeneration, is an autosomal recessive disease that results in excess copper buildup in the body. It primarily affects the liver and the basal ganglia of the brain, but it can affect other organ systems. Musculoskeletal abnormalities, including premature osteoarthritis, skeletal deformity, and pathological bone fractures, can occasionally be found in WD patients with a hepatic or neurologic type. The aim was to assess the prevalence of osteoporosis and osteopenia in Wilson's disease patients. This case-control study was conducted on ninety children, aged 1 to 18 years, recruited from the inpatient ward and outpatient clinic of the Paediatric Hepatology, Gastroenterology, and Nutrition department of the National Liver Institute at Menofia University; 49 were male and 41 were female. The children were divided into three groups: Group I consisted of thirty patients with WD; Group II consisted of thirty patients with chronic liver disease other than WD; Group III consisted of thirty age- and sex-matched healthy controls. The exclusion criteria were hyperparathyroidism, hyperthyroidism, renal failure, Cushing's syndrome, and treatment with certain drugs such as chemotherapy, anticonvulsants, or steroids. All patients were subjected to the following: 1) full history-taking and clinical examination; 2) laboratory investigations (FBC, ALT, AST, serum albumin, total protein, total serum bilirubin, direct bilirubin, alkaline phosphatase, prothrombin time, serum creatinine, parathyroid hormone, serum calcium, serum phosphorus); 3) bone mineral density (BMD, g/cm²) measurement by dual-energy X-ray absorptiometry (DEXA). The results revealed a highly statistically significant difference between the three groups regarding the DEXA scan; there was no statistically significant difference between groups I and II, but the WD group had the lowest bone mineral density. The WD group had a large number of cases of osteopenia and osteoporosis, with no statistically significant difference from group II, while a highly statistically significant difference was found when compared with group III. In the WD group, there were 20 patients with osteopenia, 4 patients with osteoporosis, and 6 patients with normal findings, corresponding to 66.7%, 13.3%, and 20%, respectively. Therefore, the largest number of cases in the WD group had osteopenia. No statistically significant difference was found between WD patients on different treatment regimens regarding DEXA scan results (Z-score). No statistically significant difference was found between the WD patients classified as normal, osteopenic, or osteoporotic regarding phosphorus (mg/dL), but a highly statistically significant difference was found between them regarding ionised Ca (mmol/L). Therefore, bone mineral density decreased as the Ca level decreased. In summary, Wilson disease is associated with bone demineralization. The largest number of cases in the WD group in our study had osteopenia (66.7%). Different treatment regimens (zinc monotherapy, Artamin, and zinc) as well as different laboratory parameters had no effect on bone mineralization in WD cases. Decreased ionised Ca is associated with low BMD in WD patients. Children with WD should be investigated for BMD.
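The abstract does not name the specific test behind the three-group DEXA comparison; the sketch below shows one reasonable way such a comparison can be run, using a Kruskal-Wallis test on synthetic Z-score values that stand in for the real patient data.

```python
# A minimal sketch of a three-group comparison of DEXA Z-scores, using a
# Kruskal-Wallis test as one reasonable choice; the values are synthetic
# placeholders, not patient data from the study.
from scipy.stats import kruskal

wd_group = [-2.1, -1.8, -1.5, -2.4, -1.1, -0.6]       # Group I (Wilson disease)
other_liver = [-1.6, -1.2, -1.9, -0.9, -1.4, -0.8]    # Group II (other chronic liver disease)
controls = [-0.2, 0.1, -0.5, 0.3, -0.1, 0.4]          # Group III (healthy controls)

stat, p_value = kruskal(wd_group, other_liver, controls)
print(f"H = {stat:.2f}, p = {p_value:.4f}")  # a small p-value indicates the groups differ in Z-score
```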

Keywords: Wilson disease, bone mineral density, liver disease, osteoporosis

Procedia PDF Downloads 44
616 Enhanced Dielectric and Ferroelectric Properties in Holmium Substituted Stoichiometric and Non-Stoichiometric SBT Ferroelectric Ceramics

Authors: Sugandha Gupta, Arun Kumar Jha

Abstract:

A large number of ferroelectric materials have been intensely investigated for applications in non-volatile ferroelectric random access memories (FeRAMs), piezoelectric transducers, actuators, pyroelectric sensors, high dielectric constant capacitors, etc. Bismuth-layered ferroelectric materials such as strontium bismuth tantalate (SBT) have attracted a lot of attention due to their low leakage current, high remnant polarization, and high fatigue endurance up to 10¹² switching cycles. However, pure SBT suffers from several major limitations, such as high dielectric loss, low remnant polarization values, high processing temperature, bismuth volatilization, etc. Significant efforts have been made to improve the dielectric and ferroelectric properties of this compound. Firstly, it has been reported that the electrical properties vary with the Sr/Bi content ratio in the SrBi2Ta2O9 composition, i.e., non-stoichiometric compositions with Sr-deficient/Bi-excess content have higher remnant polarization values than stoichiometric SBT compositions. With the objective of improving the structural, dielectric, ferroelectric, and piezoelectric properties of the SBT compound, the rare earth holmium (Ho3+) was chosen as a donor cation for substitution onto the Bi2O2 layer. Moreover, hardly any report on holmium substitution in stoichiometric SrBi2Ta2O9 and non-stoichiometric Sr0.8Bi2.2Ta2O9 compositions was available in the literature. The holmium-substituted SrBi2-xHoxTa2O9 (x = 0.00–2.0) and Sr0.8Bi2.2Ta2O9 (x = 0.0 and 0.01) compositions were synthesized by the solid-state reaction method. The synthesized specimens were characterized for their structural and electrical properties. X-ray diffractograms reveal single-phase layered perovskite structure formation for holmium content in stoichiometric SBT samples up to x ≤ 0.1. The granular morphology of the samples was investigated using a scanning electron microscope (Hitachi S-3700N). The dielectric measurements were carried out using a precision LCR meter (Agilent 4284A) operating at an oscillation amplitude of 1 V. The variation of dielectric constant with temperature shows that the Curie temperature (Tc) decreases with increasing holmium content. The specimen with x = 2.0, i.e., the bismuth-free specimen, has a very low dielectric constant and does not show any appreciable variation with temperature. The dielectric loss is reduced significantly by holmium substitution. The polarization–electric field (P–E) hysteresis loops were recorded using a P–E loop tracer based on a Sawyer–Tower circuit. It is observed that the ferroelectric properties improve with Ho substitution: the holmium-substituted specimen exhibits an enhanced remnant polarization (Pr = 9.22 μC/cm²) compared with the holmium-free specimen (Pr = 2.55 μC/cm²). The piezoelectric coefficient (d33) was measured using a piezometer system (Piezo Test PM300), and holmium substitution is observed to enhance the piezoelectric coefficient. Further, the optimized holmium content (x = 0.01) in the stoichiometric SrBi2-xHoxTa2O9 composition has been substituted into the non-stoichiometric Sr0.8Bi2.2Ta2O9 composition to obtain further enhanced structural and electrical characteristics. It is expected that a new class of ferroelectric materials, i.e., rare earth layered structured ferroelectrics (RLSF), derived from bismuth layered structured ferroelectrics (BLSF), will emerge, which can be used to replace static (SRAM) and dynamic (DRAM) random access memories with ferroelectric random access memories (FeRAMs).

Keywords: dielectrics, ferroelectrics, piezoelectrics, strontium bismuth tantalate

Procedia PDF Downloads 193
615 The Use of Emerging Technologies in Higher Education Institutions: A Case of Nelson Mandela University, South Africa

Authors: Ayanda P. Deliwe, Storm B. Watson

Abstract:

The COVID-19 pandemic has disrupted the established practices of higher education institutions (HEIs). Most higher education institutions worldwide had to shift from traditional face-to-face to online learning. The online environment and new online tools are disrupting the way in which higher education is delivered. Furthermore, the structures of higher education institutions have been impacted by rapid advancements in information and communication technologies. Emerging technologies should not be viewed in a negative light because, as opposed to the traditional curriculum that worked to create productive and efficient researchers, emerging technologies encourage creativity and innovation. Therefore, using technology together with traditional means will enhance teaching and learning. Emerging technologies in higher education not only change the experience of students, lecturers, and the content, but are also influencing the attraction and retention of students. Higher education institutions are under immense pressure because they are not only competing locally and nationally; emerging technologies also expand the competition internationally. Emerging technologies have eliminated border barriers, allowing students to study in the country of their choice regardless of where they are in the world. Higher education institutions are becoming indifferent as technology finds its way into the lecture room day by day. Academics need to utilise the technology at their disposal if they want to get through to their students. Academics are now competing for students' attention with social media platforms such as WhatsApp, Snapchat, Instagram, Facebook, TikTok, and others. This poses a significant challenge to higher education institutions. It is, therefore, critical to pay attention to emerging technologies in order to see how they can be incorporated into the classroom to improve educational quality while remaining relevant to the world of work. This study aims to understand how emerging technologies have been utilised at Nelson Mandela University in presenting teaching and learning activities since April 2020. The primary objective of this study is to analyse how academics are incorporating emerging technologies into their teaching and learning activities. This objective was addressed by conducting a literature review clarifying and conceptualising the emerging technologies being utilised by higher education institutions and reviewing and analysing their use; it will be investigated further through an empirical analysis of the use of emerging technologies at Nelson Mandela University. Findings from the literature review revealed that emerging technology is impacting several key areas in higher education institutions, such as the attraction and retention of students, the enhancement of teaching and learning, increased global competition, the elimination of border barriers, and the highlighting of the digital divide. The literature review further identified learning management systems, open educational resources, learning analytics, and artificial intelligence as the most prevalent emerging technologies being used in higher education institutions. The identified emerging technologies will be further examined through an empirical analysis of how they are being utilised at Nelson Mandela University.

Keywords: artificial intelligence, emerging technologies, learning analytics, learner management systems, open educational resources

Procedia PDF Downloads 60
614 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly-used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally-occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since both grade and time used exhibit an ordinal form, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves the prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
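The paper's WIGR measure itself is not reproduced here; the sketch below is only an illustrative ordinal-weighted split criterion in the same spirit, in which classes farther from a node's median class contribute more heavily to the impurity, so that confusing distant performance levels is penalised more than confusing adjacent ones.

```python
# An illustrative sketch, not the authors' exact WIGR: an entropy-style split
# criterion for ordinal targets in which each class contribution is weighted
# by its ordinal distance from the node's median class.
import numpy as np

def ordinal_weighted_entropy(labels: np.ndarray) -> float:
    """labels: integer-coded ordinal classes (e.g. 0 = lowest grade ... 4 = highest grade)."""
    classes, counts = np.unique(labels, return_counts=True)
    probs = counts / counts.sum()
    median_class = np.median(labels)
    weights = 1.0 + np.abs(classes - median_class)   # classes farther from the median weigh more
    return float(-(weights * probs * np.log2(probs)).sum())

def weighted_gain_ratio(parent: np.ndarray, left: np.ndarray, right: np.ndarray) -> float:
    """Ordinal-weighted gain of a candidate binary split, normalised by the split information."""
    n = len(parent)
    children = (len(left) / n) * ordinal_weighted_entropy(left) \
             + (len(right) / n) * ordinal_weighted_entropy(right)
    gain = ordinal_weighted_entropy(parent) - children
    split_fracs = np.array([len(left) / n, len(right) / n])
    split_info = float(-(split_fracs * np.log2(split_fracs)).sum())
    return gain / split_info if split_info > 0 else 0.0

grades = np.array([0, 0, 1, 1, 2, 3, 3, 4, 4, 4])    # ordinal exam-performance classes
print(weighted_gain_ratio(grades, grades[:5], grades[5:]))
```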

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 93
613 Top-Down, Middle-Out, Bottom-Up: A Design Approach to Transforming Prison

Authors: Roland F. Karthaus, Rachel S. O'Brien

Abstract:

Over the past decade, the authors have undertaken applied research aimed at enabling transformation within the prison service to improve conditions and outcomes for those living, working and visiting in prisons in the UK and the communities they serve. The research has taken place against a context of reducing resources and public discontent at increasing levels of violence, deteriorating conditions and persistently high levels of re-offending. Top-down governmental policies have mainly been ineffectual and in some cases counter-productive. The prison service is characterised by hierarchical organisation, and the research has applied design thinking at multiple levels to challenge and precipitate change: top-down, middle-out and bottom-up. The research employs three distinct but related approaches. System design (top-down): working at the national policy level to analyse the changing policy context, identifying opportunities and challenges, and engaging with Ministry of Justice commissioners and sector organisations to facilitate debate, introduce new evidence and provoke creative thinking. Place-based design (middle-out): working with individual prison establishments as pilots to illustrate and test the potential for local empowerment, creative change and improved architecture within place-specific contexts and organisational hierarchies. Everyday design (bottom-up): working with individuals in the system to explore the potential for localised, significant demonstrator changes, including collaborative design, capacity building and empowerment in skills, employment, communication, training and other activities. The research spans a series of projects, through which the methodological approach has developed responsively. The projects include a place-based model for the re-purposing of Ministry of Justice land assets for the purposes of rehabilitation; an evidence-based guide to improving prison design for health and well-being; and a capacity-based employment, skills and self-build project as a template for future open prisons. The overarching research has enabled knowledge to be developed and disseminated through policy and academic networks. Whilst the research remains live and continuing, key findings are emerging as a basis for a new methodological approach to effecting change in the UK prison service. An interdisciplinary approach is necessary to overcome the barriers between distinct areas of the prison service. Sometimes referred to as total environments, prisons encompass entire social and physical environments which are themselves orchestrated by institutional arms of government, resulting in complex systems that cannot be meaningfully engaged through narrow disciplinary lenses. A scalar approach is necessary to connect strategic policies with individual experiences and potential, through the medium of individual prison establishments operating as discrete entities within the system. A reflexive process is necessary to connect research with action in a responsive mode, learning to adapt as the system itself is changing. The role of individuals in the system, their latent knowledge and experience, and their ability to engage and become agents of change are essential. Whilst the specific characteristics of the UK prison system are unique, the approach is internationally applicable.

Keywords: architecture, design, policy, prison, system, transformation

Procedia PDF Downloads 124
612 Doctor-Patient Interaction in an L2: Pragmatic Study of a Nigerian Experience

Authors: Ayodele James Akinola

Abstract:

This study investigated the use of English in doctor-patient interaction in a university teaching hospital in a southwestern state of Nigeria, with the aim of identifying the role of communication in an L2, patterns of communication, discourse strategies, pragmatic acts, and the contexts that shape the interaction. Jacob Mey's notion of pragmatic acts, complemented by Emanuel and Emanuel's model of the doctor-patient relationship, provided the theoretical standpoint. Data comprising 7 audio-recorded doctor-patient interactions were collected from a university hospital in Oyo State, Nigeria. Interactions involving the use of the English language were purposefully selected. These were supplemented with patients' case notes and interviews conducted with doctors. Transcription followed a modified version of Arminen's conversation-analysis notation. In the study, interaction in English between doctors and patients showed a preponderance of direct translation, code-mixing and code-switching, Nigerianisms, and the use of cultural worldviews to express medical experience. Irrespective of these, three patterns of communication, namely the paternalistic, interpretive, and deliberative, were identified. These were exhibited through varying discourse strategies. The paternalistic model reflected slightly casual conversational conventions and registers. These were achieved through the pragmemic activities of situated speech acts and psychological and physical acts, via patients' quarrel-induced acts, controlled and managed through doctors' shared situational knowledge. All these produced empathising, pacifying, promising and instructing practs. The patients' practs in the paternalistic model were explaining, provoking, associating and greeting. The informative model reveals the use of adjacency pairs, formal turn-taking, precise detailing, institutional talk and dialogic strategies. Through the activities of speech, prosody and physical acts, the practs of declaring, alerting and informing were utilised by doctors, while the patients exploited adapting, requesting and selecting practs. The negotiating conversational strategy of the deliberative model featured in the speech, prosody and physical acts. In this model, practs of suggesting, teaching, persuading and convincing were utilised by the doctors, while the patients deployed the practs of questioning, demanding, considering and deciding. The contextual variables revealed that other patterns (such as the phatic and informative) are also used and that they coalesce in the hospital within situational and psychological contexts. However, the paternalistic model was predominantly employed by doctors with over six years in practice, while the interpretive, informative and deliberative models were found among registrars and others with less than six years of medical practice. Doctors' experience, patients' peculiarities and shared cultural knowledge influenced doctor-patient communication in the study.

Keywords: pragmatics, communication pattern, doctor-patient interaction, Nigerian hospital situation

Procedia PDF Downloads 168
611 Citation Analysis of New Zealand Court Decisions

Authors: Tobias Milz, L. Macpherson, Varvara Vetrova

Abstract:

The law is a fundamental pillar of human societies, as it shapes, controls and governs how humans conduct business, behave and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science and AI are creating opportunities to support the practice, research and study of this pervasive domain. It is therefore not surprising that there has been an increase in investment in supporting technologies for the legal industry (also known as "legal tech" or "law tech") over the last decade. A sub-discipline of particular appeal is concerned with assisted legal research. Supporting law researchers and practitioners in retrieving information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the early nineteenth century is legal citation indexing. Among other use cases, citation indexes have provided an effective means of discovering precedent cases. Nowadays, computer-assisted network analysis tools allow for new and more efficient ways to reveal the "hidden" information that is conveyed through citation behavior. Unfortunately, access to openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts across all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers and an OCR pipeline to collect court decisions from openly available sources such as NZLII and convert them into uniform, machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions within the decision text. The data were then imported into a graph-based database (Neo4j), with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between courts of connected cases were added to indicate indirect citations between the courts. Neo4j, as a graph-based database, allows efficient querying and the use of network algorithms such as PageRank to reveal the most influential and most cited courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates possible scale-free behavior of the network. This is in line with findings for the respective citation networks of the U.S. Supreme Court, Austria and Germany. The authors of this paper provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database for use in NLP-related legal research, while the network can be used for in-depth analysis. For example, users of the database can specify the network algorithms and metrics and include only specific courts, filtering the results to the area of law of interest.
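A minimal, self-contained sketch of the pipeline is shown below: citations are pulled out of the decision text with a regular expression, assembled into a directed graph, and ranked with PageRank. The project itself stores the network in Neo4j and crawls NZLII at scale; the networkx graph, the toy citation pattern and the three invented cases below are stand-ins used only to keep the example runnable.

```python
# A minimal sketch of the citation-extraction and ranking steps. The cases,
# the citation pattern and the use of networkx (in place of Neo4j) are
# illustrative stand-ins, not the project's actual data or schema.
import re
import networkx as nx

decisions = {
    "Smith v Jones [2015] NZHC 12": "... as held in Brown v Crown [2010] NZCA 7 ...",
    "Brown v Crown [2010] NZCA 7": "... following R v Grey [2005] NZSC 3 ...",
    "R v Grey [2005] NZSC 3": "... no earlier citations ...",
}

# Toy pattern for citations of the form "Party v Party [year] COURT number".
citation_pattern = re.compile(r"[A-Z][\w ]* v [\w ]+ \[\d{4}\] NZ\w+ \d+")

graph = nx.DiGraph()
for case_id, text in decisions.items():
    graph.add_node(case_id)
    for cited in citation_pattern.findall(text):
        graph.add_edge(case_id, cited)          # citing decision -> cited decision

pagerank = nx.pagerank(graph)                   # most influential decisions
in_degrees = dict(graph.in_degree())            # basis for the in-degree distribution
print(sorted(pagerank.items(), key=lambda kv: -kv[1])[0])
print(in_degrees)
```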

Keywords: case citation network, citation analysis, network analysis, Neo4j

Procedia PDF Downloads 94