Search results for: moment frame
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1716

336 Stereotyping of Non-Western Students in Western Universities: Applying Critical Discourse Analysis to Undermine Educational Hegemony

Authors: Susan Lubbers

Abstract:

This study applies critical discourse analysis to the language used by educators to frame international students of Asian backgrounds in Anglo-Western universities as quiet, shy, passive and unable to think critically. Emphasis is on the self-promoted ‘internationalised’ Australian tertiary context, where negative stereotypes are commonly voiced not only in the academy but also in the media. Parallels are drawn as well with other Anglo-Western educational contexts. The study critically compares the discourse of these persistent negative stereotypes with in-class and interview discourses of international students of Asian and Western language, cultural and educational backgrounds enrolled in a Media and Popular Culture unit in an Australian university. The focus of analysis of the student discourse is on their engagement in critical dialogic interactions on the topics of culture and interculturality. The evidence is also drawn from student interviews and focus groups and from observation of whole-class discussion participation rates. The findings of the research project provide evidence that counters the myth of the student as problem. They point rather to the widespread lack of intercultural awareness of Western educators and students as being at the heart of the negative perceptions of students of Asian backgrounds. The study suggests the efficacy of an approach to developing intercultural competence that is embedded, or integrated, into tertiary programs. The presentation includes an overview of the main strategies that have been developed by the tertiary educator (author) to support the development of intercultural competence of and among the student cohort. The evidence points to the importance of developing intercultural competence among tertiary educators and students. The failure by educators to ensure that the diverse voices, ideas and perspectives of students from all cultural, educational and language backgrounds are heard in our classrooms means that our universities can hardly be regarded or promoted as genuinely internationalised. They will continue as undemocratic institutions that perpetuate persistent Western educational hegemony.

Keywords: critical discourse analysis, critical thinking, embedding, intercultural competence, interculturality, international student, internationalised education

Procedia PDF Downloads 271
335 12 Real Forensic Caseworks Solved by the DNA STR-Typing of Skeletal Remains Exposed to Extreme Environmental Conditions without the Conventional Bone Pulverization Step

Authors: Chiara Della Rocca, Gavino Piras, Andrea Berti, Alessandro Mameli

Abstract:

DNA identification of human skeletal remains plays a valuable role in the forensic field, especially in missing persons and mass disaster investigations. Hard tissues, such as bones and teeth, represent a very common kind of sample analyzed in forensic laboratories because they are often the only biological materials remaining. However, the major limitation of using these compact samples lies in the extremely time-consuming and labor-intensive treatment of grinding them into powder before proceeding with the conventional DNA purification and extraction step. In this context, a DNA extraction assay called the TBone Ex kit (DNA Chip Research Inc.) was developed to digest bone chips without powdering. Here, we simultaneously analyzed bone and tooth samples that arrived at our police laboratory and belonged to 15 different forensic caseworks that occurred in Sardinia (Italy). A total of 27 samples were recovered from different scenarios and were exposed to extreme environmental factors, including sunlight, seawater, soil, fauna, vegetation, and high temperature and humidity. The TBone Ex kit was used prior to the EZ2 DNA extraction kit on the EZ2 Connect Fx instrument (Qiagen), and high-quality autosomal and Y-chromosome STR profiles were obtained for 80% of the caseworks in an extremely short time frame. This study provides additional support for the use of the TBone Ex kit for digesting bone fragments/whole teeth as an effective alternative to pulverization protocols. We empirically demonstrated the effectiveness of the kit in processing multiple bone samples simultaneously, greatly simplifying the DNA extraction procedure while providing a good yield of recovered DNA for downstream genetic typing in highly compromised real forensic specimens. In conclusion, this study turns out to be extremely useful for forensic laboratories, from which the various actors of the criminal justice system – such as potential jury members, judges, defense attorneys, and prosecutors – require immediate feedback.

Keywords: DNA, skeletal remains, bones, tbone ex kit, extreme conditions

Procedia PDF Downloads 1
334 Men’s Attendance in Labour and Birth Room: A Choice and Coercion in Childbirth

Authors: A/Prof Marjan Khajehei

Abstract:

In the last century, the role of fathers at birth has changed dramatically. Before the 1970s, the principal view was that birth was a female business and not a man’s place. Changing cultural and professional attitudes around the emotional bond between a man and a woman, family structure and the more proactively involved role of men in the family have encouraged fathers’ attendance at birth. There is evidence that fathers’ support can make birthing less traumatic for some women and can make couples closer. This has led some clinicians to believe that fathers should be more involved throughout the birth process. Some clinicians even go further and ask the fathers to watch the medical procedures, such as the insertion of a vaginal speculum, forceps or vacuum, episiotomy and stitches. Although birth can unfold like a beautiful picture captured by birth photographers, with fathers massaging women’s backs by candlelight and the miraculous moment of birth, it can be overshadowed by less attractive images of cervical mucus, emptying bowels and the invasive medical procedures. What happens in the birth room and the fathers’ reaction to the graphic experience of birthing can be unpredictable. Despite the fact that most men are absolutely thrilled to be in the delivery room, for some men, a very intimate body part can become completely desexualised, and they can experience psychological and sexual scarring. They see someone they cherish dramatically sliced open and can then associate their partners with a disturbing scene, and it can dramatically affect their relationships. While most women want the expectant fathers by their side for this life-changing event, not all of them may be happy for their partners to watch the perineum being cut or stitched or when large blades of forceps are inserted inside the vagina. Anecdotal reports have shown that consent is not sought from the labouring women as to whether they want their partners to watch these procedures. The majority of research [1, 2, 3] focuses on men’s and women’s retrospective attitudes towards their birth experience. However, the effect of witnessing invasive procedures during childbirth on a man's attraction to his partner, while she is most vulnerable, and also an increased risk of post-traumatic stress disorder in fathers have not been widely investigated. There is a lack of sufficient research investigating whether women need to be asked for their consent before inviting their partners to closely watch medical procedures during childbirth. Future research is required to provide a basis for better awareness and to involve consumers in understanding men’s and women’s experiences and their expectations for labour and birth.

Keywords: birth, childbirth, father, labour, men, women

Procedia PDF Downloads 103
333 Nondecoupling Signatures of Supersymmetry and an Lμ-Lτ Gauge Boson at Belle-II

Authors: Heerak Banerjee, Sourov Roy

Abstract:

Supersymmetry, one of the most celebrated fields of study for explaining experimental observations where the standard model (SM) falls short, is reeling from the lack of experimental vindication. At the same time, the idea of additional gauge symmetry, in particular the gauged Lμ-Lτ symmetric models, has also generated significant interest. They have been extensively proposed in order to explain the tantalizing discrepancy in the predicted and measured value of the muon anomalous magnetic moment alongside several other issues plaguing the SM. While very little parameter space within these models remains unconstrained, this work finds that the γ + Missing Energy (ME) signal at the Belle-II detector will be a smoking gun for supersymmetry (SUSY) in the presence of a gauged U(1)Lμ-Lτ symmetry. A remarkable consequence of breaking the enhanced symmetry appearing in the limit of degenerate (s)leptons is the nondecoupling of the radiative contribution of heavy charged sleptons to the γ-Z΄ kinetic mixing. The signal process, e⁺e⁻ → γZ΄ → γ + ME, is an outcome of this ubiquitous feature. Taking into account the severe constraints placed on gauged Lμ-Lτ models by several low-energy observables, it is shown that any significant excess in all but the highest photon energy bin would be an undeniable signature of such heavy scalar fields in SUSY coupling to the additional gauge boson Z΄. The number of signal events depends crucially on the logarithm of the ratio of stau to smuon mass in the presence of SUSY. In addition, the number is also inversely proportional to the e⁺e⁻ collision energy, making a low-energy, high-luminosity collider like Belle-II an ideal testing ground for this channel. This process can probe large swathes of the hitherto free slepton mass ratio vs. additional gauge coupling (gₓ) parameter space. More importantly, it can explore the narrow slice of Z΄ mass (MZ΄) vs. gₓ parameter space still allowed in gauged U(1)Lμ-Lτ models for superheavy sparticles. The spectacular finding that the signal significance is independent of individual slepton masses is an exciting prospect indeed. Further, the prospect that signatures of even superheavy SUSY particles that may have escaped detection at the LHC may show up at the Belle-II detector is an invigorating revelation.

Keywords: additional gauge symmetry, electron-positron collider, kinetic mixing, nondecoupling radiative effect, supersymmetry

Procedia PDF Downloads 106
332 Soft Skills: Expectations and Needs in Tourism

Authors: Susana Silva, Dora Martins

Abstract:

The recent political, economic, social, technological and employment changes significantly affect tourism organizations and, consequently, the changing nature of the employment experience of the tourism workforce. This scenario has led several researchers and labor analysts to reflect on what kinds of jobs, knowledge and competences are needed to teach, learn and work successfully in this sector. In recent years, the competency-based approach in higher education has attracted significant interest. On the one hand, this approach could lead to the formation of key student competences that contribute to better preparation for their professional future; on the other hand, it could better answer the practical demands of the tourism job market. The goals of this paper are (1) to understand the expectations of university tourism students in relation to present and future tourism competence demands, (2) to identify the importance placed on soft skills, (3) to establish the importance of high qualifications for their future professional activity and (4) to explore students' perceptions of present and future specificities of the tourism sector. To this end, a questionnaire was designed and distributed to all students attending undergraduate and master's classes in Hospitality Management at one public Portuguese university. Between December 2014 and September 2015, all participants were invited to complete the questionnaire on the spot in the presence of one of the researchers. The questionnaire was completed by 202 students (72 male, 35.6%; 130 female, 64.4%); the mean age was 21.64 (SD=5.27), 91% (n=86) were undergraduate students and 18 (9%) were master's students. 80% (n=162) of participants consider looking for a job outside the country, and 42% (n=85) prefer to work in medium-sized tourism units (with 50-249 employees). According to our participants, the most valued skills in tourism are command of foreign languages (87.6%, n=177), the ability to work in a team (85%), personal persistence (83%, n=168), knowledge of the products/services provided (73.8%, n=149), and assertiveness (66.3%, n=134). 65% (n=131) report being available to look for a job up to 1000 kilometers from home, and 59% (n=119) do not consider the possibility of working in an area other than tourism. The results of this study confirm the need for universities to maintain closer links with tourism companies and to rethink some competences in their course models. Based on our results, students, universities and companies can understand more deeply the motivations, expectations and competences needed to build future careers for those who study and work in the tourism sector.

Keywords: human capital, employability, students’ competencies perceptions, soft skills, tourism

Procedia PDF Downloads 236
331 Medical Authorizations for Cannabis-Based Products in Canada: Sante Cannabis Data on Patient’s Safety and Treatment Profiles

Authors: Rihab Gamaoun, Cynthia El Hage, Laura Ruiz, Erin Prosk, Antonio Vigano

Abstract:

Introduction: Santé Cannabis (SC), a Canadian medical cannabis-specialized group of clinics based in Montreal and in the province of Québec, has served more than 5000 patients seeking prescriptions for cannabis-based treatment for medical indications over the past five years. Within a research frame, data on the use of medical cannabis products from all the above patients were prospectively collected, leading to a large real-world database on the use of medical cannabis. The aim of this study was to gather information on the profiles of both patients and prescribed medical cannabis products at SC clinics and to assess the safety of medical cannabis among Canadian patients. Methods: Using a retrospective analysis of the database, records of 2585 patients who were prescribed medical cannabis products for therapeutic purposes between 1 November 2017 and 4 September 2019 were included. Patients’ demographics, primary diagnosis, route of administration, and chemovars recorded at the initial visits were investigated. Results: At baseline, 9% of SC patients were female, with a mean age of 57 (SD = 15.8, range = 18-96). Cannabis products were prescribed mainly for patients with a diagnosis of chronic pain (65.9% of patients), cancer (9.4%), neurological disorders (6.5%), mood disorders (5.8%) and inflammatory diseases (4.1%). Route of administration and chemovars of prescribed cannabis products were the following: 96% of patients received cannabis oil (51% CBD rich, 42.5% CBD:THC); 32.1% dried cannabis (21.3% CBD:THC, 7.4% THC rich, 3.4% CBD rich), and 2.1% oral spray cannabis (1.1% CBD:THC, 0.8% CBD rich, 0.2% THC rich). Most patients were simultaneously prescribed a combination of products with different administration routes and chemovars. Safety analysis is ongoing. Conclusion: Our results provide initial information on the profile of medical cannabis products prescribed in a Canadian population and on the adverse events experienced over the past three years. The Santé Cannabis database represents a unique opportunity for comparing clinical practices in prescribing and titrating cannabis-based medications across different centers. Ultimately, real-world data, including information about safety and effectiveness, will help to create standardized and validated guidelines for choosing dose, route of administration, and chemovar types for cannabis-based medications in different diseases and indications.

Keywords: medical cannabis, real-world data, safety, pharmacovigilance

Procedia PDF Downloads 84
330 Incident Management System: An Essential Tool for Oil Spill Response

Authors: Ali Heyder Alatas, D. Xin, L. Nai Ming

Abstract:

An oil spill emergency can vary in size and complexity, subject to factors such as volume and characteristics of spilled oil, incident location, impacted sensitivities and resources required. A major incident typically involves numerous stakeholders; these include the responsible party, response organisations, government authorities across multiple jurisdictions, local communities, and a spectrum of technical experts. An incident management team will encounter numerous challenges. Factors such as limited access to location, adverse weather, poor communication, and lack of pre-identified resources can impede a response; delays caused by an inefficient response can exacerbate impacts caused to the wider environment, socio-economic and cultural resources. It is essential that all parties work based on defined roles, responsibilities and authority, and ensure the availability of sufficient resources. To promote steadfast coordination and overcome the challenges highlighted, an Incident Management System (IMS) offers an essential tool for oil spill response. It provides clarity in command and control, improves communication and coordination, facilitates the cooperation between stakeholders, and integrates resources committed. Following the preceding discussion, a comprehensive review of existing literature serves to illustrate the application of IMS in oil spill response to overcome common challenges faced in a large-scale incident. With a primary audience comprising practitioners in mind, this study will discuss key principles of incident management which enable an effective response, along with pitfalls and challenges, particularly the tension between government and industry; case studies will be used to frame learning and issues consolidated from previous research, and provide the context to link practice with theory. It will also feature the industry approach to incident management which was further crystallized as part of a review by the Joint Industry Project (JIP) established in the wake of the Macondo well control incident. The authors posit that a common IMS which can be adopted across the industry not only enhances response capacity towards a major oil spill incident but is essential to the global preparedness effort.

Keywords: command and control, incident management system, oil spill response, response organisation

Procedia PDF Downloads 133
329 Efficiently Dispersed MnOx on Mesoporous 3D Cubic Support for Cyclohexene Epoxidation

Authors: G. Imran, A. Pandurangan

Abstract:

Epoxides constitute important intermediates for the production of fine and bulk chemicals as well as valuable building blocks for the synthesis of a variety of bioactive molecules. Manganese oxides are used as selective catalysts for various redox-type reactions and are also used effectively in the catalytic disposal of pollutants. Their non-toxicity, cost efficiency and, moreover, the existence of a wide range of oxidation states (+2 to +7) make these catalysts interesting for both academic research and industrial applications. However, a serious drawback is their low surface area. Highly dispersed manganese oxide grafted onto the mesoporous solid KIT-6 by the ALD (atomic layer deposition) technique effectively catalyzes the conversion of cyclohexene with H2O2 (30% in water) to the corresponding epoxide. Epoxide selectivity of >99% at 55.7% conversion of cyclohexene was achieved using catalysts containing highly dispersed MnOx active sites. Various loadings (1, 3, 5, 7 and 10 wt%) of manganese(II) acetylacetonate complex were employed as the Mn source for post-grafting via the active silanol groups of KIT-6; the resulting materials are designated Mn-G-KIT-6. XRD, N2 sorption, HR-TEM, DRS-UV-VIS, EPR and H2-TPR were employed to determine structural and textural properties. ICP-OES showed that a very large proportion, about 95%, of the Mn species was retained on the silica matrix. The resulting materials exhibited Type IV adsorption isotherms, indicating mesopores in the nanometre range. The Si-KIT-6 and Mn-G-KIT-6 materials exhibited surface areas decreasing from 519 to 289 m2/g, pore volumes decreasing from 0.96 to 0.49 cm3/g and pore diameters ranging from 7.9 to 7.2 with increasing Mn loading. DRS-UV-VIS spectroscopy and EPR studies reveal that manganese coexists as Mn2+/3+ species at extra-framework and framework sites, resulting in dispersion on the surface of the KIT-6 silica matrix and in manganese sites incorporated with silanol groups, along with small MnO clusters, evident from HR-TEM, which increase with Mn content. The conventional production of epoxides by the intramolecular etherification of chlorohydrins, formed by the reaction of alkenes with hypochlorous acid, suffers from major drawbacks. A highly efficient synthesis of oxiranes (epoxides) over the mesoporous Mn-G-KIT-6 catalysts is presented and discussed here.

Keywords: ALD, epoxidation, mesoporous, MnOx

Procedia PDF Downloads 164
328 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure

Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik

Abstract:

The lattice Boltzmann method has been advantageous in simulating complex boundary conditions and solving for fluid flow parameters by streaming and collision processes. This paper includes the study of three different test cases in a confined domain using the lattice Boltzmann model. 1. An SRT (single relaxation time) approach in the lattice Boltzmann model is used to simulate lid-driven cavity flow for different Reynolds numbers (100, 400 and 1000) with a domain aspect ratio of 1, i.e., a square cavity. A moment-based boundary condition is used for more accurate results. 2. A thermal lattice BGK (Bhatnagar-Gross-Krook) model is developed for Rayleigh-Bénard convection for two test cases - horizontal and vertical temperature difference - considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both test cases (10^3 ≤ Ra ≤ 10^6) keeping the Prandtl number at 0.71. A stability criterion with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy-based lattice Boltzmann model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with a D2Q9 (density) model and a D2Q5 (temperature) model is used for two different test cases - conduction-dominated melting and convection-dominated melting. The solidification process is also simulated using the enthalpy-based method with a single distribution function using the D2Q5 model to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2, with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations are within the incompressible regime. Different parameters like velocities, temperature, Nusselt number, etc. are calculated for a comparative study with existing works in the literature. The simulated results demonstrate excellent agreement with the existing benchmark solution within an error limit of ±0.05, which indicates the viability of this method for complex fluid flow problems.
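
The abstract above describes streaming-and-collision SRT (BGK) updates on D2Q9/D2Q5 lattices. As a purely illustrative sketch of that structure, the following Python/NumPy fragment implements a single-relaxation-time D2Q9 update loop for a lid-driven cavity; the grid size, relaxation time and lid velocity are assumed values, and the simple bounce-back and equilibrium-lid boundaries stand in for the moment-based boundary condition and the thermal/enthalpy extensions actually used in the study.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights and opposite-direction indices
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]

nx, ny, tau, u_lid = 128, 128, 0.6, 0.1       # assumed grid, relaxation time, lid speed

def feq(rho, u):
    """Second-order equilibrium distribution for all nine directions."""
    cu = np.einsum('qd,xyd->qxy', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))
u = np.zeros((nx, ny, 2))
f = feq(rho, u)

solid = np.zeros((nx, ny), dtype=bool)        # stationary walls: left, right, bottom
solid[0, :] = solid[-1, :] = solid[:, 0] = True

u_top = np.zeros((nx, 1, 2)); u_top[..., 0] = u_lid
f_lid = feq(np.ones((nx, 1)), u_top)          # equilibrium populations at the moving lid

for step in range(10000):
    # Collision: relax towards local equilibrium with a single relaxation time tau
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
    f += (feq(rho, u) - f) / tau

    # Streaming: shift each population along its lattice velocity
    for q in range(9):
        f[q] = np.roll(f[q], (int(c[q, 0]), int(c[q, 1])), axis=(0, 1))

    # Full-way bounce-back on the stationary walls (no-slip)
    f[:, solid] = f[opp][:, solid]

    # Moving lid at y = ny-1, imposed via a crude equilibrium boundary condition
    f[:, :, -1] = f_lid[:, :, 0]
```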

Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT

Procedia PDF Downloads 106
327 Nostalgia in Photographed Books for Children – the Case of Photography Books of Children in the Kibbutz

Authors: Ayala Amir

Abstract:

The paper presents interdisciplinary research which draws on the literary study and the cultural study of photography to explore a literary genre defined by nostalgia – the photographed book for children. This genre, which was popular in the second half of the 20th century, presents the romantic, nostalgic image of childhood created in the visual arts in the 18th century (as suggested by Ann Higonnet). At the same time, it capitalizes on the nostalgia inherent in the event of photography as formulated by Jennifer Green-Lewis: photography frames a moment in the present while transforming it into a past longed for in the future. Unlike Freudian melancholy, nostalgia is an effect that enables representation by acknowledging the loss and containing it in the very experience of the object. The representation and preservation of the lost object (nature, childhood, innocence) are at the center of the genre of children's photography books – a modern version of the ancient pastoral. In it, the unique synergy of word and image results in a nostalgic image of childhood in an era already conquered by modernization. The nostalgic effect works both in the representation of space – an Edenic image of nature already shadowed by its demise, and of time – an image of childhood imbued by what Gill Bartholnyes calls the "looking backward aesthetics" – under the sign of loss. Little critical attention has been devoted to this genre with the exception of the work of Bettina Kümmerling-Meibauer, who noted the nostalgic effect of the well-known series of photography books by Astrid Lindgren and Anna Riwkin-Brick. This research aims to elaborate Kümmerling-Meibauer's approach using the theories of the study of photography, word-image studies, as well as current studies of childhood. The theoretical perspectives are implemented in the case study of photography books created in one of the most innovative social structures in our time – the Israeli Kibbutz. This communal way of life designed a society where children would experience their childhood in a parentless rural environment that would save them from the fate of the Oedipal fall. It is suggested that in documenting these children in a fictional format, photographers and writers, images and words cooperated in creating nostalgic works situated on the border between nature and culture, imagination and reality, utopia and its realization in history.

Keywords: nostalgia, photography, childhood, children's books, kibbutz

Procedia PDF Downloads 117
326 NFTs, between Opportunities and Absence of Legislation: A Study on the Effect of the Rulings of the OpenSea Case

Authors: Andrea Ando

Abstract:

The development of the blockchain has been a major innovation in the technology field. It opened the door to the creation of novel cyberassets and currencies. In more recent times, non-fungible tokens (NFTs) have come to the centre of media attention. Their popularity has been increasing since 2021, and they represent the latest development in the world of distributed ledger technologies and cryptocurrencies. It seems more and more likely that NFTs will play a more important role in our online interactions. They are indeed increasingly present in the arts and technology sectors. Their impact on society and the market is still very difficult to define, but it is very likely that they will mark a turning point in the world of digital assets. There are some examples of their peculiar behaviour and effect in our contemporary tech market: the former CEO of the famous social media site Twitter sold an NFT of his first tweet for around £2.1 million ($2.5 million), and the National Basketball Association has created a platform to sell unique moments and memorabilia from the history of basketball through non-fungible token technology. Their growth, as might be expected, has paved the way for civil disputes, mostly regarding their position under current intellectual property law in each jurisdiction. In April 2022, the High Court of England and Wales ruled in the OpenSea case that non-fungible tokens can be considered property. The judge, indeed, concluded that the cryptoasset had all the indicia of property under common law (National Provincial Bank v. Ainsworth). The research has demonstrated that the ruling of the High Court does not provide enough answers to the dilemma of whether or not minting an NFT is a violation of intellectual property and/or property rights. Indeed, if, on the one hand, the technology follows the framework set by the case law (e.g., the four criteria of Ainsworth), on the other hand, the question that arises is what is effectively protected and owned by both the creator and the purchaser. The question then becomes whether a person has ownership of the cryptographic code, which is indeed definable, identifiable, intangible, distinct and has a degree of permanence, or of what is attached to this blockchain, hence even a physical object or piece of art. Indeed, a simple code would not have any financial importance if it were not attached to something that is widely recognised as valuable. This was demonstrated first through an analysis of the expectations of intellectual property law. Then, after having laid the foundation, the paper examined the OpenSea case, and finally, it analysed whether the expectations were met or not.

Keywords: technology, technology law, digital law, cryptoassets, NFTs, property law, intellectual property law, copyright law

Procedia PDF Downloads 65
325 Apollo Quality Program: The Essential Framework for Implementing Patient Safety

Authors: Anupam Sibal

Abstract:

The Apollo Quality Program (AQP) was launched across the Apollo Group of Hospitals to address four patient safety areas: safety during clinical handovers, medication safety, surgical safety and the six International Patient Safety Goals (IPSGs) of JCI. A measurable, online quality dashboard covering 20 process and outcome parameters was devised for monthly monitoring. The expected outcomes were also defined and categorized into green, yellow and red ranges. An audit methodology was also devised to check the processes for the measurable dashboard. Documented clinical handovers were introduced for the first time at many locations for in-house patient transfer, nursing handover, and physician handover. Prototype forms using the SBAR format were made. Patient identifiers, read-back for verbal orders, safety of high-alert medications, site marking and time-outs, and falls risk assessment were introduced for all hospitals irrespective of accreditation status. Measurement of surgical site infection (SSI) for 30 days postoperatively was carried out. All hospitals now track the time of administration of antimicrobial prophylaxis before surgery. Situations with a high risk of retention of a foreign body were delineated and precautionary measures instituted. Audit of medications prescribed in the discharge summaries was made uniform. Formularies, prescription audits and other means for reduction of medication errors were implemented. There has been a marked increase in compliance with processes and in patient safety outcomes. Compliance with read-back for verbal orders rose from 86.83% in April 2011 to 96.95% in June 2015; compliance with the policy for high-alert medications from 87.83% to 98.82%; with the use of measures to prevent wrong-site, wrong-patient, wrong-procedure surgery from 85.75% to 97.66%; with hand-washing from 69.18% to 92.54%; and with antimicrobial prophylaxis within one hour before incision from 79.43% to 93.46%. The percentage of patients excluded from SSI calculation due to lack of follow-up for the requisite time frame decreased from 21.25% to 10.25%. The average AQP scores for all Apollo Hospitals improved from 62 in April 2011 to 87.7 in June 2015.

Keywords: clinical handovers, international patient safety goals, medication safety, surgical safety

Procedia PDF Downloads 238
324 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies somewhere from 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of its practical usage, especially in comparison to the level of technological abilities already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of this potentially very important range of the electromagnetic spectrum is known under the name of the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low energy consumption, easily controlled and continuously radiating terahertz radiation sources. Therefore, development of new techniques serving this purpose, as well as various devices based on them, is of obvious necessity. No doubt, it would be highly advantageous to employ the simplest of suitable physical systems as major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g., laser) field, can radiate continuously at much lower (e.g., terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent non-equal diagonal matrix elements. This contradicts the conventional assumption routinely made in quantum optics that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the property of spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as, for example, quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible ways toward experimental observation and practical implementation of the predicted effect are discussed too.
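
To make the central assumption above concrete, the dipole moment operator of the two-level 'atom' can be written in its energy eigenbasis with permanent, non-equal diagonal elements; the notation below is generic (not taken from the authors' paper), and the driving term is the standard semiclassical dipole coupling to a monochromatic field:

```latex
\hat{d} =
\begin{pmatrix}
d_{11} & d_{12}\\
d_{12}^{*} & d_{22}
\end{pmatrix},
\qquad d_{11} \neq d_{22},
\qquad
\hat{H}(t) = \frac{\hbar\omega_{0}}{2}\,\hat{\sigma}_{z} - \hat{d}\,E_{0}\cos(\omega t).
```

The conventional quantum-optical treatment sets d_{11} = d_{22} = 0 on inversion-symmetry grounds; relaxing that condition is what, according to the abstract, allows the driven system to fluoresce continuously at frequencies far below the driving frequency ω.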

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 243
323 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices

Authors: Alena Kulikova, Tatjana Kanonire

Abstract:

Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality intelligence measurement tools. Therefore, it is not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, a critical review by [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which blank (paper-based) testing was the only diagnostic procedure available [6]; it has some significant limitations that affect the reliability of the data obtained [7] and increase time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully allow the measurement of a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools. However, as in any rapidly developing field, psychometrics does not at the moment offer ready-made and straightforward solutions and requires additional research. In our presentation, we would like to discuss the strengths and weaknesses of current approaches to intelligence measurement and highlight “points of growth” for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design for creating and validating an original Russian comprehensive computer-based test for measuring intellectual development in school-age children will be presented.

Keywords: Intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing

Procedia PDF Downloads 50
322 Walking in a Weather rather than a Climate: Critique on the Meta-Narrative of Buddhism in Early India

Authors: Yongjun Kim

Abstract:

Since the agreement on the historicity of the historical Buddha in eastern India, the beginning, heyday and decline of Buddhism in Early India have been discussed in the context of urbanization, commercialism and state formation, in short, within a Weberian socio-political frame. Recent scholarship, notably in archaeology and anthropology, has proposed a 're-materialization of Buddhism in Early India' based on what Buddhists had actually done rather than what they should do according to canonical teachings or philosophies. But its historical narrations still remain within a domain of socio-political meta-narrative which tends to unjustifiably dismiss the naturally existing heterogeneity and often chaotic dynamics of diverse agencies, landscape perceptions, localized traditions, etc. The author will argue for a multiplicity of theoretical standpoints for the reconstruction of Buddhism in Early India. For this, the diverse agencies, localized traditions and landscape patterns of Buddhist communities and monasteries in the Trans-Himalayan regions, focusing on the Zanskar Valley and Spiti Valley in India, will first be illustrated based on the author's fieldwork. The author will then discuss how this anthropological landscape analysis is better combined with textual and archaeological evidence on the tension between urban monastic and forest Buddhism, the phenomena of sacred landscape, cemetery, garden and natural cave, along with the socio-economic landscape and the demographic heterogeneity in Early India. Finally, a comparison will be attempted between the anthropological landscape of the present Trans-Himalaya and the archaeological landscape of ancient Western India. The study of Buddhism in Early India has hardly been discussed through multivalent theoretical archaeology and the anthropology of religion; thus traditional and recent scholarship alike have produced historical meta-narratives, however heterogeneous among themselves. The multidisciplinary approaches of textual criticism, archaeology and anthropology will surely help to deconstruct the grand and all-encompassing historical description of Buddhism in Early India and then to reconstruct localized, behavioral and multivalent narratives. This paper expects to highlight the importance of lesser-studied Buddhist archaeological sites and the dynamic views on religious landscape in Early India with the help of critical anthropology of religion.

Keywords: analogy by living traditions, Buddhism in Early India, landscape analysis, meta-narrative

Procedia PDF Downloads 310
321 Communication in the Sciences: A Discourse Analysis of Biology Research Articles and Magazine Articles

Authors: Gayani Ranawake

Abstract:

Effective communication is widely regarded as an important aspect of any discipline. This particular study deals with written communication in science. Writing conventions and linguistic choices play a key role in conveying the message effectively to a target audience. Scientists are responsible for conveying their findings or research results not only to their discourse community but also to the general public. Recognizing appropriate linguistic choices is crucial since they vary depending on the target audience. The majority of scientists can communicate effectively with their discourse community, but public engagement seems more challenging to them. There is a lack of research into the language use of scientists, and in particular how it varies by discipline and audience (genre). A better understanding of the different linguistic conventions used in effective science writing by scientists for scientists and by scientists for the public will help to guide scientists who are familiar with their discourse community norms to write effectively for the public. This study investigates the differences and similarities of linguistic choices in biology articles written by scientists for their discourse community and biology magazine articles written by scientists and science communicators for the general public. This study is a part of a larger project investigating linguistic differences in different genres of science academic writing. The sample for this particular study is composed of 20 research articles from the journal Biological Reviews and 20 magazine articles from the magazine Australian Popular Science. Differences in the linguistic devices were analyzed using Hyland’s metadiscourse model for academic writing proposed in 2005. The frequencies of usage of interactive resources (transitions, frame markers, endophoric markers, evidentials and code glosses) and interactional resources (hedges, boosters, attitude markers, self-mentions and engagement markers) were compared and contrasted using the NVivo textual analysis tool. The results clearly show the differences in the frequency of usage of interactional and interactive resources in the two genres under investigation. The findings of this study provide a reference guide for scientists and science writers to understand the differences in the linguistic choices between the two genres. This will be particularly helpful for scientists who are proficient at writing for their discourse community, but not for the public.
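
The frequency comparison itself was carried out in NVivo; purely to illustrate the kind of counting involved, a minimal Python sketch is given below. The marker lists are tiny illustrative samples (Hyland's taxonomy contains far more items), and the per-1,000-word normalisation is an assumed convention rather than a detail reported in the abstract.

```python
import re
from collections import Counter

# Illustrative, deliberately incomplete marker lists for two of Hyland's
# interactional categories; the study used the full metadiscourse model in NVivo.
MARKERS = {
    "hedges":   ["might", "perhaps", "possible", "appear", "suggest"],
    "boosters": ["clearly", "certainly", "demonstrate", "in fact", "must"],
}

def marker_frequencies(text, per=1000):
    """Count each marker category and normalise per `per` words of running text."""
    words = re.findall(r"[a-z']+", text.lower())
    joined = " ".join(words)
    counts = Counter()
    for category, markers in MARKERS.items():
        for marker in markers:
            counts[category] += len(re.findall(rf"\b{marker}\b", joined))
    return {cat: per * n / len(words) for cat, n in counts.items()}

research_article = "The results clearly demonstrate that temperature might influence growth."
magazine_article = "Scientists suggest this is perhaps the clearest sign yet."
print(marker_frequencies(research_article))
print(marker_frequencies(magazine_article))
```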

Keywords: discourse analysis, linguistic choices, metadiscourse, science writing

Procedia PDF Downloads 115
320 Emerging Technologies for Learning: In Need of a Pro-Active Educational Strategy

Authors: Pieter De Vries, Renate Klaassen, Maria Ioannides

Abstract:

This paper reports explorative research into the use of emerging technologies for teaching and learning in higher engineering education. The assumption is that these technologies and applications, which are not yet widely adopted, will help to improve education and, as such, help address the mismatch of skills troubling our industries. Technologies such as 3D printing, the Internet of Things, Virtual Reality and others are in a dynamic state of development, which makes it difficult to grasp their value for education. Also, the instruments in current educational research do not seem appropriate for assessing the value of such technologies. This explorative research aims to foster an approach to better deal with this new complexity. The need to find out is urgent, because these technologies will be dominantly present in the near future in all aspects of life, including education. The methodology used in this research comprised an inventory of emerging technologies and tools that potentially give way to innovation and are used, or about to be used, in technical universities. The inventory was based on a literature review and a review of reports and web resources such as blogs, and included a series of interviews with stakeholders in engineering education and at representative industries. In addition, a number of small experiments were executed with the aim of analyzing the requirements for the use of, in this case, Virtual Reality and the Internet of Things, to better understand the opportunities and limitations in the day-to-day learning environment. The major findings indicate that it is rather difficult to decide about the value of these technologies for education due to the dynamic state of change, the resulting unpredictability and the lack of a coherent policy at the institutions. Most decisions are being made by teachers on an individual basis, who in their micro-environment are not equipped to select, test and ultimately decide about the use of these technologies. Most experience is being gained in industry, where it is known that the skills to handle these technologies are in high demand. Industry, though, is worried about the inclination and the capability of education to help bridge the skills gap related to the emergence of new technologies. Due to the complexity, the diversity, the speed of development and the decay, education is challenged to develop an approach that can make these technologies work in an integrated fashion. For education to fully profit from the opportunities these technologies offer, it is imperative to develop a pro-active strategy and a sustainable approach to frame the development of emerging technologies.

Keywords: emerging technologies, internet of things, pro-active strategy, virtual reality

Procedia PDF Downloads 162
319 Exercise and Geriatric Depression: a Scoping Review of the Research Evidence

Authors: Samira Mehrabi

Abstract:

Geriatric depression is a common late-life mental health disorder that increases morbidity and mortality. It has been shown that exercise is effective in alleviating symptoms of geriatric depression. However, inconsistencies across studies and the lack of an optimal dose-response of exercise for improving geriatric depression have made it challenging to draw solid conclusions on the effectiveness of exercise in late-life depression. Purpose: To further investigate the moderators of the effectiveness of exercise on geriatric depression across the current body of evidence. Methods: Based on the Arksey and O’Malley framework, an extensive search strategy was performed by exploring PubMed, Scopus, Sport Discus, PsycInfo, ERIC, and IBSS without limitations in the time frame. Eight systematic reviews with empirical results that evaluated the effect of exercise on depression among people aged ≥ 60 years were identified and their individual studies were screened for inclusion. One additional study was found through the hand searching of reference lists. After full-text screening and applying inclusion and exclusion criteria, 21 studies were retained for inclusion. Results: The review revealed high variability in characteristics of the exercise interventions and outcome measures. Sample characteristics, nature of comparators, main outcome assessment, and baseline severity of depression also varied notably. Mind-body and aerobic exercises were found to significantly reduce geriatric depression. However, results on the relationship between resistance training and improvements in geriatric depression were inconsistent, and results on the intensity-related antidepressant effects of exercise interventions were mixed. Extensive use of self-reported questionnaires for the main outcome assessment and a lack of evidence on the relationship between depression severity and observed effects were among the other important highlights of the review. Conclusion: Several literature gaps were found regarding the potential effect modifiers of exercise and geriatric depression. While acknowledging the complexity of establishing recommendations on exercise variables for geriatric depression, future studies are required to understand the interplay and threshold effect of exercise for treating geriatric depression.

Keywords: exercise, geriatric depression, healthy aging, older adults, physical activity intervention, scoping review

Procedia PDF Downloads 81
318 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for the turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset for the Nusselt number and pressure drop values in the following range of dimensionless parameters: the plate corrugation angle (from 0° to 60°), the Reynolds number (from 10000 to 40000), the pitch-to-height ratio (from 1 to 4), and the Prandtl number (from 0.7 to 200). Based on the ANN performance graph, the three-layer structure with {12-8-6} hidden neurons has been chosen. The training procedure includes back-propagation with bias and weight adjustment, the evaluation of the loss function for the training and validation datasets, and feed-forward propagation of the input parameters. The linear function was used at the output layer as the activation function, while for the hidden layers, the rectified linear unit activation function was utilized. In order to accelerate the ANN training, the loss function minimization may be achieved by the adaptive moment estimation algorithm (ADAM). The 'MinMax' normalization approach was utilized to avoid an increase in the training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not being used for the ANN training, a cross-validation technique is applied to the ANN using the new data. This procedure was repeated until loss function convergence was achieved or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3.0 using open-source libraries such as scikit-learn, TensorFlow and Keras. Mean average percent error values of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved by the ANN model. Therefore, higher accuracy compared to the generalized correlations was achieved. The performance validation of the obtained model was based on a comparison of predicted data with the experimental results, yielding excellent accuracy.
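
As a hedged illustration of the architecture and training settings described above, a minimal Keras sketch follows. Only the four inputs and two outputs, the {12-8-6} hidden layers with ReLU activations, the linear output layer, the Adam optimiser, the MinMax scaling, and the 4000-epoch / batch-size-200 settings are taken from the abstract; the synthetic data, split and early-stopping criterion are placeholders for the authors' experimental dataset and convergence test.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from tensorflow import keras

# Placeholder data: columns = [corrugation angle, Reynolds number,
# pitch-to-height ratio, Prandtl number]; targets = [Nusselt number, pressure drop].
X = np.random.uniform([0, 10000, 1, 0.7], [60, 40000, 4, 200], size=(1000, 4))
y = np.random.uniform([10, 100], [400, 5000], size=(1000, 2))   # synthetic stand-in

# 'MinMax' normalisation of inputs and targets, as described in the abstract
x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
X_train, X_val, y_train, y_val = train_test_split(x_scaler.fit_transform(X),
                                                  y_scaler.fit_transform(y),
                                                  test_size=0.2)

# Three hidden layers with {12-8-6} neurons, ReLU activations, linear output layer
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(12, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(6, activation="relu"),
    keras.layers.Dense(2, activation="linear"),     # Nusselt number and pressure drop
])
model.compile(optimizer=keras.optimizers.Adam(), loss="mse")

# Training settings from the abstract: up to 4000 epochs, batch size 200,
# with early stopping standing in for the convergence criterion.
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=4000, batch_size=200, verbose=0,
          callbacks=[keras.callbacks.EarlyStopping(patience=50,
                                                   restore_best_weights=True)])
```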

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 56
317 The Impact of Bilateral Investment Treaties on Health-Related Intellectual Property Rights in the Agreement on Trade-Related Aspects of Intellectual Property Rights in the Kingdom of Saudi Arabia and Australia

Authors: Abdulrahman Fahim M. Alsulami

Abstract:

This paper is dedicated to a detailed investigation of the interaction between the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) and bilateral investment treaties (BITs) in the regulation of health-related intellectual property rights in Australia and the Kingdom of Saudi Arabia. The chosen research object is complex and requires a thorough examination of a set of factors influencing the problem under investigation. At the moment, to the author’s best knowledge, there is no academic research that would conceptualize and critically compare the regulation of health-related intellectual property rights in these two countries. While there is a substantial amount of information in the literature on certain aspects of the problem, the existing knowledge about the health-related regulatory frameworks in Australia and Saudi Arabia barely explains in detail the specifics of the ways in which the TRIPS Agreement interacts with BITs in the regulation of health-related intellectual property rights. Therefore, this paper will address an evident research gap by studying an intriguing yet under-researched problem. The paper comprises five subsections. The first subsection provides an overview of the investment climate in Saudi Arabia and Australia with an emphasis on the health care industry. It will cover political, economic, and social factors influencing the investment climate in these countries, the systems of intellectual property rights protection, recent patterns relevant to the investment climate’s development, and key characteristics of the investment climate in the health care industry. The second subsection analyses BITs in Saudi Arabia and Australia in light of the countries’ responsibilities under the TRIPS Agreement. The third subsection provides a critical examination of the interaction between the TRIPS Agreement and BITs in Saudi Arabia on the basis of data collected and analyzed in previous subsections. It will investigate key discrepancies concerning the regulation of health-related intellectual property rights in Saudi Arabia and Australia from the position of BITs’ interaction with the TRIPS Agreement and explore the existing procedures for clarifying priorities between them in regulating health-related intellectual property rights. The fourth subsection of the paper provides recommendations concerning the transformation of BITs into a TRIPS+ dimension in regulating health-related intellectual property rights in Saudi Arabia and Australia. The final subsection provides a summary of differences between the Australian and Saudi BITs from the perspective of the regulation of health-related intellectual property rights under the TRIPS Agreement and bilateral investment treaties.

Keywords: Australia, bilateral investment treaties, IP law, public health sector, Saudi Arabia

Procedia PDF Downloads 116
316 Measuring the Embodied Energy of Construction Materials and Their Associated Cost Through Building Information Modelling

Authors: Ahmad Odeh, Ahmad Jrade

Abstract:

Energy assessment is an evidently significant factor when evaluating the sustainability of structures, especially at the early design stage. Today, design practices revolve around the selection of materials that reduce operational energy and yet meet their disciplinary needs. Operational energy represents a substantial part of the building lifecycle energy usage, but the fact remains that embodied energy is an important aspect unaccounted for in the carbon footprint. At the moment, little or no consideration is given to embodied energy, mainly due to the complexity of calculation and the various factors involved. The equipment used, the fuel needed, and the electricity required for each material vary with location, and thus the embodied energy will differ for each project. Moreover, the method and the technique used in manufacturing, transporting and putting in place will have a significant influence on the materials’ embodied energy. This anomaly has made it difficult to calculate or even benchmark the usage of such energies. This paper presents a model aimed at helping designers select construction materials based on their embodied energy. Moreover, this paper presents a systematic approach that uses an efficient method of calculation and ultimately provides new insight into construction material selection. The model is developed in a BIM environment targeting the quantification of embodied energy for construction materials through the three main stages of their life: manufacturing, transportation and placement. The model contains three major databases, each of which covers a set of the most commonly used construction materials. The first dataset holds information about the energy required to manufacture any type of material, the second includes information about the energy required for transporting the materials, while the third stores information about the energy required by tools and cranes needed to place an item in its intended location. The model provides designers with sets of all available construction materials and their associated embodied energies to use for the selection during the design process. Through geospatial data and dimensional material analysis, the model will also be able to automatically calculate the distance between the factories and the construction site. To remain within the sustainability criteria set by LEED, a final database is created and used to calculate the overall construction cost based on R.S. Means cost data and then automatically recalculate the costs for any modifications. Design criteria including both operational and embodied energies will cause designers to re-evaluate the current material selection for cost, energy, and, most importantly, sustainability.
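
To make the three-stage calculation concrete, a simplified sketch is given below; the material records, energy coefficients and distances are invented placeholders, whereas the model described above would draw quantities from the BIM take-off, distances from geospatial data, and coefficients from its three databases.

```python
from dataclasses import dataclass

# Illustrative per-unit energy coefficients (MJ); real values would come from the
# model's manufacturing, transportation and placement databases.
MANUFACTURING_MJ_PER_KG = {"concrete": 0.9, "steel": 20.0, "timber": 7.5}
TRANSPORT_MJ_PER_KG_KM = {"truck": 0.0025, "rail": 0.0006}
PLACEMENT_MJ_PER_KG = {"crane": 0.05, "manual": 0.01}

@dataclass
class MaterialItem:
    name: str            # material type, e.g. "steel"
    mass_kg: float       # quantity taken off the BIM model
    distance_km: float   # factory-to-site distance (from geospatial data)
    transport: str       # transport mode
    placement: str       # placement method

def embodied_energy(item: MaterialItem) -> float:
    """Sum the three life-cycle stages: manufacturing + transportation + placement."""
    manufacturing = MANUFACTURING_MJ_PER_KG[item.name] * item.mass_kg
    transportation = TRANSPORT_MJ_PER_KG_KM[item.transport] * item.mass_kg * item.distance_km
    placement = PLACEMENT_MJ_PER_KG[item.placement] * item.mass_kg
    return manufacturing + transportation + placement

bill_of_materials = [
    MaterialItem("concrete", 250_000, 40, "truck", "crane"),
    MaterialItem("steel", 30_000, 350, "rail", "crane"),
]
total_MJ = sum(embodied_energy(i) for i in bill_of_materials)
print(f"Total embodied energy: {total_MJ/1000:.1f} GJ")
```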

Keywords: building information modelling, energy, life cycle analysis, sustainability

Procedia PDF Downloads 248
315 An Exploratory Study of Wellbeing in Irish Primary Schools towards Developing a Shared Understanding amongst Teachers

Authors: Margaret Nohilly, Fionnuala Tynan

Abstract:

Wellbeing is not only a national priority in Ireland but also in the international context. A review of the literature highlights the consistent efforts of researchers to define the concept of wellbeing. This study sought to explore the understanding of wellbeing in Irish primary schools. National wellbeing guidelines in the Irish context frame the concept of wellbeing through a mental health paradigm, which is but one aspect of wellbeing. This exploratory research sought the views of Irish primary school teachers on their understanding of the concept of wellbeing and the practical application of strategies to promote wellbeing both in the classroom and across the school. Teacher participants from four counties in the West of Ireland were invited to participate in focus group discussions and workshops through the Education Centre Network. The purpose of this process was twofold: firstly, to explore teachers' understanding of wellbeing in the primary school context and, secondly, for teachers to be co-creators in the development of practical strategies for classroom and whole-school implementation. The voice of the teacher participants was central to the research design. The findings of this study indicate that the definition of wellbeing in the Irish context is too abstract for teachers and that the focus on mental health dominates the discourse in relation to wellbeing. Few teachers felt that they were addressing wellbeing adequately in their classrooms and across the school. The findings from the focus groups highlighted that while teachers are incorporating a range of wellbeing strategies, including mindfulness and positive psychology, there is a clear disconnect between the national definition and the implementation of national curricula, which causes them concern. The teacher participants requested further practical strategies to promote wellbeing at whole-school and classroom level within the framework of the Irish Primary School Curriculum and to enable them to become professionally confident in developing a culture of wellbeing. In conclusion, considering that wellbeing is a national priority in Ireland, this research promoted a timely discussion of the wellbeing guidelines and the development of a conceptual framework to define wellbeing in concrete terms for practitioners. The centrality of teacher voices ensured that the strategies proposed by this research are both practical and effective. The findings of this research have prompted the development of a national resource which will support the implementation of wellbeing in the primary school at both national and international levels.

Keywords: definition, wellbeing, strategies, curriculum

Procedia PDF Downloads 374
314 35 MHz Coherent Plane Wave Compounding High Frequency Ultrasound Imaging

Authors: Chih-Chung Huang, Po-Hsun Peng

Abstract:

Ultrasound transient elastography has become a valuable tool for many clinical diagnoses, such as liver diseases and breast cancer. Pathological tissue can be distinguished by elastography because its stiffness differs from that of the surrounding normal tissues. An ultrafast frame rate of ultrasound imaging is needed for the transient elastography modality. However, elastography obtained with an ultrafast system suffers from low resolution, which affects the robustness of the transient elastography. To overcome these problems, a coherent plane-wave compounding technique has been proposed for conventional ultrasound systems operating at around 3-15 MHz. The purpose of this study is to develop a novel beamforming technique for high-frequency ultrasound coherent plane-wave compounding imaging, and the simulated results will provide the standards for hardware developments. Plane-wave compounding imaging fires all elements of an array transducer in one shot at different inclination angles, receives the echoes by conventional beamforming to produce a series of low-resolution images, and compounds them coherently. Simulations of plane-wave compounding images and focused transmit images were performed using Field II. All images were produced from point spread functions (PSFs) and cyst phantoms with a 64-element linear array working at 35 MHz center frequency, 55% bandwidth, and a pitch of 0.05 mm. The F-number is 1.55 in all the simulations. PSFs and cyst phantoms were simulated using single-angle, 17-angle, and 43-angle plane-wave transmissions (each plane wave separated by 0.75 degrees) as well as focused transmission. The resolution and contrast of the images improved with the number of firing angles of the plane wave. The lateral resolutions for the different methods were measured by the -10 dB lateral beam width. Comparing the plane-wave compounding image and the focused transmit image, both exhibited the same lateral resolution of 70 um when 37 angles were compounded. The lateral resolution reached 55 um when 47 angles were compounded. All the results show the potential of using high-frequency plane-wave compounding imaging for characterizing the elastic properties of microstructured tissues, such as the eye, skin, and vessel walls, in the future.
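The following minimal NumPy sketch illustrates the coherent compounding idea described above: delay-and-sum beamforming per steering angle, followed by a coherent sum across angles. The array geometry constants, sampling rate, and RF-data layout are illustrative assumptions rather than the Field II simulation settings used in the study.

```python
# Minimal sketch of coherent plane-wave compounding: delay-and-sum per angle,
# then a coherent sum across all steering angles. Constants are assumed.
import numpy as np

C = 1540.0          # speed of sound (m/s), assumed
FS = 250e6          # sampling rate (Hz), assumed
PITCH = 0.05e-3     # element pitch (m), as in the abstract
N_ELEMENTS = 64
elem_x = (np.arange(N_ELEMENTS) - (N_ELEMENTS - 1) / 2) * PITCH


def beamform_plane_wave(rf, angle_rad, xi, zi):
    """Delay-and-sum one plane-wave transmission onto grid points (xi, zi).

    rf: (N_ELEMENTS, n_samples) received RF data for this transmission.
    """
    # Transmit path: a plane wave tilted by angle reaches (x, z) after this delay.
    t_tx = (zi * np.cos(angle_rad) + xi * np.sin(angle_rad)) / C
    img = np.zeros_like(xi, dtype=float)
    for e in range(N_ELEMENTS):
        # Receive path: from point (x, z) back to element e.
        t_rx = np.sqrt((xi - elem_x[e]) ** 2 + zi ** 2) / C
        idx = np.round((t_tx + t_rx) * FS).astype(int)
        idx = np.clip(idx, 0, rf.shape[1] - 1)
        img += rf[e, idx]
    return img


def compound(rf_per_angle, angles_rad, xi, zi):
    """Coherently sum the low-resolution images from all steering angles."""
    return sum(beamform_plane_wave(rf, a, xi, zi)
               for rf, a in zip(rf_per_angle, angles_rad))
```

As the abstract notes, increasing the number of angles passed to such a compounding step trades frame rate for lateral resolution and contrast.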

Keywords: plane wave imaging, high frequency ultrasound, elastography, beamforming

Procedia PDF Downloads 504
313 Flipping the Script: Opportunities, Challenges, and Threats of a Digital Revolution in Higher Education

Authors: James P. Takona

Abstract:

In a world that is experiencing sharp digital transformations guided by digital technologies, the potential of technology to drive transformation and evolution in higher education is apparent. Higher education is facing a paradigm shift that exposes susceptibilities and threats to fully online programs in the face of post-COVID-19 trends of commodification. This historical moment is likely to be remembered as a critical turning point from analog to digital degree-focused learning modalities, where the default delivery mode became the pivot point of competition between higher education institutions. Fall 2020 marks a significant inflection point in higher education as students, educators, and government leaders scrutinize higher education's price and value propositions through the new lens of traditional lecture halls versus multiple digitized delivery modes. Online education has since paved the way for a pedagogical shift in how teachers teach and students learn. The incremental growth of online education in the West can now be attributed to increasing patronage among students, faculty, and institution administrators. More often than not, college instructors assume paraclete roles in this learning mode, while students become active collaborators rather than passive learners. This paper offers valuable discernments into the threats, challenges, and opportunities of a massive digital revolution in servicing degree programs. Effective digital instruction and learning demand instructional practices that revolve around collaborative work, engaging students in learning activities, and engagement that promotes active efforts to build strong connections between course activities and the expected learning pace for all students. Appropriate use of digital technologies demands that instructors and students have solid prior skills. Where digital technology is needed to support instruction and learning, intelligent tutoring offers great promise, yet failures in implementing digital learning may not improve outcomes for specific student populations. Digital learning benefits students differently depending on their circumstances and background and those of the institution and/or program. Students have alternative options, access to the convenience of learning anytime and anywhere, and the possibility of acquiring and developing new skills leading to lifelong learning.

Keywords: digitized learning, digital education, collaborative work, higher education, online education, digitized delivery

Procedia PDF Downloads 62
312 Code Mixing and Code-Switching Patterns in Kannada-English Bilingual Children and Adults Who Stutter

Authors: Vasupradaa Manivannan, Santosh Maruthy

Abstract:

Background/Aims: Preliminary evidence suggests that code-switching and code-mixing may act as voluntary coping behaviors to avoid stuttering characteristics in children and adults; however, less is known about the types and patterns of code-mixing (CM) and code-switching (CS). Further, it is not known how these differ between children and adults who stutter. This study aimed to identify and compare the CM and CS patterns of Kannada-English bilingual children and adults who stutter. Method: A standard group comparison was made between five children who stutter (CWS) in the age range of 9-13 years and five adults who stutter (AWS) in the age range of 20-25 years. Participants proficient in Kannada (first language, L1) and English (second language, L2) were considered for the study. Both groups completed two tasks: a) a general conversation (GC) with 10 random questions, and b) a narration task (NAR) (a story or a general topic, for example, a memorable life event) in three different conditions: Mono Kannada (MK), Mono English (ME), and Bilingual (BIL). The children and adults were assessed online (via a Zoom session) with a high-quality internet connection. The audio and video samples of the full assessment session were auto-recorded and manually transcribed. The recorded samples were analyzed for the percentage of dysfluencies using SSI-4 and for the CM and CS exhibited by each participant using Matrix Language Frame (MLF) model parameters. The obtained data were analyzed using the Statistical Package for the Social Sciences (SPSS) software package (version 20.0). Results: The mean, median, and standard deviation values were obtained for the percentage of dysfluencies (%SS) and the frequency of CM and CS in Kannada-English bilingual children and adults who stutter for the various parameters obtained through the MLF model. The inferential results indicated that %SS varied significantly between populations (AWS vs. CWS), languages (L1 vs. L2), and tasks (GC vs. NAR), but not across free (BIL) and bound (MK, ME) conditions. It was also found that the frequency of CM and CS patterns varies between CWS and AWS. The AWS had a lower %SS but greater use of CS patterns than CWS, which may be due to their more extensive coping skills. Language mixing patterns were observed more in L1 than in L2, and this was significant for most of the MLF parameters. However, there was a significantly higher (p<0.05) %SS in L2 than in L1. The CM and CS patterns were more frequent in conditions 1 and 3 than in condition 2, which may be due to the higher proficiency in L2 than in L1. Conclusion: The findings highlight the importance of assessing CM and CS behaviors, their patterns, and the frequency of CM and CS between CWS and AWS on MLF parameters in two different tasks across three conditions. The results help us to understand CM and CS strategies in bilingual persons who stutter.

Keywords: bilinguals, code mixing, code switching, stuttering

Procedia PDF Downloads 52
311 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints on extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground-plane projection, with or without terrain as a component, all to better view SAR data in an image domain comparable to what a human would view and to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity values of each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase-history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for an accurate reflectivity representation of a scene. Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
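As a hedged illustration of the per-voxel reflectivity sum described above, the sketch below implements plain backprojection in NumPy. The variable names and the phase-history layout are assumptions; in a GPU implementation, the per-voxel body of this loop is what each thread would execute independently.

```python
# Minimal sketch of backprojection: for each 3D reference point, sum the
# range-compressed pulse returns at the matched two-way range, with a phase
# correction so the sum over pulses stays coherent.
import numpy as np


def backproject(voxels, antenna_pos, range_profiles, r0, dr, wavelength):
    """voxels: (N, 3) reference points; antenna_pos: (P, 3) per-pulse positions;
    range_profiles: (P, R) complex range-compressed data; r0/dr: range axis."""
    image = np.zeros(voxels.shape[0], dtype=np.complex128)
    for p in range(antenna_pos.shape[0]):                       # one pulse at a time
        rng = np.linalg.norm(voxels - antenna_pos[p], axis=1)   # slant range to each voxel
        bins = np.clip(((rng - r0) / dr).astype(int), 0, range_profiles.shape[1] - 1)
        # Two-way phase correction restores coherence before the sum over pulses.
        phase = np.exp(1j * 4.0 * np.pi * rng / wavelength)
        image += range_profiles[p, bins] * phase
    return image
```

Because each voxel's sum is independent of every other voxel, the same calculation maps naturally onto one GPU thread per reference point, which is the parallelism the abstract exploits.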

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 46
310 Algorithm for Modelling Land Surface Temperature and Land Cover Classification and Their Interaction

Authors: Jigg Pelayo, Ricardo Villar, Einstine Opiso

Abstract:

The rampant and unintended spread of urban areas has resulted in increasing artificial component features in the land cover types of the countryside, bringing forth the urban heat island (UHI). This has paved the way to a wide range of negative influences on human health and the environment, commonly related to air pollution, drought, higher energy demand, and water shortage. Land cover type also plays a relevant role in understanding the interaction between ground surfaces and local temperature. At the moment, the depiction of land surface temperature (LST) at the city/municipality scale, particularly in certain areas of Misamis Oriental, Philippines, is inadequate to support efficient mitigation of and adaptation to the surface urban heat island (SUHI). Thus, this study attempts to apply Landsat 8 satellite data and low-density Light Detection and Ranging (LiDAR) products in mapping out a quality automated LST model and crop-level land cover classification at a local scale, through a theoretical and algorithm-based approach utilizing the principles of data analysis subjected to a multi-dimensional image object model. The paper also aims to explore the relationship between the derived LST and the land cover classification. The results of the presented model showed the ability of comprehensive data analysis and GIS functionalities, with the integration of an object-based image analysis (OBIA) approach, to automate complex map production processes with considerable efficiency and high accuracy. The findings may potentially lead to an expanded investigation of the temporal dynamics of the land surface UHI. It is worthwhile to note that the environmental significance of these interactions, through the combined application of remote sensing, geographic information tools, mathematical morphology, and data analysis, can provide microclimate perception, awareness, and improved decision-making for land use planning and characterization at local and neighborhood scales. As a result, it can aid in facilitating problem identification and support mitigations and adaptations more efficiently.
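For context, a common single-channel LST retrieval from Landsat 8 thermal band 10 is sketched below; the study's full OBIA and LiDAR-assisted workflow is not reproduced. The calibration constants should, in practice, be read from the scene's MTL metadata; the figures below are typical band-10 values, and the emissivity is an assumed placeholder.

```python
# Minimal sketch of single-channel LST retrieval from Landsat 8 band 10:
# digital number -> TOA radiance -> brightness temperature -> emissivity-corrected LST.
import numpy as np

ML, AL = 3.342e-4, 0.1              # radiance rescaling gain/offset (read from MTL in practice)
K1, K2 = 774.8853, 1321.0789        # band-10 thermal conversion constants (from MTL)
WAVELENGTH_M = 10.895e-6            # band-10 effective wavelength
RHO = 1.438e-2                      # h * c / Boltzmann constant (m*K)


def land_surface_temperature(dn, emissivity=0.97):
    """Return LST in degrees Celsius from band-10 digital numbers (emissivity assumed)."""
    radiance = ML * dn + AL                          # DN -> TOA spectral radiance
    bt = K2 / np.log(K1 / radiance + 1.0)            # radiance -> brightness temperature (K)
    lst_k = bt / (1.0 + (WAVELENGTH_M * bt / RHO) * np.log(emissivity))
    return lst_k - 273.15


if __name__ == "__main__":
    print(land_surface_temperature(np.array([24000, 30000, 36000])))
```

In a study like this one, the per-pixel emissivity would typically be derived from the land cover classification itself, which is one way the LST model and the OBIA classification interact.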

Keywords: LiDAR, OBIA, remote sensing, local scale

Procedia PDF Downloads 256
309 Recursion, Merge and Event Sequence: A Bio-Mathematical Perspective

Authors: Noury Bakrim

Abstract:

Formalization is indeed foundational to mathematical linguistics, as demonstrated by the pioneering works. While dialoguing with this frame, we nonetheless propose, in our approach to language as a real object, a mathematical linguistics/biosemiotics defined as a dialectical synthesis between induction and computational deduction. Therefore, relying on the parametric interaction of cycles, rules, and features giving way to a sub-hypothetic biological point of view, we first hypothesize a factorial equation as an explanatory principle within Category Mathematics of the Ergobrain: our computation proposal of Universal Grammar rules per cycle, or a scalar determination (multiplying right/left columns of the determinant matrix and right/left columns of the logarithmic matrix) of the transformable matrix for rule addition/deletion and cycles within representational mapping/cycle heredity, based on the factorial example, being the logarithmic exponent or power of rule deletion/addition. This enables us to propose an extension of minimalist merge/label notions to a Language Merge (as a computing principle) within cycle recursion, relying on combinatorial mapping of rule hierarchies onto the external Entax of the Event Sequence. Therefore, to define combinatorial maps as a language merge of features and combinatorial hierarchical restrictions (governing, commanding, and other rules), we secondly hypothesize from our results feature/hierarchy exponentiation on a graph representation deriving from Gromov's Symbolic Dynamics, where combinatorial vertices from Fe are set to combinatorial vertices of Hie and edges run from Fe to Hie, such that for every combinatorial group there are restriction maps representing different derivational levels that are subgraphs: the intersection on I defines pullbacks and deletion rules (under restriction maps), and then, under disjunction edges H, for the combinatorial map P belonging to Hie exponentiation by intersection, there are pullbacks and projections that are equal to restriction maps RM₁ and RM₂. The model will draw on experimental biomathematics as well as structural frames, with a focus on Amazigh and English (cases from phonology/micro-semantics and syntax), shifting from structure to event (especially the Amazigh formant principle resolving its morphological heterogeneity).

Keywords: rule/cycle addition/deletion, bio-mathematical methodology, general merge calculation, feature exponentiation, combinatorial maps, event sequence

Procedia PDF Downloads 99
308 Genetic Improvement Potential for Wood Production in Melaleuca cajuputi

Authors: Hong Nguyen Thi Hai, Ryota Konda, Dat Kieu Tuan, Cao Tran Thanh, Khang Phung Van, Hau Tran Tin, Harry Wu

Abstract:

Melaleuca cajuputi is a moderately fast-growing species and is considered a multi-purpose tree, as it provides fuelwood, piles and frame poles in construction, leaf essential oil, and honey. It occurs in Australia, Papua New Guinea, and South-East Asia. M. cajuputi plantations can be harvested on 6-7-year rotations for wood products. Its timber can also be used for pulp and paper, fiber and particle board, quality charcoal production, and potentially sawn timber. However, most reported M. cajuputi breeding programs have focused on oil production rather than wood production. In this study, a breeding program of M. cajuputi aimed at improving wood production was examined by estimating genetic parameters for growth (tree height, diameter at breast height (DBH), and volume), stem form, stiffness (modulus of elasticity, MOE), bark thickness, and bark ratio in a half-sib family progeny trial including 80 families in the Mekong Delta of Vietnam. MOE is one of the key wood properties of interest to the wood industry. Non-destructive wood stiffness was measured indirectly by acoustic velocity using the FAKOPP Microsecond Timer, a measure notably unaffected by bark mass. Narrow-sense heritability for the seven traits ranged from 0.13 to 0.27 at age 7 years. MOE and stem form had positive genetic correlations with growth, while the negative correlation between bark ratio and growth was also favorable. Breeding for the simultaneous improvement of multiple traits (faster growth with higher MOE and a reduced bark ratio) should therefore be possible in M. cajuputi. Index selection based on volume and MOE showed genetic gains of 31% in volume, 6% in MOE, and 13% in stem form. In addition, heritability and age-age genetic correlations for growth traits increased with time, and the optimal early selection age for growth of M. cajuputi based on DBH alone was 4 years. Selective thinning resulted in an increase in heritability due to a considerable reduction of phenotypic variation with little effect on genetic variation.
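To make the quantitative-genetic quantities in the abstract concrete, the sketch below computes narrow-sense heritability from half-sib family variance components and the expected gain from truncation selection via the breeder's equation. The variance components, selection intensity, and trait mean used are illustrative assumptions, not the trial's estimates.

```python
# Minimal sketch of half-sib quantitative genetics: narrow-sense heritability
# from variance components, and expected selection gain as a percent of the mean.
# All numeric inputs in the example are assumed placeholders.

def narrow_sense_heritability(var_family, var_within):
    """Half-sib families: additive variance is approximately 4x the between-family variance."""
    var_additive = 4.0 * var_family
    var_phenotypic = var_family + var_within
    return var_additive / var_phenotypic


def expected_gain_percent(h2, var_family, var_within, selection_intensity, trait_mean):
    """Breeder's equation, with the gain expressed as a percentage of the trait mean."""
    sd_phenotypic = (var_family + var_within) ** 0.5
    return 100.0 * selection_intensity * h2 * sd_phenotypic / trait_mean


if __name__ == "__main__":
    # Assumed components give h2 of about 0.24, within the 0.13-0.27 range reported.
    h2 = narrow_sense_heritability(var_family=0.0002, var_within=0.0031)
    gain = expected_gain_percent(h2, 0.0002, 0.0031,
                                 selection_intensity=1.76,   # roughly top 10% selected
                                 trait_mean=0.085)           # assumed mean tree volume (m^3)
    print(f"h^2 = {h2:.2f}, expected gain = {gain:.0f}% of mean volume")
```

An actual index selection, as used in the study, would weight volume and MOE jointly rather than selecting on a single trait as in this simplified example.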

Keywords: acoustic velocity, age-age correlation, bark thickness, heritability, Melaleuca cajuputi, stiffness, thinning effect

Procedia PDF Downloads 148
307 Reasons for Food Losses and Waste in Basic Production of Meat Sector in Poland

Authors: Sylwia Laba, Robert Laba, Krystian Szczepanski, Mikolaj Niedek, Anna Kaminska-Dworznicka

Abstract:

Meat and meat products are considered the food products with the most unfavorable effect on the environment, which requires rational management of these products and of the waste originating throughout the whole chain of manufacture, processing, transport, and trade of meat. From the economic and environmental viewpoints, it is important to limit food losses and waste in the whole meat sector. The basic production link includes obtaining raw meat, i.e., animal breeding, management, and transport of animals to the slaughterhouse. Food is any substance or product intended to be consumed by humans. For the needs of the present studies, it was determined when the raw material is considered food: it is the moment when the animals are prepared for loading with the aim of being transported to a slaughterhouse and utilized for food purposes. The aim of the studies was to determine the reasons for loss generation in the basic production of the meat sector in Poland during the years 2017-2018. The studies on food losses and waste in the meat sector in basic production were carried out in two areas: red meat, i.e., pork and beef, and poultry meat. The studies of basic production were conducted in the period of March-May 2019 across the whole country on a representative sample of 278 farms, including 102 in pork production, 55 in beef production, and 121 in poultry meat production. The surveys were carried out with questionnaires by the PAPI (Paper & Pen Personal Interview) method; the pollsters conducted direct questionnaire interviews. The research results indicate that no losses were recorded during the preparation, loading, and transport of the animals to the slaughterhouse in 33% of the visited farms. On the farms where losses were indicated, crushing and suffocation, occurring during the production of pigs, beef cattle, and poultry, were the main reasons for these losses, constituting ca. 40% of the reported reasons. The stress generated by loading and transport accounted for 16-17% of the reported loss causes (depending on the season of the year). In the case of poultry production, an additional 10.7% of losses in 2017 and 11.8% in 2018 were caused by inappropriate conditions of loading and transportation. Diseases were one of the reasons for the losses in pork and beef production (7% of the losses). The losses and waste generated during livestock production and in meat processing and trade cannot be managed or recovered; they have to be disposed of. It is, therefore, important to prevent and minimize the losses throughout the whole production chain. It is possible to introduce appropriate measures, connected mainly with appropriate conditions and methods of animal loading and transport.

Keywords: food losses, food waste, livestock production, meat sector

Procedia PDF Downloads 112