Search results for: interference checking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 757

67 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm that combines the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOA was first introduced in 2014, when it performed better than the best known classical algorithm for Max-Cut. Although classical algorithms have since improved and are again faster and more efficient, that result was a major milestone for quantum computing, and the original work is often used as a benchmark and as a foundation for exploring QAOA variants. Alongside other famous algorithms such as Grover's and Shor's, it highlights the potential that quantum computing holds and points to a real quantum advantage: if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs do not rely on gradient-based or linear optimization methods for the search in the latent space, and because they are gradient-free, they should suffer less from barren plateaus. Secondly, since the algorithm searches the solution space through a population of solutions, it can be parallelized to speed up the search and optimization problem. Evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with traditional QAOA using the COBYLA optimizer, a linear-approximation-based method, and in some instances it even produces a better Max-Cut. While the final objective of the work is an algorithm that consistently beats the original QAOA or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
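A minimal, self-contained sketch of the idea (not the authors' implementation): the 2p QAOA angles are tuned by a gradient-free evolutionary loop instead of COBYLA, with the expectation value evaluated by an exact statevector simulation of a small Max-Cut instance. The graph, population size, and mutation scheme below are illustrative assumptions.

```python
import numpy as np

def cut_values(edges, n):
    """Cut value of every computational-basis bitstring (diagonal of the Max-Cut cost)."""
    vals = np.zeros(2 ** n)
    for z in range(2 ** n):
        bits = [(z >> q) & 1 for q in range(n)]
        vals[z] = sum(bits[i] != bits[j] for i, j in edges)
    return vals

def qaoa_expected_cut(params, edges, n):
    """Exact statevector simulation of depth-p QAOA for Max-Cut; returns <C>."""
    p = len(params) // 2
    gammas, betas = params[:p], params[p:]
    c = cut_values(edges, n)
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)        # |+...+> start state
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * c) * state                  # cost unitary (diagonal)
        cosb, sinb = np.cos(beta), np.sin(beta)
        for q in range(n):                                       # mixer: RX(2*beta) on each qubit
            s = state.reshape(2 ** (n - q - 1), 2, 2 ** q)
            s0, s1 = s[:, 0, :].copy(), s[:, 1, :].copy()
            s[:, 0, :] = cosb * s0 - 1j * sinb * s1
            s[:, 1, :] = cosb * s1 - 1j * sinb * s0
            state = s.reshape(-1)
    return float((np.abs(state) ** 2) @ c)                       # <C>

def evolve_angles(edges, n, p=1, pop_size=16, generations=40, sigma=0.15, seed=0):
    """Truncation-selection EA over the 2p QAOA angles; fitness calls are independent
    and could be evaluated in parallel."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0, np.pi, size=(pop_size, 2 * p))
    for _ in range(generations):
        fitness = np.array([qaoa_expected_cut(ind, edges, n) for ind in pop])
        parents = pop[np.argsort(fitness)[::-1][: pop_size // 2]]     # keep the best half
        children = parents + rng.normal(0.0, sigma, parents.shape)    # Gaussian mutation
        pop = np.vstack([parents, children])                          # elitism
    fitness = np.array([qaoa_expected_cut(ind, edges, n) for ind in pop])
    return pop[np.argmax(fitness)], float(fitness.max())

# Toy instance: a 5-node ring (optimal cut = 4); the EA should push <C> close to it.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
best_angles, best_cut = evolve_angles(edges, n=5, p=1)
print(best_angles, best_cut)
```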

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 60
66 A Regulator's Assessment of Consumer Risk When Evaluating a User Test for an Umbrella Brand Name in an Over-the-Counter Medicine

Authors: A. Bhatt, C. Bassi, H. Farragher, J. Musk

Abstract:

Background: All medicines placed on the EU market are legally required to be accompanied by labelling and a package leaflet, which provide comprehensive information enabling their safe and appropriate use. Mock-ups, together with the results of assessments using a target patient group, must be submitted with a marketing authorisation application. Consumers need confidence in non-prescription (OTC) medicines in order to manage their minor ailments, and umbrella brands assist purchasing decisions by enabling easy identification within a particular therapeutic area. A number of regulatory agencies have risk management tools and guidelines to assist in developing umbrella brands for OTC medicines; however, assessment and decision making are subjective and inconsistent. This study presents an evaluation in the UK following the US FDA warning concerning methaemoglobinaemia after 21 reported cases (11 in children under 2 years) caused by OTC oral analgesics containing benzocaine. Methods: A standard face-to-face, structured, task-based user interview testing methodology with 25 interviews, using a standard questionnaire and rating scale, was conducted independently with consumers aged 15-91 years in their homes between June and October 2015. The test evaluated whether individuals could discriminate between the labelling, safety information and warnings on the cartons and patient information leaflets (PILs) of 3 different OTC medicine packs sharing the same umbrella name. Each pack was presented with a different information hierarchy, using different coloured cartons, and contained one of 3 different active ingredients: benzocaine (an oromucosal spray) and two lozenges containing, respectively, 2,4-dichlorobenzyl alcohol with amylmetacresol, and hexylresorcinol (for the symptomatic relief of sore throat pain). The test was designed to determine whether the warnings on the carton and leaflet were prominent and accessible enough to alert users that one product contained benzocaine, carried a risk of methaemoglobinaemia, and referred them to the leaflet for the signs of the condition and what to do should it occur. Results: Two consumers did not locate the warnings on the side of the pack but eventually found them on the back, and two suggestions were made to further improve the accessibility of the methaemoglobinaemia warning. Using a gold pack design for the oromucosal spray, all consumers could differentiate between the 3 drugs, the minimum age particulars, the pharmaceutical form and the risk of methaemoglobinaemia. The warnings for benzocaine were deemed clear or very clear, and the appearance of the 3 packs was either very well or quite well differentiated. The PIL test passed on all criteria. All consumers could use the product correctly and identify risk factors, ensuring that the critical information necessary for safe use was legible and easily accessible so that confusion and errors were minimised. Conclusion: Patients with known methaemoglobinaemia are likely to be vigilant in checking for benzocaine-containing products, despite similar umbrella brand names across a range of active ingredients. Despite these findings, the package design and spray format were not deemed sufficient to mitigate potential safety risks associated with differences in target populations and contraindications when submitted to the Regulatory Agency. Although risk management tools are increasingly used by agencies to provide objective assurance of package safety, further transparency, reduced subjectivity and proportionate assessment of risk should be demonstrated.

Keywords: labelling, OTC, risk, user testing

Procedia PDF Downloads 309
65 An Integrated Water Resources Management Approach to Evaluate Effects of Transportation Projects in Urbanized Territories

Authors: Berna Çalışkan

Abstract:

Integrated water management is a collaborative approach to planning that brings together the institutions that influence all elements of the water cycle: waterways, watershed characteristics, wetlands, ponds, lakes, floodplain areas, and stream channel structure. It encourages collaboration where it will be beneficial and links water planning with other planning processes that contribute to improving sustainable urban development and liveability. Hydraulic considerations can influence the selection of a highway corridor and the alternate routes within the corridor, as well as works such as widening a roadway, replacing a culvert, or repairing a bridge. Because of this, the type and amount of data needed for planning studies can vary widely depending on elements such as environmental considerations, the class of the proposed highway, the state of land use development, and individual site conditions. The extraction of drainage networks provides helpful preliminary drainage data from the digital elevation model (DEM). A case study was carried out in the study area using the Arc Hydro extension within ArcGIS, which provides the means for processing and presenting a spatially referenced stream model. The study area's flow routing, stream levels, segmentation, and drainage point processing can be obtained using the DEM as the 'Input surface raster'. These processes integrate hydrologic and engineering research and environmental modeling in a multi-disciplinary program designed to provide decision makers with a science-based understanding of, and innovative tools for, the development of an interdisciplinary, multi-level approach. This research helps manage transport project planning and construction phases by analyzing surficial water flow, high-level streams, and wetland sites for transportation infrastructure planning, implementation, maintenance, monitoring, and long-term evaluation, in order to better face the challenges and solutions associated with effective management and enhancement and to deal with low, medium, and high levels of impact. Transport projects are frequently perceived as critical to the 'success' of major urban, metropolitan, regional and/or national development because of their potential to effect significant socio-economic and territorial change. In this context, sustaining and developing economic and social activities depend on having sufficient water resources management. The results of our research provide a workflow for building a stream network and classifying a suitability map according to stream levels. Transportation projects can be established, developed, and delivered effectively by selecting the best locations to reduce construction and maintenance costs and by adopting cost-effective solutions for drainage, landslide, and flood control. According to the model findings, a field study should be done to fill gaps and check for errors. In future research, this study can be extended to determine and prevent possible damage to sensitive areas and vulnerable zones, supported by field investigations.
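As an illustration of the DEM-based extraction step, the sketch below uses the standard ArcGIS Spatial Analyst tools from Python (the Arc Hydro toolbar used in the study wraps similar operations); the workspace path, raster names, and the 100-cell stream threshold are placeholder assumptions.

```python
import arcpy
from arcpy import sa

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\study_area.gdb"   # hypothetical workspace

dem = sa.Raster("dem")                 # the 'Input surface raster' (DEM)
filled = sa.Fill(dem)                  # remove sinks so flow routing is continuous
flow_dir = sa.FlowDirection(filled)    # D8 flow routing
flow_acc = sa.FlowAccumulation(flow_dir)

# Cells draining more than a threshold number of cells are treated as streams.
streams = sa.Con(flow_acc > 100, 1)
stream_levels = sa.StreamOrder(streams, flow_dir)   # Strahler stream levels
stream_levels.save("stream_levels")

# Vector stream network for segmentation and drainage point processing downstream.
sa.StreamToFeature(stream_levels, flow_dir, "stream_network", "NO_SIMPLIFY")
```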

Keywords: water resources management, hydro tool, water protection, transportation

Procedia PDF Downloads 58
64 Pickering Dry Emulsion System for Dissolution Enhancement of Poorly Water-Soluble Drug (Fenofibrate)

Authors: Nitin Jadhav, Pradeep R. Vavia

Abstract:

Poorly water-soluble drugs are difficult to promote for oral drug delivery, as they demonstrate poor and variable bioavailability because of their poor solubility and dissolution in GIT fluid. Nowadays, lipid-based formulations, especially the self-microemulsifying drug delivery system (SMEDDS), are found to be the most effective technique. For all its impressive advantages, the need for a high amount of surfactant (50%-80%) is the major drawback of SMEDDS. High concentrations of synthetic surfactant are known to cause irritation in the GIT, and interference with the function of intestinal transporters changes drug absorption. Surfactant may also reduce drug activity, and subsequently bioavailability, due to enhanced entrapment of the drug in micelles. In chronic treatment these issues are very conspicuous because of the long exposure. In addition, liquid self-microemulsifying systems also suffer from stability issues. Recently, a novel approach, the solid-stabilized micro- and nanoemulsion (Pickering emulsion), has shown very admirable properties such as high stability, the absence of (or a very low concentration of) surfactant, and easy conversion into a dry form. Here we explore a Pickering dry emulsion system for dissolution enhancement of an anti-lipemic, extremely poorly water-soluble drug (fenofibrate). The oil for emulsion preparation was selected mainly on the basis of the drug's solubility: Captex 300 showed the highest solubility for fenofibrate and was therefore selected as the oil phase. Silica was used as the solid stabilizer, with Span 20 selected to improve its wetting properties. The emulsion formed with silica and Span 20 as stabilizer at a 2.5:1 ratio (silica:Span 20) was found to be very stable, with a particle size of 410 nm. The prepared emulsion was then spray dried, and the resulting microcapsules were evaluated by an in-vitro dissolution study and an in-vivo pharmacodynamic study and characterized by DSC, XRD, FTIR, SEM, optical microscopy, etc. The in-vitro study showed significant dissolution enhancement for the formulation (85% in 45 minutes) compared to the plain drug (14% in 45 minutes). The in-vivo study (Triton-based hyperlipidaemia model) showed a significant reduction in triglycerides and cholesterol with the formulation compared to the plain drug, indicating an increase in fenofibrate bioavailability. The DSC and XRD studies showed loss of crystallinity of the drug in microcapsule form. The FTIR study showed chemical stability of fenofibrate. The SEM and optical microscopy studies showed a spherical globule structure coated with solid particles.

Keywords: captex 300, fenofibrate, pickering dry emulsion, silica, span20, stability, surfactant

Procedia PDF Downloads 499
63 Intelligent Indoor Localization Using WLAN Fingerprinting

Authors: Gideon C. Joseph

Abstract:

The ability to localize mobile devices is quite important, as some applications may require location information about these devices to operate or to deliver better services to users. Although there are several ways of acquiring location data for mobile devices, the WLAN fingerprinting approach has been considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative way of describing the radio frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is of major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile's RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is to be modelled, so that subsequent locations of mobile devices can be predicted using the developed model. Describing mathematical relationships between the RSSI measurements and the localization parameters is one option for modelling the problem, but the complexity of such an approach is a serious drawback. In contrast, we propose an intelligent system that can learn the mapping from RSSI measurements to the localization parameters to be predicted. The system is capable of upgrading its performance as more experiential knowledge is acquired. The most appealing consideration in using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted form two different tasks: the longitude and latitude of a mobile device are real values (a regression problem), while the floor and building of the mobile device are integer-valued or categorical (a classification problem). This research work presents artificial neural network-based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain the performance of the trained systems in terms of Mean Absolute Error (MAE) and error rates for the regression and classification tasks involved.
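A minimal sketch of the two prediction tasks (not the authors' networks or database), assuming scikit-learn and a fingerprint table in which each row holds the RSSIs seen from a fixed set of access points together with the known longitude, latitude, floor, and building; the synthetic data below only illustrate the shapes involved.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.metrics import accuracy_score, mean_absolute_error

rng = np.random.default_rng(0)
n_samples, n_aps = 2000, 52                                # placeholder database size
rssi = rng.uniform(-100, -30, size=(n_samples, n_aps))     # RSSI predictors (dBm)
lonlat = rng.uniform([-7.70, 38.70], [-7.60, 38.80], size=(n_samples, 2))  # regression targets
floor = rng.integers(0, 4, n_samples)                      # classification target
building = rng.integers(0, 3, n_samples)                   # classification target

X_tr, X_te, y_tr, y_te, f_tr, f_te, b_tr, b_te = train_test_split(
    rssi, lonlat, floor, building, test_size=0.25, random_state=0)

# Regression task: longitude and latitude are real-valued outputs (MAE reported).
reg = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("lon/lat MAE:", mean_absolute_error(y_te, reg.predict(X_te)))

# Classification tasks: floor and building are categorical outputs (error rates reported).
for name, t_tr, t_te in [("floor", f_tr, f_te), ("building", b_tr, b_te)]:
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0).fit(X_tr, t_tr)
    print(name, "error rate:", 1 - accuracy_score(t_te, clf.predict(X_te)))
```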

Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression

Procedia PDF Downloads 349
62 Investigating the Influences of Long-Term, as Compared to Short-Term, Phonological Memory on the Word Recognition Abilities of Arabic Readers vs. Arabic Native Speakers: A Word-Recognition Study

Authors: Insiya Bhalloo

Abstract:

It is quite common in the Muslim faith for non-Arabic speakers to be able to convert written Arabic, especially Quranic Arabic, into a phonological code without significant semantic or syntactic knowledge. This is due to prior experience in learning to read the Quran (a religious text written in Classical Arabic) from a very young age, for example through enrolment in Quranic Arabic classes. Compared to native speakers of Arabic, these Arabic readers do not have comprehensive morpho-syntactic knowledge of the Arabic language, nor can they understand or engage in Arabic conversation. The study seeks to investigate whether mere phonological experience (as indicated by the Arabic readers' experience with Arabic phonology and the sound system) is sufficient to cause phonological interference during word recognition of previously heard words, despite the participants' non-native status. Both native speakers of Arabic and non-native readers of Arabic, i.e., individuals who learned to read the Quran from a young age, will be recruited. Each experimental session will include two phases: an exposure phase and a test phase. During the exposure phase, participants will be presented with Arabic words (n = 40) on a computer screen. Half of these words will be common words found in the Quran, while the other half will be words commonly found in Modern Standard Arabic (MSA) but either non-existent in the Quran or present there at a significantly lower frequency. During the test phase, participants will then be presented with both familiar Arabic words (n = 20; i.e., words presented during the exposure phase) and novel Arabic words (n = 20; i.e., words not presented during the exposure phase). Half of the presented words will be common Quranic Arabic words, and the other half will be common MSA words that are not Quranic words. Moreover, half of the Quranic Arabic and MSA words will be nouns and half will be verbs, thereby eliminating word-processing effects of lexical category. Participants will then determine whether they saw each word during the exposure phase. This study seeks to investigate whether long-term phonological memory, such as childhood exposure to Quranic Arabic orthography, has a differential effect on the word-recognition capacities of native Arabic speakers and Arabic readers; we seek to compare the effects of long-term phonological memory with those of short-term phonological exposure (as indicated by the presentation of familiar words from the exposure phase). The researcher's hypothesis is that, despite the lack of lexical knowledge, early experience with converting written Quranic Arabic text into a phonological code will help participants recall the familiar Quranic words that appeared during the exposure phase more accurately than those that were not presented. Moreover, it is anticipated that the non-native Arabic readers will also report more false alarms to the unfamiliar Quranic words, due to early childhood phonological exposure to Quranic Arabic script, thereby causing false phonological facilitatory effects.

Keywords: Modern Standard Arabic, phonological facilitation, phonological memory, Quranic Arabic, word recognition

Procedia PDF Downloads 358
61 Problems and Solutions in the Application of ICP-MS for Analysis of Trace Elements in Various Samples

Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Áron Soós, Xénia Vágó, Dávid Andrási

Abstract:

In agriculture, flame atomic absorption spectrometers (FAAS), graphite furnace atomic absorption spectrometers (GF-AAS), inductively coupled plasma optical emission spectrometers (ICP-OES) and inductively coupled plasma mass spectrometers (ICP-MS) are routinely applied for the analysis of elements in different foods and food raw materials, as well as in environmental samples. An inductively coupled plasma mass spectrometer (ICP-MS) is capable of analysing 70-80 elements in multielemental mode from a 1-5 cm³ sample volume, with detection limits of elements in the µg/kg-ng/kg (ppb-ppt) concentration range. All of these analytical instruments suffer from different physical and chemical interfering effects when analysing the above types of samples; the smaller the concentration of the analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays it is increasingly important to analyse ever smaller concentrations of elements, and of the above instruments the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations. The ICP-MS instrument applied here also has Collision Cell Technology (CCT). In CCT mode, certain elements have detection limits that are better (smaller) by 1-3 orders of magnitude compared to a normal ICP-MS analytical method; the CCT mode gives better detection limits mainly for the analysis of selenium, arsenic, germanium, vanadium and chromium. To elaborate an analytical method for trace elements with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: 1) physical interferences; 2) spectral interferences (elemental and molecular isobaric); 3) the effect of easily ionisable elements; 4) memory interferences. When analysing food and food raw materials, as well as environmental samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrices having different evaporation and nebulization effectiveness and different carbon contents. In our research work, the effects of different water-soluble compounds and of varying carbon content (as sample matrix) on the intensities of the applied elements were examined. In this way we could find 'opportunities' to decrease or eliminate the error in the analysis of the applied elements (Cr, Co, Ni, Cu, Zn, Ge, As, Se, Mo, Cd, Sn, Sb, Te, Hg, Pb, Bi). To analyse these elements in the above samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying a collision cell technique (CCT). The extent of the interfering effect of carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of the above elements, which can be corrected using different internal standards.
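A toy illustration of the internal-standard correction mentioned in the last sentence (a sketch only; the element names, count rates, and the 20% matrix enhancement are assumptions, not values from this work):

```python
def correct_intensity(analyte_counts, istd_counts_sample, istd_counts_calibration):
    """Scale the analyte signal by the drift of the internal standard.

    If a carbon-rich matrix enhances or suppresses nebulization/ionisation, the
    internal standard (spiked at the same level in every solution) drifts by the
    same factor, so dividing that factor out corrects the analyte intensity.
    """
    recovery = istd_counts_sample / istd_counts_calibration
    return analyte_counts / recovery

# Example: a Se-78 signal measured in a sample whose carbon content enhanced the
# internal standard signal (e.g. Ge-74 or Rh-103) by ~20 % relative to the
# matrix-free calibration standard.
corrected = correct_intensity(analyte_counts=15500,
                              istd_counts_sample=120000,
                              istd_counts_calibration=100000)
print(corrected)   # ~12917 counts after removing the matrix enhancement
```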

Keywords: elements, environmental and food samples, ICP-MS, interference effects

Procedia PDF Downloads 504
60 Workflow-Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements

Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker

Abstract:

Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts, so complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex-shaped workpieces. Variations between the nominal model and the actual geometry can lead to changes in operations in computer-aided process planning (CAPP), and these must be handled to keep CAPP manageable for adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by providing objective criteria for decisions about the adaptive manufacturability of workpieces. Nowadays, such decisions depend on the experience-based knowledge of humans (e.g., process planners) and are therefore subjective, leading to variability in workpiece quality and potential failures in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of single workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known design method for standardized processes; especially in applications like the aerospace industry, standardization and certification of processes are important aspects. Function blocks, which provide a standardized, event-driven abstraction of algorithms and data exchange, will be used for modeling and executing the inspection workflows. Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, will be carried out by a function block. One advantage of this approach is its flexibility to design workflows and to adapt algorithms specific to the application domain. In general, it will be checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g., design data. Furthermore, appropriate logics and decision criteria have to be considered for different product lifecycle phases; for example, tolerances for geometric deviations differ in type and size between new-part production and repair processes. In addition to function blocks, appropriate referencing systems are important: they need to support exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, containing both new-part production and a maintenance process; in both cases, a geometrical adaptation is required to calculate individual production data. In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.
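A minimal sketch of the function-block idea in plain Python (block names, the deviation value, and the tolerance criterion are illustrative assumptions, not the paper's implementation): each inspection step is a block with named inputs and outputs, and a workflow executes the blocks in sequence while collecting an inspection protocol.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class FunctionBlock:
    name: str
    run: Callable[[Dict[str, Any]], Dict[str, Any]]    # consumes and produces named data

@dataclass
class InspectionWorkflow:
    blocks: List[FunctionBlock]
    protocol: List[str] = field(default_factory=list)

    def execute(self, context: Dict[str, Any]) -> Dict[str, Any]:
        for block in self.blocks:                       # event: previous block finished
            context.update(block.run(context))
            self.protocol.append(f"{block.name}: done")
        return context

def align_measurement(ctx):
    # Placeholder for best-fit positioning of the measurement data on the CAD model.
    return {"max_deviation_mm": 0.42}

def check_adaptability(ctx):
    # Objective criterion: the deviation must stay inside the specified tolerance band.
    return {"adaptive_machining_possible": ctx["max_deviation_mm"] <= ctx["tolerance_mm"]}

workflow = InspectionWorkflow([
    FunctionBlock("position measurement data", align_measurement),
    FunctionBlock("check geometrical criteria", check_adaptability),
])
result = workflow.execute({"tolerance_mm": 0.5})        # repair-case tolerance (assumed)
print(result["adaptive_machining_possible"], workflow.protocol)
```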

Keywords: adaptive, CAx, function blocks, turbomachinery

Procedia PDF Downloads 298
59 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus

Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert

Abstract:

Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases and on to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management; however, most existing buildings do not have a BIM model. Creating a compatible BIM for existing buildings is very challenging: it requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties in such projects are defining the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the existing terrain that surrounds buildings into the digital model is essential in order to run simulations such as flood simulation, energy simulation, etc. Replicating the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D positions, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to pavement material types, vegetation types and heights, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used in order to provide a 3D BIM model for the site and the existing buildings, based on the case study of the 'Ecole Spéciale des Travaux Publiques (ESTP Paris)' school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and heights between 50 and 68 meters. In this work, the campus precise levelling grid according to the NGF-IGN69 altimetric system and the grid control points in the RGF93 (Réseau Géodésique Français) / Lambert 93 French system are computed with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, and (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is produced, and the DTM is also modeled. The process of altimetric determination is complex and requires effort in order to collect and analyze data in multiple formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC) and ReViT (RVT) are generated, and checking the interoperability between BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
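As an illustration of the georeferencing step, the sketch below (assuming the pyproj library) projects a GNSS fix into the RGF93 / Lambert 93 planimetric system (EPSG:2154). The sample longitude/latitude is a placeholder, and NGF-IGN69 heights would additionally require the IGN levelling/geoid grids, which are not handled here.

```python
from pyproj import Transformer

# WGS84 geographic coordinates -> RGF93 / Lambert 93 plane coordinates (metres).
to_lambert93 = Transformer.from_crs("EPSG:4326", "EPSG:2154", always_xy=True)

lon, lat = 2.33, 48.79                            # placeholder GNSS fix near the campus
east, north = to_lambert93.transform(lon, lat)
print(f"E = {east:.2f} m, N = {north:.2f} m")     # grid control point in Lambert 93
```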

Keywords: building information modeling, digital terrain model, existing buildings, interoperability

Procedia PDF Downloads 114
58 Reagentless Detection of Urea Based on ZnO-CuO Composite Thin Film

Authors: Neha Batra Bali, Monika Tomar, Vinay Gupta

Abstract:

A reagentless biosensor for the detection of urea based on a ZnO-CuO composite thin film is presented in the following work. Biosensors have immense potential for varied applications ranging from environmental and clinical testing to health care and cell analysis. The immense growth in the field of biosensors is due to the huge requirement in today's world for techniques that are both cost-effective and accurate for the prevention of disease manifestation. The human body comprises numerous biomolecules which, at their optimum levels, are essential for functioning; however, mismanaged levels of these biomolecules result in major health issues. Urea is one of the key biomolecules of interest, and its estimation is of paramount significance not only for the healthcare sector but also from an environmental perspective. If the level of urea in human blood/serum is abnormal, i.e., above or below the physiological range (15-40 mg/dl), it may lead to conditions such as renal failure, hepatic failure, nephritic syndrome, cachexia, urinary tract obstruction, dehydration, shock, burns, gastrointestinal disorders, etc. Various metal nanoparticles, conducting polymers, metal oxide thin films, etc. have been exploited as matrices to immobilize urease in fabricating urea biosensors. Among them, zinc oxide (ZnO), a semiconducting metal oxide with a wide band gap, is of immense interest as an efficient matrix in biosensors by virtue of its natural abundance, biocompatibility, good electron communication features and high isoelectric point (9.5). In spite of being such an attractive candidate, ZnO does not possess a redox couple of its own, which necessitates the use of electroactive mediators for electron transfer between the enzyme and the electrode, thereby hindering the realization of integrated and implantable biosensors. In the present work, an effort has been made to fabricate a matrix based on a ZnO-CuO composite prepared by the pulsed laser deposition (PLD) technique in order to incorporate redox properties into the ZnO matrix and to utilize it for reagentless biosensing applications. The prepared bioelectrode Urs/(ZnO-CuO)/ITO/glass exhibits high sensitivity (70 µA mM⁻¹ cm⁻²) for the detection of urea (5-200 mg/dl) with high stability (shelf life > 10 weeks) and good selectivity (interference < 4%). The enhanced sensing response obtained for the composite matrix is attributed to efficient electron exchange between the ZnO-CuO matrix and the immobilized enzymes, and the subsequent fast transfer of the generated electrons to the electrode via the matrix. The response is encouraging for fabricating a reagentless urea biosensor based on the ZnO-CuO matrix.

Keywords: biosensor, reagentless, urea, ZnO-CuO composite

Procedia PDF Downloads 290
57 Interaction between Cognitive Control and Language Processing in Non-Fluent Aphasia

Authors: Izabella Szollosi, Klara Marton

Abstract:

Aphasia can be defined as a weakness in accessing linguistic information. Accessing linguistic information is strongly related to information processing, which in turn is associated with the cognitive control system. According to the literature, a deficit in the cognitive control system interferes with language processing and contributes to non-fluent speech performance. The aim of our study was to explore this hypothesis by investigating how cognitive control interacts with language performance in participants with non-fluent aphasia. Cognitive control is a complex construct that includes working memory (WM) and the ability to resist proactive interference (PI). Based on previous research, we hypothesized that impairments in domain-general (DG) cognitive control abilities have negative effects on language processing. In contrast, better DG cognitive control functioning supports goal-directed behavior in language-related processes as well. Since stroke itself might slow down information processing, it is important to examine its negative effects on both cognitive control and language processing. Participants (N = 52) in our study were individuals with non-fluent Broca's aphasia (N = 13), individuals with transcortical motor aphasia (N = 13), individuals with stroke damage without aphasia (N = 13), and unimpaired speakers (N = 13). All participants performed various computer-based tasks targeting cognitive control functions such as WM and resistance to PI in both linguistic and non-linguistic domains. Non-linguistic tasks targeted primarily DG functions, while linguistic tasks targeted more domain-specific (DS) processes. The results showed that participants with Broca's aphasia differed from the other three groups in the non-linguistic tasks: they performed significantly worse even in the baseline conditions. In contrast, we found a different performance profile in the linguistic domain, where the control group differed from all three stroke-related groups. The three groups with impairment performed more poorly than the controls but similarly to each other in the verbal baseline condition. In the more complex verbal PI condition, however, participants with Broca's aphasia performed significantly worse than all the other groups. Participants with Broca's aphasia demonstrated the most severe language impairment and the highest vulnerability in tasks measuring DG cognitive control functions. The results support the notion that the more severe the cognitive control impairment, the more severe the aphasia. Thus, our findings suggest a strong interaction between cognitive control and language. Individuals with the most severe and most general cognitive control deficit, participants with Broca's aphasia, showed the most severe language impairment, while individuals with better DG cognitive control functions demonstrated better language performance. Although all participants with stroke damage showed impaired cognitive control functions in the linguistic domain, participants with better language skills also performed better in tasks that measured non-linguistic cognitive control functions. The overall results indicate that the level of cognitive control deficit interacts with language functions in individuals along the language spectrum (from severe to no impairment). However, future research is needed to determine any directionality.

Keywords: cognitive control, information processing, language performance, non-fluent aphasia

Procedia PDF Downloads 123
56 Design, Construction, Validation and Use of a Novel Portable Fire Effluent Sampling Analyser

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Current large-scale fire tests focus on flammability and heat release measurements. Smoke toxicity is not considered, despite it being a leading cause of death and injury in unwanted fires. A key reason could be that quantifying the individual toxic components present in a fire effluent is practically difficult and often requires specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours and particulate matter, which interact with each other. These interfere with the operation of the analytical instrumentation and must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The novel equipment aimed to be easily portable and able to run on battery or mains electricity; to allow calibration at the test site; to be capable of quantifying CO, CO2, O2, HCN, HBr, HCl, NOx and SO2 accurately and reliably; to be capable of independent data logging; to be capable of automated switchover of 7 bubblers; to be able to withstand fire effluents; to be simple to operate; to allow individual bubbler times to be pre-set; and to be capable of being controlled remotely. To test the analyser's functionality, it was used alongside the ISO/TS 19700 Steady State Tube Furnace (SSTF). A series of tests was conducted to assess the validity of the box analyser measurements and the data logging abilities of the apparatus, using PMMA and PA 6.6. The data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing using the ISO 9705 room corner test. The analyser was set up, calibrated and set to record smoke toxicity measurements in the doorway of the test room. The analyser operated without manual interference and successfully recorded data for all 12 of the tests conducted in the ISO room tests. At the end of each test, the analyser created a data file (formatted as .csv) containing the gas concentrations measured throughout the test, which does not require specialist knowledge to interpret. This validated the portable analyser's ability to monitor fire effluent without operator intervention at both bench and large scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work in large-scale fire testing for the quantification of smoke toxicity. The analyser is a cheaper, more accessible option for assessing smoke toxicity, mitigating the need for expensive equipment and specialist operators.

Keywords: smoke toxicity, large-scale tests, ISO 9705, analyser, novel equipment

Procedia PDF Downloads 78
55 Modification of Magneto-Transport Properties of Ferrimagnetic Mn₄N Thin Films by Ni Substitution and Their Magnetic Compensation

Authors: Taro Komori, Toshiki Gushi, Akihito Anzai, Taku Hirose, Kaoru Toko, Shinji Isogami, Takashi Suemasu

Abstract:

Ferrimagnetic antiperovskite Mn₄₋ₓNiₓN thin films exhibit both small saturation magnetization and rather large perpendicular magnetic anisotropy (PMA) when x is small. Both are suitable features for application in current-induced domain wall (DW) motion devices using spin transfer torque (STT). In this work, we successfully grew antiperovskite 30-nm-thick Mn₄₋ₓNiₓN epitaxial thin films on MgO(001) and STO(001) substrates by MBE in order to investigate their crystalline qualities and their magnetic and magneto-transport properties. Crystalline quality was investigated by X-ray diffraction (XRD). The magnetic properties were measured by vibrating sample magnetometry (VSM) at room temperature, and the anomalous Hall effect was measured with a physical properties measurement system, also at room temperature. The temperature dependence of the magnetization was measured with a VSM-superconducting quantum interference device. The XRD patterns indicate epitaxial growth of the Mn₄₋ₓNiₓN thin films on both substrates; those on STO(001) in particular show higher c-axis orientation thanks to better lattice matching. According to the VSM measurements, PMA was observed in Mn₄₋ₓNiₓN on MgO(001) when x ≤ 0.25 and on STO(001) when x ≤ 0.5, and MS decreased drastically with x; for example, MS of Mn₃.₉Ni₀.₁N on STO(001) was 47.4 emu/cm³. From the anomalous Hall resistivity (ρAH) of the Mn₄₋ₓNiₓN thin films on STO(001) with the magnetic field perpendicular to the plane, we found that Mr/MS was about 1 when x ≤ 0.25, which suggests large magnetic domains in the samples and suitable features for DW motion device applications. In contrast, such square curves were not observed for Mn₄₋ₓNiₓN on MgO(001), which we attribute to the difference in lattice matching. Furthermore, it is notable that although the sign of ρAH was negative when x = 0 and 0.1, it reversed to positive when x = 0.25 and 0.5. A similar reversal occurred in the temperature dependence of the magnetization: the magnetization of Mn₄₋ₓNiₓN on STO(001) increases with decreasing temperature when x = 0 and 0.1, while it decreases when x = 0.25. We consider that these reversals are caused by magnetic compensation occurring in Mn₄₋ₓNiₓN between x = 0.1 and 0.25. We expect the Mn atoms in the Mn₄₋ₓNiₓN crystal to have larger magnetic moments than the Ni atoms. The temperature dependence stated above can be explained if we assume that Ni atoms preferentially occupy the corner sites and that their magnetic moments have a different temperature dependence from those of the Mn atoms at the face-centered sites. At the compensation point, Mn₄₋ₓNiₓN is expected to show very efficient STT and ultrafast DW motion at a small current density. Moreover, if angular momentum compensation is found, the efficiency will be best optimized. In order to prove the magnetic compensation, X-ray magnetic circular dichroism will be performed. Energy dispersive X-ray spectrometry is a candidate method to determine the accurate composition ratios of the samples.

Keywords: compensation, ferrimagnetism, Mn₄N, PMA

Procedia PDF Downloads 135
54 Linguistic Competence Analysis and the Development of Speaking Instructional Material

Authors: Felipa M. Rico

Abstract:

Linguistic oral competence plays a vital role in attaining effective communication. Since English is considered a universally used language and a high-demand skill needed in the workplace, mastery is the expected output from learners. To achieve this, learners should be given integrated, differentiated tasks that help them develop and strengthen the expected skills. This study aimed to develop supplementary speaking instructional material to enhance the English linguistic competence of Grade 9 students in the areas of pronunciation, intonation and stress, voice projection, diction, and fluency. A descriptive analysis was utilized to analyze the students' level of speaking performance in order to employ appropriate strategies. There were two sets of respondents: 178 Grade 9 students selected through stratified random sampling, and the English teachers who evaluated the usefulness of the devised teaching materials. A teacher conducted a speaking test, and activities were employed to analyze the speaking needs of the students; observations and recordings were also used to evaluate the students' performance. The findings revealed that the English pronunciation of the students was slightly unclear at times but generally fair. There were lapses, but the students generally rated moderate in intonation and stress because of interference from other languages. In terms of voice projection, the students had an erratic, high-volume pitch. For diction, the students' ability to produce comprehensible language was limited, and as to fluency, their choice of vocabulary and use of structure were severely limited. Based on the analysis of the students' speaking needs, the supplementary material was devised on the basis of Nunan's IM model, incorporating contexts of daily life and global work settings, following the principle that language is best learned in actual, meaningful situations. To widen the mastery of the skill, a rich learning environment filled with a variety of instructional materials tends to foster faster acquisition of the requisite skills for sustained learning and development. The role of the IM is to encourage information to stick in the learners' minds, as what is seen is understood more than what is heard. Teachers said they found the IM 'very useful', which implies that English teachers could adopt the materials to improve the speaking skills of students. Further, teachers should provide varied opportunities for students to get involved in real-life situations where they can take turns asking and answering questions and share information related to the activities. This would minimize anxiety among students in the use of the English language.

Keywords: diction, fluency, intonation, instructional materials, linguistic competence

Procedia PDF Downloads 242
53 Profitability and Productivity Performance of the Selected Public Sector Banks in India

Authors: Sudipto Jana

Abstract:

Background and significance of the study: The banking industry acts as a catalyst for industrial and agricultural growth and also concerns the existence and welfare of the citizens. The banking system in India was characterized by unmatched growth and branch expansion in the pre-liberalization era. At the time of the financial sector reforms, the Reserve Bank of India issued regulatory norms concerning capital adequacy, income recognition, asset classification and provisioning that have increasingly converged with international best practices. Bank management continuously manages the success, effectiveness, productivity and performance of the bank, as good performance, high productivity and efficiency support the achievement of the bank management's targets as well as the aims of the bank. In a comparable way, the performance of any economy depends upon the expediency and effectiveness of its financial system, which establishes its economic growth indicators. Profitability and productivity are the most relevant parameters of any banking group. Keeping this in view, this study examines the profitability and productivity performance of selected public sector banks in India. Methodology: This study is based on secondary data obtained from the Reserve Bank of India database for the period between 2006 and 2015. The study purposively selects four commercial banks, namely State Bank of India, United Bank of India, Punjab National Bank and Allahabad Bank. In order to analyze performance in relation to profitability and productivity, productivity indicators in terms of capital adequacy ratio, burden ratio, business per employee, spread per employee and advances per employee, and profitability indicators in terms of return on assets, return on equity, return on advances and return on branch, have been considered. In the course of the analysis, descriptive statistics, correlation statistics and multiple regression have been used. Major findings: The descriptive statistics indicate that the productivity performance of State Bank of India is much more satisfactory than that of the other public sector banks in India, but the management of productivity is unsatisfactory for all the public sector banks under study. The correlation statistics point out that the profitability of the public sector banks is strongly positively related to productivity performance for all the banks under study. The multiple regression results show that when profitability increases, profit per employee increases and net non-performing assets decrease. Concluding statements: The productivity and profitability performance of United Bank of India, Allahabad Bank and Punjab National Bank is unsatisfactory due to poor management of asset quality as well as management efficiency. Government intervention is needed so that profitability and productivity performance increase in the near future.
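A minimal sketch of the analysis pipeline described above (descriptive statistics, correlation, and multiple regression), assuming pandas and statsmodels; the figures below are synthetic placeholders, not RBI data, and the indicator names simply mirror those listed in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 40   # e.g. 4 banks x 10 years (2006-2015)
df = pd.DataFrame({
    "business_per_employee": rng.uniform(40, 160, n),    # productivity indicators
    "spread_per_employee":   rng.uniform(1, 6, n),
    "capital_adequacy":      rng.uniform(9, 14, n),
    "return_on_assets":      rng.uniform(0.2, 1.5, n),   # profitability indicator
})

print(df.describe())    # descriptive statistics
print(df.corr())        # correlation between productivity and profitability indicators

# Multiple regression: profitability (ROA) on the productivity indicators.
X = sm.add_constant(df[["business_per_employee", "spread_per_employee", "capital_adequacy"]])
model = sm.OLS(df["return_on_assets"], X).fit()
print(model.summary())
```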

Keywords: India, productivity, profitability, public sector banks

Procedia PDF Downloads 433
52 The MHz Frequency Range EM Induction Device Development and Experimental Study for Low Conductive Objects Detection

Authors: D. Kakulia, L. Shoshiashvili, G. Sapharishvili

Abstract:

The results of the study relate to the direction of plastic mine detection research using electromagnetic induction, the development of appropriate equipment, and the evaluation of the expected results. Electromagnetic induction sensing is effectively used in the detection of metal objects in the soil and in the discrimination of unexploded ordnance. Metal objects interact well with a low-frequency alternating magnetic field, and their electromagnetic response can be detected in the low-frequency range even when they are placed in the ground. Detection of plastic objects such as plastic mines by electromagnetic induction is associated with difficulties: the interaction of non-conducting bodies or low-conductive objects with a low-frequency alternating magnetic field is very weak. In the high-frequency range, where wave processes already take place, the interaction increases, but interactions with other, distant objects also increase; a complex interference picture is formed, and the extraction of useful information becomes difficult. Sensing by electromagnetic induction in the intermediate MHz frequency range is the subject of this research. The concept of detecting plastic mines in this range can be based on studying the electromagnetic response of a non-conductive cavity in a low-conductivity environment, or on detecting the small metal components in plastic mines, taking their constructive features into account. A detector node based on the Analog Devices AD8302 amplitude and phase detector has been developed for the experimental studies. The node has two inputs: at one input, the node receives a sinusoidal signal from the generator, to which a transmitting coil is also connected; the receiver coil is attached to the second input. An additional circuit provides the option to amplify the signal output from the receiver coil by 20 dB. The node has two outputs, and the voltages obtained at the outputs reflect the ratio of the amplitudes and the phase difference of the input harmonic signals. Experimental measurements were performed at different positions of the transmitter and receiver coils over the frequency range 1-20 MHz. A Tektronix AFG3052C arbitrary/function generator and the eight-channel high-resolution PicoScope 4824 oscilloscope were used in the experiments. Experimental measurements were also performed with a low-conductive test object. The results of the measurements and the comparative analysis show the capabilities of the simple detector node and the prospects for its further development in this direction. The experimental measurements are compared with and analyzed against the results of appropriate computer modeling based on the method of auxiliary sources (MAS). The experimental measurements are driven from the MATLAB environment. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation (SRNSF) (grant number NFR 17_523).
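A post-processing sketch for the two detector-node output voltages, converting them to an amplitude ratio (dB) and a phase difference (degrees). The 30 mV/dB and 10 mV/degree slopes and the 0.9 V centre points are nominal AD8302 datasheet values assumed here, and the sign convention and exact intercepts would in practice come from calibration; the example voltages are placeholders.

```python
import numpy as np

MAG_SLOPE_V_PER_DB = 0.030     # assumed nominal AD8302 magnitude scale factor
PHS_SLOPE_V_PER_DEG = 0.010    # assumed nominal AD8302 phase scale factor
CENTER_V = 0.900               # assumed nominal centre voltage (0 dB / 90 degrees)

def decode_ad8302(v_mag, v_phs):
    """Map the measured output voltages to gain ratio (dB) and phase difference (deg)."""
    gain_db = (np.asarray(v_mag) - CENTER_V) / MAG_SLOPE_V_PER_DB
    phase_deg = 90.0 - (np.asarray(v_phs) - CENTER_V) / PHS_SLOPE_V_PER_DEG
    return gain_db, phase_deg

# Example: two voltage pairs logged by the oscilloscope at one coil arrangement.
gain_db, phase_deg = decode_ad8302(v_mag=[0.96, 0.93], v_phs=[0.80, 0.85])
print(gain_db, phase_deg)   # small shifts here may indicate a low-conductive target
```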

Keywords: EM induction sensing, detector, plastic mines, remote sensing

Procedia PDF Downloads 149
51 Effective Emergency Response and Disaster Prevention: A Decision Support System for Urban Critical Infrastructure Management

Authors: M. Shahab Uddin, Pennung Warnitchai

Abstract:

Currently, more than half of the world's population lives in cities, and the number and sizes of cities are growing faster than ever. Cities rely on the effective functioning of complex and interdependent critical infrastructure networks to provide public services, enhance the quality of life, and protect the community from hazards and disasters. At the same time, the complex connectivity and interdependency among urban critical infrastructures bring management challenges and make the urban system prone to the domino effect. Unplanned rapid growth, increased connectivity and interdependency among the infrastructures, resource scarcity, and many other socio-political factors affect the typical state of an urban system and make it susceptible to numerous sorts of disruption. In addition to internal vulnerabilities, urban systems consistently face external threats from natural and man-made hazards. Cities are not just complex, interdependent systems; they also make up hubs of the economy, politics, culture, education, etc. For survival and sustainability, complex urban systems need to manage their vulnerabilities and hazardous incidents more wisely and more interactively. Coordinated management in such systems offers huge potential for absorbing negative effects should some components function improperly; on the other hand, ineffective management during a similar situation of overall disorder from hazard devastation may make the system more fragile and push it towards ultimate collapse. Accordingly, the current research hypothesizes that a hazardous event starts its journey as an emergency, and the system's internal vulnerability and response capacity determine its destination. Connectivity and interdependency among the urban critical infrastructures during this stage may transform vulnerabilities into a dynamic damaging force: an emergency may turn into a disaster in the absence of effective management, and mismanagement or lack of management may lead the situation towards a catastrophe. Situation awareness and factual decision-making are the keys to winning this battle. The current research proposes a contextual decision support system for an urban critical infrastructure system that integrates three models: 1) a damage cascade model, which demonstrates damage propagation among the infrastructures through their connectivity and interdependency; 2) a restoration model, a dynamic restoration process for each individual infrastructure based on the facility damage state and the overall disruptions in the surrounding support environment; and 3) an optimization model that ensures optimized utilization and distribution of the available resources in and among the facilities. All three models are tightly connected and mutually interdependent, and together they can assess the situation and forecast the dynamic outputs of every input. Moreover, this integrated model will support disaster managers and decision makers in checking all alternative decisions before any implementation and will help produce the maximum possible outputs from the available limited inputs. The proposed model will not only help reduce the extent of the damage cascade but will also ensure priority restoration and optimize resource utilization through adaptive and collaborative management. Complex systems predictably fail, but in unpredictable ways; system understanding, situation awareness, and factual decisions may significantly help an urban system survive and sustain itself.
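A minimal sketch of the damage-cascade idea only (the facility names, dependency links, and the 50% failure threshold are illustrative assumptions, not the paper's model): damage propagates along dependency links whenever a facility loses enough of the services it depends on.

```python
from collections import deque

# facility -> facilities it depends on (illustrative urban dependency graph)
dependencies = {
    "water":    ["power"],
    "hospital": ["power", "water", "telecom"],
    "telecom":  ["power"],
    "power":    [],
}

def cascade(initially_damaged, dependencies, failure_share=0.5):
    """Return the set of failed facilities once the cascade settles."""
    failed = set(initially_damaged)
    queue = deque(initially_damaged)
    while queue:
        queue.popleft()
        for facility, deps in dependencies.items():
            if facility in failed or not deps:
                continue
            lost = sum(dep in failed for dep in deps) / len(deps)
            if lost >= failure_share:          # facility loses too much of its support
                failed.add(facility)
                queue.append(facility)
    return failed

print(cascade({"power"}, dependencies))
# all four facilities end up failed: a single power outage cascades system-wide
```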

Keywords: disaster prevention, decision support system, emergency response, urban critical infrastructure system

Procedia PDF Downloads 228
50 Comprehensive, Up-to-Date Climate System Change Indicators, Trends and Interactions

Authors: Peter Carter

Abstract:

Comprehensive climate change indicators and trends inform the state of the climate (system) with respect to present and future climate change scenarios and the urgency of mitigation and adaptation. With data records now going back many decades, indicator trends can complement model projections. They are provided as datasets by several climate monitoring centers, reviewed by state-of-the-climate reports, and documented by the IPCC assessments. Up-to-date indicators are provided here. Rates of change are instructive, as are extremes. The indicators include greenhouse gas (GHG) emissions (natural and synthetic), cumulative CO2 emissions, atmospheric GHG concentrations (including CO2 equivalent), stratospheric ozone, surface ozone, radiative forcing, global average temperature increase, land temperature increase, zonal temperature increases, carbon sinks, soil moisture, sea surface temperature, ocean heat content, ocean acidification, ocean oxygen, glacier mass, Arctic temperature, Arctic sea ice (extent and volume), northern hemisphere snow cover, permafrost indices, Arctic GHG emissions, ice sheet mass, and sea level rise. Global warming is not the most reliable single metric for the climate state; radiative forcing, atmospheric CO2 equivalent, and ocean heat content are more reliable. Global warming does not capture future commitment, whereas atmospheric CO2 equivalent does. Cumulative carbon is used for estimating carbon budgets. The forcing of aerosols is briefly addressed. Indicator interactions are included; in particular, indicators can provide insight into several crucial global-warming-amplifying feedback loops, which are explained. All indicators are increasing (adversely), most as fast as ever and some faster. One particularly pressing indicator is the rapidly increasing global atmospheric methane; in this respect, methane emissions and sources are covered in more detail. Indicators used in assessing safe planetary boundaries are also included. The indicators are considered with respect to recently published papers on possible catastrophic climate change and climate system tipping thresholds, and they are relevant to climate change policy. In particular, relevant policies include the 2015 Paris Agreement on "holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels" and the 1992 UN Framework Convention on Climate Change, which has as its objective the "stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system."
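As a worked illustration of why atmospheric CO2 equivalent serves as a convenient aggregate indicator, the sketch below inverts the simplified CO2 forcing expression RF = 5.35 ln(C/C0) (Myhre et al., 1998) to express a total radiative forcing as a CO2-equivalent concentration; the 3.3 W/m² forcing is a placeholder input, not a figure from this abstract, and C0 = 278 ppm is the commonly used pre-industrial reference.

```python
import math

def co2_equivalent_ppm(total_forcing_w_m2, c0_ppm=278.0, alpha=5.35):
    """Invert RF = alpha * ln(C/C0) to obtain the CO2-equivalent concentration."""
    return c0_ppm * math.exp(total_forcing_w_m2 / alpha)

print(round(co2_equivalent_ppm(3.3)))   # ~515 ppm CO2e for 3.3 W/m2 of total forcing
```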

Keywords: climate change, climate change indicators, climate change trends, climate system change interactions

Procedia PDF Downloads 105
49 Disabled Graduate Students’ Experiences and Vision of Change for Higher Education: A Participatory Action Research Study

Authors: Emily Simone Doffing, Danielle Kohfeldt

Abstract:

Disabled students are underrepresented in graduate-level degree enrollment and completion. There is limited research on disabled students' progression during the pandemic. Disabled graduate students (DGS) face unique interpersonal and institutional barriers, yet limited research explores these barriers, buffering facilitators, and aids to academic persistence. This study adopts an asset-based, embodied disability approach using the critical pedagogy theoretical framework instead of a deficit research approach. The Participatory Action Research (PAR) paradigm, the critical pedagogy theoretical framework, and emancipatory disability research share the same purpose: creating a socially just world through reciprocal learning. This study is one of few, if not the first, to center solely on DGS’ lived understanding using a PAR epistemology. With a PAR paradigm, participants and investigators work as a research team democratically at every stage of the research process. PAR has individual and systemic outcomes. PAR lessens the researcher-participant power gap and elevates a marginalized community’s knowledge as expertise for local change. PAR and critical pedagogy work toward enriching everyone involved with empowerment, civic engagement, knowledge proliferation, socio-cultural reflection, skills development, and active meaning-making. The PAR process unveils the tensions between disability and graduate school in policy and practice during the pandemic; likewise, institutional and ideological tensions influence the PAR process. This project is recruiting 10 DGS until September through purposive and snowball sampling. DGS will collectively practice praxis during four monthly focus groups in the fall 2023 semester. Participant researchers can attend a focus group or an interview, both with field notes. September will be our orientation and first monthly meeting; it will include access needs check-ins, icebreakers, consent form review, a group agreement, an introduction to PAR, a research ethics discussion, research goals, and potential research topics. The October and November meetings will be devoted to dialogue about lived experiences during our collaborative data collection. Sessions can be semi-structured with “framing questions,” which would be revised together. Field notes include observations that cannot be captured through audio. December will focus on local social action planning and dissemination. Finally, in January, there will be a post-study focus group for students' reflections on their experiences of PAR. Iterative analysis methods include transcribed audio, reflexivity, memos, thematic coding, analytic triangulation, and member checking. This research follows qualitative rigor and quality criteria: credibility, transferability, confirmability, and psychopolitical validity. Results include potential tension points, social action, individual outcomes, and recommendations for conducting PAR. Tension points have three components: dubious practices, contestable knowledge, and conflict. The dissemination of PAR recommendations will aid and encourage researchers to conduct future PAR projects with the disabled community. Identified stakeholders will be informed of DGS’ insider knowledge to drive social sustainability.

Keywords: participatory action research, graduate school, disability, higher education

Procedia PDF Downloads 63
48 The Impact of Team Heterogeneity and Team Reflexivity on Entrepreneurial Decision-Making: An Empirical Study in China

Authors: Chang Liu, Rui Xing, Liyan Tang, Guohong Wang

Abstract:

Entrepreneurial actions are based on entrepreneurial decisions. The quality of decisions influences entrepreneurial activities and subsequent new venture performance. Uncertainty in the surroundings puts heightened demands on the team as a whole and on each team member. Diverse team composition provides rich information on which a team can draw when making complex decisions. However, team heterogeneity may cause emotional conflicts, which are adverse to team outcomes. Thus, the effects of team heterogeneity on team outcomes are complex. Although team heterogeneity is an essential factor influencing entrepreneurial decision-making, there is a lack of empirical analysis of the conditions under which team heterogeneity plays a positive role in promoting decision-making quality. Entrepreneurial teams always struggle with complex tasks, and how a team shapes its teamwork is key to resolving constant issues. As a collective regulatory process, team reflexivity is characterized by continuous joint evaluation and discussion of team goals, strategies, and processes, and by their adaptation to current or anticipated circumstances. It enables diversified information to be shared and overtly discussed. Instead of interpreting opposing opinions as hostile, team members take them as useful insights from different perspectives. Team reflexivity leads to better integration of expertise and avoids the interference of negative emotions and conflict. Therefore, we propose that team reflexivity is a conditional factor that influences the impact of team heterogeneity on high-quality entrepreneurial decisions. In this study, we identify team heterogeneity as a crucial determinant of entrepreneurial decision quality. Integrating the literature on decision-making and team heterogeneity, we investigate the relationship between team heterogeneity and entrepreneurial decision-making quality, treating team reflexivity as a moderator. We tested our hypotheses using the hierarchical regression method and data gathered from 63 teams and 205 individual members from 45 new firms in China's first-tier cities such as Beijing, Shanghai, and Shenzhen. This research found that both teams' education heterogeneity and teams' functional background heterogeneity were significantly positively related to entrepreneurial decision-making quality, and the positive relation was stronger in teams with a high level of team reflexivity; teams' specialization-of-education heterogeneity was negatively related to decision-making quality, and the negative relationship was weaker in teams with a high level of team reflexivity. We offer two contributions to the decision-making and entrepreneurial team literatures. First, our study enriches understanding of the role of entrepreneurial team heterogeneity in entrepreneurial decision-making quality. Unlike previous entrepreneurial decision-making literature, which focuses more on the decision-making modes of entrepreneurs and the top management team, this study is a significant attempt to highlight that entrepreneurial team heterogeneity makes a unique contribution to generating high-quality entrepreneurial decisions. Second, this study introduced team reflexivity as the moderating variable to explore the boundary conditions under which entrepreneurial team heterogeneity plays its role.
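
The moderation effect described above is commonly tested by adding an interaction term in a hierarchical regression; the sketch below shows that general pattern with statsmodels, using hypothetical variable names and placeholder data rather than the authors' dataset.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Placeholder team-level data; in a study like this, measures would be
    # aggregated from team members' survey responses.
    df = pd.DataFrame({
        "decision_quality":  [3.8, 4.1, 2.9, 4.5, 3.2, 4.0],
        "edu_heterogeneity": [0.4, 0.7, 0.2, 0.8, 0.3, 0.6],
        "reflexivity":       [3.5, 4.2, 2.8, 4.6, 3.0, 3.9],
    })

    # Step 1: main effects only; Step 2: add the interaction (moderation) term.
    step1 = smf.ols("decision_quality ~ edu_heterogeneity + reflexivity", df).fit()
    step2 = smf.ols("decision_quality ~ edu_heterogeneity * reflexivity", df).fit()

    # A significant, positive interaction coefficient would indicate that the
    # heterogeneity-quality relationship is stronger under high reflexivity.
    print(step2.params["edu_heterogeneity:reflexivity"], step2.rsquared - step1.rsquared)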

Keywords: decision-making quality, entrepreneurial teams, education heterogeneity, functional background heterogeneity, specialization of education heterogeneity

Procedia PDF Downloads 119
47 Comparative Vector Susceptibility for Dengue Virus and Their Co-Infection in A. aegypti and A. albopictus

Authors: Monika Soni, Chandra Bhattacharya, Siraj Ahmed Ahmed, Prafulla Dutta

Abstract:

Dengue is now a globally important arboviral disease. Extensive vector surveillance has already established A. aegypti as a primary vector, but A. albopictus is now accelerating the situation through gradual adaptation to human surroundings. Global destabilization and a gradual climatic shift with rising temperature have significantly expanded the geographic range of these species. These versatile vectors also host Chikungunya, Zika, and yellow fever virus. The biggest challenge faced by endemic countries now is the upsurge in reported co-infections with multiple serotypes and virus co-circulation. To foster vector control interventions and mitigate disease burden, there is a growing need for knowledge on vector susceptibility and viral tolerance in response to multiple infections. To address our understanding of transmission dynamics and reproductive fitness, both vectors were exposed to single and dual combinations of all four dengue serotypes by artificial feeding and followed up to the third generation. Artificial feeding revealed a significant difference in feeding rate between the two species, with A. albopictus a poor artificial feeder (35-50%) compared to A. aegypti (95-97%). Robust sequential screening of viral antigen in mosquitoes was carried out by Dengue NS1 ELISA, RT-PCR and quantitative PCR. To observe viral dissemination in different mosquito tissues, an indirect immunofluorescence assay was performed. Results showed that both vectors were initially infected with all dengue (1-4) serotypes and their co-infection combinations (D1 and D2, D1 and D3, D1 and D4, D2 and D4). In the case of DENV-2, there was a significant difference in the peak titer observed at the 16th day post infection. When exposed to dual infections, A. aegypti supported all combinations of virus, whereas A. albopictus maintained only single infections in successive days. There was a significant negative effect on the fecundity and fertility of both vectors compared to control (P_ANOVA < 0.001). In dengue 2 infected mosquitoes, fecundity in the parent generation was significantly higher (P_Bonferroni < 0.001) for A. albopictus compared to A. aegypti, but there was a complete loss of fecundity from the second to the third generation for A. albopictus. It was observed that A. aegypti becomes infected with multiple serotypes frequently, even at low viral titres, compared to A. albopictus. Possible reasons for this could be the presence of Wolbachia infection in A. albopictus, the mosquito innate immune response, small RNA interference, etc. Based on these observations, it could be anticipated that transovarial transmission may not be an important phenomenon for clinical disease outcome, due to the absence of viral positivity by the third generation. Also, Dengue NS1 ELISA can be used for preliminary viral detection in mosquitoes, as more than 90% of the samples were found positive compared to RT-PCR and viral load estimation.
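
The statistical comparisons reported above (P_ANOVA, P_Bonferroni) follow the common pattern of an omnibus ANOVA followed by Bonferroni-adjusted pairwise tests; the sketch below illustrates that pattern only, with placeholder fecundity counts rather than the study's data.

    from scipy import stats

    # Placeholder fecundity (eggs per female) for three illustrative groups.
    control    = [78, 82, 75, 80, 77]
    aegypti_d2 = [65, 60, 70, 62, 66]
    albo_d2    = [88, 92, 85, 90, 87]

    # Omnibus one-way ANOVA across groups.
    f_stat, p_anova = stats.f_oneway(control, aegypti_d2, albo_d2)
    print("p_anova =", p_anova)

    # Pairwise t-tests with Bonferroni correction (p multiplied by the
    # number of comparisons, capped at 1).
    pairs = [("control", control, "aegypti_d2", aegypti_d2),
             ("control", control, "albo_d2", albo_d2),
             ("aegypti_d2", aegypti_d2, "albo_d2", albo_d2)]
    for name_a, a, name_b, b in pairs:
        t, p = stats.ttest_ind(a, b)
        print(name_a, "vs", name_b, "p_bonferroni =", min(1.0, p * len(pairs)))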

Keywords: co-infection, dengue, reproductive fitness, viral quantification

Procedia PDF Downloads 203
46 From Talk to Action: Tackling Africa’s Pollution and Climate Change Problem

Authors: Ngabirano Levis

Abstract:

One of Africa’s major environmental challenges remains air pollution. In 2017, UNICEF estimated that over 400,000 children in Africa died as a result of indoor pollution, while 350 million children remain exposed to the risks of indoor pollution due to the use of biomass and burning of wood for cooking. Over time, indeed, the major causes of mortality across Africa are shifting from unsafe water, poor sanitation, and malnutrition to ambient and household indoor pollution, and greenhouse gas (GHG) emissions remain a key factor in this. In addition, studies by the OECD estimated that the economic cost of premature deaths due to Ambient Particulate Matter Pollution (APMP) and Household Air Pollution across Africa in 2013 was about 215 billion and 232 billion US dollars, respectively. This is not only a huge cost for a continent where over 41% of the Sub-Saharan population lives on less than 1.9 US dollars a day, but it also makes people extremely vulnerable to the negative effects of climate change and environmental degradation. Such impacts have led to extended droughts, flooding, health complications, and reduced crop yields, hence food insecurity. Climate change therefore poses a threat to global targets like poverty reduction, health, and famine prevention. Despite efforts towards mitigation, air pollution contributors like carbon dioxide emissions are on a generally upward trajectory across Africa. In Egypt, for instance, emission levels in 2010 had increased by over 141% from the 1990 baseline. Efforts like climate change adaptation and mitigation financing have also hit obstacles on the continent. The international community and developed nations stress that Africa still faces challenges of limited human, institutional and financial systems capable of attracting climate funding from these developed economies. By using a qualitative multi-case study method supplemented by interviews of key actors and comprehensive textual analysis of relevant literature, this paper dissects the key emission and air pollutant sources and their impact on the well-being of the African people, and puts forward suggestions as well as remedial mechanisms for these challenges. The findings reveal that whereas climate change mitigation plans appear comprehensive and good on paper for many African countries like Uganda, lingering political interference, limited research-guided planning, lack of population engagement, irrational resource allocation, and limited system and personnel capacity have largely impeded the realization of the set targets. Recommendations have been put forward to address the above climate change impacts that threaten the food security, health, and livelihoods of the people on the continent.

Keywords: Africa, air pollution, climate change, mitigation, emissions, effective planning, institutional strengthening

Procedia PDF Downloads 84
45 Tailoring Quantum Oscillations of Excitonic Schrodinger’s Cats as Qubits

Authors: Amit Bhunia, Mohit Kumar Singh, Maryam Al Huwayz, Mohamed Henini, Shouvik Datta

Abstract:

We report [https://arxiv.org/abs/2107.13518] experimental detection and control of a Schrodinger’s-Cat-like, macroscopically large, quantum-coherent state of a two-component Bose-Einstein condensate of spatially indirect electron-hole pairs, or excitons, using a resonant tunneling diode of III-V semiconductors. This provides access to millions of excitons as qubits to allow efficient, fault-tolerant quantum computation. In this work, we measure phase-coherent periodic oscillations in photo-generated capacitance as a function of an applied voltage bias and light intensity over a macroscopically large area. Periodic presence and absence of splitting of excitonic peaks in the optical spectra measured by photocapacitance point towards tunneling-induced variations in capacitive coupling between the quantum well and quantum dots. Observation of negative ‘quantum capacitance’ due to screening of charge carriers by the quantum well indicates Coulomb correlations of interacting excitons in the plane of the sample. We also establish that coherent resonant tunneling in this well-dot heterostructure restricts the available momentum space of the charge carriers within this quantum well. Consequently, the electric polarization vector of the associated indirect excitons collectively orients along the direction of the applied bias, and these excitons undergo Bose-Einstein condensation below ~100 K. Generation of interference beats in photocapacitance oscillations even with incoherent white light further confirms the presence of stable, long-range spatial correlation among these indirect excitons. We finally demonstrate collective Rabi oscillations of these macroscopically large, ‘multipartite’, two-level, coupled and uncoupled quantum states of the excitonic condensate as qubits. Therefore, our study not only brings the physics and technology of Bose-Einstein condensation within the reach of semiconductor chips but also opens up experimental investigations of the fundamentals of quantum physics using similar techniques. Operational temperatures of such two-component excitonic BECs can be raised further with a more densely packed, ordered array of QDs and/or by using materials having larger excitonic binding energies. However, fabrication of single crystals of 0D-2D heterostructures using 2D materials (e.g., transition metal dichalcogenides, oxides, perovskites, etc.) having higher excitonic binding energies is still an open challenge for semiconductor optoelectronics. As of now, these 0D-2D heterostructures can already be scaled up for mass production of miniaturized, portable quantum optoelectronic devices using the existing III-V and/or nitride-based semiconductor fabrication technologies.
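
For context on the collective Rabi oscillations reported above, the textbook two-level result (a standard expression, not a derivation specific to this excitonic system) gives the excited-state occupation under a drive of Rabi frequency \Omega_R and detuning \Delta as

    P_e(t) = \frac{\Omega_R^2}{\Omega_R^2 + \Delta^2} \, \sin^2\!\left( \tfrac{1}{2} \sqrt{\Omega_R^2 + \Delta^2}\, t \right),

which on resonance (\Delta = 0) reduces to P_e(t) = \sin^2(\Omega_R t / 2); the oscillation period observed in such measurements would then track the drive strength.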

Keywords: exciton, Bose-Einstein condensation, quantum computation, heterostructures, semiconductor Physics, quantum fluids, Schrodinger's Cat

Procedia PDF Downloads 180
44 Photoemission Momentum Microscopy of Graphene on Ir (111)

Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense

Abstract:

Graphene reveals a unique electronic structure that predetermines many intriguing properties such as massless charge carriers, optical transparency and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoelectron spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative form of ARPES based on full-field imaging of the whole Brillouin zone (BZ) and simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As the excitation source we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM, providing linearly polarized (both p- and s-polarizations) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E,kx,ky) of the full valence electronic structure in approx. 20 mins. Band dispersion stacks were measured in the energy range of 14 eV up to 23 eV in steps of 1 eV. The linearly dispersing graphene bands for all six K and K’ points were recorded simultaneously. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e., the LDAD) are very sensitive to the interaction of graphene bands with substrate bands. Second, the dark corridors are investigated in detail for both p- and s-polarized radiation. They appear as local distortions of the photoelectron current distribution and are induced by quantum mechanical interference of the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour with a mirror plane along the vertical axis. Moreover, two out of six show an oval shape while the rest are more circular, which clearly indicates an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines visible at energies around 22 eV is strongly reminiscent of Kikuchi lines in diffraction. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridor.
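
To illustrate how a 3D data stack I(E, kx, ky) of this kind is typically handled, the sketch below extracts a constant-energy momentum map and an energy-vs-kx cut with numpy; the array shape, energy grid, and slice parameters are illustrative assumptions, not the measured dataset.

    import numpy as np

    # Illustrative stack: 100 energy slices on a 256 x 256 momentum grid.
    n_e, n_kx, n_ky = 100, 256, 256
    stack = np.random.rand(n_e, n_kx, n_ky)      # placeholder for I(E, kx, ky)
    energies = np.linspace(14.0, 23.0, n_e)      # eV, matching the scan range above

    def constant_energy_map(stack, energies, e0, width=0.1):
        """Average all slices within +/- width of e0 to get a momentum map."""
        mask = np.abs(energies - e0) <= width
        return stack[mask].mean(axis=0)

    def ek_cut(stack, ky_index):
        """Energy-vs-kx dispersion cut at a fixed ky row."""
        return stack[:, :, ky_index]

    kmap = constant_energy_map(stack, energies, e0=22.0)  # e.g. near the Kikuchi-like lines
    cut = ek_cut(stack, ky_index=n_ky // 2)
    print(kmap.shape, cut.shape)  # (256, 256) (100, 256)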

Keywords: band structure, graphene, momentum microscopy, LDAD

Procedia PDF Downloads 340
43 Wellbeing Effects from Family Literacy Education: An Ecological Study

Authors: Jane Furness, Neville Robertson, Judy Hunter, Darrin Hodgetts, Linda Nikora

Abstract:

Background and significance: This paper describes the first use of community psychology theories to investigate family-focused literacy education programmes, enabling a wide range of wellbeing effects of such programmes to be identified for the first time. Evaluations of family literacy programmes usually focus on the economic advantage of gains in literacy skills. By identifying other effects on aspects of participants’ lives that are important to them, and how they occur, understanding of how such programmes contribute to wellbeing and social justice is augmented. Drawn from community psychology, an ecological systems-based, culturally adaptive framework for personal, relational and collective wellbeing illuminated outcomes of family literacy programmes that enhanced wellbeing and quality of life for adult participants, their families and their communities. All programmes, irrespective of their institutional location, could be similarly scrutinized. Methodology: The study traced the experiences of nineteen adult participants in four family-focused literacy programmes located in geographically and culturally different communities throughout New Zealand. A critical social constructionist paradigm framed this interpretive study. Participants were mainly Māori, Pacific Island, or European New Zealanders. Seventy-nine repeated conversational interviews were conducted over 18 months with the adult participants, programme staff and people who knew the participants well. Twelve participant observations of programme sessions were conducted, and programme documentation was reviewed. Latent theoretical thematic analysis of data drew on broad perspectives of literacy and ecological systems theory, network theory and holistic, integrative theories of wellbeing. Steps taken to co-construct meaning with participants included the repeated conversational interviews and participant checking of interview transcripts and section drafts. The researcher (this paper’s first author) followed methodological guidelines developed by indigenous peoples for non-indigenous researchers. Findings: The study found that the four family literacy programmes, differing in structure, content, aims and foci, nevertheless shared common principles and practices that reflected programme staff’s overarching concern for people’s wellbeing along with their desire to enhance literacy abilities. A human rights- and strengths-based view of people, grounded in respect for diverse culturally based values and practices, was evident in staff expression of their values and beliefs and in their practices. This enacted stance influenced the outcomes of programme participation for the adult participants, their families and their communities. Alongside the literacy and learning gains identified, participants experienced positive social and relational events and changes, affirmation and strengthening of their culturally based values, and affirmation and building of positive identity. Systemically, the study identified the interconnectedness of programme effects with participants’ personal histories and circumstances, the flow-on of effects to other aspects of people’s lives and to their families and communities, and the personalised character of the pathways people journeyed towards enhanced wellbeing.
Concluding statement: This paper demonstrates the critical contribution of community psychology to a fuller understanding of family-focused educational programme outcomes than has been previously attainable, the meaning of these broader outcomes to people in their lives, and their role in wellbeing and social justice.

Keywords: community psychology, ecological theory, family literacy education, flow on effects, holistic wellbeing

Procedia PDF Downloads 256
42 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application

Authors: A. Mihoc, K. Cater

Abstract:

On the bridge of a ship, the officers look for visual aids to guide navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, buoys, and others. They are designed to help navigators calculate their position, establish their course or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids to navigation. The paper presents the usage of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical-navigation-related mobile AR applications have been limited to the leisure industry. If proved viable, this prototype can facilitate the creation of other similar applications that could help commercial officers with navigation. While adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet and Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird’s-eye-view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet’s sensors: GPS, gyroscope, accelerometer, compass and camera. Sea trials on board a Navy ship and a commercial ship revealed the end-users’ interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all with a high level of accuracy. However, during testing, several issues were highlighted which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port. This overloaded the display and required over 45 seconds to load the data. Therefore, extra filters for the navigational aids are being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference from the bridge of the ship generated a continuous compass error in the AR display that varied between 5 and 12 degrees. The deviation of the compass was consistent over the whole testing duration, so the team is now looking at the possibility of allowing users to manually calibrate the compass. It is expected that for the usage of AR in professional maritime contexts, further development of existing AR tools and hardware is needed. Designers will also need to implement a user-centered design approach in order to create better interfaces and display technologies for enhanced solutions to aid navigation.
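
A minimal sketch of the bearing-and-distance computation such a prototype performs between the ship's GPS fix and a charted aid, using standard great-circle formulas; the coordinates below are placeholders, not positions from the sea trials.

    import math

    def distance_and_bearing(lat1, lon1, lat2, lon2):
        """Haversine distance (metres) and initial bearing (degrees) from the
        ship position (lat1, lon1) to a navigational aid at (lat2, lon2)."""
        R = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)

        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        distance = 2 * R * math.asin(math.sqrt(a))

        y = math.sin(dlmb) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
        bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
        return distance, bearing

    # Placeholder positions: ship approaching a harbour light.
    print(distance_and_bearing(50.900, -1.400, 50.910, -1.395))

In an AR overlay, the returned bearing would be compared against the device compass heading (after any manual calibration offset) to place the aid's symbol on screen.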

Keywords: compass error, GPS, maritime navigation, mobile augmented reality

Procedia PDF Downloads 330
41 Fold and Thrust Belts Seismic Imaging and Interpretation

Authors: Sunjay

Abstract:

Plate tectonics is of great significance as it represents the spatial relationships of volcanic rock suites at plate margins, the distribution in space and time of the conditions of different metamorphic facies, the scheme of deformation in mountain belts, or orogens, and the association of different types of economic deposit. Orogenic belts are characterized by extensive thrust faulting, movements along large strike-slip fault zones, and extensional deformation that occurs deep within continental interiors. Within oceanic areas there are also regions of crustal extension and accretion in the backarc basins located on the landward sides of many destructive plate margins. Collisional orogens develop where a continent or island arc collides with a continental margin as a result of subduction; collisional and non-collisional orogens can be explained by differences in the strength and rheology of the continental lithosphere and by processes that influence these properties during orogenesis. Seismic imaging difficulties: in triangle zones, several factors reduce the effectiveness of seismic methods. The topography in the central part of the triangle zone is usually rugged and is associated with near-surface velocity inversions which degrade the quality of the seismic image. These characteristics lead to a low signal-to-noise ratio, inadequate penetration of energy through the overburden, poor geophone coupling with the surface, and wave scattering. Depth seismic imaging techniques: seismic processing alters the seismic data to suppress noise, enhance the desired signal (higher signal-to-noise ratio) and migrate seismic events to their appropriate locations in space and depth. Processing steps generally include analysis of velocities, static corrections, moveout corrections, stacking and migration. In exploration seismology, shadow zones are areas with no reflections (dead areas); they are common in the vicinity of faults and other discontinuous areas in the subsurface. Shadow zones result when energy from a reflector is focused on receivers that produce other traces, so reflectors are not shown in their true positions. Diffractions occur at discontinuities in the subsurface such as faults and velocity discontinuities (as at “bright spot” terminations), while the bow-tie effect is caused by deep-seated synclines. Further topics include seismic imaging of thrust faults and structural damage in deepwater thrust belts, imaging deformation in submarine thrust belts using seismic attributes, imaging thrust and fault zones using 3D seismic image processing techniques, checking balanced structural cross sections against seismic interpretation pitfalls, and the pitfalls and limitations in seismic attribute interpretation of tectonic features; such pitfalls can originate from any or all of the limitations of data acquisition, processing, and interpretation of the subsurface geology. Seismic attributes are routinely used to accelerate and quantify the interpretation of tectonic features in 3D seismic data: coherence (or variance) cubes delineate the edges of megablocks and faulted strata, curvature delineates folds and flexures, while spectral components delineate lateral changes in thickness and lithology. Attributes are also relevant to leakage surveillance for carbon capture and geological storage, because a fault can behave either as a seal or as a conduit transporting hydrocarbons to a trap.
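
For reference on the moveout correction step listed above, the standard normal-moveout relation for a single flat reflector (a textbook formula, not tied to any particular dataset discussed here) is

    t(x) = \sqrt{t_0^2 + \frac{x^2}{v_{\mathrm{NMO}}^2}}, \qquad \Delta t_{\mathrm{NMO}} = t(x) - t_0,

where t_0 is the zero-offset two-way time, x the source-receiver offset, and v_NMO the moveout velocity picked during velocity analysis; subtracting \Delta t_NMO flattens reflection events across offsets so that they stack constructively.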

Keywords: tectonics, seismic imaging, fold and thrust belts, seismic interpretation

Procedia PDF Downloads 70
40 Primary-Color Emitting Photon Energy Storage Nanophosphors for Developing High Contrast Latent Fingerprints

Authors: G. Swati, D. Haranath

Abstract:

Commercially available long-afterglow/persistent phosphors are proprietary materials, and hence the exact composition and phase responsible for their luminescent characteristics, such as initial intensity and afterglow luminescence time, are not known. Further, to generate various emission colors, commercially available persistent phosphors are physically blended with fluorescent organic dyes such as rhodamine, kiton and methylene blue. Blending phosphors with organic dyes results in complete color coverage of the visible spectrum; however, with time, such phosphors undergo thermal and photo-bleaching, which results in the loss of their true emission color. Hence, the current work is dedicated to studies on inorganic-based, thermally and chemically stable, primary-color-emitting nanophosphors, namely SrAl2O4:Eu2+, Dy3+, (CaZn)TiO3:Pr3+, and Sr2MgSi2O7:Eu2+, Dy3+. The SrAl2O4:Eu2+, Dy3+ phosphor exhibits strong excitation in the UV and visible region (280-470 nm) with a broad emission peak centered at 514 nm, the characteristic emission of the parity-allowed 4f65d1→4f7 transitions of Eu2+ (8S7/2→2D5/2). Sunlight-excitable Sr2MgSi2O7:Eu2+, Dy3+ nanophosphors emit blue light (464 nm) with Commission Internationale de l’Eclairage (CIE) coordinates of (0.15, 0.13), a color purity of 74%, and an afterglow time of > 5 hours for dark-adapted human eyes. The (CaZn)TiO3:Pr3+ phosphor system possesses high color purity (98%) and emits intense, stable and narrow red emission at 612 nm due to intra-4f transitions (1D2 → 3H4), with an afterglow time of 0.5 hour. The unusual persistent luminescence of these nanophosphors supersedes background effects without losing sensitive information, and the nanophosphors offer the advantages of visible-light excitation, negligible substrate interference, high-contrast bifurcation of the ridge pattern, and a non-toxic nature while revealing the ridge details of fingerprints. Both level 1 and level 2 features of a fingerprint can be studied, which are useful for classification, indexing, comparison and personal identification. A facile methodology to extract high-contrast fingerprints on non-porous and porous substrates in the dark, using a chemically inert, visible-light-excitable, nanosized phosphorescent label, is presented. The chemistry of the non-covalent physisorption interaction between the long-afterglow phosphor powder and the sweat residue in fingerprints is discussed in detail. Real-time fingerprint development on porous and non-porous substrates has also been performed. To conclude, apart from conventional dark-vision applications, the as-prepared primary-color-emitting afterglow phosphors are potential candidates for developing high-contrast latent fingerprints.
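
For reference, the color purity values quoted above are conventionally obtained from the CIE chromaticity diagram as excitation purity (a standard colorimetric definition rather than a method specific to this work):

    \text{purity} = \frac{\sqrt{(x_s - x_w)^2 + (y_s - y_w)^2}}{\sqrt{(x_d - x_w)^2 + (y_d - y_w)^2}},

where (x_s, y_s) are the sample chromaticity coordinates, (x_w, y_w) the white-point coordinates, and (x_d, y_d) the point where the line from the white point through the sample meets the spectrum locus (the dominant wavelength).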

Keywords: fingerprints, luminescence, persistent phosphors, rare earth

Procedia PDF Downloads 222
39 Subcutaneous Isosulfan Blue Administration May Interfere with Pulse Oximetry

Authors: Esra Yuksel, Dilek Duman, Levent Yeniay, Sezgin Ulukaya

Abstract:

Sentinel lymph node biopsy (SLNB) is a minimally invasive technique with lower morbidity in the axillary staging of breast cancer. Isosulfan blue stain is frequently used in SLNB and is regarded as safe. The present case report describes a severe decrement in SpO2 following isosulfan blue administration, together with skin and urine discoloration and inconsistency with the clinical picture, in a 67-year-old, 77 kg, ASA II female patient who underwent SLNB under general anesthesia. Ten minutes after subcutaneous administration of 10 ml of 1% isosulfan blue by the surgeons, in a patient who had been hemodynamically stable, SpO2 fell from 99% to 87%, and then to 75% within minutes despite 100% oxygen support. Meanwhile, blood pressure and EtCO2 monitoring were unremarkable. After confirming that the anesthesia device was working normally, that airway pressure had not increased and that the endotracheal tube was placed accurately, a blood sample was taken from the patient for arterial gas analysis. A severe increase in MetHb concentration was suspected, since SpO2 persisted at 75% although the inspired oxygen concentration was 100%, and a solution of 2500 mg ascorbic acid in 500 ml of 5% dextrose was given intravenously until the arterial blood gas results were obtained. However, the arterial blood gas results were as follows: pH 7.54, PaCO2 23.3 mmHg, PaO2 281 mmHg, SaO2 99%, and MetHb 2.7%. Biochemical analysis revealed a blood MetHb concentration of 2%. Since the arterial blood gas parameters were good, the patient's hemodynamics were stable and the methemoglobin concentration was not markedly elevated, she was extubated after surgery once she was relaxed, cooperative and breathing adequately. Despite the absence of respiratory or neurological distress, the SpO2 value increased only to 85% within 2 hours of 5 L/min oxygen support via face mask in the operating room after the patient was extubated. At that time, the skin, particularly over the upper part of her body, had turned blue, most markedly on the face. The plasma of the blood taken from the patient for biochemical analysis was blue, and the urine coming through the urinary catheter placed in the intensive care unit was also blue. Twelve hours after 5 L/min oxygen inhalation via a mask, the SpO2 reached 90%. During monitoring in the intensive care unit on the first postoperative day, the patient's facial and urine color were still blue, SpO2 was 92%, and arterial blood gas levels were as follows: pH 7.44, PaO2 76.1 mmHg, PaCO2 38.2 mmHg, SaO2 99%, and MetHb 1%. During monitoring on the ward on the second postoperative day, SpO2 was 95% without oxygen support and her facial and urine color had returned to normal. The patient was discharged on the third day without any problem. In conclusion, SLNB is a less invasive alternative to axillary dissection; however, false pulse oximeter readings due to pigment interference are a rare complication of this procedure. Arterial blood gas analysis should be used to confirm any fall in SpO2 readings during monitoring.
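
A brief note on the mechanism, sketched here from the general principle of two-wavelength pulse oximetry rather than from this case's data: the device estimates SpO2 from the ratio of pulsatile (AC) to non-pulsatile (DC) absorbance at the red and infrared wavelengths,

    R = \frac{AC_{660}/DC_{660}}{AC_{940}/DC_{940}}, \qquad \mathrm{SpO_2} \approx 110 - 25R \;\text{(commonly cited empirical calibration)},

so an additional absorber near 660 nm, such as isosulfan blue carried in the tissue bed and plasma, inflates R and depresses the displayed SpO2 even while arterial SaO2 and PaO2 remain normal, which is the dissociation observed in this patient.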

Keywords: isosulfan blue, pulse oximetry, SLNB, methemoglobinemia

Procedia PDF Downloads 315
38 An Exploratory Study of Changing Organisational Practices of Third-Sector Organisations in Mandated Corporate Social Responsibility in India

Authors: Avadh Bihari

Abstract:

Corporate social responsibility (CSR) has become a global parameter for defining corporates' ethical and responsible behaviour. It was a voluntary practice in India till 2013, driven by various guidelines, and has been a mandate since 2014 under the Companies Act, 2013. This has compelled corporates to redesign their CSR strategies by bringing structure, planning, accountability, and transparency into their processes, with a mandate to 'comply or explain'. Based on the author's M.Phil. dissertation, this paper presents the changes in organisational practices and institutional mechanisms of third-sector organisations (TSOs) through the theoretical frameworks of institutionalism and co-optation. It became an interesting case as India is the only country to have a law on CSR that mandates not only the reporting but the spending too. The space of CSR in India is changing rapidly and affecting multiple institutions, in the context of the changing roles of the state, market, and TSOs. Several factors, such as stringent regulation of foreign funding, mandatory CSR pushing corporates to look out for NGOs, and the dependency of Indian NGOs on CSR funds, have come to the fore almost simultaneously, which made this an important area of study. Further, the paper aims to address the gap in the literature on the effects of mandated CSR on the functioning of TSOs through the empirical and theoretical findings of this study. The author adopted an interpretivist position in this study to explore changes in organisational practices from the participants' experiences. Data were collected through in-depth interviews with five corporate officials, eleven officials from six TSOs, and two academicians, located in Mumbai and Delhi, India. The findings of this study show that the legislation has institutionalised CSR, and TSOs get co-opted in the process of implementing mandated CSR. Seventy percent of corporates in India implement their CSR projects through TSOs; this has affected the organisational practices of TSOs to a large extent. They are compelled to recruit an expert workforce, create new departments for monitoring & evaluation and communications, and adopt corporate project-implementation practices. These are attempts to institutionalise the TSOs so that they can produce calculated results as demanded by corporates. In this process, TSOs get co-opted in a struggle to secure funds and lose their autonomy. The normative, coercive, and mimetic isomorphisms of institutionalism come into play as corporates are mandated to take up CSR, thereby influencing the organisational practices of TSOs. These results suggest that corporates and TSOs require an understanding of each other's work culture to develop mutual respect and work towards the goal of sustainable development of communities. Further, TSOs need to retain their autonomy and their understanding of ground realities, without which they become an extension of the corporate funder. For a successful CSR project, engagement beyond funding is required from corporates, through involvement rather than interference. CSR-led community development can be structured by management practices to an extent, but this cannot overshadow the knowledge and experience of TSOs.

Keywords: corporate social responsibility, institutionalism, organisational practices, third-sector organisations

Procedia PDF Downloads 116