Search results for: team dynamics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4232


752 Numerical Simulation of Convective and Transport Processes in the Nocturnal Atmospheric Surface Layer

Authors: K. R. Sreenivas, Shaurya Kaushal

Abstract:

After sunset, under calm, clear-sky nocturnal conditions, the aerosol-laden air layer near the surface cools through radiative exchange with the upper atmosphere. Due to this cooling, the surface air-layer temperature can fall 2-6 °C below the ground-surface temperature. This unstable convective layer is capped above by a stable inversion boundary layer. Radiative divergence, together with convection within the surface layer, governs the vertical transport of heat and moisture. The microphysics of this layer has implications for the occurrence and growth of the fog layer. This configuration, a convective mixed layer beneath a stably stratified inversion layer, is a classic case of penetrative convection. In this study, we conduct numerical simulations of penetrative convection in the nocturnal atmospheric surface layer and elucidate its relevance to the dynamics of fog layers. We employ field and laboratory measurements of aerosol number density to model the strength of the radiative cooling. Our analysis encompasses horizontally averaged vertical profiles of temperature, density, and heat flux. The energetic incursion of air from the mixed layer into the stable inversion layer across the interface results in entrainment and growth of the mixed layer; modeling this growth is the key focus of our investigation. We identify the appropriate length scale to use in the Richardson number correlation, which allows us to estimate the entrainment rate and model the growth of the mixed layer. Our analysis of the mixed layer and the entrainment zone shows close agreement with previously reported laboratory experiments on penetrative convection. Additionally, we demonstrate how aerosol number density influences the growth or decay of the mixed layer.
Furthermore, our study suggests that the presence of fog near the ground surface can induce extensive vertical mixing, a phenomenon observed in field experiments.
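As a rough illustration of the Richardson-number entrainment closure described in this abstract, the sketch below integrates mixed-layer growth dh/dt = w_e with w_e = A·w*·Ri⁻ⁿ and a bulk Richardson number Ri = Δb·h/w*². The coefficient A, exponent n, and all input values are illustrative assumptions, not the authors' fitted parameters.

```python
import math

def entrainment_rate(w_star, delta_b, h, A=0.2, n=1.0):
    # Bulk Richardson number built on the mixed-layer depth h, the
    # convective velocity scale w_star, and the buoyancy jump delta_b
    # across the interface.
    Ri = delta_b * h / w_star ** 2
    # Richardson-number correlation w_e = A * w_star * Ri**(-n);
    # A and n here are placeholder constants, not the paper's values.
    return A * w_star * Ri ** (-n)

def grow_mixed_layer(h0, w_star, delta_b, dt, steps):
    # Forward-Euler integration of dh/dt = w_e to model mixed-layer growth.
    h = h0
    for _ in range(steps):
        h += entrainment_rate(w_star, delta_b, h) * dt
    return h
```

Because Ri grows with h, the entrainment rate slows as the mixed layer deepens, reproducing the decelerating growth typical of penetrative convection.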

Keywords: inversion layer, penetrative convection, radiative cooling, fog occurrence

Procedia PDF Downloads 72
751 Comprehensive Profiling and Characterization of Untargeted Extracellular Metabolites in Fermentation Processes: Insights and Advances in Analysis and Identification

Authors: Marianna Ciaccia, Gennaro Agrimi, Isabella Pisano, Maurizio Bettiga, Silvia Rapacioli, Giulia Mensa, Monica Marzagalli

Abstract:

Objective: Untargeted metabolomic analysis of extracellular metabolites is a powerful approach for comprehensively profiling the metabolites present in the extracellular space. In this study, we applied extracellular metabolomic analysis to investigate the metabolism of two probiotic microorganisms whose health benefits extend far beyond the digestive tract and the immune system. Methods: Extracellular metabolomic analysis employs a range of analytical technologies, including mass spectrometry (MS), which enables the identification of metabolites present in the fermentation media as well as the comparison of metabolic profiles under different experimental conditions. Multivariate statistical techniques such as principal component analysis (PCA) and partial least squares-discriminant analysis (PLS-DA) play a crucial role in uncovering metabolic signatures and understanding the dynamics of metabolic networks. Results: Supernatants from different fermentation processes (dairy-free and non-dairy-free media, and cell-free or pasteurized media) were subjected to metabolite profiling; these contained a complex mixture of metabolites, including substrates, intermediates, and end-products. This profiling provided insights into the metabolic activity of the microorganisms. The integration of advanced software tools facilitated the identification and characterization of metabolites across fermentation conditions and microorganism strains. Conclusions: Untargeted extracellular metabolomic analysis, combined with software tools, allowed the study of the metabolites consumed and produced during the fermentation of probiotic microorganisms. Ongoing advancements in data analysis methods will further enhance the application of extracellular metabolomic analysis in fermentation research, leading to improved bioproduction and more sustainable manufacturing processes.
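The core of the PCA step mentioned above is extracting dominant directions of variance from mean-centred intensity data. A minimal stdlib-only stand-in (power iteration for the first principal component; real metabolomics pipelines would use a dedicated package, and the toy data below is invented):

```python
import math

def first_pc(samples):
    # Each sample is a list of metabolite intensities. Mean-centre the
    # data, form the covariance matrix, then find its leading eigenvector
    # (the first principal component) by power iteration.
    n, d = len(samples), len(samples[0])
    means = [sum(row[j] for row in samples) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in samples]
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(200):  # power iteration converges to the top eigenvector
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

Projecting each supernatant's profile onto such components is what lets PCA separate experimental conditions in score plots.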

Keywords: biotechnology, metabolomics, lactic bacteria, probiotics, postbiotics

Procedia PDF Downloads 72
750 Implementation of Active Recovery at Immediate, 12 and 24 Hours Post-Training in Young Soccer Players

Authors: C. Villamizar, M. Serrato

Abstract:

In the pursuit of athletic performance, physical training plays a fundamental role; it imposes loads on the physiological and musculoskeletal systems of the human body, determined by the intensity and duration of exercise. Given the physical demands of both training and competition, an optimal relationship must be maintained with the post-effort recovery process, favoring overcompensation, which facilitates the restoration of energy potential and of protein synthesis in different tissues and allows muscle function to return to baseline, pre-exercise states. If this recovery process is not performed properly, an increased state of fatigue results. Active recovery is one of the strategies implemented in sport for a return to pre-exercise physiological states. However, some adverse effects have been suggested, such as the possibility of increasing the degradation of muscle glycogen and thus delaying its synthesis. It is therefore necessary to investigate the effects of active recovery applied at different times after the effort. The aim of this study was to determine the effects of active recovery performed at three different times (immediately, at 12 hours, and at 24 hours post-effort) on the biochemical marker creatine kinase in youth soccer players. A randomized controlled trial with allocation to three groups was performed: A, active recovery immediately after the effort; B, active recovery performed 12 hours after the effort; C, active recovery performed 24 hours after the effort. This study included 27 subjects belonging to a Colombian soccer team of the second division. Vital signs, weight, height, BMI, percentage of muscle mass, percentage of fat mass, and personal and family medical history were assessed. Velocity, explosive force, and blood creatine kinase (CK) were tested before and after the interventions.
The SAFT 90 protocol (soccer-specific aerobic field test) was applied to participants to generate fatigue. CK samples were taken one hour before the fatigue test, one hour after the fatigue protocol, and 48 hours after the initial CK sample. Mean age was 18.5 ± 1.1 years. Improvements in jumping and speed recovery were observed in all three groups (p < 0.05), but no statistically significant differences between groups were observed after recovery. In all participants, there was a significant increment of CK after SAFT 90 in all groups (median 103.1-111.1). The CK measurement after 48 hours reflected recovery in all groups; however, group C showed a decline below baseline levels of -55.5 (-96.3/-20.4), which is a notable finding. Other research has shown that CK does not return quickly to baseline, but our study shows that active recovery favors the clearance of CK, and that performing recovery 24 hours after the effort generates greater clearance of this biomarker.
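The group C figure of -55.5 (-96.3/-20.4) is a median percent change relative to baseline with its range. The arithmetic behind such a summary can be sketched as follows; the CK values below are invented for illustration, not the study's raw data.

```python
from statistics import median

def pct_change(baseline, value):
    # Percent change of CK relative to the pre-exercise baseline;
    # negative values mean clearance below baseline, as reported
    # for the 24-hour recovery group.
    return 100.0 * (value - baseline) / baseline

# Hypothetical CK values (U/L) for one group -- illustrative only.
baselines = [180.0, 200.0, 220.0]
post48 = [90.0, 89.0, 110.0]
changes = [pct_change(b, v) for b, v in zip(baselines, post48)]
group_median = median(changes)
```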

Keywords: active recuperation, creatine phosphokinase, post training, young soccer players

Procedia PDF Downloads 161
749 Sensory Interventions for Dementia: A Review

Authors: Leigh G. Hayden, Susan E. Shepley, Cristina Passarelli, William Tingo

Abstract:

Introduction: Sensory interventions are popular therapeutic and recreational approaches for people living with all stages of dementia. However, it is unknown which sensory interventions are used to achieve which outcomes across all subtypes of dementia. Methods: To address this gap, we conducted a scoping review of sensory interventions for people living with dementia. We searched the literature for any article published in English from 1 January 1990 to 1 June 2019 on any sensory or multisensory intervention targeted to people living with any kind of dementia which reported on patient health outcomes. We did not include complex interventions where only a small aspect was related to sensory stimulation. We searched the databases Medline, CINAHL, and Psych Articles using our institutional discovery layer. We conducted all screening in duplicate to reduce Type 1 and Type 2 errors. The data from all included papers were extracted by one team member and audited by another to ensure consistency of extraction and completeness of data. Results: Our initial search captured 7654 articles; after removal of duplicates (n=5329), articles that did not pass title and abstract screening (n=1840), and articles that did not pass full-text screening (n=281), 174 articles were included. The countries with the highest publication output in this area were the United States (n=59), the United Kingdom (n=26) and Australia (n=15). The most common types of intervention were music therapy (n=36), multisensory rooms (n=27) and multisensory therapies (n=25). Seven articles were published in the 1990s, 55 in the 2000s, and the remainder (n=112) since 2010. Discussion: Multisensory rooms have been present in the literature since the early 1990s. More recently, nature/garden therapy, art therapy, and light therapy have emerged in the literature since 2008, an indication of the increasingly diverse scholarship in the area.
The least popular type of intervention was a traditional food intervention. Taste as a sensory intervention is generally avoided for safety reasons; however, it shows potential for increasing quality of life. Agitation, behavior, and mood are common outcomes for all sensory interventions, although light therapy commonly targets sleep. The majority (n=110) of studies had very small sample sizes (n=20 or less), an indicator of the lack of robust data in the field. Additional small-scale studies of the known sensory interventions will likely do little to advance the field. However, there is a need for multi-armed studies which directly compare sensory interventions, and for more studies which investigate layering sensory interventions (for example, adding an aromatherapy component to a lighting intervention). In addition, large-scale studies which enroll people at early stages of dementia will help us better understand the potential of sensory and multisensory interventions to slow the progression of the disease.

Keywords: sensory interventions, dementia, scoping review

Procedia PDF Downloads 136
748 Identification of Vehicle Dynamic Parameters by Using Optimized Exciting Trajectory on 3-DOF Parallel Manipulator

Authors: Di Yao, Gunther Prokop, Kay Buttner

Abstract:

Dynamic parameters, including the center of gravity, mass, and moments of inertia of a vehicle, play an essential role in vehicle simulation, collision testing, and real-time control of vehicle active systems. To identify these important vehicle dynamic parameters, a systematic parameter identification procedure is studied in this work. In the first step of the procedure, a conceptual parallel manipulator (virtual test rig), which possesses three rotational degrees of freedom, is proposed. To characterize the kinematics of the conceptual parallel manipulator, a kinematic analysis comprising inverse kinematics and singularity architecture is carried out. Based on Euler's rotation equations for rigid body dynamics, the dynamic model of the parallel manipulator and the derivation of the measurement matrix for parameter identification are presented subsequently. In order to reduce the sensitivity of the parameter identification to measurement noise and other unexpected disturbances, an optimization process searching for the optimal exciting trajectory of the parallel manipulator is conducted in the following section. For this purpose, the 3-2-1 Euler angles, defined by a parameterized finite Fourier series, are used to describe the general exciting trajectory of the parallel manipulator. To minimize the condition number of the measurement matrix and thereby achieve better parameter identification accuracy, the unknown coefficients of the parameterized finite Fourier series are estimated by an iterative algorithm implemented in MATLAB®. Meanwhile, the iterative algorithm ensures that the parallel manipulator remains in an achievable working state during the execution of the optimal exciting trajectory. It is shown that the proposed procedure and methods can effectively identify the vehicle dynamic parameters and could be an important application of parallel manipulators in the fields of parameter identification and test rig development.
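The two ingredients of the trajectory optimization described above, a finite-Fourier-series excitation and the condition number of the resulting measurement matrix, can be sketched in a toy one-axis, two-column form. The coefficient values, fundamental frequency, and regressor columns below are invented for illustration; the paper's measurement matrix is derived from the full 3-DOF dynamic model.

```python
import math

def fourier_traj(t, a, b, wf=2 * math.pi * 0.1):
    # Finite-Fourier-series excitation: position q, velocity qd and
    # acceleration qdd at time t, with unknown coefficient lists a, b
    # (the quantities the optimizer searches over).
    q = qd = qdd = 0.0
    for k, (ak, bk) in enumerate(zip(a, b), start=1):
        w = k * wf
        q += (ak / w) * math.sin(w * t) - (bk / w) * math.cos(w * t)
        qd += ak * math.cos(w * t) + bk * math.sin(w * t)
        qdd += -ak * w * math.sin(w * t) + bk * w * math.cos(w * t)
    return q, qd, qdd

def cond_2col(rows):
    # Condition number of an m x 2 measurement matrix W via the
    # closed-form eigenvalues of the 2x2 matrix W^T W.
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    tr, det = s11 + s22, s11 * s22 - s12 * s12
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lo, hi = tr / 2 - disc, tr / 2 + disc
    return math.sqrt(hi / lo) if lo > 0 else float("inf")

# Toy regressor: columns [qdd, qd] sampled along one candidate trajectory.
rows = []
for i in range(200):
    _, qd, qdd = fourier_traj(i * 0.05, a=[1.0, 0.5], b=[0.3, -0.2])
    rows.append((qdd, qd))
```

An iterative optimizer would repeatedly perturb the coefficients a, b and keep the candidate with the lowest condition number, subject to workspace constraints.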

Keywords: parameter identification, parallel manipulator, singularity architecture, dynamic modelling, exciting trajectory

Procedia PDF Downloads 267
747 Optimization of Titanium Leaching Process Using Experimental Design

Authors: Arash Rafiei, Carroll Moore

Abstract:

Leaching, as the first stage of hydrometallurgy, is a multidisciplinary system involving material properties, chemistry, reactor design, mechanics, and fluid dynamics. Optimizing a leaching system by purely scientific methods therefore requires considerable time and expense. In this work, a mixture of two titanium ores and one titanium slag is used to extract titanium in the leaching stage of a TiO2 pigment production process. Optimum titanium extraction can be pursued through two strategies: i) maximizing titanium extraction without selective digestion; and ii) optimizing selective titanium extraction by balancing maximum titanium extraction against minimum impurity digestion. The main difference between the two strategies lies in the process optimization framework. The first strategy treats the most important stage of the production process as the main stage and adapts the remaining stages to it. The second strategy optimizes the performance of more than one stage at once. The second strategy is more technically complex than the first, but it brings greater economic and technical advantages for the leaching system. Each strategy has its own optimum operational zone, distinct from the other's, and the best operational zone is chosen in view of the complexity and the economic and practical aspects of the leaching system. The experimental design was carried out using the Taguchi method. The most important advantages of this methodology are that it addresses the different technical aspects of the leaching process, minimizes the number of required experiments as well as time and expense, and accounts for parameter interactions through the principles of multi-factor-at-a-time optimization. Leaching tests were performed at laboratory batch scale with appropriate temperature control. The leaching tank geometry was treated as an important factor in providing comparable agitation conditions.
Data analysis was performed using reactor design and mass balancing principles. Finally, the optimum zones for the operational parameters are determined for each leaching strategy and discussed with respect to their economic and practical aspects.
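The Taguchi analysis mentioned above ranks factor levels by a signal-to-noise (S/N) ratio; for a larger-is-better response such as extraction yield, SN = -10·log10(mean(1/y²)). The orthogonal array, factor names, and yield values below are hypothetical, chosen only to show the mechanics.

```python
import math

def sn_larger_is_better(ys):
    # Taguchi S/N ratio for a larger-is-better response, e.g. titanium
    # extraction (%): SN = -10 * log10(mean(1 / y^2)).
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Toy L4 orthogonal array for two 2-level factors (hypothetical factors,
# e.g. temperature and acid concentration -- not the study's design).
L4 = [(1, 1), (1, 2), (2, 1), (2, 2)]
# Hypothetical replicated extraction results (%) for each of the 4 runs:
results = [[72.0, 74.0], [80.0, 79.0], [68.0, 70.0], [85.0, 86.0]]
sn = [sn_larger_is_better(r) for r in results]
# Mean S/N at each level of factor 1 indicates which level to prefer:
f1_level1 = (sn[0] + sn[1]) / 2
f1_level2 = (sn[2] + sn[3]) / 2
```

Comparing the per-level mean S/N values for every factor yields the recommended operating zone with only four runs instead of a full factorial.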

Keywords: titanium leaching, optimization, experimental design, performance analysis

Procedia PDF Downloads 375
746 Palliative Care Referral Behavior Among Nurse Practitioners in Hospital Medicine

Authors: Sharon Jackson White

Abstract:

Purpose: Nurse practitioners (NPs) practicing within hospital medicine play a significant role in caring for patients who might benefit from palliative care (PC) services. Using the Theory of Planned Behavior, the purpose of this study was to examine the relationships among facilitators to referral, barriers to referral, self-efficacy with end-of-life discussions, history of referral, and referring to PC among NPs in hospital medicine. Hypotheses: 1) Perceived facilitators to referral will be associated with a higher history of referral and a higher number of referrals to PC. 2) Perceived barriers to referral will be associated with a lower history of referral and a lower number of referrals to PC. 3) Increased self-efficacy with end-of-life discussions will be associated with a higher history of referral and a higher number of referrals to PC. 4) Perceived facilitators to referral, perceived barriers to referral, and self-efficacy with end-of-life discussions will contribute to a significant variance in the history of referral to PC. 5) Perceived facilitators to referral, perceived barriers to referral, and self-efficacy with end-of-life discussions will contribute to a significant variance in the number of referrals to PC. Significance: Previous studies of referring patients to PC within the hospital setting have focused on physician practices. Identifying factors that influence NPs referring hospitalized patients to PC is essential to ensure that patients have access to these important services. This study incorporates the SNRS mission of advancing nursing research through the dissemination of research findings and the promotion of nursing science. Methods: A cross-sectional, predictive correlational study was conducted.
History of referral to PC, facilitators to referring to PC, barriers to referring to PC, self-efficacy in end-of-life discussions, and referral to PC were measured using the PC referral case study survey, the facilitators and barriers to PC referral survey, and the self-assessment with end-of-life discussions survey. Data were analyzed descriptively and with Pearson's correlation, Spearman's rho, point-biserial correlation, multiple regression, logistic regression, the chi-square test, and the Mann-Whitney U test. Results: Only one facilitator (the PC team being helpful with establishing goals of care) was significantly associated with referral to PC. Three variables were statistically significant in relation to the history of referring to PC: "Inclined to refer: PC can help decrease the length of stay in hospital", "Most inclined to refer: Patients with serious illnesses and/or poor prognoses", and "Giving bad news to a patient or family member". No predictor variables contributed significant variance in the number of referrals to PC for any of the three case studies. There were no statistically significant results showing a relationship between the history of referral and referral to PC. All five hypotheses were partially supported. Discussion: Findings from this study emphasize the need for further research on NPs who work in hospital settings and the factors that influence their referral behaviors. Since there is an increase in NPs practicing within hospital settings, future studies should use a larger sample size and incorporate hospital medicine NPs and other types of NPs who work in hospitals.
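The point-biserial correlation used in the analysis above is simply a Pearson correlation in which one variable is dichotomous (e.g. history of referral coded 0/1). A stdlib-only sketch (the variable coding is illustrative, not taken from the study's data):

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient; when x is dichotomous (0/1),
    # this is exactly the point-biserial correlation.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```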

Keywords: palliative care, nurse practitioners, hospital medicine, referral

Procedia PDF Downloads 75
745 A History of Taiwan’s Secret Nuclear Program

Authors: Hsiao-ting Lin

Abstract:

This paper analyzes the history of Taiwan's secret program to develop nuclear weapons during the Cold War. In July 1971, US President Richard Nixon shocked the world when he announced that his national security adviser, Henry Kissinger, had made a secret trip to China and that he himself had accepted an invitation to travel to Beijing. This huge breakthrough in the US-PRC relationship was followed by Taipei's loss of political legitimacy and international credibility as a result of its UN debacle in the fall of that year. Confronted with the Nixon White House's opening to the PRC, leaders in Taiwan felt betrayed and abandoned, and they were obliged to take countermeasures for the sake of national interest and regime survival. Taipei's endeavor to create an effective nuclear program, including the possible development of nuclear weapons capabilities, fully demonstrates the government's resolve to pursue its own national policy, even if such a policy was guaranteed to undermine its relations with the United States. With hindsight, Taiwan's attempt to develop its own nuclear weapons did not succeed in sabotaging the warming of US-PRC relations. Worse, it was forced to a full stop when, in early 1988, the US government pressured Taipei to close the related facilities and programs on the island. However, Taiwan's abortive attempt to develop its nuclear capability did influence Washington's and Beijing's handling of their new relationship. A recognition developed of a common American and PRC interest in avoiding a nuclearized Taiwan. From this perspective, Beijing's interests would best be served by allowing the island to remain under loose and relatively benign American influence.
As for the top leaders on Taiwan, such a policy choice demonstrated how they perceived the shifting dynamics of international politics in the 1960s and 1970s and how they struggled to break free and pursue their own independent national policy within the rigid framework of the US-Taiwan alliance during the Cold War.

Keywords: taiwan, richard nixon, nuclear program, chiang kai-shek, chiang ching-kuo

Procedia PDF Downloads 133
744 Embedding the Dimensions of Sustainability into City Information Modelling

Authors: Ali M. Al-Shaery

Abstract:

The purpose of this paper is to address the functions of sustainability dimensions in city information modelling and to present the sustainability criteria required to establish a sustainable planning framework for enhancing existing cities and developing future smart cities. The paper is divided into two sections. The first section draws on an examination of a wide and extensive array of cross-disciplinary literature from the last decade and a half to conceptualize the terms 'sustainable' and 'smart city' and map their associated criteria to city information modelling. The second section analyzes two approaches to city information modelling, namely statistical and dynamic approaches, and their suitability for the development of cities' action plans. The paper argues that the use of statistical approaches to embedding sustainability dimensions in city information modelling has limited value. Despite the popularity of such approaches in addressing other dimensions, such as utility and service management, in the development and action plans of world cities, they are unable to address the dynamics across various city sectors with regard to economic, environmental and social criteria. The paper suggests an integrative, dynamic and cross-disciplinary planning approach to embedding sustainability dimensions in city information modelling frameworks. Such an approach will pave the way towards optimal planning and implementation of priority actions, projects and investments. The approach can be used to achieve three main goals: (1) better development and action plans for world cities; (2) the development of an integrative, dynamic and cross-disciplinary framework that incorporates economic, environmental and social sustainability criteria; and (3) attention to areas that require further work in the development of future sustainable and smart cities.
The paper presents an innovative approach for city information modelling and a well-argued, balanced hierarchy of sustainability criteria that can contribute to an area of research which is still in its infancy in terms of development and management.

Keywords: information modelling, smart city, sustainable city, sustainability dimensions, sustainability criteria, city development planning

Procedia PDF Downloads 328
743 Co-Creation of an Entrepreneurship Living Learning Community: A Case Study of Interprofessional Collaboration

Authors: Palak Sadhwani, Susie Pryor

Abstract:

This paper investigates interprofessional collaboration (IPC) in the context of entrepreneurship education. Collaboration has been found to enhance problem solving, leverage expertise, improve resource allocation, and create organizational efficiencies. However, research suggests that successful collaboration is hampered by individual and organizational characteristics. IPC occurs when two or more professionals work together to solve a problem or achieve a common objective. The necessity for this form of collaboration is particularly prevalent in cross-disciplinary fields. In this study, we utilize social exchange theory (SET) to examine IPC in the context of an entrepreneurship living learning community (LLC) at a large university in the Western United States. Specifically, we explore these research questions: How are rules or norms established that govern the collaboration process? How are resources valued and distributed? How are relationships developed and managed among and between parties? LLCs are defined as groups of students who live together in on-campus housing and share similar academic or special interests. In 2007, the Association of American Colleges and Universities named living communities a high impact practice (HIP) because of their capacity to enhance and give coherence to undergraduate education. The entrepreneurship LLC in this study was designed to offer first year college students the opportunity to live and learn with like-minded students from diverse backgrounds. While the university offers other LLC environments, the target residents for this LLC are less easily identified and are less apparently homogenous than residents of other LLCs on campus (e.g., Black Scholars, LatinX, Women in Science and Education), creating unique challenges. The LLC is a collaboration between the university’s College of Business & Public Administration and the Department of Housing and Residential Education (DHRE). 
Both parties contribute staff, technology, living and learning spaces, and other student resources. This paper reports the results of an ethnographic case study which chronicles the start-up challenges associated with the co-creation of the LLC. SET provides a general framework for examining how resources are valued and exchanged. In this study, SET offers insights into the processes through which parties negotiate tensions resulting from approaching this shared project from very different perspectives and cultures in a novel project environment. These tensions arise from a variety of factors, including team formation and management, allocation of resources, and differing output expectations. The results are useful to both scholars and practitioners of entrepreneurship education and organizational management. They suggest probable points of conflict and potential paths towards reconciliation.

Keywords: case study, ethnography, interprofessional collaboration, social exchange theory

Procedia PDF Downloads 141
742 Short-Term Impact of a Return to Conventional Tillage on Soil Microbial Attributes

Authors: Promil Mehra, Nanthi Bolan, Jack Desbiolles, Risha Gupta

Abstract:

Agricultural practices affect the soil's physical and chemical properties, which in turn influence soil microorganisms as a function of the soil biological environment. Very little information is available on the short-term (2-year) impact on soil biochemical properties of intermittent tillage when returning to conventional tillage (CT) from a continuing no-till (NT) cropping system. The contributions made by different microorganisms (fungi, bacteria) were therefore also investigated in order to identify the effective changes in soil microbial activity under a South Australian dryland farming system. This study was conducted to understand the impact of microbial dynamics on soil organic carbon (SOC) under NT and CT systems when treated with different levels of mulching (0, 2.5 and 5 t/ha). Our results demonstrated that, in the incubation experiment, the cumulative CO2 emitted from the CT system was 34.5% higher than from the NT system. Respiration from the surface layer (0-10 cm) was significantly (P<0.05) higher, by 8.5% and 15.8% under CT and by 8% and 18.9% under NT, relative to the 10-20 and 20-30 cm layers respectively. Further, dehydrogenase enzyme activity (DHA) and microbial biomass carbon (MBC) were both significantly lower (P<0.05) under CT, by 7.4%, 7.2% and 6.0% (DHA) and 19.7%, 15.7% and 4% (MBC) across the different mulching levels (0, 2.5, 5 t/ha) respectively. In general, under both tillage systems, enzyme activity and MBC decreased with increasing depth (0-10, 10-20 and 20-30 cm) and with increasing mulching rate (0, 2.5 and 5 t/ha). From the perspective of microbial stress, stress was 28.6% higher under the CT system than under the NT system.
The activities of different microorganisms (fungi and bacteria) were determined by substrate-induced inhibition respiration using the antibiotics cycloheximide (16 mg/g of soil) and streptomycin sulphate (14 mg/g of soil), trapping the evolved CO2 in an alkali (0.5 M NaOH) solution. The microbial activities were confirmed through a plating technique, where it was found that bacterial activities were 46.2% and 38.9% higher than fungal activity under the CT and NT systems respectively. In conclusion, changes in the relative abundance and activity of different microorganisms (bacteria and fungi) under different tillage systems could significantly affect C cycling and storage, owing to their unique structures and differential interactions with soil physical properties.
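The substrate-induced inhibition method described above partitions respiration by suppressing one group at a time: cycloheximide inhibits fungi, streptomycin inhibits bacteria. A simplified partitioning sketch (illustrative numbers; it ignores the non-additivity correction used in rigorous applications of the method):

```python
def fb_contributions(total, with_cycloheximide, with_streptomycin):
    # Respiration lost when fungi are inhibited approximates the fungal
    # contribution; respiration lost when bacteria are inhibited
    # approximates the bacterial contribution.
    fungal = total - with_cycloheximide
    bacterial = total - with_streptomycin
    return fungal, bacterial, bacterial / fungal

# Hypothetical CO2 evolution rates (arbitrary units), not the study's data:
f, b, ratio = fb_contributions(10.0, 7.0, 5.5)
```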

Keywords: tillage, soil respiration, MBC, fungal-bacterial activity

Procedia PDF Downloads 263
741 Integrating High-Performance Transport Modes into Transport Networks: A Multidimensional Impact Analysis

Authors: Sarah Pfoser, Lisa-Maria Putz, Thomas Berger

Abstract:

In the EU, the transport sector accounts for roughly one fourth of total greenhouse gas emissions, making it one of the main contributors of greenhouse gas emissions. Climate protection targets aim to reduce the negative effects of greenhouse gas emissions (e.g. climate change, global warming) worldwide. Achieving a modal shift towards environmentally friendly modes of transport, such as rail and inland waterways, is an important strategy for fulfilling the climate protection targets. The present paper goes beyond these conventional transport modes and reflects on currently emerging high-performance transport modes that have the potential to complement future transport systems in an efficient way. It will define which properties describe high-performance transport modes, which types of technology are included, and what their potential is to contribute to a sustainable future transport network. The first step of this paper is to compile state-of-the-art information about high-performance transport modes to find out which technologies are currently emerging. A multidimensional impact analysis will then be conducted to evaluate which of the technologies is most promising. This analysis will be performed from a spatial, social, economic and environmental perspective. Frequently used instruments such as cost-benefit analysis and SWOT analysis will be applied for the multidimensional assessment. The estimates for the analysis will be derived from desk research and discussions in an interdisciplinary team of researchers. For the purposes of this work, high-performance transport modes are characterized as transport modes offering very fast, very high-throughput connections that could act as an efficient extension to the existing transport network. The recently proposed hyperloop system represents a potential high-performance transport mode which might be an innovative supplement to current transport networks.
The idea of hyperloops is that persons and freight are shipped in a tube at more than airline speed. Another innovative technology is the use of drones for freight transport. Amazon is already testing drones for its parcel shipments, aiming for delivery times of 30 minutes. Drones can, therefore, be considered high-performance transport modes as well. The Trans-European Transport Networks program (TEN-T) addresses the expansion of transport grids in Europe and also includes high-speed rail connections to better connect important European cities. These services should increase the competitiveness of rail and are intended to replace aviation, which is known to be a polluting transport mode. In this sense, the integration of high-performance transport modes as described above facilitates the objectives of the TEN-T program. The results of the multidimensional impact analysis will reveal potential future effects of integrating high-performance modes into transport networks. Building on that, a recommendation can be given on the following (research) steps necessary to ensure the most efficient implementation and integration processes.

Keywords: drones, future transport networks, high performance transport modes, hyperloops, impact analysis

Procedia PDF Downloads 333
740 Portuguese Guitar Strings Characterization and Comparison

Authors: P. Serrão, E. Costa, A. Ribeiro, V. Infante

Abstract:

The characteristic sonority of the Portuguese guitar is in great part what makes Fado so distinguishable from other traditional song styles. The Portuguese guitar is a pear-shaped plucked chordophone with six courses of double strings. This study compares the two types of plain strings available for the Portuguese guitar and used by musicians. One is stainless steel spring wire; the other is high-carbon spring steel (music wire). Some musicians mention noticeable differences in sound quality between these two string materials, such as a little more brightness and sustain in the steel strings. Experimental tests were performed to characterize string tension at pitch, mechanical strength and tuning stability using a universal testing machine, and dimensional control and chemical composition using a scanning electron microscope. The string dynamical behaviour characterization experiments, including frequency response, inharmonicity, transient response and damping phenomena, were made in a monochord test set-up designed and built in-house. The damping factor was determined for the fundamental frequency. As musicians are able to detect very small damping differences, an accurate characterization of the damping phenomena for all harmonics was necessary. For that purpose, another, improved monochord was set up and a new system identification methodology applied. Due to the complexity of this task, several adjustments were necessary before good experimental data were obtained. In a few cases, dynamical tests were repeated to detect any evolution in damping parameters after the break-in period, when, according to players' experience, a new string sounds gradually less dull until reaching its typically brilliant timbre. Finally, each set of strings was played on one guitar by a distinguished player and recorded. The recordings, which include individual notes, scales, chords and a study piece, will be analysed to characterize potential timbre variations.
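The damping factor for the fundamental can be estimated from the logarithmic decrement of a decaying monochord signal. A minimal sketch of that standard calculation (the function name and the sample amplitudes below are illustrative, not measurements from the study):

```python
import math

def damping_ratio(a1, a2, n=1):
    """Estimate the damping ratio from the logarithmic decrement of a
    decaying vibration: a1 and a2 are peak amplitudes n periods apart."""
    delta = math.log(a1 / a2) / n                      # logarithmic decrement
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Example: a string's peak amplitude falls from 1.0 to 0.8 over 5 periods
zeta = damping_ratio(1.0, 0.8, n=5)
```

Repeating this per harmonic (after band-pass filtering each partial) gives the kind of per-harmonic damping characterization the abstract describes.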

Keywords: damping factor, music wire, portuguese guitar, string dynamics

Procedia PDF Downloads 553
739 Development of Doctoral Education in Armenia (1990 - 2023)

Authors: Atom Mkhitaryan, Astghik Avetisyan

Abstract:

We analyze the development of doctoral education in Armenia since 1990 and its management. Education and training of highly qualified personnel are increasingly seen as a fundamental platform that ensures the development of the state. Reforming the national institute for doctoral studies (aspirantura) is aimed at improving the quality of human resources in science, optimizing research topics in accordance with the priority areas of development of science and technology, increasing publication and innovative activities, bringing national science and research closer to the world level and achieving international recognition. We present the number of dissertations defended in Armenia during the last 30 years and the dynamics and main trends of the development of the academic degree awarding system. We discuss the possible impact of reforming the system of training and certification of highly qualified personnel on the organization of third-level doctoral education (doctoral schools) and specialized/dissertation councils in Armenia. The results of a SWOT analysis of doctoral education and academic degree awarding processes in Armenia are shown. The article presents the main activities and projects aimed at using the advantages and strong points of the National Academy network in order to improve the quality of doctoral education and training. The paper explores the mechanisms of organizational, methodological and infrastructural support for research and innovation activities of doctoral students and young scientists. Approaches are also suggested for organizing strong networking between research institutes and foreign universities for the training and certification of highly qualified personnel. The authors define the role of ISEC in the management of doctoral studies and the establishment of a competitive third-level education for the sphere of research and development in Armenia.

Keywords: doctoral studies, academic degree, PhD, certification, highly qualified personnel, dissertation, research and development, innovation, networking, management of doctoral school

Procedia PDF Downloads 65
738 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data

Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated to treat certain types of cancer such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia and Waldenström's macroglobulinaemia (WM). Cardiac failure is a condition referring to the inability of the heart muscle to pump adequate blood to the body's organs. There are multiple types of cardiac failure, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve related information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case review: The search returned 212 global ICSRs for the combined drug/adverse drug reaction as of July 2020. The reviewers selected and assessed causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs); a value of 1.0 represents the highest score for the best-written ICSRs. Among the reviewed cases, more than half provide a supportive association (four probable and 15 possible cases). Data mining: The disproportionality between the observed and the expected reporting rate for a drug/adverse drug reaction pair is estimated using the information component (IC), a tool developed by WHO-UMC to measure the reporting ratio. 
A positive IC reflects a higher statistical association, while negative values indicate a lower statistical association, with the null value equal to zero. The result (IC = 1.5) revealed a positive statistical association for the drug/ADR combination, meaning that “ibrutinib” with “cardiac failure” has been observed more often than expected when compared to other medications in the WHO database. Conclusion: Health regulators and healthcare professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and monitoring for any signs or symptoms in treated patients is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
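The IC value discussed above can be illustrated with a simplified form of the WHO-UMC information component: the base-2 logarithm of the observed-to-expected reporting ratio, with a small shrinkage constant. The counts below are illustrative only, not the SFDA's actual VigiBase figures:

```python
import math

def information_component(observed, expected):
    """Simplified information component: log2 of the observed-to-expected
    reporting ratio, with a +0.5 shrinkage term to stabilise small counts."""
    return math.log2((observed + 0.5) / (expected + 0.5))

# Illustrative counts: an IC near 1.5 corresponds to observed reporting
# roughly 2.8 times the expected rate for the drug/ADR pair.
ic = information_component(212, 74.7)
```

An IC of zero means the pair is reported exactly as often as expected; positive values flag disproportionate reporting worth clinical review.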

Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection

Procedia PDF Downloads 130
737 Study Variation of Blade Angle on the Performance of the Undershot Waterwheel on the Pico Scale

Authors: Warjito, Kevin Geraldo, Budiarso, Muhammad Mizan, Rafi Adhi Pranata, Farhan Rizqi Syahnakri

Abstract:

According to data from 2021, the share of households in Indonesia with access to on-grid electricity is claimed to have reached 99.28%, which means that around 0.7% of Indonesia's population (1.95 million people) still has no proper access to electricity; 38.1% of this group lives in remote areas of Nusa Tenggara Timur. Remote areas are classified as areas with a small population of 30 to 60 families that have limited infrastructure, scarce access to electricity and clean water, a relatively weak economy, and little access to technological innovation, and whose inhabitants earn a living mostly as farmers or fishermen. These people still need electricity but cannot afford the high cost of electricity from national on-grid sources. To overcome this, a hydroelectric power plant driven by a pico-hydro turbine with an undershot water wheel is proposed as a suitable technology, because the design, materials and installation of this turbine type are believed to be easier (i.e., in operation and maintenance) and cheaper (i.e., in investment and operating costs) than any other. A comparative study of the angle of the undershot water wheel blades will be discussed comprehensively. This study looks into which variation of curved blades on an undershot water wheel produces the maximum hydraulic efficiency. The blade angles were varied between 180°, 160°, and 140°. Two methods of analysis are used: analytical and numerical. The analytical method is based on calculations of the torque and rotational speed of the turbine, which are used to obtain the input and output power of the turbine, whereas the numerical method uses the ANSYS application to simulate the flow during the collision with the designed turbine blades. 
It can be concluded, based on the analytical and numerical methods, that the best blade angle is 140°, with an efficiency of 43.52% by the analytical method and 37.15% by the numerical method.
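The analytical efficiency figure above is the ratio of mechanical output power (torque times angular speed) to the hydraulic input power of the stream. A minimal sketch of that calculation; the channel figures below are illustrative pico-scale values, not the paper's actual measurements:

```python
import math

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydraulic_efficiency(torque, rpm, flow_rate, head):
    """Ratio of mechanical output power (torque x angular speed, W) to the
    hydraulic input power (rho * g * Q * H, W) of the waterwheel."""
    omega = 2 * math.pi * rpm / 60          # shaft speed in rad/s
    p_out = torque * omega                  # mechanical output power
    p_in = RHO * G * flow_rate * head       # hydraulic power of the stream
    return p_out / p_in

# Illustrative figures: 12 N.m of torque at 30 rpm from a 20 L/s, 0.5 m stream
eta = hydraulic_efficiency(torque=12.0, rpm=30.0, flow_rate=0.02, head=0.5)
```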

Keywords: pico hydro, undershot waterwheel, blade angle, computational fluid dynamics

Procedia PDF Downloads 78
736 Timely Screening for Palliative Needs in Ambulatory Oncology

Authors: Jaci Mastrandrea

Abstract:

Background: The National Comprehensive Cancer Network (NCCN) recommends that healthcare institutions have established processes for integrating palliative care (PC) into cancer treatment and that all cancer patients be screened for PC needs upon initial diagnosis as well as throughout the entire continuum of care (National Comprehensive Cancer Network, 2021). Early PC screening is directly correlated with improved patient outcomes. The Sky Lakes Cancer Treatment Center (SLCTC) is an institution that has access to PC services yet does not have protocols in place for identifying patients with palliative needs or a standardized referral process. The aim of this quality improvement project is to improve early access to PC services by establishing a standardized screening and referral process for outpatient oncology patients. Method: The sample population included all adult patients with an oncology diagnosis who presented to the SLCTC for treatment during the project timeline from March 15th, 2022, to April 29th, 2022. The “Palliative and Supportive Needs Assessment” (PSNA) screening tool was developed from validated and evidence-based PC referral criteria. The tool was initially implemented using paper forms and was later integrated into the Epic-Beacon EHR system. Patients were screened by registered nurses on the SLCTC treatment team. Nurses responsible for screening patients received an educational in-service prior to implementation. Patients with a PSNA score of three or higher were considered a positive screen. Scores of five or higher triggered a PC referral order in the patient’s EHR for the oncologist to review and approve. All patients with a positive screen received an educational handout on PC, and the EHR was flagged for follow-up. Results: Prior to implementation of the PSNA screening tool, the SLCTC had had zero referrals to PC in the previous year, excluding referrals to hospice. 
Data were collected from the first 100 patient screenings completed within the eight-week data collection period. Seventy-three percent of patients met the criteria for PC referral with a score greater than or equal to three. Of those, 53.4% (39 patients) were referred for a palliative and supportive care consultation. Patients who met the criteria but were not referred to PC were flagged in the EHR for re-screening within one to three months. Patients with lung cancer, chronic hematologic malignancies, breast cancer, and gastrointestinal malignancy most frequently met the criteria for PC referral and scored highest overall on the scale of 0-12. Conclusion: The implementation of a standardized PC screening tool at the SLCTC significantly increased awareness of PC needs among cancer patients in the outpatient setting. Additionally, data derived from this quality improvement project support the national recommendation for PC to be an integral component of cancer treatment across the entire continuum of care.
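The scoring rule described above (score of three or higher is a positive screen; five or higher places a referral order) can be sketched as a simple triage function. The function name and return strings are illustrative, not part of the Epic-Beacon build:

```python
def psna_action(score):
    """Map a PSNA score (0-12) to the follow-up described for the project:
    >= 5 places a PC referral order in the EHR, >= 3 is a positive screen
    flagged for re-screening within 1-3 months, below 3 needs no action."""
    if score >= 5:
        return "referral order"
    if score >= 3:
        return "positive screen; flag for re-screening"
    return "no action"
```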

Keywords: oncology, palliative care, symptom management, symptom screening, ambulatory oncology, cancer, supportive care

Procedia PDF Downloads 76
735 Developing a Framework for Designing Digital Assessments for Middle-school Aged Deaf or Hard of Hearing Students in the United States

Authors: Alexis Polanco Jr, Tsai Lu Liu

Abstract:

Research on digital assessment for deaf and hard of hearing (DHH) students is negligible. Part of this stems from DHH assessment design existing at the intersection of the emergent disciplines of usability, accessibility, and child-computer interaction (CCI). While these disciplines have some prevailing guidelines —e.g. in user experience design (UXD), there is Jacob Nielsen’s 10 Usability Heuristics (Nielsen-10); for accessibility, there are the Web Content Accessibility Guidelines (WCAG) & the Principles of Universal Design (PUD)— this research was unable to uncover a unified set of guidelines. Given that digital assessments have lasting implications for the funding and shaping of U.S. school districts, it is vital that cross-disciplinary guidelines emerge. As a result, this research seeks to provide a framework by which these disciplines can share knowledge. The framework entails a process of asking subject-matter experts (SMEs) and design & development professionals to self-describe their fields of expertise, how their work might serve DHH students, and to expose any incongruence between their ideal process and what is permissible at their workplace. This research used two rounds of mixed methods. The first round consisted of structured interviews with SMEs in usability, accessibility, CCI, and DHH education. These practitioners were not designers by trade but were revealed to use designerly work processes. In addition to asking these SMEs about their field of expertise, work process, etc., they were asked whether they believed Nielsen-10 and/or PUD were sufficient for designing products for middle-school DHH students. This first round of interviews revealed that Nielsen-10 and PUD were, at best, a starting point for creating middle-school DHH design guidelines or, at worst, insufficient. The second round of interviews followed a semi-structured interview methodology. 
The SMEs who were interviewed in the first round were asked open-ended follow-up questions about their semantic understanding of guidelines— going from the most general sense down to the level of design guidelines for DHH middle school students. Designers and developers who were never interviewed previously were asked the same questions that the SMEs had been asked across both rounds of interviews. In terms of the research goals: it was confirmed that the design of digital assessments for DHH students is inherently cross-disciplinary. Unexpectedly, 1) guidelines did not emerge from the interviews conducted in this study, and 2) the principles of Nielsen-10 and PUD were deemed to be less relevant than expected. Given the prevalence of Nielsen-10 in UXD curricula across academia and certificate programs, this poses a risk to the efficacy of DHH assessments designed by UX designers. Furthermore, the following findings emerged: A) deep collaboration between the disciplines of usability, accessibility, and CCI is low to non-existent; B) there are no universally agreed-upon guidelines for designing digital assessments for DHH middle school students; C) these disciplines are structured academically and professionally in such a way that practitioners may not know to reach out to other disciplines. For example, accessibility teams at large organizations do not have designers and accessibility specialists on the same team.

Keywords: deaf, hard of hearing, design, guidelines, education, assessment

Procedia PDF Downloads 68
734 Predicting Returns Volatilities and Correlations of Stock Indices Using Multivariate Conditional Autoregressive Range and Return Models

Authors: Shay Kee Tan, Kok Haur Ng, Jennifer So-Kuen Chan

Abstract:

This paper extends the conditional autoregressive range (CARR) model to the multivariate CARR (MCARR) model and further to the two-stage MCARR-return model to model and forecast volatilities, correlations and returns of multiple financial assets. The first-stage model fits the scaled realised Parkinson volatility measures, using individual series and their pairwise sums of indices, to the MCARR model to obtain in-sample estimates and forecasts of volatilities for these individual and pairwise sum series. Covariances are then calculated to construct the fitted variance-covariance matrix of returns, which is imputed into the stage-two return model to capture the heteroskedasticity of assets’ returns. We investigate different choices of mean functions to describe the volatility dynamics. Empirical applications are based on the Standard and Poor's 500, Dow Jones Industrial Average and Dow Jones United States Financial Service Indices. Results show that the stage-one MCARR models using asymmetric mean functions give better in-sample model fits than those based on symmetric mean functions. They also provide better out-of-sample volatility forecasts than those using CARR models based on two robust loss functions, with the scaled realised open-to-close volatility measure as the proxy for the unobserved true volatility. We also find that the stage-two return models with constant means and multivariate Student-t errors give better in-sample fits than the Baba, Engle, Kraft, and Kroner type of generalized autoregressive conditional heteroskedasticity (BEKK-GARCH) models. The estimates and forecasts of value-at-risk (VaR) and conditional VaR based on the best MCARR-return models for each asset are provided and tested using the Kupiec test to confirm the accuracy of the VaR forecasts.
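The Parkinson volatility measure fitted in the first stage is a range-based estimator built from each period's high and low prices, (ln(H/L))^2 / (4 ln 2). A minimal sketch of the per-period estimator (the index levels below are hypothetical, not from the paper's data):

```python
import math

def parkinson_volatility(high, low):
    """Parkinson range-based variance estimate for one trading period,
    (ln(H/L))^2 / (4 ln 2); returns its square root, the volatility."""
    return math.sqrt(math.log(high / low) ** 2 / (4 * math.log(2)))

# Daily high/low of a hypothetical index level
sigma = parkinson_volatility(high=4532.0, low=4488.0)
```

The range-based estimator uses intraday extremes, so it is considerably more efficient than a squared close-to-close return as a proxy for true volatility, which is what motivates CARR-type models.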

Keywords: range-based volatility, correlation, multivariate CARR-return model, value-at-risk, conditional value-at-risk

Procedia PDF Downloads 100
733 A Coupled Model for Two-Phase Simulation of a Heavy Water Pressure Vessel Reactor

Authors: D. Ramajo, S. Corzo, M. Nigro

Abstract:

A multi-dimensional computational fluid dynamics (CFD) two-phase model was developed with the aim of simulating the in-core coolant circuit of a pressurized heavy water reactor (PHWR) of a commercial nuclear power plant (NPP). Because this PHWR is of the reactor pressure vessel (RPV) type, three-dimensional (3D) detailed models of the large reservoirs of the RPV (the upper and lower plenums and the downcomer) were coupled with an in-house finite volume one-dimensional (1D) code in order to model the 451 coolant channels housing the nuclear fuel. In the 1D code, suitable empirical correlations were used to take into account the in-channel distributed (friction losses) and concentrated (spacer grids, inlet and outlet throttles) pressure losses. A local power distribution at each coolant channel was also taken into account. The heat transfer between the coolant and the surrounding moderator was accurately calculated using a two-dimensional theoretical model. The implementation of subcooled boiling and condensation models in the 1D code, along with functions representing the thermal and dynamic properties of the coolant and moderator (heavy water), allows estimation of the in-core steam generation under nominal flow conditions for a generic fission power distribution. The in-core mass flow distribution results for steady-state nominal conditions are in agreement with design expectations, providing a first assessment of the coupled 1D/3D model. Results for nominal conditions were compared with those obtained with a previous 1D/3D single-phase model, yielding more realistic temperature patterns and also revealing low values of void fraction inside the upper plenum. It must be mentioned that the current results were obtained by imposing prescribed fission power functions from the literature; the results are therefore shown with the aim of pointing out the potential of the developed model.
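The distributed and concentrated pressure losses handled by the 1D channel code combine a Darcy-type friction term with loss coefficients for the spacer grids and throttles. A minimal sketch of that balance; all numbers below (density, friction factor, channel geometry, K factors) are illustrative assumptions, not the plant's actual data:

```python
RHO = 1085.0  # heavy water density at operating conditions, kg/m^3 (illustrative)

def channel_pressure_drop(f, length, d_h, k_factors, velocity):
    """Sum of distributed (Darcy friction over length/hydraulic diameter) and
    concentrated (spacer grids, inlet/outlet throttles) pressure losses, Pa."""
    dynamic_head = 0.5 * RHO * velocity ** 2          # rho * v^2 / 2
    distributed = f * (length / d_h) * dynamic_head   # friction along the channel
    concentrated = sum(k_factors) * dynamic_head      # grids and throttles
    return distributed + concentrated

# Illustrative channel: 5.3 m long, 8 mm hydraulic diameter, three K factors
dp = channel_pressure_drop(f=0.02, length=5.3, d_h=0.008,
                           k_factors=[0.5, 1.2, 1.2], velocity=8.0)
```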

Keywords: PHWR, CFD, thermo-hydraulic, two-phase flow

Procedia PDF Downloads 469
732 American Sign Language Recognition System

Authors: Rishabh Nagpal, Riya Uchagaonkar, Venkata Naga Narasimha Ashish Mernedi, Ahmed Hambaba

Abstract:

The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system, leveraging the advanced capabilities of convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real time. The primary objective of this system is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The architecture of the proposed system integrates dual networks: VGG16 for precise spatial feature extraction and vision transformers for contextual understanding of the sign language gestures. The system processes live input, extracting critical features through these sophisticated neural network models, and combines them to enhance gesture recognition accuracy. This integration facilitates a robust understanding of ASL by capturing detailed nuances and broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing diverse ASL signs, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced ASL recognition system and lays the groundwork for future innovations in assistive communication technologies.

Keywords: sign language, computer vision, vision transformer, VGG16, CNN

Procedia PDF Downloads 44
731 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept

Authors: Ahmed El Naggar, Homyan Saleh

Abstract:

Dams are crucial for flood control, water supply, and the creation of hydroelectric power. Every dam has a water conveyance system, such as a spillway, providing the safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated in laboratory research owing to the absence of suitable full-scale flow monitoring equipment and to safety concerns. Prototype measurements of aerated flows are urgently needed to quantify projected scale effects and to provide missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a stepped spillway was undertaken at laboratory scale (fixed camera installation) and at prototype scale (drone footage). The drone videos were generated using data from citizen science. The analyses permitted the measurement of the free-surface aeration inception point, air-water surface velocities and their fluctuations, and the residual energy at the chute's downstream end from a remote site. The prototype observations offered full-scale proof of concept, while laboratory results were efficiently confirmed against invasive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways. It highlights how citizen science data may enable academics to better understand real-world air-water flow dynamics and offers a framework for a small collection of long-missing prototype data.

Keywords: remote sensing, aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, inception point, optical flow, turbulence, residual energy

Procedia PDF Downloads 93
730 The Use of Space Syntax in Urban Transportation Planning and Evaluation: Limits and Potentials

Authors: Chuan Yang, Jing Bie, Yueh-Lung Lin, Zhong Wang

Abstract:

Transportation planning is an integrative academic discipline combining research and practice with the aim of improving mobility and accessibility at both the strategic level of policy-making and the operational dimension of practical planning. Transportation planning can build the linkage between traffic and social development goals, for instance, economic benefits and environmental sustainability. Transportation planning analysis and evaluation tend to apply empirical quantitative approaches under the guidance of fundamental principles such as efficiency, equity, safety, and sustainability. Space syntax theory has been applied to the spatial distribution of pedestrian movement and vehicle flow analysis; however, little has been written about its application to transportation planning. Correlations between the variables of space syntax analysis and field observations have shown that urban configuration has a significant effect on urban dynamics, for instance, land value, building density, traffic, and crime. This research aims to explore the potential of applying the space syntax methodology to evaluate urban transportation planning by studying the effects of urban configuration on cities' transportation performance. Through a literature review, this paper discusses the effects that urban configurations with different degrees of integration and accessibility have on three elementary components of transportation planning - transportation efficiency, transportation safety, and economic agglomeration development - via intensifying and stabilising the natural movement generated by the street network. The potential and limits of space syntax theory for studying the performance of urban transportation and transportation planning are then discussed. 
In practical terms, this research will help future studies explore the effects of urban design on transportation performance and identify which patterns of urban street networks allow for the most efficient and safe transportation performance with the highest economic benefits.
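The integration measure central to space syntax is, in its simplest topological form, a closeness-style value: the reciprocal of the mean depth (number of turns) from one street segment to all others in the axial map. A minimal sketch on a toy graph; the four-street axial map below is an invented example, and real analyses apply further normalisation:

```python
from collections import deque

def integration(graph, node):
    """Closeness-style integration value: the reciprocal of the mean
    topological depth from `node` to every other segment, found by BFS."""
    depths, frontier, seen = {node: 0}, deque([node]), {node}
    while frontier:
        current = frontier.popleft()
        for neighbour in graph[current]:
            if neighbour not in seen:
                seen.add(neighbour)
                depths[neighbour] = depths[current] + 1
                frontier.append(neighbour)
    others = [d for n, d in depths.items() if n != node]
    return len(others) / sum(others)

# A toy axial map: street A crosses B and C; B also crosses D
axial_map = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
central = integration(axial_map, "A")   # A is the most integrated street
```

Higher values mark streets reachable with fewer turns from everywhere else, which is the configurational property the theory correlates with natural movement.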

Keywords: transportation planning, space syntax, economic agglomeration, transportation efficiency, transportation safety

Procedia PDF Downloads 198
729 Analyzing Land Use Change and Its Impacts on the Urban Environment in a Fast-Growing Metropolitan City of Pakistan

Authors: Muhammad Nasar-u-Minallah, Dagmar Haase, Salman Qureshi

Abstract:

In rapidly growing developing countries, cities are becoming more urbanized, leading to modifications in the urban climate. Rapid urbanization, especially unplanned urban land expansion, together with climate change, has a profound impact on urban settlements and the urban thermal environment. Cities, particularly in Pakistan, are facing remarkable environmental issues and uneven development; it is therefore important to strengthen the investigation of the urban environmental pressure brought by land-use changes and urbanization. The present study investigated the long-term modification of the urban environment by urbanization utilizing the spatio-temporal dynamics of land-use change, urban population data, urban heat islands, monthly maximum and minimum temperatures over thirty years, multi-temporal remote sensing imagery, and spectral indices such as the Normalized Difference Built-up Index and the Normalized Difference Vegetation Index. The results indicate rapid growth of the urban built-up area and a reduction in vegetation cover over the last three decades (1990-2020). A positive correlation between urban heat islands and the Normalized Difference Built-up Index, and a negative correlation between urban heat islands and the Normalized Difference Vegetation Index, clearly show how urbanization is affecting the local environment. The increase in air and land surface temperatures is dangerous to human comfort. Practical approaches, such as increasing urban green spaces and proper planning of cities, have been suggested to help prevent further modification of the urban thermal environment by urbanization. The findings of this work are thus important for multi-sectoral use in the cities of Pakistan. By taking these results into consideration, urban planners, decision-makers, and local government can devise policies to mitigate the impacts of urban land use on the urban thermal environment in Pakistan.
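The two spectral indices named above are simple normalized band ratios: NDVI contrasts near-infrared and red reflectance, while NDBI contrasts shortwave-infrared and near-infrared. A minimal per-pixel sketch (the reflectance values are invented for illustration, not from the study's imagery):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index from SWIR and NIR reflectance."""
    return (swir - nir) / (swir + nir)

# Illustrative reflectance values for a vegetated and a built-up pixel
veg = ndvi(nir=0.45, red=0.08)     # strongly positive -> dense vegetation
built = ndbi(swir=0.30, nir=0.20)  # positive -> built-up surface
```

In practice the same arithmetic is applied band-wise over whole satellite scenes, and the resulting index maps are what the study correlates with the urban heat island pattern.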

Keywords: land use, urban environment, local climate, Lahore

Procedia PDF Downloads 111
728 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions

Authors: Gaurangi Saxena, Ravindra Saxena

Abstract:

Lately, cloud computing has been used to enhance the ability to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm, cloud computing, has emerged as a powerful tool for the optimum utilization of resources and for gaining competitiveness through cost reduction, achieving business goals with greater flexibility. Realizing the importance of this technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that, by using the right middleware, a cloud computing system can execute all the programs a normal computer could run. Potentially, everything from the simplest generic word-processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments but also support “interactive user-facing applications” such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm. It draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they don’t have it available on site. Cloud computing gives these companies the option of storing data on someone else’s hardware, removing the need for physical space on the front end. Prominent service providers like Amazon, Google, SUN, IBM, Oracle and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases and applications. Application services may include email, office applications, finance, video, audio and data processing. 
By using a cloud computing system, a company can improve its customer relationship management (CRM). A CRM cloud computing system can deliver a sales team a blend of unique functionalities that improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially marketing, with special reference to its use in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, and competitors. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently at a lower cost, gain competitive advantage.

Keywords: cloud computing, competitive advantage, customer relationship management, grid computing

Procedia PDF Downloads 312
727 Performing Marginality and Contestation of Ethnic Identity: Dynamics of Identity Politics in Assam, India

Authors: Hare Krishna Doley

Abstract:

Drawing upon empirical data, this paper examines how ethnic groups like the Ahom, Moran, Motok, and Chutia create and recreate ethnic boundaries while making claims for recognition as Scheduled Tribes (STs) under the Sixth Schedule of the Constitution of India, in the state of Assam. Underlying such claims is a distinct identity consciousness amongst these groups, as they assert themselves as originally tribal by drawing upon primordial elements. For them, tribal identity promises social justice and gives credence to their claims of indigeneity while preserving their exclusivity within the multifarious society of Assam. Having complex inter-group relationships, the groups under study display distinct as well as overlapping identities, demonstrating the fluidity of identities across groups making claims for recognition. In this process, a binary of 'us' and 'them' is often constructed among these groups, which is difficult to grasp as they share common historical linkages. This paper attempts to grapple with these complex relationships among the studied groups and their assertion as distinct cultural entities while drawing ethnic boundaries on the basis of socio-cultural identities. Such claims also involve frequent negotiation with the State as well as with other ethnic groups, which further creates strife among indigenous groups over tribal identity. The paper argues that identity consciousness amongst groups has persisted since the introduction of resource distribution along ethnic lines; therefore, issues of exclusive ethnic identity in the state of Assam can be contextualised within the colonial and post-colonial politics of redrawing ethnic and spatial boundaries. Narratives of the ethnic leaders at the forefront of the struggle for ST status reveal that it is not merely about securing preferential treatment; it also encompasses entitlement to land and their socio-cultural identity as aboriginal. While noting the genesis of the struggle by ethnic associations for ST status, this paper also delineates the interactions among ethnic groups and how the identity of tribe is performed by them in order to be included in the official category of ST.

Keywords: ethnic, identity, sixth schedule, tribe

Procedia PDF Downloads 202
726 Nationalization of the Social Life in Argentina: Accumulation of Capital, State Intervention, Labor Market, and System of Rights in the Last Decades

Authors: Mauro Cristeche

Abstract:

This work begins with a simple question: how does the State spend? Argentina is witnessing a process of growing nationalization of social life, so explanations for this phenomenon must be sought in the specific dynamics of the capitalist mode of production in Argentina and its transformations over the last decades. This raises a second question: what happened in Argentina that could explain this phenomenon? Since the seventies, capital accumulation in Argentina has faced deep competitiveness problems. Until that moment, agrarian wealth had worked as a compensation mechanism, but it began to reach its limits. In the meantime, important demographic and structural changes took place. The capitalist class increasingly had to seek, in the cheapness of the labor force, the main source of compensation for its weakness. As a result, a tendency toward worsening living conditions and fragmentation of the working class developed, manifested most notably in unemployment, underemployment, and the fall of the purchasing power of wages. As a consequence, this paper suggests that the role of the State became stronger and public expenditure increased, as a historical trend, because the State has to intervene to face the contradictions and constant growth problems posed by the development of capitalism in Argentina. On the one hand, the State has to guarantee the process of buying the cheapened workforce and, at the same time, the process of reproduction of the working class. On the other hand, it has to help reproduce individual capitals while 'attacking' them in different ways. This is why the State is said to be the general political representative of the national portion of total social capital. What will be studied is the dynamic of the intervention of the Argentine State in the context of the particular national process of capital accumulation over the last decades. This paper aims to show the main general causes that could explain the phenomenon of nationalization of social life and how it has affected the living conditions of the working class and the system of rights.

Keywords: Argentina, nationalization, public policies, rights, state

Procedia PDF Downloads 137
725 Single and Combined Effects of Diclofenac and Ibuprofen on Daphnia Magna and Some Phytoplankton Species

Authors: Ramatu I. Sha’aba, Mathias A. Chia, Abdullahi B. Alhassan, Yisa A. Gana, Ibrahim M. Gadzama

Abstract:

Diclofenac (DLC) and ibuprofen (IBU) are among the most prescribed drugs globally due to their antipyretic and analgesic properties. They are, however, highly toxic at elevated doses, acting through a well-described oxidative stress pathway. As a result, there is rising concern about the ecological fate of analgesics in non-target organisms such as Daphnia magna and phytoplankton species. Phytoplankton are a crucial component of the aquatic ecosystem, serving as primary producers at the base of the food chain; the increasing presence and levels of micropollutants such as these analgesics can disrupt their community structure, dynamics, and ecosystem functions. This study presents a comprehensive laboratory assessment of the physiology, antioxidant response, immobilization, and ecological risk associated with the effects of diclofenac and ibuprofen on Daphnia magna and the phytoplankton community. Single exposures were tested at 27.16 µg/L (DLC) and 20.89 µg/L (IBU), and the combined DLC + IBU exposure at 22.39 µg/L. The antioxidant response increased with increasing levels of stress, with the greatest stress to the organism at 1000 µg/L of DLC and 10,000 µg/L of IBU. Peroxidase and glutathione-S-transferase activities were higher for the diclofenac + ibuprofen combination. The study showed 60% and 70% immobilization of the organism at 1000 µg/L of DLC and IBU, respectively. The two drugs and their combinations adversely impacted phytoplankton biomass with increased exposure time; combining the drugs resulted in more significant adverse effects on physiological and pigment-content parameters. The risk assessment in this study yielded a risk quotient (RQ) of 8.41 and a toxic unit (TU) of 3.68 for diclofenac, and an RQ of 718.05 and a TU of 487.70 for ibuprofen. These findings demonstrate that current exposure concentrations of diclofenac and ibuprofen can immobilize D. magna. The study highlights the dangers of multiple drugs in the aquatic environment, because their combinations could have additive effects on the structure and functions of phytoplankton and are capable of immobilizing D. magna.
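The risk metrics quoted in this abstract follow standard ecotoxicological definitions: the risk quotient is the ratio of a measured environmental concentration (MEC) to the predicted no-effect concentration (PNEC), and a toxic unit expresses exposure relative to the EC50. A minimal sketch of both calculations; the input concentrations below are illustrative placeholders, not values reported by the study:

```python
def risk_quotient(mec, pnec):
    """Risk quotient: measured environmental concentration / predicted
    no-effect concentration. RQ > 1 indicates potential ecological risk."""
    return mec / pnec

def toxic_unit(concentration, ec50):
    """Toxic unit: exposure concentration expressed as a fraction of the
    EC50 for the test organism."""
    return concentration / ec50

# Illustrative (hypothetical) concentrations in µg/L, chosen only to
# reproduce the magnitudes discussed above -- not the study's raw data.
rq = risk_quotient(mec=42.05, pnec=5.0)
tu = toxic_unit(concentration=18.4, ec50=5.0)
print(round(rq, 2), round(tu, 2))  # prints: 8.41 3.68
```

Toxic units are also what makes combined exposures comparable: under an additive model, the TUs of diclofenac and ibuprofen in a mixture are summed before comparison against the effect threshold.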

Keywords: algae, analgesic drug, daphnia magna, toxicity

Procedia PDF Downloads 79
724 Study on Optimization of Air Infiltration at Entrance of a Commercial Complex in Zhejiang Province

Authors: Yujie Zhao, Jiantao Weng

Abstract:

In the past decade, with the rapid development of China's economy, the purchasing power and physical demands of residents have grown, resulting in the rapid emergence of public buildings such as large shopping malls. However, architects usually focus on the internal functions and circulation of these buildings, ignoring the impact of the environment on the subjective experience of building users. In Zhejiang province alone, the infiltration of cold air in winter frequently occurs at the entrances of sizeable commercial complex buildings in operation, affecting the environmental comfort of the building lobby and internal public spaces. At present, the usual mitigation is to add active equipment, such as air curtains to block air exchange or additional heating air conditioners. From the perspective of energy consumption, the infiltration of cold air at the entrance increases the heat consumption of indoor heating equipment, which indirectly causes considerable economic losses over the whole winter heating season. It is therefore of considerable significance to explore entrance forms that improve the environmental comfort of commercial buildings and save energy. In this paper, a commercial complex in Hangzhou with an apparent cold-air infiltration problem is selected as the research object for modeling. The environmental parameters of the building entrance, including temperature, wind speed, and infiltration air volume, are obtained by Computational Fluid Dynamics (CFD) simulation, from which the heat consumption caused by natural air infiltration in winter, and its potential economic loss, is estimated as the objective metric. The study obtains the optimization direction for the entrance form of the commercial complex by comparing the simulation results with those of other local commercial complex projects with different entrance forms. The conclusions will guide the entrance design of the same type of commercial complex in this area.
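The heat consumption attributed to infiltration can be estimated from the simulated air volume with the standard sensible-heat relation Q = ρ · V̇ · c_p · ΔT. A minimal sketch of that estimate; the flow rate and temperatures below are illustrative assumptions, not the study's CFD results:

```python
def infiltration_heat_loss(volume_flow_m3_s, t_indoor_c, t_outdoor_c,
                           air_density=1.2, specific_heat=1005.0):
    """Sensible heat load (W) imposed by infiltrating outdoor air:
    Q = rho * V_dot * c_p * (T_in - T_out), with air density in kg/m^3
    and specific heat in J/(kg K)."""
    return air_density * volume_flow_m3_s * specific_heat * (t_indoor_c - t_outdoor_c)

# Illustrative: 2 m^3/s of 2 degC outdoor air entering a 20 degC lobby.
q_watts = infiltration_heat_loss(2.0, 20.0, 2.0)
print(round(q_watts))  # prints: 43416  (about 43.4 kW)
```

Multiplying such a load by the heating-season hours and the local energy tariff gives the economic-loss metric the abstract uses to compare entrance forms.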

Keywords: air infiltration, commercial complex, heat consumption, CFD simulation

Procedia PDF Downloads 134
723 Displacement and Cultural Capital in East Harlem: Use of Community Space in Affordable Artist Housing

Authors: Jun Ha Whang

Abstract:

As New York City weathers a swelling 'affordability crisis' marked by rapid transformation in land development and urban culture, much of the associated scholarly debate has turned to the underlying mechanisms of gentrification. Though classically approached from the point of view of urban planning, these questions are increasingly addressed with an eye to understanding the role of cultural capital in neighborhood valuation. This paper examines the construction of an artist-specific affordable housing development in the Spanish Harlem neighborhood of Manhattan in order to identify and discuss several cultural parameters of gentrification. The study's goal is not to argue that the development in question, Artspace PS109, straightforwardly increases or decreases the rate of gentrification in Spanish Harlem, but rather to treat the dynamics present in its construction as a case study set against the broader landscape of gentrification in New York, particularly with respect to the impact of artist communities on housing supply. In the end, what Artspace PS109 most valuably offers is a reference point for a comparative analysis of the affordable housing strategies currently being pursued by municipal government. The study of Artspace PS109 allows us to examine a microcosm of the city's response and evaluate its overall strategy accordingly. As a baseline, the city must pursue an affordability strategy specifically suited to the needs of each of its neighborhoods, and must do so without undermining its own efforts by rendering them susceptible to the exploitative involvement of real estate developers seeking to identify successive waves of trendy neighborhoods. Though Artspace PS109 offers an invaluable resource for the city's legitimate aim of preserving its artist communities, with such a high inclusion rate of artists from outside the community, the project risks additional displacement, strongly suggesting the need for further study of the implications of sites of cultural capital for neighborhood planning.

Keywords: artist housing, displacement, east Harlem, urban planning

Procedia PDF Downloads 163