Search results for: Electronic Response Systems
630 Interface Designer as Cultural Producer: A Dialectical Materialist Approach to the Role of the Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era is interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. 'The Author as Producer' is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and directly related to the base. Through it, he discusses what it could mean to see the author as producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production: he must prepare his readers to become writers, even make this possible for them by engineering an 'improved apparatus', and must work toward turning consumers into producers and collaborators. In today's world, it has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple and Google to transform readers, spectators, consumers or users into collaborators and co-producers through platforms such as Facebook, YouTube and Amazon's CreateSpace and Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global-market monopolies, it has become increasingly difficult to gain insight into how one's writing and collaboration is used, captured, and capitalized as a user of Facebook or Google. Through the lens of this study, it could be argued that this criticism could very well be taken up by digital producers, or even by the mass of collaborators, in contemporary social networking software. How do software and design incorporate users and their collaboration? Are users truly empowered; are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the 'authors' and the 'prodUsers' in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 142
629 Planning a European Policy for Increasing Graduate Population: The Conditions That Count
Authors: Alice Civera, Mattia Cattaneo, Michele Meoli, Stefano Paleari
Abstract:
Despite the fact that more equal access to higher education has been an objective of public policy for several decades, little is known about the effectiveness of alternative means for achieving this goal. Indeed, nowadays, high levels of graduate population can be observed both in countries with high and with low levels of fees, and with high and with low levels of public expenditure on higher education. This paper surveys the extant literature, providing some background on the economic concepts of the higher education market, and reviews key determinants of demand and supply. A theoretical model of aggregate demand and supply of higher education is derived, with the aim of facilitating the understanding of the challenges in today's higher education systems, as well as the opportunities for development. The model is validated on some exemplary case studies describing the different relationships between the level of public investment and levels of graduate population, and helps to derive general implications. In addition, using a two-stage least squares model, we build a macroeconomic model of supply and demand for European higher education. The model allows interpreting policies shifting either the supply or the demand for higher education, and allows taking contextual conditions into consideration with the aim of comparing divergent policies under a common framework. Results show that the same policy objective (i.e., increasing graduate population) can be obtained by shifting either the demand function (i.e., by strengthening student aid) or the supply function (i.e., by directly supporting higher education institutions). Under this theoretical perspective, the level of tuition fees is irrelevant, and empirically we can observe high levels of graduate population in countries with both high (i.e., the UK) and low (i.e., Germany) levels of tuition fees. In practice, this model provides a conceptual framework to help better understand which external conditions need to be considered when planning a policy for increasing graduate population. Extrapolating a policy from results in different countries, under this perspective, is a poor solution when contingent factors are not addressed. The second implication of this conceptual framework is that policies addressing the supply or the demand function need to address different contingencies. In other words, a government aiming at increasing graduate population needs to implement complementary policies, designing them according to the side of the market that is targeted. For example, a 'supply-driven' intervention, through the direct financial support of higher education institutions, needs to address the issue of institutions' moral hazard by creating incentives to supply higher education services under efficient conditions. By contrast, a 'demand-driven' policy, providing student aid, needs to tackle the students' moral hazard by creating an incentive for responsible behavior.
Keywords: graduates, higher education, higher education policies, tuition fees
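As a rough illustration of the estimation strategy mentioned above, the sketch below implements a textbook two-stage least squares fit in Python with NumPy. The variable names, shapes, and the choice of instruments are hypothetical placeholders, not the authors' actual specification.

```python
import numpy as np

def two_stage_least_squares(y, X_endog, X_exog, Z):
    """Textbook 2SLS: instrument the endogenous regressors, then run OLS.

    y       : (n,)  outcome, e.g. graduate population share
    X_endog : (n,k) endogenous regressors, e.g. tuition fee level
    X_exog  : (n,m) exogenous controls, e.g. GDP per capita
    Z       : (n,p) instruments, p >= k, e.g. public funding shifters
    """
    n = len(y)
    ones = np.ones((n, 1))
    # Stage 1: project the endogenous regressors on instruments + exogenous vars
    W = np.hstack([ones, Z, X_exog])
    gamma, *_ = np.linalg.lstsq(W, X_endog, rcond=None)
    X_hat = W @ gamma
    # Stage 2: OLS of the outcome on the fitted values + exogenous vars
    X2 = np.hstack([ones, X_hat, X_exog])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta  # [intercept, endogenous coef(s), exogenous coef(s)]
```

This manual two-step form is shown for transparency; in applied work a dedicated IV routine would also be used to obtain corrected standard errors.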
Procedia PDF Downloads 166
628 Evaluating the Effect of 'Terroir' on Volatile Composition of Red Wines
Authors: María Luisa Gonzalez-SanJose, Mihaela Mihnea, Vicente Gomez-Miguel
Abstract:
The zoning methodology currently recommended by the OIV as the official methodology for carrying out viticultural zoning studies and for defining and delimiting 'terroirs' has been applied in this study. This methodology has been successfully applied to the most significant and important Spanish oenological D.O. regions, such as Ribera del Duero, Rioja, Rueda and Toro, and it has also been applied around the world, in Portugal, different countries of South America, and so on. It is a complex methodology that uses edaphoclimatic data but also data corresponding to vineyards and other soil uses. The methodology is used to determine Homogeneous Soil Units (HSU) at different scales depending on the interest of each study, and has been applied from viticultural regions down to particular vineyards. It seems to be an appropriate method for correctly delimiting the medium in order to enhance its uses and to obtain the best viticultural and oenological products. The present work is focused on the comparison of the volatile composition of wines made from grapes grown in different HSU that coexist in a particular viticultural region of Castile and León, located near Burgos. Three different HSU were selected for this study. They represented around 50% of the global vineyard area of the studied region. Five different vineyards in each HSU under study were chosen. To reduce variability factors, other criteria were also considered, such as grape variety, clone, rootstock, vineyard age, training system and cultural practices. This study was carried out during three consecutive years; wines from three different vintages were therefore made and analysed. Different red wines were made from grapes harvested in the different vineyards under study. Grapes were harvested at 'technological maturity', which correlates with adequate levels of sugar, acidity and phenolic content (nowadays named phenolic maturity), a good sanitary state and adequate levels of aroma precursors. Results of the volatile profiles of the wines produced from grapes of each HSU showed significant differences among them, pointing to a direct effect of the edaphoclimatic characteristics of each HSU on the composition of the grapes and then on the volatile composition of the wines. The variability induced by HSU coexisted with the well-known inter-annual variability, correlated mainly with the specific climatic conditions of each vintage; however, the former was stronger, so the wines of each HSU were clearly differentiated. A discriminant analysis allowed the identification of the volatiles with discriminant capacity: 21 of the 74 volatiles analysed. The detected discriminant volatiles were chemically diverse, although most of them were esters, followed by higher alcohols and short-chain fatty acids. Only one lactone and two aldehydes were selected as discriminant variables, and no varietal aroma compounds were selected, which agrees with the fact that all the wines were made from the same grape variety.
Keywords: viticulture zoning, terroir, wine, volatile profile
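A minimal sketch of the kind of discriminant analysis described, using scikit-learn; the array shapes, random placeholder values and class labels are illustrative assumptions, not the authors' data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif

# Hypothetical data: one row per wine, 74 volatile concentrations per row,
# labelled by the Homogeneous Soil Unit (HSU) its grapes came from.
X = np.random.rand(45, 74)                      # 3 HSU x 5 vineyards x 3 vintages
y = np.repeat(["HSU1", "HSU2", "HSU3"], 15)

# Keep the volatiles that best separate the HSU classes (21 in the study)
selector = SelectKBest(f_classif, k=21).fit(X, y)
X_sel = selector.transform(X)

# Fit the discriminant model on the selected volatiles and check separation
lda = LinearDiscriminantAnalysis().fit(X_sel, y)
print("Training accuracy:", lda.score(X_sel, y))
```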
Procedia PDF Downloads 221
627 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties
Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier
Abstract:
The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA models that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied, which is especially challenging in low strain rate regions where such data is scarce. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected magnitude threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (the southeast of France), where the fault is assumed to have a low strain rate.
Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA
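As a rough sketch of the slip-rate-to-activity-rate conversion discussed above, the Python snippet below balances a fault's seismic moment budget over a truncated Gutenberg-Richter magnitude distribution. The shear modulus, b-value, magnitude bounds and fault dimensions are illustrative assumptions, and this is not the SHERIFS implementation itself.

```python
import numpy as np

MU = 3.0e10  # crustal shear modulus in Pa (assumed typical value)

def annual_rupture_rates(fault_area_m2, slip_rate_mm_yr,
                         m_min=5.5, m_max=7.5, b_value=1.0, dm=0.1):
    """Distribute a fault's moment budget over a truncated Gutenberg-Richter
    magnitude distribution and return (magnitudes, events per year)."""
    # Moment accumulation rate (N*m/yr) from geometry and slip rate
    moment_rate = MU * fault_area_m2 * slip_rate_mm_yr * 1e-3
    mags = np.arange(m_min, m_max + dm / 2, dm)
    rel_freq = 10.0 ** (-b_value * mags)        # relative G-R frequencies
    rel_freq /= rel_freq.sum()
    m0_per_event = 10.0 ** (1.5 * mags + 9.1)   # Hanks-Kanamori moment (N*m)
    # Scale so the modelled ruptures release the accumulated moment
    scale = moment_rate / np.sum(rel_freq * m0_per_event)
    return mags, rel_freq * scale

# Example: a 30 km x 15 km fault plane slipping at 0.5 mm/yr
mags, rates = annual_rupture_rates(fault_area_m2=30e3 * 15e3, slip_rate_mm_yr=0.5)
```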
Procedia PDF Downloads 65
626 Single-parent Families and the Criminal Ramifications on Children in the United Kingdom: A Systematic Review
Authors: Naveed Ali
Abstract:
Under the construct of the 'traditional family' set-up (male and female parent) in the United Kingdom, the absence of a male parental figure remains a critical factor associated with an elevated risk of criminal behavior among youths. Empirical evidence suggests that father absence significantly correlates with increased rates of juvenile delinquency and criminality. For instance, data reveal that approximately 63% of young offenders in the United Kingdom originate from single-parent households, predominantly those without a father. Moreover, research shows that boys from father-absent homes are three times more likely to exhibit antisocial behavior compared to their peers from two-parent families. This absence can negatively impact educational attainment, with children from fatherless homes being twice as likely to leave school prematurely, thereby increasing their vulnerability to peer influence and gang affiliation, key pathways into criminal activity. Both legal frameworks and social policies in the United Kingdom acknowledge the pivotal role of family stability in crime prevention. Initiatives including parenting support programs, community-based interventions, and targeted youth services seek to address the challenges faced by single-parent families and to mitigate the criminogenic effects of father absence. Despite these efforts, persistent challenges remain, including the need to address the broader socioeconomic determinants of family instability and to refine legal strategies that effectively address the root causes of youth offending linked to the absence of a male parental figure. A nuanced understanding of these dynamics is essential for developing more effective legal and social interventions aimed at reducing juvenile delinquency and supporting at-risk populations within the United Kingdom. This paper highlights the significant impact of the absence of a male parental figure on youth crime rates in the United Kingdom, underlining the need for enhanced legal and social responses. By examining the interplay between family structure and juvenile offending, the paper underlines the importance of developing more comprehensive interventions that address both familial factors and the wider socioeconomic context. The findings aim to guide policymakers and practitioners in creating more effective strategies to reduce youth crime, ultimately strengthening support systems for vulnerable families and mitigating the adverse effects of father absence on young individuals.
Keywords: criminality, family law, legal framework, the United Kingdom perspective
Procedia PDF Downloads 28
625 Investigating the Feasibility of Berry Production in Central Oregon under Protected and Unprotected Culture
Authors: Clare S. Sullivan
Abstract:
The high desert of central Oregon, USA, is a challenging growing environment: a short growing season (70-100 days); average annual precipitation of 280 mm; drastic diurnal temperature swings; the possibility of frost at any time of year; and sandy soils low in organic matter. Despite strong demand, almost no fruit is grown in central Oregon due to potential yield loss caused by early and late frosts. Elsewhere in the USA, protected culture (i.e., high tunnels) has been used to extend fruit production seasons and improve yields. In central Oregon, high tunnels are used to grow multiple high-value vegetable crops, and farmers are unlikely to plant a perennial crop in a high tunnel unless it is proven profitable. In May 2019, two berry trials were established on a farm in Alfalfa, OR, to evaluate raspberry and strawberry yield, season length, and fruit quality in protected (high tunnel) vs. unprotected culture (open field). The main objective was to determine whether high tunnel berry production is a viable enterprise for the region. Each trial was arranged using a split-plot design. The main factor was the production system (high tunnel vs. open field), and the replicated subplot factor was berry variety. Four day-neutral strawberry varieties and four primocane-bearing raspberry varieties were planted for the study and were managed using organic practices. Berries were harvested once a week early in the season, and twice a week as production increased. Harvested berries were separated into 'marketable' and 'unmarketable' in order to calculate percent cull. First-year results revealed berry yield and quality differences between varieties and production systems. Strawberry marketable yield and fruit size increased significantly in the high tunnel compared to the field; the yield increase ranged from 7-46% by variety. Evie 2 was the highest-yielding strawberry, although its berry quality was lower than that of the other varieties. Raspberry marketable yield and fruit size tended to increase in the high tunnel compared to the field, although variety had a more significant effect. Joan J was the highest-yielding raspberry and out-yielded the other varieties by 250% outdoors and 350% in the high tunnel. Overall, strawberry and raspberry yields tended to improve in high tunnels compared to the field, but data from a second year will help determine whether the high tunnel investment is worthwhile. It is expected that the production system will have more of an effect on berry yield and season length for second-year plants in 2020.
Keywords: berries, high tunnel, local food, organic
Procedia PDF Downloads 118
624 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing
Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel
Abstract:
There is strong evidence that climate has changed significantly, affecting various sectors including public health. The recommended feasible solution is adopting development trajectories which combine both mitigation and adaptation measures to improve resilience pathways. This approach demands consideration of the complex interactions between climate and social-ecological systems. While other sectors such as agriculture and water have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how Demographic and Health Surveys (DHS) linked with remote sensing (RS) technology and meteorological information can be used as tools to inform climate change resilient development and evaluation for the health sector. A methodological review was conducted whereby a number of studies were content-analyzed to find appropriate indicators and indices for household climate resilience and their integration approach. These indicators were critically reviewed, listed, filtered and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory approach of pairwise weighting by national stakeholders selected from meetings/conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from the MEASURE Evaluation project, processed and critically analyzed for possible climate change indicators. Other sources of indicators of climate change exposure were also identified. For the purpose of preliminary reporting, the operationalization of the selected indicators was discussed to produce the methodological approach to be used in a resilience comparative analysis study. It was found that the household climate resilience index depends on the combination of three indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS) and Household Exposure Status (HES). It was also found that DHS alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remote sensing images of the Normalized Difference Vegetation Index (NDVI) and meteorological data (deviation from rainfall patterns). It can be concluded that if these indices retrieved from DHS datasets are computed and scientifically integrated, they can produce a single climate resilience index, and resilience maps could be generated at different spatial and time scales to enhance targeted interventions for climate resilient development and evaluation. However, further studies are needed to test the sensitivity of the index in resilience comparative analysis among selected regions.
Keywords: climate change, resilience, remote sensing, demographic and health surveys
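A minimal sketch of how the three sub-indices might be combined into a single household score. The min-max normalization, the additive capacity-minus-sensitivity-minus-exposure form, and the equal weights are all illustrative assumptions, since the abstract does not specify the aggregation formula.

```python
import numpy as np

def minmax(x):
    """Rescale an indicator to [0, 1] across households."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def resilience_index(hc, hhs, hes, weights=(1.0, 1.0, 1.0)):
    """Combine Household Adaptive/Mitigation Capacity (HC), Household Health
    Sensitivity (HHS) and Household Exposure Status (HES) into one score:
    capacity raises resilience, sensitivity and exposure lower it."""
    w_hc, w_hhs, w_hes = weights
    score = w_hc * minmax(hc) - w_hhs * minmax(hhs) - w_hes * minmax(hes)
    return minmax(score)  # rescale so 0 = least, 1 = most resilient

# Hypothetical values for five households
hc = [0.2, 0.8, 0.5, 0.9, 0.4]   # e.g. assets, education, health access
hhs = [0.7, 0.3, 0.6, 0.2, 0.5]  # e.g. prior illness burden
hes = [0.9, 0.1, 0.4, 0.3, 0.8]  # e.g. flood-prone location, NDVI anomaly
print(resilience_index(hc, hhs, hes))
```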
Procedia PDF Downloads 165
623 Glass-Ceramics for Emission in the IR Region
Authors: V. Nikolov, I. Koseva, R. Sole, F. Diaz
Abstract:
Cr4+ doped oxide compounds are particularly preferred active media for solid-state lasers with a wide emission region from 1.1 to 1.6 µm. However, obtaining single crystals of these compounds is often problematic. An alternative solution to this problem is replacing the single crystals with a transparent glass-ceramic containing the desired crystalline phase. Germanate compounds, especially Li2MgGeO4, Li2ZnGeO4 and Li2CaGeO4, are suitable for Cr4+ doped glass-ceramics because of their relatively low melting temperatures and the tetrahedral coordination of all ions. The latter ensures the presence of chromium in the 4+ valence. Cr doped Li2CaGeO4 glass-ceramic was synthesized by thermal treatment of glasses from the Li2O-CaO-GeO2-B2O3 system. Special investigations were carried out to optimize the initial glass composition, as well as the thermal treatment conditions. The synthesis of the glass-ceramics was accompanied by appropriate characterization methods such as XRD, TEM, EPR, UV-VIS-NIR and emission spectra, and decay time as the main characteristic of the laser emission. From the systematic studies carried out in the four-component system Li2O-CaO-GeO2-B2O3 to establish the Li2CaGeO4 crystallization area and suitable thermal treatment conditions, several main conclusions can be drawn: 1. The crystallization region of Li2CaGeO4 is relatively narrow, localized around the stoichiometric composition of the Li2CaGeO4 compound. 2. The presence of the glass former B2O3 strongly supports the obtaining of homogeneous glasses at relatively low temperatures, but it is also the reason for the crystallization of borate phases. 3. The crystallization of glasses during thermal treatment involves the production of more than one phase, and it is more correct to speak of the crystallization of a main phase accompanied by the crystallization of other phases. The crystallization of a given phase changes the composition of the residual glass and creates conditions for the crystallization of other phases. 4. Separate studies show that glass-ceramics with different crystallized phases, in different quantitative ratios, can be obtained from the same glass composition by varying the thermal treatment conditions. In other words, the choice of temperature and time of thermal treatment of the glass is an extremely important condition, along with the optimization of the starting glass composition. As a result of the conducted research, an optimal composition of the starting glass and an optimal mode of thermal treatment were selected. A glass-ceramic with Cr4+ doped Li2CaGeO4 as the main phase was obtained. The obtained glass-ceramic possesses very good properties, containing up to 60 mass% of Li2CaGeO4, with an average nanoparticle size of 20 nm and a transparency of about 70% relative to the transparency of the parent glass. The emission of the obtained glass-ceramics lies in a wide range between 1050 and 1500 nm. The obtained results are the basis for further optimization of the glass-ceramic characteristics to obtain an effective laser-active medium with radiation in the 1.1-1.6 µm range.
Keywords: glass, glass-ceramics, multicomponent systems, NIR emission
Procedia PDF Downloads 19
622 Applying the View of Cognitive Linguistics on Teaching and Learning English at UFLS - UDN
Authors: Tran Thi Thuy Oanh, Nguyen Ngoc Bao Tran
Abstract:
In the view of Cognitive Linguistics (CL), knowledge and experience of things and events are used by human beings in expressing concepts, especially in their daily life. The human conceptual system is considered to be fundamentally metaphorical in nature. It is also said that the way we think, what we experience, and what we do every day is very much a matter of language. In fact, language is an integral factor of cognition, and CL is a family of broadly compatible theoretical approaches sharing this fundamental assumption. The relationship between language and thought, of course, has been addressed by many scholars. CL, however, strongly emphasizes specific features of this relation. Through experience, we receive knowledge of life; familiar, concrete things serve as ideal source domains, and we make use of all aspects of such a domain in metaphorically understanding abstract targets. The paper reports on applying this theory to pragmatics lessons for English-major students at the University of Foreign Language Studies - The University of Da Nang, Viet Nam. We conducted the study with two groups of third-year students studying English pragmatics lessons. To clarify this study, data from these two classes were collected for analyzing linguistic perspectives under the view of CL and under traditional concepts. Descriptive, analytic, synthetic, comparative, and contrastive methods were employed to analyze data from 50 students undergoing English pragmatics lessons. The two groups were taught how to transfer the meanings of expressions in daily life: one group with the view of CL, and the other with the traditional view. The research indicated that both approaches had a significant influence on students' English translating and interpreting abilities. However, the traditional approach had little effect on students' understanding, whereas the CL view had a considerable impact. The study compared CL and traditional teaching approaches to identify the benefits and challenges associated with incorporating CL into the curriculum. It seeks to extend CL concepts by analyzing metaphorical expressions in daily conversations, offering insights into how CL can enhance language learning. The findings shed light on the effectiveness of applying CL in teaching and learning English pragmatics. They highlight the advantages of using metaphorical expressions from daily life to facilitate understanding, and explore how CL can enhance cognitive processes in language learning in general, and in teaching English pragmatics to third-year students at the UFLS - UDN, Vietnam in particular. The study contributes to the theoretical understanding of the relationship between language, cognition, and learning. By emphasizing the metaphorical nature of human conceptual systems, it offers insights into how CL can enrich language teaching practices and enhance students' comprehension of abstract concepts.
Keywords: cognitive linguistics, Lakoff and Johnson, pragmatics, UFLS
Procedia PDF Downloads 36
621 Traditional Medicine in Children: A Significant Cause of Morbidity and Mortality
Authors: Atitallah Sofien, Bouyahia Olfa, Romdhani Meriam, Missaoui Nada, Ben Rabeh Rania, Yahyaoui Salem, Mazigh Sonia, Boukthir Samir
Abstract:
Introduction: Traditional medicine refers to a diverse range of therapeutic practices and knowledge systems that have been employed by different cultures over an extended period to uphold and rejuvenate health. These practices can involve herbal remedies, acupuncture, massage, and alternative healing methods that deviate from conventional medical approaches. In Tunisia, unidentified utensils are often used to scratch the inside of the oral cavity in infants, in order to widen the oral cavity for better breathing and swallowing. However, these practices can be risky and may jeopardize the patients' prognosis or even their lives. Aim: We present the case of a nine-month-old infant, admitted to the pediatric department and subsequently to the intensive care unit due to a peritonsillar abscess following the utilization of an unidentifiable tool to scrape the interior of the oral cavity. Case Report: This is a 9-month-old infant with no particular medical history, admitted for upper airway respiratory distress and a fever persisting for 4 days. On clinical examination, he had a respiratory rate of 70 cycles per minute with an oxygen saturation of 97% and subcostal retractions, along with a heart rate of 175 beats per minute. His white blood cell count was 40,960/mm³, and his C-reactive protein was 250 mg/L. Given the severity of the clinical presentation, the infant was transferred to the intensive care unit, intubated, and mechanically ventilated. A cervico-thoracic CT scan was performed, revealing a ruptured 18 mm left peritonsillar abscess in the oropharynx associated with cellulitis of the retropharyngeal space. The otorhinolaryngological examination revealed an asymmetry involving the left lateral wall of the oropharynx, with the presence of a fistula behind the posterior pillar. Dissection of the collection cavity was performed, allowing the drainage of 2 ml of pus. The culture was negative. The patient received cefotaxime in combination with metronidazole and gentamicin for a duration of 10 days, followed by a switch to amoxicillin-clavulanic acid for 7 days. The patient was extubated after 4 days of treatment, and the clinical and radiological progress was favorable. Conclusions: Traditional medicine remains risky due to the lack of scientific evidence and the potential for injuries and transmission of infectious diseases, especially in children, who constitute a vulnerable population. Therefore, parents should consult healthcare professionals and rely on evidence-based care.
Keywords: children, peritonsillar abscess, traditional medicine, respiratory distress
Procedia PDF Downloads 63
620 Cloud Based Supply Chain Traceability
Authors: Kedar J. Mahadeshwar
Abstract:
Concept introduction: This paper discusses how an innovative cloud-based, analytics-enabled solution could address a major industry challenge that is approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing rapidly today. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification and serialization, phasing in starting Jan 1, 2015 for manufacturers, repackagers, wholesalers and pharmacies/clinics. Similarly, we are seeing pressures building up in Europe, China and many other countries that would require absolute traceability of every drug and device end to end. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can be the leader in coming up with a global solution that brings innovation to this industry. Problem definition and timing: The problem of the counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, concern over the prevalence of counterfeit drugs, which enter through ports such as Dubai, remains significant, as per the UAE pharma and healthcare report, Q1 2015. Distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace serial numbers and lot numbers of every drug and medical device throughout a supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, it is far from having the capability to trace lot and serial numbers beyond the enterprise and to make this information easily available in real time. Solution: The solution described here involves a service provider that allows all subscribers to take advantage of this service. It allows a service provider, regardless of its physical location, to host this cloud-based traceability and analytics solution over millions of distribution transactions that capture the lots of each drug and device. The solution platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent reporting online. Why Dubai? Opportunity exists with the huge investment made in Dubai Healthcare City, along with the technology and infrastructure to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators/companies to run and host such a cloud-based solution and become a global hub of traceability.
Keywords: cloud, pharmaceutical, supply chain, tracking
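To make the idea of end-to-end serialized traceability concrete, here is a minimal sketch of the kind of event record such a platform could store. The field names and event types are illustrative assumptions loosely inspired by serialization practice, not a published schema or the paper's actual design.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class TraceEvent:
    """One custody event for a serialized drug or device unit."""
    gtin: str          # product identifier
    lot: str           # manufacturing lot number
    serial: str        # unit-level serial number
    event_type: str    # e.g. "COMMISSIONED", "SHIPPED", "RECEIVED", "DISPENSED"
    party: str         # manufacturer, wholesaler, pharmacy, clinic...
    timestamp: datetime

def trace(events, serial):
    """Reconstruct the end-to-end chain of custody for one unit."""
    chain = sorted((e for e in events if e.serial == serial),
                   key=lambda e: e.timestamp)
    return [(e.timestamp.isoformat(), e.event_type, e.party) for e in chain]
```

Stored centrally in the cloud, records of this shape are what allow a regulator or subscriber to query the full path of any single unit across otherwise disconnected enterprise systems.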
Procedia PDF Downloads 526
619 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending
Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim
Abstract:
Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements, since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and in stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM), as well as design proposals found in the literature, to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength prediction of CFS beam-columns, and of the effects of warping-restrained boundary conditions on their behavior. To this end, experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS software, accounting for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm) and cross-sectional dimensions, under ten different load eccentricity levels. The results of this parametric study demonstrated that using the DSM led to the most conservative strength predictions for beam-column members, by up to 55%, depending on the element's length and thickness. This can be attributed to the errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations for the calculation of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to an average error of 4% to 22%, depending on the element lengths. This paper highlights the need to provide more reliable design solutions for CFS beam-column elements for practical design purposes.
Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions
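For reference, the linear interaction check referred to above takes, in its simplest schematic form (resistance factors and the second-order moment amplification terms that the design codes include are omitted here for brevity):

\[
\frac{\bar{P}}{P_n} + \frac{\bar{M}}{M_n} \le 1.0
\]

where \(\bar{P}\) and \(\bar{M}\) are the required axial compressive strength and bending moment, and \(P_n\) and \(M_n\) are the nominal axial and flexural capacities of the member.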
Procedia PDF Downloads 104
618 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own. They can also analyze and apply complicated algorithms like localization, detection, and recognition in a real-time application, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform OpenBCI, is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, after each EEG signal has been received in real time and translated from the time to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately represent the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the selected features to predict emotion in the EEG data with the KNN technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. On the cutting edge of AI, EEG-based emotion identification can be employed in applications that can rapidly expand the research and implementation industry's use.
Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
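A minimal sketch of the FFT-band-power-plus-KNN pipeline described above, using NumPy and scikit-learn. The sampling rate, band edges, epoch shapes and the random placeholder data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250  # sampling rate in Hz (assumed; cEEGrid/OpenBCI setups vary)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_features(epoch):
    """Per-channel band power features from a (channels x samples) EEG epoch."""
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / FS)
    psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2   # power spectrum via FFT
    feats = []
    for lo, hi in BANDS.values():
        band = psd[:, (freqs >= lo) & (freqs < hi)]
        feats += [band.mean(axis=1), band.std(axis=1)]  # mean and spread per band
    return np.concatenate(feats)

# Hypothetical labelled data: 40 epochs of 4 channels x 2 s (placeholders)
rng = np.random.default_rng(0)
train_epochs = rng.standard_normal((40, 4, 2 * FS))
train_labels = rng.choice(["high_arousal", "low_arousal"], size=40)

X = np.array([band_features(e) for e in train_epochs])
knn = KNeighborsClassifier(n_neighbors=5).fit(X, train_labels)

# Real-time use: featurize each incoming epoch and predict its class
new_epoch = rng.standard_normal((4, 2 * FS))
print(knn.predict([band_features(new_epoch)]))
```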
Procedia PDF Downloads 105
617 Assessment of Urban Environmental Noise in Urban Habitat: A Spatial Temporal Study
Authors: Neha Pranav Kolhe, Harithapriya Vijaye, Arushi Kamle
Abstract:
Urban regions are the engines of economic growth. As the economy expands, so does the need for peace and quiet, and noise pollution is one of the important social and environmental issues. Health and wellbeing are at risk from environmental noise pollution. Because of urbanisation, population growth, and the consequent rise in the usage of increasingly potent, diverse, and highly mobile sources of noise, it is now more severe and pervasive than ever before, and it will only become worse. Additionally, it will grow as long as there is an increase in air, rail, and highway traffic, which continue to be the main contributors to noise pollution. The current study was conducted in two zones of a class I city of central India (population range: 1 million-4 million). A total of 56 measuring points were chosen to assess noise pollution. The first objective evaluates noise pollution in various urban habitats, categorised as formal and informal settlements, and compares noise pollution within the settlements using t-test analysis. The second objective assesses noise pollution in silent zones (as defined by the Central Pollution Control Board) in a hierarchical way. It also assesses noise pollution in the settlements and compares it with the prescribed permissible limits, using class I sound level equipment. As appropriate indices, the equivalent noise level on the (A) frequency weighting network and the minimum and maximum sound pressure levels were computed. The survey was conducted over a period of 1 week. ArcGIS was used to plot and map the temporal and spatial variability in urban settings. It was discovered that noise levels at most stations, particularly at heavily trafficked crossroads, squares, and subway stations, were significantly different from and higher than the acceptable limits. The study highlights the vulnerable areas that should be considered in city planning, and demands area-level planning while preparing a development plan. It also demands attention to noise pollution from the perspective of residential and silent zones. City planning in urban areas neglects noise pollution assessment at the city level. This contributes to the fact that, irrespective of noise pollution guidelines, the ground reality is far from compliant. The result is incompatible land use at the neighbourhood scale with respect to noise pollution. The study's final results will be useful to policymakers, architects and administrators in developing countries, and will support the governance of noise pollution in urban habitats through efficient decision making and policy formulation to increase the effectiveness of these systems.
Keywords: noise pollution, formal settlements, informal settlements, built environment, silent zone, residential area
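For reference, the A-weighted equivalent continuous sound level used as the main index here is conventionally defined over a measurement period \(T\) as

\[
L_{Aeq,T} = 10 \log_{10}\!\left( \frac{1}{T} \int_{0}^{T} \frac{p_A^2(t)}{p_0^2}\, dt \right)\ \mathrm{dB},
\]

where \(p_A(t)\) is the A-weighted instantaneous sound pressure and \(p_0 = 20\,\mu\mathrm{Pa}\) is the reference sound pressure; class I sound level meters compute this internally over the logging interval.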
Procedia PDF Downloads 118
616 The Impact of Artificial Intelligence on Food Industry
Authors: George Hanna Abdelmelek Henien
Abstract:
Quality and safety issues are common in Ethiopia's food processing industry, and they can negatively impact consumers' health and livelihoods. The country is known for its various agricultural products, which are important to the economy. However, weaknesses in food quality and safety policies and management practices in the food processing industry have led to many health problems, foodborne illnesses and economic losses. This article aims to show the causes and consequences of food safety and quality problems in the food processing industry in Ethiopia and to discuss possible solutions. One of the main reasons for food quality and safety problems in Ethiopia's food processing industry is the lack of adequate regulation and enforcement mechanisms. Inadequate food safety and quality policies have led to inefficiencies in food production. Additionally, the failure to monitor and enforce existing regulations has created an opportunity for unscrupulous companies to engage in harmful practices that endanger the lives of citizens. The impact of poor food quality and safety is significant: loss of life, high medical costs, and loss of consumer confidence in the food processing industry. Foodborne diseases such as diarrhoea, typhoid and cholera are common in Ethiopia, and food quality and safety play an important role in their prevention. Additionally, food recalls due to contamination often cause significant economic losses in the food processing industry. To solve these problems, the Ethiopian government has begun taking measures to improve food quality and safety in the food processing industry. One of the most prominent initiatives is the Ethiopian Food and Drug Administration (EFDA), which was established in 2010 to monitor and control the quality and safety of food and beverage products in the country. The EFDA has implemented many measures to improve food safety, such as carrying out routine inspections, monitoring the import of food products and implementing labeling requirements. Another solution that can improve food quality and safety in the food processing industry in Ethiopia is the implementation of a food safety management system (FSMS). An FSMS is a set of procedures and policies designed to identify, assess and control food safety risks during food processing. Implementing an FSMS can help companies in the food processing industry identify and address potential risks before they harm consumers. Additionally, implementing an FSMS can help companies comply with current safety regulations. Consequently, improving food safety policy and management systems in Ethiopia's food processing industry is important to protect people's health and improve the country's economy. This requires addressing the root causes of food quality and safety problems and implementing practical solutions that can help improve overall food safety and quality in the country, such as establishing regulatory bodies and implementing food management systems.
Keywords: food quality, food safety, policy, management system, food processing industry, food traceability, industry 4.0, internet of things, block chain, best worst method, marcos
Procedia PDF Downloads 62
615 Regional Analysis of Freight Movement by Vehicle Classification
Authors: Katerina Koliou, Scott Parr, Evangelos Kaisar
Abstract:
The surface transportation of freight is particularly vulnerable to storm and hurricane disasters, while at the same time it is the primary transportation mode for delivering medical supplies, fuel, water, and other essential goods. To better plan for commercial vehicles during an evacuation, it is necessary to understand how these vehicles travel during an evacuation and to determine whether this travel differs from that of the general public. While the literature on auto-based evacuations is extensive, the consideration of freight travel is lacking. The goal of this research was to investigate the movement of vehicles by classification, with an emphasis on freight, during two major evacuation events: hurricanes Irma (2017) and Michael (2018). The research used Florida's statewide continuous-count station traffic volumes, which were compared between years to identify locations where traffic was moving differently during the evacuation, and to identify days on which traffic was significantly different between years. The methodology of the research was divided into three phases: data collection and management, spatial analysis, and temporal comparisons. The data collection and management phase obtained continuous-count station data from the state of Florida for both 2017 and 2018 by vehicle classification, and processed the data into a manageable format. The second phase used geographic information systems (GIS) to display where and when traffic varied across the state. The third and final phase was a quantitative investigation into which vehicle classifications were statistically different, and on which dates, statewide. This phase used a two-sample, two-tailed t-test to compare sensor volume by classification on similar days between years. Overall, increases in freight movement between years prevented a more precise paired analysis. This research sought to identify where and when different classes of vehicles were traveling leading up to hurricane landfall and during post-storm reentry. Among the more significant findings, the results showed that commercial-use vehicles may have underutilized rest areas during the evacuation, or perhaps these rest areas were closed. This may suggest that truckers were driving longer distances and possibly longer hours before the hurricanes. Another significant finding was that changes in traffic patterns for commercial-use vehicles occurred earlier and lasted longer than changes for personal-use vehicles, suggesting that commercial vehicles evacuate in a fashion different from personal-use vehicles. This paper may serve as the foundation for future research into commercial travel during evacuations and into additional factors that may influence freight movements during evacuations.
Keywords: evacuation, freight, travel time
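A minimal sketch of the year-over-year comparison described, using pandas and SciPy. The column names are hypothetical placeholders for the continuous-count data, and the unequal-variance (Welch) form of the t-test is a choice made here, not stated in the abstract.

```python
import pandas as pd
from scipy import stats

# df columns (assumed): station, vehicle_class, date, daily_volume, year
def compare_years(df: pd.DataFrame, station: str, vehicle_class: int,
                  alpha: float = 0.05):
    """Two-sample, two-tailed t-test of daily volumes at one sensor, for one
    vehicle class, between the 2017 and 2018 evacuation windows."""
    sub = df[(df.station == station) & (df.vehicle_class == vehicle_class)]
    v2017 = sub.loc[sub.year == 2017, "daily_volume"]
    v2018 = sub.loc[sub.year == 2018, "daily_volume"]
    t, p = stats.ttest_ind(v2017, v2018, equal_var=False)  # Welch's t-test
    return t, p, p < alpha  # True if traffic differed significantly
```

Looping this test over every station and classification flags the locations and dates where evacuation traffic diverged between the two hurricane years.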
Procedia PDF Downloads 68
614 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050
Authors: Ali Hashemifarzad, Jens Zum Hingst
Abstract:
The structure of the world's energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and also worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. Germany aims for complete decarbonization of the energy sector by 2050, according to the federal climate protection plan. One of the stipulations of the Renewable Energy Sources Act 2017 for the expansion of energy production from renewable sources in Germany is that renewables cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050, the energy supply system would have to be almost completely converted to renewable energy. An essential basis for the development of such a sustainable energy supply from 100% renewable energies is predicting the energy requirement in 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for energy efficiency increases and demand reduction are set very ambitiously. To build a basis for comparison, the second scenario provides results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as the predicted population development and economic growth, which have in the past been significant drivers of the increase in energy demand. The potential for energy demand reduction and efficiency increases (on the demand side) was also investigated. In particular, current and future technological developments in the energy consumption sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included. Here, in addition to the traditional electricity sector, heat and fuel-based consumption in different sectors such as households, commerce, industry and transport are taken into account, supporting the idea that, for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps and approx. 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand will require a higher electricity share of almost 1,138 TWh/a (out of a total of 1,682 TWh/a). It has also been estimated that 50% of the electricity generated must be stored to compensate for fluctuations in the daily and annual flows. Due to conversion and storage losses (about 50%), this would mean that the electricity requirement for the very ambitious scenario would increase to 1,227 TWh/a.
Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production
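The 1,227 TWh/a figure can be reproduced under one reading of the stated assumptions (half of the 818 TWh/a electricity demand is served via storage at roughly 50% round-trip efficiency; this decomposition is our illustrative assumption and is not spelled out in the abstract):

\[
E_{\mathrm{gen}} = \underbrace{0.5 \times 818}_{\text{direct}} + \underbrace{\frac{0.5 \times 818}{0.5}}_{\text{via storage}} = 409 + 818 = 1{,}227\ \mathrm{TWh/a}.
\]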
Procedia PDF Downloads 134
613 Patterns of TV Simultaneous Interpreting of Emotive Overtones in Trump’s Victory Speech from English into Arabic
Authors: Hanan Al-Jabri
Abstract:
Simultaneous interpreting is deemed by many scholars to be the most challenging mode of interpreting. The special constraints involved in this task, including time constraints, different linguistic systems, and stress, pose a great challenge to most interpreters. These constraints are likely to be maximised when the interpreting task is done live on TV. The TV interpreter is exposed to a wide variety of audiences with different backgrounds and needs, and is mostly asked to interpret high-profile tasks, which raises his/her levels of stress and further complicates the task. Under these constraints, which require fast and efficient performance, the TV interpreters of four TV channels were asked to render Trump's victory speech into Arabic. However, they also had to deal with the burden of rendering the English emotive overtones employed by the speaker into a wholly different linguistic system. The current study aims at investigating the way TV interpreters, who worked in the simultaneous mode, handled this task; it aims at exploring and evaluating the TV interpreters' linguistic choices and whether the original emotive effect was maintained, upgraded, downgraded or abandoned in their renditions. It also aims at exploring the possible difficulties and challenges that emerged during this process and might have influenced the interpreters' linguistic choices. To achieve its aims, the study analysed Trump's victory speech delivered on November 6, 2016, along with four Arabic simultaneous interpretations produced by four TV channels: Al-Jazeera, RT, CBC News, and France 24. The analysis of the study relied on two frameworks: a macro and a micro framework. The former presents an overview of the wider context of the English speech as well as an overview of the speaker and his political background, to help understand the linguistic choices he made in the speech; the latter investigates the linguistic tools which were employed by the speaker to stir people's emotions. These tools were investigated based on Shamaa's (1978) classification of emotive meaning according to linguistic level: the phonological, morphological, syntactic, and semantic and lexical levels. The micro framework also investigates the patterns of rendition which were detected in the Arabic deliveries. The results of the study identified different rendition patterns in the Arabic deliveries, including parallel rendition, approximation, condensation, elaboration, transformation, expansion, generalisation, explicitation, paraphrase, and omission. The emerging patterns, as suggested by the analysis, were influenced by factors such as the speedy and continuous delivery of some stretches and highly dense segments. The study aims to contribute to a better understanding of TV simultaneous interpreting between English and Arabic, as well as of the practices of TV interpreters when rendering emotiveness, especially as little is known about interpreting practices in the field of TV, particularly between Arabic and English.
Keywords: emotive overtones, interpreting strategies, political speeches, TV interpreting
Procedia PDF Downloads 159
612 Alternate Approaches to Quality Measurement: An Exploratory Study in Differentiation of “Quality” Characteristics in Services and Supports
Authors: Caitlin Bailey, Marian Frattarola Saulino, Beth Steinberg
Abstract:
Today, virtually all programs offered to people with intellectual and developmental disabilities tout themselves as person-centered, community-based and inclusive, yet there is a vast range in the type and quality of services that use these similar descriptors. The issue is exacerbated by the field's measurement practices around quality, inclusion, independent living, choice and person-centered outcomes. For instance, community inclusion for people with disabilities is often measured by the number of times a person steps into his or her community. These measurement approaches set standards for quality so low that an agency supporting group home residents to go bowling every week can report the same outcomes as an agency that supports one person to join a book club that includes people based on their literary interests rather than disability labels. Ultimately, the lack of delineation in measurement contributes to confusion between face-value 'quality' and truly high-quality services and supports for many people with disabilities and their families. This exploratory study adopts alternative approaches to quality measurement, including co-production methods and a systems theoretical framework, in order to identify the factors that 1) lead to high-quality supports and 2) differentiate high-quality services. The project researchers partnered with community practitioners who are all committed to providing quality services and supports but vary in the degree to which they are actually able to provide them. The study includes two parts: first, an online survey distributed to more than 500 agencies that have demonstrated a commitment to providing high-quality services; and second, four in-depth case studies with agencies in three states of the United States and in Israel, providing a variety of supports to children and adults with disabilities. Results from both the survey and the in-depth case studies were thematically analyzed and coded. Results show that there are specific factors that differentiate service quality; however, meaningful quality measurement practices also require that researchers explore the contextual factors that contribute to quality. These include not only direct services and interactions, but also characteristics of service users and their environments, as well as of the organizations providing services, such as management and funding structures, culture and leadership. Findings from this study challenge researchers, policy makers and practitioners to examine existing quality service standards and measurements and to adopt alternative methodologies and solutions to differentiate and scale up evidence-based quality practices, so that all people with disabilities have access to services that support them to live, work, and enjoy where and with whom they choose.
Keywords: co-production, inclusion, independent living, quality measurement, quality supports
Procedia PDF Downloads 399611 Addressing Supply Chain Data Risk with Data Security Assurance
Authors: Anna Fowler
Abstract:
When considering assets that may need protection, the mind begins to contemplate homes, cars, and investment funds. In most cases, the protection of those assets can be covered through security systems and insurance. Data is not the first thought that comes to mind as needing protection, even though data is at the core of most supply chain operations. It includes trade secrets, the management of personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is considered a critical element of success for supply chains and should be one of the most critical areas to protect. In the supply chain industry, there are two major misconceptions about protecting data: (i) “We do not manage or store confidential/personally identifiable information (PII).” (ii) “Reliance on third-party vendor security.” These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception, “We do not manage or store confidential/personally identifiable information (PII)”, is dangerous, as it implies the organization does not have proper data literacy: enterprise employees zero in on the aspect of PII while neglecting trade secret theft and the complete breakdown of information sharing. The second misconception forges an ideology that reliance on third-party vendor security will absolve the company of security risk. On the contrary, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that a holistic approach should be taken when protecting data, one that amounts to more than purchasing a Data Loss Prevention (DLP) tool; a tool is not a solution. To protect supply chain data, start by providing data literacy training to all employees and negotiating the security component of contracts with vendors to highlight data literacy training for individuals/teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification. Ensure processes effectively incorporate data security principles. Evaluate and select DLP solutions to address specific concerns/use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third/fourth-party risk.Keywords: security by design, data security architecture, cybersecurity framework, data security assurance
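To make the data visibility idea concrete, here is a minimal, hypothetical sketch (not part of the DSA framework itself) of scanning free-text supply chain records for obvious PII patterns; the pattern set and the sample records are invented for illustration:

```python
import re

# Illustrative only: a toy scan for obvious PII patterns in free-text records.
# The patterns and records below are hypothetical, not part of the DSA
# framework described in the abstract.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone_like": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_record(record: str) -> list[str]:
    """Return the names of PII patterns detected in a record."""
    return [name for name, rx in PII_PATTERNS.items() if rx.search(record)]

if __name__ == "__main__":
    records = [
        "PO #4411 shipped via carrier X",                   # no PII expected
        "Contact driver at 555-867-5309, jane@example.com", # flags phone + email
    ]
    for r in records:
        hits = scan_record(r)
        print(f"{'FLAG' if hits else 'ok  '} {hits} :: {r}")
```

A scan of this kind only supports the visibility step; the framework's remaining elements (literacy, governance, contracts) are organizational rather than technical.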
Procedia PDF Downloads 88610 Effect of Antimony on Microorganisms in Aerobic and Anaerobic Environments
Authors: Barrera C. Monserrat, Sierra-Alvarez Reyes, Pat-Espadas Aurora, Moreno Andrade Ivan
Abstract:
Antimony is a toxic and carcinogenic metalloid considered a pollutant of priority interest by the United States Environmental Protection Agency. It is present in the environment in two oxidation states: antimonite (Sb(III)) and antimonate (Sb(V)). Sb(III) is toxic to several aquatic organisms, but the potential inhibitory effect of Sb species on microorganisms has not been extensively evaluated. The fate and possible toxic impact of antimony on aerobic and anaerobic wastewater treatment systems are unknown. For this reason, the objective of this study was to evaluate the microbial toxicity of Sb(V) and Sb(III) in aerobic and anaerobic environments. Sb(V) and Sb(III) were used as potassium hexahydroxoantimonate(V) and potassium antimony tartrate, respectively (Sigma-Aldrich). The toxic effect of both Sb species in anaerobic environments was evaluated through assays of methanogenic activity and of the inhibition of hydrogen production by microorganisms from a wastewater treatment bioreactor. For the methanogenic activity, batch experiments were carried out in 160 mL serological bottles; each bottle contained basal mineral medium (100 mL), inoculum (1.5 g of VSS/L), acetate (2.56 g/L) as substrate, and variable concentrations of Sb(V) or Sb(III). Duplicate bioassays were incubated at 30 ± 2°C on an orbital shaker (105 rpm) in the dark. Methane production was monitored by gas chromatography. The hydrogen production inhibition tests were carried out in glass bottles with a working volume of 0.36 L, with glucose (50 g/L) as substrate, pretreated inoculum (5 g VSS/L), mineral medium, and varying concentrations of the two antimony species. The bottles were kept under stirring at 35°C in an AMPTS II device that recorded hydrogen production. The toxicity of Sb to aerobic microorganisms (from a wastewater activated sludge treatment plant) was tested with a standardized Microtox toxicity test and respirometry. Results showed that Sb(III) is more toxic than Sb(V) for methanogenic microorganisms: Sb(V) caused a 50% decrease in methanogenic activity at 250 mg/L, whereas exposure to Sb(III) resulted in 50% inhibition at a concentration of only 11 mg/L and almost complete inhibition (95%) at 25 mg/L. For hydrogen-producing microorganisms, Sb(III) and Sb(V) caused 50% inhibition of production at 12.6 mg/L and 87.7 mg/L, respectively. The results for aerobic environments showed that 500 mg/L of Sb(V) does not inhibit Aliivibrio fischeri (Microtox) activity or the specific oxygen uptake rate of activated sludge, whereas Sb(III) caused a 50% loss of microbial respiration at concentrations below 40 mg/L. The results obtained indicate that the toxicity of antimony depends on the speciation of this metalloid and that Sb(III) has a significantly higher inhibitory potential than Sb(V). It was also shown that anaerobic microorganisms can reduce Sb(V) to Sb(III). Acknowledgments: This work was funded in part by grants from the UA-CONACYT Binational Consortium for the Regional Scientific Development and Innovation (CAZMEX), the National Institutes of Health (NIH ES-04940), and PAPIIT-DGAPA-UNAM (IN105220).Keywords: aerobic inhibition, antimony reduction, hydrogen inhibition, methanogenic toxicity
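As a worked illustration of these dose-response figures, the sketch below evaluates a Hill-type inhibition curve at the 50%-inhibition concentrations reported for methanogenic activity (250 mg/L for Sb(V), 11 mg/L for Sb(III)); the unit Hill slope is an assumption for illustration, not a value fitted by the study:

```python
# Hill-type inhibition model: residual activity (%) at toxicant concentration
# c, given the concentration causing 50% inhibition (ic50). The ic50 values
# below are the reported methanogenic values; the Hill slope h = 1 is an
# illustrative assumption, not a fitted parameter from the study.
def residual_activity(c: float, ic50: float, h: float = 1.0) -> float:
    return 100.0 / (1.0 + (c / ic50) ** h)

for label, ic50 in [("Sb(V)", 250.0), ("Sb(III)", 11.0)]:
    for c in (5.0, 25.0, 100.0, 250.0):
        print(f"{label:8s} at {c:5.0f} mg/L -> {residual_activity(c, ic50):5.1f}% activity")
```

Note that the reported 95% inhibition at 25 mg/L Sb(III) is steeper than a unit-slope curve predicts (about 69% inhibition); solving 100/(1 + (25/11)^h) = 5 gives h ≈ 3.6, hinting at a sharp dose-response for Sb(III).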
Procedia PDF Downloads 166609 The Impact of Housing Design on the Health and Well-Being of Populations: A Case-Study of Middle-Class Families in the Metropolitan Region of Port-Au-Prince, Haiti
Authors: A. L. Verret, N. Prince, Y. Jerome, A. Bras
Abstract:
The effects of housing design on the health and well-being of populations are quite intangible. In fact, healthy housing parameters are generally difficult to establish scientifically, and the direction of the cause-and-effect relationship between housing and health variables is often unclear. However, the lack of clear and definite measurements does not entail the absence of a relationship between housing, health, and well-being. Research has thus been conducted, though it has mostly addressed the physical rather than the psychological or social well-being of populations, given the difficulty of establishing cause-effect relationships due to the subjectivity of psychological symptoms and the challenge of determining the influence of other factors. That said, a strong relationship has been demonstrated between light and physiology. The nervous and endocrine systems, amongst others, are affected by the different wavelengths of natural light within a building. Daylight in the workplace is indeed associated with decreased absenteeism, errors, product defects, fatigue and eyestrain, and with increased productivity and a positive attitude. Similar associations can also be made for residential housing: lower levels of sunlight within the home have been associated with impaired cognition in depressed participants of a cross-sectional case study. Moreover, minimum space (area and volume) has been linked to healthy housing and quality of life, resulting in norms and regulations for such parameters in home construction. As a matter of fact, it is estimated that people spend two-thirds of their lives within the home and its immediate environment. Therefore, it is possible to deduce that the health and well-being of occupants are potentially at risk in an unhealthy housing situation. While the impact of architecture on health and well-being is acknowledged and considered somewhat crucial in various countries of the north and the south, this issue is barely raised in Haiti. In fact, little importance is given to architecture there, for many reasons (lack of information, lack of means, societal reflex, poverty…). However, the middle class is known for its residential strategies and trajectories in search of better-quality homes and environments. For this reason, it is pertinent to use this group and its strategies and trajectories to isolate the impact of housing design on overall health and well-being. This research aims to analyze the impact of housing architecture on the health and well-being of middle-class families in the metropolitan region of Port-au-Prince. It is a case study that uses semi-structured interviews and observations as research methods. Although at an early stage, this research anticipates that homes affect their occupants both psychologically and physiologically, and that, consequently, public policies and the population should take architectural design into account in the planning and construction of housing and, furthermore, of cities.Keywords: architectural design, health and well-being, middle-class housing, Port-au-Prince, Haiti
Procedia PDF Downloads 139608 Atypical Retinoid ST1926 Nanoparticle Formulation Development and Therapeutic Potential in Colorectal Cancer
Authors: Sara Assi, Berthe Hayar, Claudio Pisano, Nadine Darwiche, Walid Saad
Abstract:
Nanomedicine, the application of nanotechnology to medicine, is an emerging discipline that has gained significant attention in recent years. Current breakthroughs in nanomedicine have paved the way to develop effective drug delivery systems that can be used to target cancer. The use of nanotechnology provides effective drug delivery and enhanced stability, bioavailability, and permeability, thereby minimizing drug dosage and toxicity. As such, nanoparticle (NP) formulations have been applied in drug delivery in various cancer models and have been shown to improve the ability of drugs to reach specific target sites in a controlled manner. Cancer is one of the major causes of death worldwide; in particular, colorectal cancer (CRC) is the third most common type of cancer diagnosed amongst men and women and the second leading cause of cancer-related deaths, highlighting the need for novel therapies. Retinoids, consisting of natural and synthetic derivatives, are a class of chemical compounds that have shown promise in preclinical and clinical cancer settings. However, retinoids are limited by their toxicity and by resistance to treatment. To overcome this resistance, various synthetic retinoids have been developed, including the adamantyl retinoid ST1926, a potent anti-cancer agent. However, due to its limited bioavailability, the development of ST1926 was restricted in phase I clinical trials. We have previously investigated the preclinical efficacy of ST1926 in CRC models: ST1926 displayed potent inhibitory and apoptotic effects in CRC cell lines by inducing early DNA damage and apoptosis, and it significantly reduced the tumor doubling time and tumor burden in a xenograft CRC model. We therefore developed ST1926-NPs and assessed their efficacy in CRC models. ST1926-NPs were produced using Flash NanoPrecipitation with the amphiphilic diblock copolymer polystyrene-b-ethylene oxide and cholesterol as a co-stabilizer. ST1926 was formulated into NPs with a drug-to-polymer mass ratio of 1:2, providing a formulation stable for one week. The resulting ST1926-NP diameter was 100 nm, with a polydispersity index of 0.245. Using the MTT cell viability assay, ST1926-NPs exhibited anti-growth activity as potent as that of naked ST1926 in HCT116 cells, at pharmacologically achievable concentrations. Future studies will examine the anti-tumor activities and mechanism of action of ST1926-NPs in a xenograft mouse model and detect the compound and its glucuroconjugated form in the plasma of mice. Ultimately, our studies will support the use of ST1926-NP formulations in enhancing the stability and bioavailability of ST1926 in CRC.Keywords: nanoparticles, drug delivery, colorectal cancer, retinoids
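For readers unfamiliar with the MTT readout referenced above, here is a minimal sketch of the standard background-corrected viability calculation; the absorbance values are hypothetical placeholders, not data from the study:

```python
# A minimal sketch of the standard MTT viability calculation mentioned in the
# abstract. Absorbance values below are hypothetical placeholders, not data
# from the study.
def percent_viability(a_treated: float, a_control: float, a_blank: float) -> float:
    """Viability relative to untreated control, background-corrected."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Example: one treated well vs. untreated control, with a medium-only blank.
print(round(percent_viability(a_treated=0.42, a_control=0.95, a_blank=0.08), 1))
# -> 39.1 (i.e., ~61% growth inhibition at this hypothetical dose)
```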
Procedia PDF Downloads 100607 Possibility of Membrane Filtration to Treatment of Effluent from Digestate
Authors: Marcin Debowski, Marcin Zielinski, Magdalena Zielinska, Paulina Rusanowska
Abstract:
The problem of digestate management is one of the most important factors influencing the development and operation of a biogas plant. Turbidity and bacterial contamination negatively affect the growth of algae, which can limit the use of the effluent in the production of algae biomass on a large scale. These problems can be overcome by cultivating algae species resistant to environmental factors, such as Chlorella sp. or Scenedesmus sp., or by reducing the load of organic compounds to prevent bacterial contamination. The effluent requires dilution and/or purification. One method of effluent treatment is the use of membrane technologies such as microfiltration (MF), ultrafiltration (UF), nanofiltration (NF) and reverse osmosis (RO), which differ in membrane pore size and cut-off point. Membranes are a physical barrier to solids and particles larger than the size of the pores. MF membranes have the largest pores and are used to remove turbidity, suspensions, bacteria and some viruses. UF membranes also remove color, odor and organic compounds of high molecular weight. In the treatment of wastewater or other waste streams, MF and UF can provide a sufficient degree of purification. NF membranes are used to remove natural organic matter from waters, water disinfection by-products and sulfates. RO membranes are applied to remove monovalent ions such as Na⁺ or K⁺. The UF permeate of the effluent was used as a medium for the cultivation of two microalgae: Chlorella sp. and Phaeodactylum tricornutum. Growth rates of Chlorella sp. and P. tricornutum were similar: 0.216 d⁻¹ and 0.200 d⁻¹ (Chlorella sp.) and 0.128 d⁻¹ and 0.126 d⁻¹ (P. tricornutum) on synthetic medium and UF permeate, respectively. The final biomass composition was also similar, regardless of the medium. Removal of nitrogen was 92% and 71% by Chlorella sp. and P. tricornutum, respectively. The fermentation effluents after UF and dilution were also used for the cultivation of the alga Scenedesmus sp., which is resistant to environmental conditions. The authors recommend the development of a biorefinery based on the production of algae for biogas production. There are examples of using a multi-stage membrane system to purify the liquid fraction of digestate: after initial UF, RO is used to remove ammonium nitrogen and COD. To obtain a permeate with an ammonium nitrogen concentration low enough to allow discharge into the environment, it was necessary to apply three-stage RO. The composition of the permeate after two-stage RO was: COD 50–60 mg/dm³, dry solids 0 mg/dm³, ammonium nitrogen 300–320 mg/dm³, total nitrogen 320–340 mg/dm³, total phosphorus 53 mg/dm³. The composition of the permeate after three-stage RO, however, was: COD < 5 mg/dm³, dry solids 0 mg/dm³, ammonium nitrogen 0 mg/dm³, total nitrogen 3.5 mg/dm³, total phosphorus < 0.05 mg/dm³. The last RO stage might be replaced by an ion exchange process. The negative aspect of membrane filtration systems is the fact that the permeate is only about 50% of the introduced volume; the remainder is retentate, the management of which might involve recirculation to the biogas plant.Keywords: digestate, membrane filtration, microalgae cultivation, Chlorella sp.
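A small worked example based on the permeate compositions reported above: treating the two-stage RO permeate as the feed to the third stage, the observed rejection of each parameter is R = 1 − Cp/Cf. Midpoints stand in for the reported ranges, and "<" detection limits are taken at the limit value, so the figures are conservative lower bounds:

```python
# Worked example using the permeate compositions reported in the abstract.
# The two-stage RO permeate is treated as the feed to the third stage;
# observed rejection of each parameter is R = 1 - C_permeate / C_feed.
feed_stage3 = {"COD": 55.0, "ammonium N": 310.0, "total N": 330.0, "total P": 53.0}
permeate_stage3 = {"COD": 5.0, "ammonium N": 0.0, "total N": 3.5, "total P": 0.05}

for param, cf in feed_stage3.items():
    cp = permeate_stage3[param]
    rejection = 1.0 - cp / cf
    print(f"{param:>10}: R >= {rejection:.3f}")  # lower bound where "<" limits apply
```

The third stage thus rejects over 90% of the remaining COD and essentially all of the remaining nitrogen and phosphorus, which is why it (or an ion exchange step) is needed before discharge.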
Procedia PDF Downloads 352606 Phase Synchronization of Skin Blood Flow Oscillations under Deep Controlled Breathing in Human
Authors: Arina V. Tankanag, Gennady V. Krasnikov, Nikolai K. Chemeris
Abstract:
The development of respiration-dependent oscillations in the peripheral blood flow may occur by at least two mechanisms. The first is related to the change of venous pressure due to the mechanical activity of the lungs; this phenomenon is known as the ‘respiratory pump’ and is one of the mechanisms of venous return of blood from the peripheral vessels to the heart. The second is related to vasomotor reflexes controlled by the respiratory modulation of the activity of centers of the vegetative nervous system. High phase synchronization of respiration-dependent blood flow oscillations in the skin of the left and right forearms of healthy volunteers at rest has been shown previously. The aim of this work was to study the effect of deep controlled breathing on the phase synchronization of skin blood flow oscillations. 29 normotensive, non-smoking young women (18-25 years old) of normal constitution, without diagnosed pathologies of the skin, cardiovascular or respiratory systems, participated in the study. Six recording sessions were carried out for each participant: the first at the spontaneous breathing rate, and the next five in controlled breathing regimes with fixed breathing depth and different enforced breathing rates. The controlled breathing rates were 0.25, 0.16, 0.10, 0.07 and 0.05 Hz. The breathing depth amounted to 40% of the maximal chest excursion. Blood perfusion was registered by a LAKK-02 laser flowmeter (LAZMA, Russia) with two identical channels (wavelength, 0.63 µm; emission power, 0.5 mW). The first probe was fastened to the palmar surface of the distal phalanx of the left forefinger; the second probe was attached to the external surface of the left forearm near the wrist joint. These skin zones were chosen as zones with different dominant mechanisms of vascular tonus regulation. The degree of phase synchronization of the registered signals was estimated from the value of the wavelet phase coherence. The duration of each recording was 5 min, and the sampling frequency of the signals was 16 Hz. Increased synchronization of the respiration-dependent skin blood flow oscillations was observed for all controlled breathing regimes. Since the formation of respiration-dependent oscillations in the peripheral blood flow is mainly caused by the respiratory modulation of systemic blood pressure, the observed effects most likely depend on the breathing depth. It should be noted that with spontaneous breathing, the depth does not exceed 15% of the maximal chest excursion, while in the present study the breathing depth was 40%. It has therefore been suggested that the observed significant increase in the phase synchronization of blood flow oscillations under our conditions is primarily due to the increased breathing depth, which enhances both potential mechanisms of respiratory oscillation generation: venous pressure and sympathetic modulation of vascular tone.Keywords: deep controlled breathing, peripheral blood flow oscillations, phase synchronization, wavelet phase coherence
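The study quantifies synchronization with wavelet phase coherence; as a simpler, loosely analogous sketch, the snippet below band-passes two perfusion signals around one respiratory frequency and computes a Hilbert phase-locking value |⟨exp(iΔφ)⟩|. The synthetic signals and the 0.1 Hz test band are illustrative, not the study's data:

```python
# A simplified analogue of wavelet phase coherence: band-pass two signals
# around one respiratory frequency, extract Hilbert phases, and compute the
# phase-locking value |<exp(i*dphi)>| (1 = perfect locking, ~0 = none).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16.0  # sampling frequency reported in the study, Hz

def phase_locking(x, y, f_lo, f_hi, fs=FS):
    sos = butter(3, [f_lo, f_hi], btype="band", fs=fs, output="sos")
    phi_x = np.angle(hilbert(sosfiltfilt(sos, x)))
    phi_y = np.angle(hilbert(sosfiltfilt(sos, y)))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Synthetic demo: two noisy signals sharing a 0.1 Hz respiratory component.
t = np.arange(0, 300, 1 / FS)  # a 5-min recording, as in the study
common_phase = 2 * np.pi * 0.1 * t
x = np.sin(common_phase) + 0.5 * np.random.randn(t.size)
y = np.sin(common_phase + 0.4) + 0.5 * np.random.randn(t.size)
print(f"PLV near 0.1 Hz: {phase_locking(x, y, 0.07, 0.13):.2f}")  # close to 1
```

Wavelet phase coherence extends this idea across a continuum of frequencies at once, which is why it suits the multi-rate breathing protocol used here.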
Procedia PDF Downloads 213605 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model
Authors: S. I. Mukhin, S. Seidov, A. Mukherjee
Abstract:
The Dicke model is a key tool for the description of correlated states of quantum atomic systems that are excited by resonant photon absorption and subsequently emit spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used to describe the dynamics of a Josephson junction (JJ) array in a resonant cavity under an applied current. In this work, we have investigated a generalized model, described by the DH with a frustrating interaction term: an infinitely coordinated (all-to-all) interaction between all the spin-1/2 degrees of freedom in the system. We consider an array of N superconducting islands, each divided into two sub-islands by a Josephson junction and operated in the charge qubit / Cooper pair box (CPB) regime, placed inside a resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved, such as the condensed electric field and the dipole moment; understanding how these quantities behave in time is needed to define the quantum phase of the system. The Dicke model without the frustrating term is first solved to find the dynamical solutions for the physical observables in analytic form. Using the Heisenberg equations of motion for the operators and applying a newly developed rotating Holstein-Primakoff (HP) transformation to the DH, we arrive at four coupled nonlinear dynamical differential equations for the momentum and spin-component operators. The system can be solved analytically using two time scales, and the analytical solutions are expressed in terms of Jacobi elliptic functions for the metastable ‘bound luminosity’ dynamic state, with periodic coherent beating of the dipoles connecting the two doubly degenerate dipolar-ordered phases discovered previously. We have then extended the analysis to the DH with the frustrating interaction term. Inclusion of the frustrating term complicates the system of differential equations, which becomes difficult to solve analytically; we have therefore solved the semi-classical dynamic equations using perturbation theory for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found when this symmetry is broken. Introducing a spontaneous symmetry-breaking term into the DH, we have derived solutions that show the occurrence of a finite condensate, indicating a quantum phase transition. Our results agree with existing results in this field.Keywords: Dicke Model, nonlinear dynamics, perturbation theory, superconductivity
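For orientation, the structure described above can be written schematically as the standard single-mode Dicke Hamiltonian plus an all-to-all spin term. The abstract does not give the exact form or normalization of the frustrating term, so the J-term below is an assumed schematic, with J_z and J_x the collective spin operators of the N two-level (CPB) degrees of freedom:

```latex
% Standard single-mode Dicke Hamiltonian plus a schematic
% infinitely coordinated (all-to-all) frustrating term of strength J.
H \;=\; \hbar\omega\, a^{\dagger}a
     \;+\; \hbar\omega_{0}\, J_{z}
     \;+\; \frac{\lambda}{\sqrt{N}}\,\bigl(a^{\dagger}+a\bigr)\, J_{x}
     \;+\; \frac{J}{N}\, J_{x}^{2}
```

Here the J_x² term encodes the infinite-range spin-spin interaction: every pair of spins couples with the same strength, scaled by 1/N so the energy stays extensive.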
Procedia PDF Downloads 134604 Fluctuations in Radical Approaches to State Ownership of the Means of Production Over the Twentieth Century
Authors: Tom Turner
Abstract:
The recent financial crisis of 2008 and the growing inequality in developed industrial societies would appear to present significant challenges to capitalism and the free market. Yet there have been few substantial mainstream political or economic challenges to the dominant capitalist and market paradigm to date. There is no dearth of critical and theoretical (academic) analyses of the prevailing system's failures. Yet despite the growing inequality in developed industrial societies and the financial crisis of 2008, few commentators, to our knowledge, have advocated the comprehensive socialization or state ownership of the means of production, a core principle of radical Marxism in the 19th and early 20th century. Undoubtedly, the experience of the Soviet Union and its satellite countries in the 20th century has cast a dark shadow over the notion of centrally controlled economies and state ownership of the means of production. In this paper, we explore the history of the doctrine advocating socialization or state ownership of the means of production, a doctrine central to Marxism and to socialism generally, and one that provoked intense and often acrimonious debate, especially within left-wing parties, throughout the 20th century. The debate within the political economy tradition has historically tended to divide into a radical and a revisionist approach to changing or reforming capitalism. The radical perspective views the conflict of interest between capital and labor as a persistent and insoluble feature of capitalist society and advocates the public or state ownership of the means of production. Alternatively, the revisionist perspective focuses on issues of distribution rather than production and emphasizes the possibility of compromise between capital and labor in capitalist societies. Over the 20th century, the radical perspective has faded, and even the social democratic revisionist tradition has declined in recent years. We conclude with the major challenges that confront both the radical and revisionist perspectives in the development of viable policy agendas in mature, developed democratic societies. Additionally, we consider whether state ownership of the means of production still has relevance in the 21st century and to what extent it is off the agenda of the political mainstream in developed industrial societies. A central argument of the paper is that state ownership of the means of production is unlikely to feature as either a practical or a theoretical solution to the problems of capitalism after the financial crisis among mainstream parties of the left. Although the focus here is solely on the shifting views of the radical and revisionist socialist perspectives in the western European tradition, the analysis has relevance for the wider socialist movement.Keywords: state ownership, ownership of the means of production, radicals, revisionists
Procedia PDF Downloads 119603 Energy Atlas: Geographic Information Systems-Based Energy Analysis and Planning Tool
Authors: Katarina Pogacnik, Ursa Zakrajsek, Nejc Sirk, Ziga Lampret
Abstract:
Due to an increase in living standards, global population growth, and a trend of urbanization, municipalities and regions are faced with ever-rising energy demand. Cities around the world thus face the challenge of modifying the energy supply chain in order to reduce energy consumption and CO₂ emissions. The aim of our work is the development of a computational-analytical platform, named Energy Atlas, for dynamic decision-making support and for the determination of economic and technical indicators of energy efficiency in a smart city. Similar products in this field take a narrower approach, whereas, in order to achieve its aim, this platform encompasses a wider spectrum of information beneficial and important for energy planning at a local or regional scale. GIS-based interactive maps provide an extensive database on the potential, use and supply of energy and renewable energy sources, along with climate, transport and spatial data for the selected municipality. Beneficiaries of Energy Atlas are local communities, companies, investors, contractors as well as residents. The Energy Atlas platform consists of three modules: E-Planning, E-Indicators and E-Cooperation. The E-Planning module is a comprehensive data service that supports optimal decision-making and offers a set of solutions, together with the feasibility of measures and their effects, in the area of efficient use of energy and renewable energy sources. The E-Indicators module identifies, collects and develops optimal data and key performance indicators and provides an analytical application service for dynamic support in managing a smart city with regard to energy use and a sustainable environment. To support cooperation and the direct involvement of the citizens of the smart city, the E-Cooperation module is developed with the purpose of integrating the interdisciplinary and sociological aspects of energy end-users. The interaction of all the above-described modules contributes to regional development because it enables a precise assessment of the current situation, strategic planning, detection of potential future difficulties and also the possibility of public involvement in decision-making. The implementation of the technology in the Slovenian municipalities of Ljubljana, Piran and Novo mesto provides evidence to suggest that the set goals are being achieved to a great extent. Such a thorough urban energy planning tool is viewed as an important piece of the puzzle towards achieving a low-carbon society, a circular economy and, therefore, a sustainable society.Keywords: circular economy, energy atlas, energy management, energy planning, low-carbon society
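As a purely hypothetical sketch of the kind of key performance indicator the E-Indicators module might expose (the abstract does not specify the platform's internals or data model; the names and numbers below are invented):

```python
# Hypothetical illustration of an energy KPI computation: annual final
# energy use per capita by district. The data structure and values are
# invented; they are not taken from the Energy Atlas platform.
from dataclasses import dataclass

@dataclass
class District:
    name: str
    population: int
    energy_use_mwh: float  # annual final energy use

districts = [
    District("Center", 24_000, 186_000.0),
    District("Harbor", 9_500, 91_200.0),
]

for d in districts:
    kpi = d.energy_use_mwh / d.population
    print(f"{d.name:>8}: {kpi:.2f} MWh per capita per year")
```

In a GIS setting, each such record would additionally carry a geometry, so the KPI could be rendered directly on an interactive map layer.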
Procedia PDF Downloads 305602 Knowledge Management Processes as a Driver of Knowledge-Worker Performance in Public Health Sector of Pakistan
Authors: Shahid Razzaq
Abstract:
Governments around the globe have started taking knowledge management dynamics into consideration, with or without conscious realization, when formulating, implementing, and evaluating strategies for public sector organizations and public policy development. The Health Department of Punjab province in Pakistan is striving to deliver quality healthcare services to the community through an efficient and effective service delivery system. Despite this effort, some employee performance issues persist and challenge the government. To overcome these issues, the department took several steps, including HR strategies, the use of technologies, and a focus on 'hard' issues. This study, consequently, attempts to highlight the importance of a 'soft' issue, knowledge management in its true essence, in tackling these performance issues. Knowledge management in the public sector is quite a neglected area within knowledge management, itself a growing multidisciplinary research discipline. The knowledge-based view of the firm asserts that knowledge is the most strategically significant resource and can yield competitive advantage for an organization over competing organizations. In the context of our study, this means that to improve employee performance, organizations have to broaden their heterogeneous knowledge bases. The study uses a cross-sectional, quantitative research design. The data were collected from the knowledge workers of the Health Department of Punjab, the biggest province of Pakistan; a total sample size of 341 was achieved. SmartPLS 3 (version 3.2.6) was used for analyzing the data. The analysis revealed that knowledge management processes have a strong impact on knowledge-worker performance, and all hypotheses were supported by the results. It can therefore be concluded that, to increase employee performance, knowledge management activities should be implemented. The Health Department of Punjab province introduced knowledge management infrastructure and systems to make knowledge effectively available to service staff; this infrastructure strengthened knowledge management processes in remote hospitals, basic health units and care centers, resulting in greater service provision to the public. This study has both theoretical and practical significance. In terms of theoretical contribution, it establishes the relationship between knowledge management and performance in this setting for the first time. As for the practical contribution, it gives public sector organizations and governments insight into the role of knowledge management in employee performance; public policymakers are therefore strongly advised to implement knowledge management activities to enhance the performance of knowledge workers. The current research validated the substantial role of knowledge management in shaping employee attitudes and behavioral intentions. To the best of the authors' knowledge, the study's originality lies in its contribution to understanding the impact of knowledge management on employee performance.Keywords: employee performance, knowledge management, public sector, soft issues
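SmartPLS implements PLS-SEM, for which there is no scikit-learn equivalent; as a deliberately simplified analogue (a swapped-in technique, not the authors' method), the sketch below fits a plain PLS regression linking a block of knowledge management indicators to a block of performance indicators, on randomly generated placeholder data:

```python
# Simplified analogue only: PLS regression, not the PLS-SEM performed in
# SmartPLS. Indicator blocks and data below are random placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 341  # sample size reported in the abstract
km_indicators = rng.normal(size=(n, 4))             # e.g., KM process items
latent = km_indicators @ np.array([0.5, 0.4, 0.3, 0.2])
performance = np.column_stack([latent + rng.normal(scale=0.5, size=n)
                               for _ in range(3)])  # e.g., performance items

pls = PLSRegression(n_components=1).fit(km_indicators, performance)
print("R^2 of performance block:", round(pls.score(km_indicators, performance), 2))
```

PLS-SEM proper additionally estimates measurement models (loadings, reliability) and structural paths between latent variables, which is what SmartPLS reports.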
Procedia PDF Downloads 141601 Recent Advances in the Valorization of Goat Milk: Nutritional Properties and Production Sustainability
Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana
Abstract:
Goat dairy products are gaining popularity worldwide. In developing countries, but also in many marginal regions of the Mediterranean area, goats represent a great part of the economy and ensure food security. These small ruminants are able to efficiently convert poor weedy plants and small trees into traditional products of high nutritional quality, showing great resilience to different climatic and environmental conditions. In developed countries, goat milk is appreciated for the presence of health-promoting bioactive compounds such as conjugated linoleic acids, oligosaccharides, sphingolipids and polyamines. This paper focuses on the recent advances in the literature on the nutritional properties of goat milk and on innovative techniques to improve its quality so that it may become a promising functional food; the environmental sustainability of different production methods has also been examined. Goat milk is valued today as a food of high nutritional value and functional properties, as well as of small environmental footprint. It is widely consumed in many countries due to its high nutritional value, lower allergenic potential, and better digestibility compared to bovine milk, which makes it suitable for infants, the elderly, or sensitive patients. The main differences in chemical composition between cow and goat milk lie in the fat globules, which are smaller in goat milk, and in the fatty acids, which have shorter chain lengths, while protein, fat, and lactose concentrations are comparable. The nutritional properties of milk have been shown to be strongly influenced by animal diet, genotype, and welfare, but also by season and production system. Furthermore, there is growing interest in the dairy industry in goat milk for its relatively high concentration of prebiotics and a good amount of probiotics, which have recently gained importance for their therapeutic potential; goat milk is therefore studied as a promising matrix for developing innovative functional foods. In addition to its economic and nutritional value, goat milk is considered a sustainable product with a small environmental footprint, as goats require relatively little water and land, and fewer medical treatments, compared to cows; these characteristics make its production naturally suited to organic farming. Organic goat milk production is becoming more and more interesting to both farmers and consumers, as it can address several concerns such as environmental protection, animal welfare and the economic sustainment of rural populations living on marginal lands. This evidence makes goat milk an ancient food with novel properties and advantages to be valorized and exploited.Keywords: goat milk, nutritional quality, bioactive compounds, sustainable production, animal welfare
Procedia PDF Downloads 149