Search results for: historical film
347 Preliminary Seismic Vulnerability Assessment of Existing Historic Masonry Building in Pristina, Kosovo
Authors: Florim Grajcevci, Flamur Grajcevci, Fatos Tahiri, Hamdi Kurteshi
Abstract:
The territory of Kosovo lies within one of the most seismically active regions of Europe. Earthquakes are therefore not rare in Kosovo, and when they have occurred, the consequences have been destructive. The importance of assessing the seismic resistance of existing masonry structures has attracted strong and growing interest in recent years. Engineering studies of vulnerability, building loss, and risk assessment are also of particular interest, because this rapidly developing field concerns the great impact of earthquakes on socioeconomic life in seismic-prone areas such as Kosovo and Pristina. Such a study of the city of Pristina may serve as a practical basis for interventions in historic buildings such as museums, mosques, and old residential buildings, so that they can be adequately strengthened and/or repaired and the seismic risk reduced to within acceptable limits. The vulnerability assessment procedures for building structures have concentrated on the structural system, capacity, layout shape, and response parameters. These parameters indicate the expected performance of important existing building structures in terms of vulnerability and overall behavior during earthquake excitation. The structural systems of existing historic buildings in Pristina, Kosovo, are predominantly unreinforced brick or stone masonry, with very high risk potential from the earthquakes expected in the region. Therefore, statistical analysis based on observed damage (deformations, cracks, and deflections) and critical building elements would provide more reliable and accurate results for regional assessments. An analytical technique was used to develop a preliminary methodology for assessing the seismic vulnerability of the respective structures. One of the main objectives is also to identify the buildings that are highly vulnerable to damage caused by inadequate seismic performance and response.
Hence, the damage scores obtained from the derived vulnerability functions are used to categorize the evaluated buildings as “stable”, “intermediate”, or “unstable”. The vulnerability functions are generated from the basic damage-inducing parameters, namely the number of stories (S), lateral stiffness (LS), the capacity curve of the total building structure (CCBS), interstory drift (IS), and overhang ratio (OR).
Keywords: vulnerability, ductility, seismic microzone, energy efficiency
Procedia PDF Downloads 407
346 Advertising Disability Index: A Content Analysis of Disability in Television Commercial Advertising from 2018
Authors: Joshua Loebner
Abstract:
Tectonic shifts within the advertising industry regularly present a deluge of data to be interpreted across a spectrum of key performance indicators, as live campaigns are dissected and adjusted across a fragmented digital landscape. But within this amalgam of analytics, validation, and creative campaign manipulation, where do diversity and disability inclusion fit in? In 2018, several major brands answered this question definitively and directly by incorporating people with disabilities into advertisements. Disability inclusion, representation, and portrayals are documented annually across a number of media, from film to primetime television, but ongoing studies centering on advertising have not been conducted. Symbols and semiotics in advertising often focus on a brand’s features and benefits, but this analysis of advertising and disability shows how, in 2018, creative campaigns and the disability community came together with the goal of continuing the momentum and sparking conversations. More brands are welcoming inclusion and sharing positive portrayals of intersectional diversity and disability. Within the analysis and surrounding scholarship, a multipoint analysis of each advertisement and a meta-interpretation of the research have been conducted to provide data, clarity, and contextualization of insights. This research presents an advertising disability index that can be monitored for trends and shifts in future studies and used for further comparisons and contrasts of advertisements. An overview of the increasing buying power of the disability community and of population changes among this group anchors the significance and size of this minority in the US. When possible, viewpoints from the creative teams and advertisers that developed the ads are brought into the research to further establish understanding, meaning, and individuals’ purposeful approaches towards disability inclusion.
Finally, the conclusion and discussion present key takeaways from the research to build advocacy and action within both advertising scholarship and the profession. This study, developed into an advertising disability index, answers questions of how people with disabilities are represented in each ad. In advertising that includes disability, there is a creative pendulum. At one extreme, among many other negative interpretations, people with disabilities are portrayed in a way that conveys pity, fosters ableism and discrimination, and suggests that people with disabilities are less than normal from a societal and cultural perspective. At the other extreme, people with disabilities are portrayed as a source of undue inspiration, considered inspiration porn, or as superhuman, otherwise known as the supercrip, in ways that most people with disabilities could never achieve or do not want to be defined by. While some ads reflect both extremes, others stood out for non-polarizing inclusion of people with disabilities. This content analysis explores television commercial advertisements to determine the presence of people with disabilities and any associated disability themes and/or concepts, allowing for measurement and interpretation of disability portrayals in each ad.
Keywords: advertising, brand, disability, marketing
Procedia PDF Downloads 115
345 Evaluation Method for Fouling Risk Using Quartz Crystal Microbalance
Authors: Natsuki Kishizawa, Keiko Nakano, Hussam Organji, Amer Shaiban, Mohammad Albeirutty
Abstract:
One of the most important tasks in operating desalination plants that use reverse osmosis (RO) is preventing fouling of the RO membrane by foulants found in seawater. Optimal design of the pre-treatment process enables the reduction of foulants; a quantitative evaluation of the fouling risk of the pre-treated water fed to the RO stage is therefore required for optimal design. Water quality measures such as the silt density index (SDI) and total organic carbon (TOC) have conventionally been applied for this evaluation. However, these methods have not always been effective for evaluating the fouling risk of RO feed water. Furthermore, if such a method could be applied to inline monitoring of the fouling risk of RO feed water, stable plant management would be possible through alerts and appropriate control of the pre-treatment process. The purpose of this study is to develop a method to evaluate the fouling risk of RO feed water. We applied a quartz crystal microbalance (QCM) to measure the amount of foulants in seawater, using a sensor whose surface is coated with a polyamide thin film, the main material of an RO membrane. The increase in the weight of the sensor after sample water has passed over it for a fixed period directly indicates the fouling risk of the sample; we refer to this value as the fouling potential (FP). The method is characterized by measuring very small amounts of substances in seawater in a short time (< 2 h) and from a small volume of sample water (< 50 mL). Using several RO cell filtration units in a laboratory-scale test, FP was confirmed to correlate more strongly with the pressure increase caused by RO fouling than either SDI or TOC.
Then, to establish the correlation in an actual bench-scale RO membrane module, and to confirm the feasibility of the monitoring system as a control tool for the pre-treatment process, we started a long-term test at an experimental desalination site on the Red Sea in Jeddah, Kingdom of Saudi Arabia. Implementing inline equipment for the method made it possible to measure FP intermittently (four times per day) and automatically. Moreover, over two 3-month operations, the RO operating pressure was compared between feed water samples of different qualities. A pressure increase through the RO membrane module was observed in the high-FP RO unit, whose feed water was treated by a cartridge filter only. In contrast, no pressure increase was observed during the operation in the low-FP RO unit, whose feed water was treated by an ultrafilter. The correlation in an actual-scale RO membrane was thus established in two runs with two types of feed water. The results suggest that the FP method enables evaluation of the fouling risk of RO feed water.
Keywords: fouling, monitoring, QCM, water quality
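The abstract does not give the FP formula, but QCM mass uptake is conventionally obtained from the resonant-frequency shift via the Sauerbrey equation. The sketch below is a minimal illustration under that assumption; the 5 MHz crystal constant and the definition of FP as a mass-accumulation rate are illustrative choices, not taken from the study.

```python
# Hedged sketch: converting a QCM frequency shift into adsorbed foulant mass
# via the Sauerbrey equation. The study's exact FP definition is not given in
# the abstract; here FP is assumed to be mass gained per unit area per hour.
SAUERBREY_C = 17.7  # ng / (cm^2 * Hz), typical for a 5 MHz AT-cut crystal

def adsorbed_mass(delta_f_hz: float) -> float:
    """Mass density gained (ng/cm^2) for a measured frequency shift (Hz, negative on uptake)."""
    return -SAUERBREY_C * delta_f_hz

def fouling_potential(f_start_hz: float, f_end_hz: float, hours: float) -> float:
    """Illustrative FP: mass accumulation rate in ng/(cm^2 * h)."""
    return adsorbed_mass(f_end_hz - f_start_hz) / hours

# A 2 h run in which the resonant frequency drops by 40 Hz:
fp = fouling_potential(5_000_000.0, 4_999_960.0, 2.0)
print(f"FP ~ {fp:.0f} ng/cm^2/h")  # prints "FP ~ 354 ng/cm^2/h"
```

A higher FP would then indicate feed water more likely to foul the RO membrane, consistent with the cartridge-filter vs. ultrafilter comparison described above.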
Procedia PDF Downloads 212
344 The Stereotypical Images of Marginalized Women in the Poetry of Rita Dove
Authors: Wafaa Kamal Isaac
Abstract:
This paper attempts to shed light upon the stereotypical images of marginalized black women as shown in the poetry of Rita Dove, and explores how stereotypes held by society and public perception perpetuate the marginalization of black women. Dove is considered one of the most significant African-American poets, one who devoted her writings to exploring the problem of identity confronting marginalized women in America. Besides tackling the issue of black women’s stereotypical images, this paper focuses on the psychological damage that black women have suffered from their stripped identity. In ‘Thomas and Beulah’, Dove reflects the black woman’s longing for her homeland as compensation for her lost identity. The poem conveys atavistic feelings through certain recurrent images, both aural and visual, such as the image of Beulah, who represents the African-American woman searching for an identity while being denied one and humiliated in the newly founded society. In protest against the stereotypical mule image imposed upon black women in America, Dove in ‘On the Bus with Rosa Parks’ tries to ignite beaten spirits to struggle for their own rights by revitalizing the rebellious nature and strong determination of the historical figure Rosa Parks, who sparked the Civil Rights Movement. In ‘Daystar’, Dove shows that black women are subjected to double-edged oppression: first, in terms of race, as black women in an unjust white society that violates their rights because of their black origins; and second, in terms of gender, as members of the female sex expected to exist only to serve men’s needs. Similarly, in the ‘Adolescence’ series, Dove focuses on the double marginalization that black women have experienced, concluding that the marginalization of black women has resulted from the domination of the masculine world and the oppression of the white world.
Moreover, Dove’s ‘Beauty and the Beast’ investigates African-American women’s estrangement and identity crisis in America, and sheds light upon the psychological consequences of the violation of marginalized women’s identity. Furthermore, this poem shows the black women’s self-debasement, helplessness, and double consciousness that emanate from a sense of uprootedness. Finally, this paper finds that the negative, debased, and inferior stereotypical image held by society not only contributed to the marginalization of black women but also silenced and muted their voices.
Keywords: stereotypical images, marginalized women, Rita Dove, identity
Procedia PDF Downloads 164
343 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: cost prediction, machine learning, project management, random forest, neural networks
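As a rough illustration of the Random Forest approach described above, the sketch below fits a regressor to synthetic activity-level data and reads off feature importances, the mechanism by which the study identifies cost drivers. The feature names, data, and hyperparameters are invented for illustration and are not the study's.

```python
# Hedged sketch: Random Forest for cost-overrun prediction on synthetic data.
# Features stand in for, e.g., scope changes, delivery delay, crew size, duration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # synthetic "activities"
X = rng.normal(size=(n, 4))
# Synthetic target: overrun driven mainly by the first two features
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"MAE: {mae:.2f}")
print("feature importances:", model.feature_importances_.round(2))
```

The `feature_importances_` vector is what would surface "changes in the scope of work" or "delays in material delivery" as dominant cost drivers in a real activity-level dataset.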
Procedia PDF Downloads 54
342 Heating Demand Reduction in Single Family Houses Community through Home Energy Management: Putting Users in Charge
Authors: Omar Shafqat, Jaime Arias, Cristian Bogdan, Björn Palm
Abstract:
Heating constitutes a major part of the overall energy consumption in Sweden; in 2013, heating and hot water accounted for about 55% of the total energy use in the housing sector. Historically, end users have not been able to make a significant impact on their consumption, because traditional control systems do not facilitate interaction with and control of the heating system. In recent years, internet-connected home energy management systems that allow users to visualize indoor temperatures and control the heating system have become increasingly available, although their adoption is still in its nascent stages. This paper presents the outcome of a study carried out in a community of single-family houses in Stockholm. Heating in the area is provided through district heating, and the neighbourhood is connected through a local micro thermal grid owned and operated by the local community. Heating in the houses is delivered through a hydronic system equipped with radiators. The installed system allows households to control the indoor temperature through a mobile application as well as through a physical thermostat. It was also possible to program the system to, for instance, lower temperatures at night and when the users were away; users could monitor indoor temperatures through the application and create different zones in the house, each with its own programming. Historical heating data (in the form of billing data) were available for several previous years and were used for the quantitative analysis after the necessary normalization for weather variations. The experiment involved 30 households out of a community of 178 houses; the area was selected for its uniform construction profile.
It was observed that, despite similar design and construction periods, there was a large variation in heating energy consumption in the area, which can largely be attributed to user behaviour. The paper also presents a qualitative analysis based on survey questions as well as a focus group carried out with the participants. Overall, considerable energy savings were accomplished during the trial; however, there was considerable variation between the participating households. The paper additionally presents recommendations for improving the impact of home energy management systems for heating, in terms of improving user engagement and hence the energy impact.
Keywords: energy efficiency in buildings, energy behavior, heating control system, home energy management system
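The weather normalization mentioned above is commonly done with heating degree days. Below is a minimal sketch assuming a simple split between weather-dependent space heating and weather-independent hot water; the study's actual normalization procedure is not specified in the abstract, so both the split and the numbers are illustrative.

```python
# Hedged sketch: degree-day normalization of annual heating billing data,
# so that consumption in mild and cold years can be compared fairly.
def normalize_heating(measured_kwh: float,
                      hot_water_kwh: float,
                      hdd_actual: float,
                      hdd_normal: float) -> float:
    """Rescale the space-heating share to a normal-weather year (kWh)."""
    space_heating = measured_kwh - hot_water_kwh  # weather-dependent part only
    return hot_water_kwh + space_heating * (hdd_normal / hdd_actual)

# A mild year with 10% fewer heating degree days than normal:
normalized = normalize_heating(15_000.0, 3_000.0, 3_600.0, 4_000.0)
print(f"{normalized:.0f} kWh")  # roughly 16,333 kWh in a normal year
```

Only after such a correction can year-on-year or household-to-household differences be attributed to behaviour rather than to weather.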
Procedia PDF Downloads 173
341 The Impact of the Variation of Sky View Factor on Landscape Degree of Enclosure of Urban Blue and Green Belt
Authors: Yi-Chun Huang, Kuan-Yun Chen, Chuang-Hung Lin
Abstract:
The urban green and blue belt is part of the city landscape and an important constituent element of the urban environment and appearance. The Hsinchu East Gate Moat is situated in the center of the city; it not only holds a wealth of historical and cultural resources but also combines green-belt and blue-belt qualities. The moat runs more than a thousand meters through the vital green and blue belts of the downtown area, and each section presents the moat in a different character from south to north. The water area and the surrounding green belt spread out in a linear, banded pattern, and the water body, together with the rich and diverse riverbanks, forms an urban green belt of many layers. The design of the watercourse and green belt lets users connect with the blue belt in different ways; the integration of Hsinchu East Gate and the moat has therefore become one of the unique urban landscapes of Taiwan. This study is based on a fact-finding case of the Hsinchu East Gate Moat in northern Taiwan. It investigates the relationship between variation in the city's sky view factor (SVF) and the spatial sequence of the green and blue belt landscape, using visual analysis of constituent cross-sections, and then compares how different leaf area indices, as variable ecological factors, influence the degree of enclosure. We surveyed the landscape design of the open space and measured the existing structural features of the plant canopy, including plant height, branch height, crown diameter, and diameter at breast height, using Geographic Information System (GIS) diagrams and on-site measurements. The northern and southern districts of the blue and green belt area were divided into 20-meter units radiating from the East Gate Roundabout, and survey points were set up to measure the SVF above each point; the measured data were then analyzed quantitatively to calculate the degree of enclosure of the open landscape.
The results can serve as a reference for the composition of future river landscapes and for the practical dynamic spatial planning of blue and green belt landscapes.
Keywords: sky view factor, degree of enclosure, spatial sequence, leaf area indices
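As a simplified illustration of the measurement, SVF can be estimated from a binarized hemispherical (fisheye) image as the fraction of the image circle classified as sky, with the degree of enclosure taken as its complement. The study's actual instrument and any zenith-angle weighting are not specified in the abstract, so this sketch makes the simplest assumption.

```python
# Hedged sketch: unweighted sky-view-factor estimate from a boolean mask in
# which True marks visible sky inside a hemispherical image.
import numpy as np

def sky_view_factor(sky_mask: np.ndarray) -> float:
    """Fraction of the inscribed image circle classified as sky."""
    h, w = sky_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = min(h, w) / 2.0
    inside = (xx - w / 2.0) ** 2 + (yy - h / 2.0) ** 2 <= r ** 2
    return float(sky_mask[inside].mean())

def degree_of_enclosure(sky_mask: np.ndarray) -> float:
    """Complement of SVF: 0 for fully open sky, 1 for a fully enclosed point."""
    return 1.0 - sky_view_factor(sky_mask)

# A fully open survey point: the whole hemisphere is sky.
open_sky = np.ones((201, 201), dtype=bool)
print(degree_of_enclosure(open_sky))  # prints 0.0
```

Repeating this at each 20-meter survey point would yield the enclosure sequence along the moat that the study analyzes.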
Procedia PDF Downloads 556
340 The Morphogenesis of an Informal Settlement: An Examination of Street Networks through the Informal Development Stages Framework
Authors: Judith Margaret Tymon
Abstract:
As cities struggle to incorporate informal settlements into the fabric of urban areas, the focus has often been on the provision of housing. This study explores the underlying structure of street networks, with the goal of understanding the morphogenesis of informal settlements through the lens of the access network. As the stages of development progress from infill to consolidation and, eventually, to a planned in-situ settlement, the access networks retain the form of the core segments; however, most street patterns are adapted to a grid design to support infrastructure in the final upgraded phase. A case study examines the street network in the informal settlement of Gobabis, Namibia, as it progresses from its initial stages to a planned, in-situ, permanently upgraded development. The Informal Development Stages framework of foundation, infill, and consolidation, as developed by Dr. Jota Samper, is used to examine the evolution of the street networks. Data are gathered from historical Google Earth satellite images for the period between 2003 and 2022. The results demonstrate that during the foundation and infill stages, incremental changes follow similar patterns, with pathways extended, lengthened, and densified as housing is created and the settlement grows. In the final stage of consolidation, the resulting street layout is transformed to support the installation of infrastructure; however, some elements of the original street patterns remain. The core pathways stay intact to accommodate the installation of infrastructure and the creation of housing plots, defining the shape of the settlement and providing the basis of the urban form. The adaptations, growth, and consolidation of the street network are critical to the eventual formation of the spatial layout of the settlement.
This study will include a comparative analysis of findings with those of recent research performed by Kamalipour, Dovey, and others regarding incremental urbanism within informal settlements. Further comparisons will also include studies of street networks of well-established urban centers that have shown links between the morphogenesis of access networks and the eventual spatial layout of the city. The findings of the study can be used to guide and inform strategies for in-situ upgrading and can contribute to the sustainable development of informal settlements.
Keywords: Gobabis Namibia, incremental urbanism, informal development stages, informal settlements, street networks
Procedia PDF Downloads 64
339 Ethiopia as a Tourist Destination: An Exploration of Italian Tourists’ Market Demand
Authors: Frezer Okubay Weldegebriel
Abstract:
The tourism sector in Ethiopia plays a significant role in the national economy, and the government has pledged to develop it through various initiatives, since eradicating poverty and encouraging the country's economic development are among its Millennium Development goals. Tourism has been identified as a priority economic sector by many countries, and the Government of Ethiopia planned to make Ethiopia one of the top five African destinations by 2020. Nevertheless, international tourism demand for Ethiopia currently lags behind that of other African countries such as South Africa, Egypt, Morocco, Tanzania, and Kenya, although the number of international tourist arrivals in Ethiopia has recently been increasing. Therefore, to offer demand-driven tourism products, the Ethiopian government, tourism planners, and tour and travel operators need to understand the important factors that affect international tourists' decisions to visit Ethiopian destinations. This study analyzes Italian tourist demand for Ethiopian destinations, identifying Italian tourists' preferences for Ethiopia compared with the top East African countries. The study uses both qualitative and quantitative research methodology: data were gathered through primary collection using questionnaires and interviews, and through secondary sources including books, journals, magazines, past research, and websites. A cohort of active and potential Italian tourists, five well-functioning Ethiopia-based tour operators serving Italian tourists, and professionals from the Ethiopian Ministry of Tourism and Culture participated.
Based on the analysis of the data collected through the questionnaires, interviews, and reviews of different materials, the study found that the majority of Italian tourists have a high demand for Ethiopian tourist destinations. Historical and cultural interest, safety and security, the hospitality of the people, and affordable accommodation costs are their main reasons. However, some Italian tourists prefer to visit Kenya, Tanzania, and Uganda, because they are attracted by adventure, safaris, and beaches, which Ethiopia cannot provide. Most Italian tourists have little information about, and little practical experience of, Ethiopian tourism opportunities via tour and travel companies. Moreover, insufficient marketing and promotion by the Ethiopian Government and Ministry of Tourism may also contribute to the shortfall in Ethiopian tourism.
Keywords: demand of Italian tourists, Ethiopian economy, Ethiopian tourism destinations, promoting Ethiopian tourism
Procedia PDF Downloads 208
338 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
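To illustrate the second model family named above, the sketch below fits a small neural network (a multilayer perceptron) to synthetic activity-level data. The architecture, features, and data are assumptions for illustration; the study's actual network is not described in the abstract.

```python
# Hedged sketch: a small neural network for overrun prediction on synthetic
# data. Features stand in for, e.g., scope changes, delivery delay, crew size,
# duration. Inputs are standardized, which matters for gradient-based training.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 600
X = rng.normal(size=(n, 4))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1),
)
model.fit(X[:500], y[:500])
score = model.score(X[500:], y[500:])  # R^2 on held-out activities
print(f"held-out R^2: {score:.2f}")
```

Unlike the Random Forest, the MLP does not expose feature importances directly, which is one reason the abstract pairs the two model families.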
Procedia PDF Downloads 38
337 Celebrating Community Heritage through the People’s Collection Wales: A Case Study in the Development of Collecting Traditions and Engagement
Authors: Gruffydd E. Jones
Abstract:
The world’s largest collection of historical, cultural, and heritage material is unarchived and undocumented, in the hands of the public. Not only does this material represent the collections missing from heritage-sector archives today; it is also the key to providing a diverse range of communities with the means to express their history in their own words and to celebrate their unique, personal heritage. The People’s Collection Wales (PCW) acts as a platform on which the heritage of Wales and her people can be collated and shared, at the heart of which is a thriving community engagement programme across a network of museums, archives, and libraries. By providing communities with the archival skill set commonly employed throughout the heritage sector, PCW enables local projects, societies, and individuals to express their understanding of local heritage in their own voices, empowering communities around Wales to embrace their diverse and complex identities. Drawing on key examples from the project’s history, this paper demonstrates the successful way in which museums have been developed as hubs for community engagement, with the public at the heart of collection and documentation activities, informing collection and curatorial policies to the benefit of both the institution and its local community. The paper also highlights how collections from marginalised, under-represented, and minority communities have been published and celebrated extensively around Wales, including adoption by the education system in classrooms today. Any activity within the heritage sector, whether collection, preservation, digitisation, or provision of access, should take account of community engagement opportunities, not only to remain relevant but also so that institutions develop into community hubs: pivots around which local heritage is supported and preserved.
Attention will be drawn to our digitisation workflow, which, through training and support from museums and libraries, has allowed the public not only to become involved but to actively lead the contemporary evolution of documentation strategies in Wales. This paper will demonstrate how the PCW online access archive is promoting museum collections, encouraging user interaction, and providing an invaluable platform on which a broader community can inform, preserve, and celebrate their cultural heritage through their own archival material. The continuing evolution of heritage engagement depends wholly on placing communities at the heart of the sector, recognising their wealth of cultural knowledge, and developing the archival skill set necessary for them to become archival practitioners in their own right.
Keywords: social history, cultural heritage, community heritage, museums, archives, libraries, community engagement, oral history, community archives
Procedia PDF Downloads 94
336 A Sociolinguistic Study of the Outcomes of Arabic-French Contact in the Algerian Dialect Tlemcen Speech Community as a Case Study
Authors: R. Rahmoun-Mrabet
Abstract:
It is acknowledged that our style of speaking changes according to a wide range of variables, such as gender, setting, the age of both the addresser and the addressee, the topic of conversation, and the aim of the interaction. These differences in style are noticeable in monolingual and multilingual speech communities, yet they are more observable in speech communities where two or more codes coexist. The linguistic situation in Algeria reflects a state of bilingualism because of the coexistence of Arabic and French. Nevertheless, like all Arab countries, it is characterized by diglossia, i.e., the concomitance of Modern Standard Arabic (MSA) and Algerian Arabic (AA), the former standing for the ‘high variety’ and the latter for the ‘low variety’. The two varieties are derived from the same source but fulfil distinct functions: MSA is used in the domains of religion, literature, education, and formal settings, while AA is used in informal settings and everyday speech. French has strongly affected the Algerian language and culture because of the historical background of Algeria; thus, everyday speech in Algeria is readily observed to be characterized by code-switching between dialectal Arabic and French, or by the use of borrowings. Tamazight is also very present in many regions of Algeria and is the mother tongue of many Algerians; however, it is not used in the west of Algeria, where this study was conducted. The present work, carried out in the speech community of Tlemcen, Algeria, aims at depicting some of the outcomes of the contact of Arabic with French, such as code-switching, borrowing, and interference. The question asked is whether or not Algerians are aware of their use of borrowings. Three steps are followed in this research: the first is to depict the sociolinguistic situation in Algeria and to describe the linguistic characteristics of the dialect of Tlemcen, which are specific to this city.
The second one is concerned with data collection. Data have been collected from 57 informants who were given questionnaires and who have then been classified according to their age, gender and level of education. Information has also been collected through observation and note-taking. The third step is devoted to analysis. The results obtained reveal that most Algerians are aware of their use of borrowings. The present work clarifies how words are borrowed from French and then adapted to Arabic. It also illustrates the way in which singular words inflect into the plural. The results expose the main characteristics of borrowing as opposed to code-switching. The study also clarifies how interference occurs at the level of nouns, verbs and adjectives.
Keywords: bilingualism, borrowing, code-switching, interference, language contact
Procedia PDF Downloads 276
335 Estimating the Relationship between Education and Political Polarization over Immigration across Europe
Authors: Ben Tappin, Ryan McKay
Abstract:
The political left and right appear to disagree not only over questions of value but, also, over questions of fact—over what is true “out there” in society and the world. Alarmingly, a large body of survey data collected during the past decade suggests that this disagreement tends to be greatest among the most educated and most cognitively sophisticated opposing partisans. In other words, the data show that these individuals display the widest political polarization in their reported factual beliefs. Explanations of this polarization pattern draw heavily on cultural and political factors; yet, the large majority of the evidence originates from one cultural and political context—the United States, a country with a rather unique cultural and political history. One consequence is that widening political polarization conditional on education and cognitive sophistication may be due to idiosyncratic cultural, political or historical factors endogenous to US society—rather than a more general, international phenomenon. We examined widening political polarization conditional on education across Europe, over a topic that is culturally and politically contested: immigration. To do so, we analyzed data from the European Social Survey, a premier survey of countries in and around the European area conducted biennially since 2002. Our main results are threefold. First, we see widening political polarization conditional on education over beliefs about the economic impact of immigration. The foremost countries showing this pattern are the most influential in Europe: Germany and France. However, we also see heterogeneity across countries, with some—such as Belgium—showing no evidence of such polarization. Second, we find that widening political polarization conditional on education is a product of sorting. 
That is, highly educated partisans exhibit stronger within-group consensus in their beliefs about immigration—the data do not support the view that the more educated partisans are more polarized simply because the less educated fail to adopt a position on the question. Third, and finally, we find some evidence that shocks to the political climate of countries in the European area—for example, the “refugee crisis” of summer 2015—were associated with a subsequent increase in political polarization over immigration conditional on education. The largest increase was observed in Germany, which was at the centre of the so-called refugee crisis in 2015. These results reveal numerous insights: they show that widening political polarization conditional on education is not restricted to the US or native English-speaking culture; that such polarization emerges in the domain of immigration; that it is a product of within-group consensus among the more educated; and, finally, that exogenous shocks to the political climate may be associated with subsequent increases in political polarization conditional on education.
Keywords: beliefs, Europe, immigration, political polarization
Procedia PDF Downloads 147
334 Stability of a Biofilm Reactor Able to Degrade a Mixture of the Organochlorine Herbicides Atrazine, Simazine, Diuron and 2,4-Dichlorophenoxyacetic Acid to Changes in the Composition of the Supply Medium
Authors: I. Nava-Arenas, N. Ruiz-Ordaz, C. J. Galindez-Mayer, M. L. Luna-Guido, S. L. Ruiz-López, A. Cabrera-Orozco, D. Nava-Arenas
Abstract:
Among the most important herbicides, the organochlorine compounds are of considerable interest due to their recalcitrance to chemical, biological, and photolytic degradation, their persistence in the environment, their mobility, and their bioaccumulation. The most widely used herbicides in North America are primarily 2,4-dichlorophenoxyacetic acid (2,4-D), the triazines (atrazine and simazine), and to a lesser extent diuron. The contamination of soils and water bodies frequently occurs through mixtures of these xenobiotics. For this reason, in this work, the operational stability to changes in the composition of the medium supplied to an aerobic biofilm reactor was studied. The reactor was packed with fragments of volcanic rock that retained a complex microbial film, able to degrade a mixture of the organochlorine herbicides atrazine, simazine, diuron and 2,4-D, and whose members have microbial genes encoding the main catabolic enzymes atzABCD, tfdACD and puhB. To acclimate the attached microbial community, the biofilm reactor was fed continuously with a mineral minimal medium containing the herbicides (in mg•L-1): diuron, 20.4; atrazine, 14.2; simazine, 11.4; and 2,4-D, 59.7, as carbon and nitrogen sources. Throughout the bioprocess, removal efficiencies of 92-100% for herbicides, 78-90% for COD, 92-96% for TOC and 61-83% for dehalogenation were reached. In the microbial community, the genes encoding catabolic enzymes of different herbicides, tfdACD, puhB and, occasionally, the genes atzA and atzC, were detected. After the acclimatization, the triazine herbicides were eliminated from the mixture formulation. Volumetric loading rates of the mixture 2,4-D and diuron were continuously supplied to the reactor (1.9-21.5 mg herbicides •L-1 •h-1). Along the bioprocess, the removal efficiencies obtained were 86-100% for the mixture of herbicides, 63-94% for COD, 90-100% for TOC, and dehalogenation values of 63-100%. 
It was also observed that the genes encoding the enzymes in the catabolism of both herbicides, tfdACD and puhB, were consistently detected, and, occasionally, the genes atzA and atzC. Subsequently, the triazine herbicides atrazine and simazine were restored to the medium supply. Different volumetric loading rates of this mixture were continuously fed to the reactor (2.9 to 12.6 mg herbicides •L-1 •h-1). During this new treatment process, removal efficiencies of 65-95% for the mixture of herbicides, 63-92% for COD, 66-89% for TOC and 73-94% of dehalogenation were observed. In this last case, the genes tfdACD, puhB and atzABC, encoding the enzymes involved in the catabolism of the distinct herbicides, were consistently detected. The atzD gene, encoding the cyanuric hydrolase enzyme, could not be detected, though it was determined that there was partial degradation of cyanuric acid. In general, the community in the biofilm reactor showed some catabolic stability, adapting to changes in loading rates and composition of the mixture of herbicides, and preserving its ability to degrade the four herbicides tested, although there was a significant delay in the time needed to recover degradation of the herbicides.
Keywords: biodegradation, biofilm reactor, microbial community, organochlorine herbicides
Procedia PDF Downloads 435
333 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat
Authors: M. Venegas, M. De Vega, N. García-Hernando
Abstract:
Absorption cooling chillers have received growing attention over the past few decades as they allow the use of low-grade heat to produce the cooling effect. The combination of this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear drawbacks to the generalization of absorption technology, limiting its benefits in the contribution to the reduction in CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as those provided by solar panels. In the present work a promising new technology is under study, consisting of the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels. A plastic or synthetic wall separates the solution channels from each other. The solution entering the absorber is previously subcooled using ambient air. In this way, the need for a cooling tower is avoided. A model of the proposed configuration is developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be explicitly determined from the set of equations obtained. For this reason, the equations were implemented in a computer code using Engineering Equation Solver software, EES™. 
With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation is shown along the solution channels, allowing its optimization for selected operating conditions. For the case considered, the solution channel length is recommended to be lower than 3 cm. Maximum values of R obtained in this work are higher than those found in optimized horizontal falling-film absorbers using the same solution. Results obtained also show the variation of R and the chiller efficiency (COP) for different ambient temperatures and desorption temperatures typically obtained using flat-plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology to reduce the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy
Procedia PDF Downloads 285
332 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach
Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi
Abstract:
Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently, in comparison to traditional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds processed with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates various DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using high-resolution global positioning systems (GPS). Results show significant surface elevation changes on Apple Orchard Island. Accretion occurred on most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, the projected differential correction and validation method aimed to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) from the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow quick, cost-effective predictions with more comprehensive coverage and greater accuracy. With a DEM technique for the eco-geomorphic context, such insights into ecosystem dynamic detection at such a coastal intertidal system would be valuable for assessing the accuracy of the predicted eco-geomorphic risk for sustainable conservation management. 
This framework for evaluating the historical and current anthropogenic and environmental stressors on coastal surface elevation dynamics could be profitably applied worldwide.
Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes
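The DEM validation step described in the abstract above can be sketched in a few lines. The checkpoint elevations below are hypothetical placeholders, not the study's 30 field locations, and the error metric (mean absolute elevation difference relative to the RTK-GPS value, as a percentage) is one plausible reading of the reported "error ratio".

```python
# Sketch of a DEM accuracy check against RTK-GPS checkpoints.
# All values here are hypothetical, for illustration only.
def error_ratio(dem_elev, gps_elev):
    """Mean relative elevation error (%) of DEM values at surveyed checkpoints."""
    errors = [abs(d - g) / abs(g) for d, g in zip(dem_elev, gps_elev)]
    return 100 * sum(errors) / len(errors)

dem = [2.10, 1.85, 3.02, 2.55]  # DEM-derived elevations (m), hypothetical
gps = [2.05, 1.90, 2.98, 2.60]  # RTK-GPS surveyed elevations (m), hypothetical
print(f"error ratio: {error_ratio(dem, gps):.1f}%")
```

With these placeholder values the ratio comes out at about 2%, i.e. within the ≤ 3% bound the study reports for its datasets.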
Procedia PDF Downloads 267
331 Personalized Infectious Disease Risk Prediction System: A Knowledge Model
Authors: Retno A. Vinarti, Lucy M. Hederman
Abstract:
This research describes a knowledge model for a system which gives personalized alerts to users about infectious disease risks in the context of weather, location and time. The knowledge model is based on established epidemiological concepts augmented by information gleaned from infection-related data repositories. Existing disease risk prediction research has focused more on utilizing raw historical data to yield seasonal patterns of infectious disease risk emergence. This research incorporates both data and epidemiological concepts gathered from the Atlas of Human Infectious Disease (AHID) and the Centers for Disease Control (CDC) as the basis for reasoning in infectious disease risk prediction. Using the CommonKADS methodology, the disease risk prediction task is modelled as an assignment-type synthesis task, proceeding from knowledge identification through specification and refinement to implementation. First, knowledge is gathered from AHID, primarily from the epidemiology and risk group chapters for each infectious disease. The result of this stage is five major elements (Person, Infectious Disease, Weather, Location and Time) and their properties. At the knowledge specification stage, the initial tree model of each element and detailed relationships are produced. This research also includes a validation step as part of knowledge refinement: on the basis that the best model is formed using the most common features, Frequency-Based Selection (FBS) is applied. The portion of the infectious disease risk model relating to Person comes out strongest, with Location next, and Weather weaker. For the Person element, Age is the strongest attribute, Activity and Habits are moderate, and Blood type is weakest. For the Location element, the General category (e.g. continents, region, country, and island) emerges much stronger than the Specific category (i.e. terrain feature). For the Weather element, the Less Precise category (i.e. season) comes out stronger than the Precise category (i.e. exact temperature or humidity interval). 
However, given that some infectious diseases are significantly more serious than others, a frequency-based metric may not be appropriate. Future work will incorporate epidemiological measurements of disease seriousness (e.g. odds ratio, hazard ratio and fatality rate) into the validation metrics. This research is limited to modelling existing knowledge about epidemiology and chain-of-infection concepts. A further step, verification during the knowledge refinement stage, might cause some minor changes to the shape of the tree.
Keywords: epidemiology, knowledge modelling, infectious disease, prediction, risk
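The Frequency-Based Selection step described above amounts to keeping the attributes that recur across disease descriptions. A minimal sketch, assuming each disease is summarized as a set of attributes (the attribute sets below are invented examples, not the AHID/CDC data, and the 0.5 threshold is an illustrative choice):

```python
from collections import Counter

def frequency_based_selection(records, threshold=0.5):
    """Keep attributes appearing in at least `threshold` fraction of records."""
    counts = Counter(attr for record in records for attr in set(record))
    n = len(records)
    return {attr for attr, c in counts.items() if c / n >= threshold}

# Hypothetical attribute sets for four disease descriptions.
diseases = [
    {"age", "location", "season"},
    {"age", "activity", "location"},
    {"age", "blood_type"},
    {"age", "location", "season", "habits"},
]
print(sorted(frequency_based_selection(diseases, 0.5)))
# → ['age', 'location', 'season']
```

In this toy example "age" survives at any threshold while "blood_type" is dropped, mirroring the abstract's finding that Age is the strongest Person attribute and Blood type the weakest.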
Procedia PDF Downloads 242
330 Quantum Conductance Based Mechanical Sensors Fabricated with Closely Spaced Metallic Nanoparticle Arrays
Authors: Min Han, Di Wu, Lin Yuan, Fei Liu
Abstract:
Mechanical sensors have undergone a continuous evolution and have become an important part of many industries, ranging from manufacturing to process, chemicals, machinery, health-care, environmental monitoring, automotive, avionics, and household appliances. Concurrently, microelectronics and microfabrication technology have provided us with the means of producing mechanical microsensors characterized by high sensitivity, small size, integrated electronics, on-board calibration, and low cost. Here we report a new kind of mechanical sensor based on the quantum transport process of electrons in closely spaced nanoparticle films covering a flexible polymer sheet. The nanoparticle films were fabricated by gas-phase deposition of preformed metal nanoparticles with a controlled coverage on the electrodes. To amplify the conductance of the nanoparticle array, we fabricated silver interdigital electrodes on polyethylene terephthalate (PET) by mask evaporation deposition. The gaps of the electrodes ranged from 3 to 30 μm. Metal nanoparticles were generated from a magnetron plasma gas aggregation cluster source and deposited on the interdigital electrodes. Closely spaced nanoparticle arrays with different coverages could be obtained through real-time monitoring of the conductance. In the film, Coulomb blockade and quantum tunneling/hopping dominate the electronic conduction mechanism. The basic principle of the mechanical sensors relies on the mechanical deformation of the fabricated devices being translated into electrical signals. Several kinds of sensing devices have been explored. As a strain sensor, the device showed a high sensitivity as well as a very wide dynamic range. A gauge factor as large as 100 or more was demonstrated, which is at least one order of magnitude higher than that of conventional metal foil gauges and even better than that of semiconductor-based gauges, with a workable maximum applied strain beyond 3%. 
These devices thus have the potential to be a new generation of strain sensors with performance superior to that of currently existing strain sensors, including metallic strain gauges and semiconductor strain gauges. When integrated into a pressure gauge, the devices demonstrated the ability to measure pressure changes as small as 20 Pa near atmospheric pressure. Quantitative vibration measurements were realized on a free-standing cantilever structure fabricated with a closely spaced nanoparticle array sensing element. What is more, the mechanical sensor elements can easily be scaled down, which makes them feasible for MEMS and NEMS applications.
Keywords: gas phase deposition, mechanical sensors, metallic nanoparticle arrays, quantum conductance
Procedia PDF Downloads 274
329 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂
Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang
Abstract:
CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, the reduction of costs is crucial. For a large cost reduction, every part of the CCS chain must contribute. By increasing the heat transfer efficiency during liquefaction of CO₂, which is a necessary step for, e.g., ship transportation, the costs associated with the process are reduced. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface. The vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, again, results in fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves have a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation, due to its impact on the solid-liquid contact angle. A low surface tension usually results in a low contact angle, and hence in spreading of the condensed liquid on the surface. CO₂ has a very low surface tension compared to water. However, at relevant temperatures and pressures for CO₂ condensation, the surface tension is comparable to that of organic compounds such as pentane. Dropwise condensation of CO₂ is therefore a completely new field of research, and knowledge of several important parameters such as contact angle and drop size distribution must be gained in order to understand the nature of the condensation. A new setup has been built to measure these relevant parameters. The main parts of the experimental setup are a pressure chamber in which the condensation occurs, and a high-speed camera. 
The process of CO₂ condensation is visually monitored, and one can determine the contact angle, contact angle hysteresis and, hence, the surface adhesion of the liquid. CO₂ condensation on different surfaces can be analysed, e.g. copper, aluminium and stainless steel. The experimental setup is built for accurate measurements of the temperature difference between the surface and the condensing vapour and accurate pressure measurements in the vapour. The temperature will be measured directly underneath the condensing surface. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards organic non-polar liquids are candidate surface structures for dropwise condensation of CO₂.
Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces
Procedia PDF Downloads 278
328 Modelling Flood Events in Botswana (Palapye) for Protecting Roads Structure against Floods
Authors: Thabo M. Bafitlhile, Adewole Oladele
Abstract:
Botswana has been affected by floods since long ago and is still experiencing this tragic event. Flooding occurs mostly in the North-West, North-East, and parts of the Central district due to heavy rainfalls experienced in these areas. The torrential rains destroyed homes and roads, flooded dams and fields, and destroyed livestock and livelihoods. Palapye is one area in the Central district that has been experiencing floods ever since 1995, when its greatest flood on record occurred. Heavy storms result in floods and inundation; this has been exacerbated by poor or absent drainage structures. Since floods are a part of nature, they have existed and will continue to exist, hence causing more destruction. Furthermore, floods play a major role in the erosion and destruction of road structures. Already today, many culverts, trenches, and other drainage facilities lack the capacity to deal with the current frequency of extreme flows. Future changes in the pattern of hydro-climatic events will have implications for the design and maintenance costs of roads. Increases in rainfall and severe weather events can increase the demand for emergency responses. Therefore, flood forecasting and warning is a prerequisite for successful mitigation of flood damage. In flood-prone areas like Palapye, preventive measures should be taken to reduce possible adverse effects of floods on the environment, including road structures. This paper therefore attempts to estimate the return periods associated with storms of different magnitudes from recorded historical rainfall depths using statistical methods. The method of annual maxima was used to select data sets for the rainfall analysis. In the statistical method, the Type 1 extreme value (Gumbel), Log-Normal, and Log-Pearson Type III distributions were all applied to the annual maximum series for the Palapye area to produce IDF curves. 
The Kolmogorov-Smirnov and chi-squared tests were used to confirm that the fitted distributions are appropriate for the location and that the data fit the distributions used to predict expected frequencies. This will be a beneficial tool for urgent flood forecasting and water resource administration, as proper drainage will be designed based on the estimated flood events and will help to reclaim and protect the road structures from the adverse impacts of floods.
Keywords: drainage, estimate, evaluation, floods, flood forecasting
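The annual-maxima analysis described above can be sketched for the Gumbel (Type 1 extreme value) distribution using method-of-moments estimates. The rainfall depths below are hypothetical placeholders, not the Palapye record, and the study may have fitted the distribution differently (e.g. by maximum likelihood or L-moments):

```python
import math
from statistics import mean, stdev

def gumbel_fit(annual_maxima):
    """Method-of-moments estimates for the Gumbel (EV1) distribution."""
    s = stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi          # scale parameter
    mu = mean(annual_maxima) - 0.5772 * beta   # location (0.5772 = Euler-Mascheroni)
    return mu, beta

def return_level(mu, beta, T):
    """Rainfall depth expected to be exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1 - 1 / T))

# Hypothetical annual maximum daily rainfall depths (mm), illustrative only.
maxima = [62, 78, 55, 91, 70, 84, 66, 102, 73, 88]
mu, beta = gumbel_fit(maxima)
for T in (10, 50, 100):
    print(f"{T}-year event: {return_level(mu, beta, T):.1f} mm")
```

The return levels grow with the return period T; plotting them against storm duration for several T values is what yields the IDF curves mentioned in the abstract.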
Procedia PDF Downloads 371
327 Long-Term Exposure, Health Risk, and Loss of Quality-Adjusted Life Expectancy Assessments for Vinyl Chloride Monomer Workers
Authors: Tzu-Ting Hu, Jung-Der Wang, Ming-Yeng Lin, Jin-Luh Chen, Perng-Jy Tsai
Abstract:
The vinyl chloride monomer (VCM) has been classified as a group 1 (human) carcinogen by the IARC. Workers exposed to VCM are known to be at risk of developing liver cancer, and such exposure hence might cause economic and health losses. Workers in the petrochemical industry, in particular, have been a serious concern in the environmental and occupational health field. Because assessing workers’ health risks and the resultant economic and health losses requires the establishment of long-term VCM exposure data for any similar exposure group (SEG) of interest, the development of suitable technologies has become an urgent and important issue. In the present study, VCM exposures for petrochemical industry workers were first determined based on the database of the 'Workplace Environmental Monitoring Information Systems (WEMIS)' provided by Taiwan OSHA. Considering the existence of missing data, historical exposure reconstruction techniques were then used to complete the long-term exposure data for SEGs with routine operations. For SEGs with non-routine operations, exposure modeling techniques, together with their time/activity records, were adopted for determining their long-term exposure concentrations. Bayesian decision analysis (BDA) was adopted for conducting exposure and health risk assessments for any given SEG in the petrochemical industry. The resultant excess cancer risk was then used to determine the corresponding loss of quality-adjusted life expectancy (QALE). Results show that low average concentrations can be found for SEGs with routine operations (e.g., VCM rectification 0.0973 ppm, polymerization 0.306 ppm, reaction tank 0.33 ppm, VCM recovery 1.4 ppm, control room 0.14 ppm, VCM storage tanks 0.095 ppm and wastewater treatment 0.390 ppm), and the above values were much lower than the permissible exposure limit (PEL; 3 ppm) of VCM promulgated in Taiwan. 
For non-routine workers, despite their high exposure concentrations, their short exposure times and low frequencies result in low corresponding health risks. By considering the exposure assessment, health risk assessment, and QALE results simultaneously, it is concluded that the proposed method is useful for prioritizing SEGs for exposure abatement measures. In particular, the obtained QALE results further indicate the importance of reducing workers’ VCM exposures, even though their exposures were low in comparison with the PEL and the acceptable health risk.
Keywords: exposure assessment, health risk assessment, petrochemical industry, quality-adjusted life years, vinyl chloride monomer
Procedia PDF Downloads 195
326 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks
Authors: Sulemana Ibrahim
Abstract:
Food security remains a paramount global challenge, with vulnerable regions grappling with issues of hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimizing supply chains, and enhancing food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models harnessing remote sensing data, historical crop yield records, and meteorological data to foresee crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies intend to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. This study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security. The proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, bestowing actionable insights to mitigate the ordeal of food insecurity. 
The holistic approach converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks
Procedia PDF Downloads 62
325 Teaching the Meaning of the Holy Quran Using Modern Technology
Authors: Arjumand Warsy
Abstract:
Among Muslims, the Holy Quran is taught from early childhood, and generally by the age of 7-8 years the reading of the entire Quran is completed by most of the children in Muslim families. During this period excellent reciters are selected to teach, and emphasis is laid on correct reading, pronunciation and memorization. Following these years, the parents lay emphasis on the recitation of the Quran on a daily basis. During the month of Ramadan the entire Quran is read one or more times, and there is a considerable number of Muslims who complete the entire Quran once or more each calendar month. Many Muslims do not know Arabic, and for them the message in the Quran is what others tell them; often they have no idea about this Guidance sent to them. This deficiency is reflected in many ways, both among people living in Muslim and non-Muslim countries. Due to the deficiency in knowledge about Islamic teachings, the foundations of Islam are being eroded by a variety of forces. In an attempt to guard against non-Islamic influences, every Muslim must have a clear understanding of the Islamic teachings and requirements. The best guidance can be provided by an understanding of the Holy Quran. However, we are faced with the problem that the Quran is often taught in a way that fails to develop an interest in and understanding of the message from Allah. Looking at the teaching of other subjects, both scientific and non-scientific, at school, college or university levels, it is obvious that advances in teaching methodologies using electronic technology have had a major impact, where both the understanding and the interest of the students are significantly elevated. We attempted to teach the meaning of the Holy Quran to children and adults using a scientific and modern approach based on slide presentations and animations. 
The results showed an almost 100% increase in the understanding of the Quran's message; all attendees claimed they developed an increased interest in the study of the Holy Quran and did not lose track or become bored during the lectures. They learnt the information and remembered it more effectively. The love for Allah and Prophet Mohammad (PBUH) increased significantly. The fear of Allah and love of Heaven developed significantly. Historical facts and the stories of past nations became clearer, and the Greatness of the Creator was strongly felt. Several attendees wanted to become better Muslims and to spread the knowledge of Islam. In this presentation, the adopted teaching method will first be presented and demonstrated to the audience using a short Surah from the Quran, followed by a discussion of the results achieved during our study. We will endeavor to convey to the audience that a more scientific approach to teaching the Quran is needed so that greater benefit is achieved by all.
Keywords: The Holy Quran, Muslims, presentations, technology
Procedia PDF Downloads 427
324 Anti-Gravity to Neo-Concretism: The Epodic Spaces of Non-Objective Art
Authors: Alexandra Kennedy
Abstract:
Making use of the notion of ‘epodic spaces’, this paper presents a reconsideration of non-objective art practices, proposing alternatives to established materialist, formalist, process-based conceptualist approaches to such work. In his Neo-Concrete Manifesto (1959), Ferreira Gullar (1930-2016) sought to create a distinction between various forms of non-objective art. He distinguished the ‘geometric’ arts of neoplasticism, constructivism, and suprematism – which he described as ‘dangerously acute rationalism’ – from other non-objective practices. These alternatives, he proposed, have an expressive potential lacking in the former, and this formed the basis for their categorisation as neo-concrete. Gullar prioritized the phenomenological over the rational, with an emphasis on the role of the spectator (a key concept of minimalism). Gullar highlighted the central role of sensual experience, colour, and the poetic in such work. In the early twentieth century, Russian Cosmism – an esoteric philosophical movement – was highly influential on Russian avant-garde artists and can account for suprematist artists’ interest in, and approach to, planar geometry and four-dimensional space, as demonstrated in the abstract paintings of Kasimir Malevich (1879-1935). Nikolai Fyodorov (1829-1903) promoted the idea of anti-gravity and cosmic space as the field for artistic activity. The artist and writer Kuzma Petrov-Vodkin (1878-1939) wrote on the concept of Euclidean space, the overcoming of such rational conceptions of space, and the breaking free from the gravitational field and the earth’s sphere. These imaginary spaces, which also invoke a bodily experience, present a poetic dimension to the work of the suprematists. It is a dimension that arguably aligns more with Gullar’s formulation of the neo-concrete than with his alignment of Suprematism with rationalism.
In its experiments with planar geometry, in its interest in forms suggestive of an experience of breaking free, both physically from the earth and conceptually from rational, mathematical space (a preoccupation with non-Euclidean space and anti-geometry), and in its engagement with the spatial properties of colour, Suprematism presents itself as imaginatively epodic. The paper discusses both historical and contemporary non-objective practices in this context, drawing attention to the manner in which the category of the non-objective is used to group art works that are, arguably, qualitatively different.
Keywords: anti-gravity, neo-concrete, non-Euclidean geometry, non-objective painting
Procedia PDF Downloads 177
323 Students with Severe Learning Disabilities in Mainstream Classes: A Study of Comprehensions amongst School Staff and Parents Built on Observations and Interviews in a Phenomenological Framework
Authors: Inger Eriksson, Lisbeth Ohlsson, Jeremias Rosenqvist
Abstract:
Ingress: Focus in the study is directed towards phenomena and concepts of segregation, integration, and inclusion of students attending a special school form in Sweden, namely compulsory school for pupils with learning disabilities (in Swedish 'särskola'), as an alternative to mainstream compulsory school. Aim: The aim of the study is to examine the school situation for students attending särskola from a historical perspective focusing on the 1980s, 1990s, and the 21st century, from an integration perspective, and from a perspective of power. Procedure: Five sub-studies are reported, where integration and inclusion are examined through observation studies and interviews with school leaders, teachers, special and remedial teachers, psychologists, coordinators, and parents in the special schools/särskola. In brief, the study of special school students attending mainstream classes from 1998 takes its point of departure in the idea that all knowledge development takes place in a social context. A special interest is taken in the school's role in integration generally and the role of special education particularly, and in whose conditions the integration is taking place: the special school students', the other students', or maybe, equally, the class's. Pedagogical and social conditions for so-called individually integrated special school students in elementary school classes were studied in eleven classes. Results: The findings are interpreted in a power perspective supported by Foucault and relationally by Vygotsky. The main part of the data consists of extensive descriptions of the eleven cases, here called integration situations. Conclusions: In summary, this study suggests that the possibilities for a special school student to enter the class community and fellowship, and thereby be integrated with the class, are to a high degree dependent on the extent to which the student can take part in the pedagogical processes.
The pedagogical situation of the special school student is affected not only by the class teacher and the support and measures undertaken but also by the other students in the class, as they, in turn, are affected by how the special school student acts. This mutual impact, which constitutes the integration process itself, may result in true integration if the special school student attains the status of being accepted on his/her own terms, rather than merely being cared for or cherished by some classmates. A special school student who is not accepted even on the terms of the class will often experience severe problems in contacts with classmates, and the school situation may thus amount to a mere placement.
Keywords: integration/inclusion, mainstream school, power, special school students
Procedia PDF Downloads 248
322 Phenomenology of Child Labour in Estates, Farms and Plantations in Zimbabwe: A Comparative Analysis of Tanganda and Eastern Highlands Tea Estates
Authors: Chupicai Manuel
Abstract:
Global efforts to end child labour have been increasingly challenged by the ravages of global capitalism, inequality, and poverty affecting the global south. In the face of rising inequalities, whose origins can be explained through a historical and political economy analysis of relations between poor and rich countries, child labour is also on the rise, particularly in the global south. The socio-economic and political context of Zimbabwe has undergone a serious transition from colonial times, through the post-independence period normally referred to as the transition period, up to the present day. These transitions have aided companies and entities in the business and agriculture sectors in exploiting child labour, while the country provided conditions that enhance child labour owing to the vulnerability of children and the anomic child welfare system that plagued the country. Children from marginalised communities dominated by plantations and farms are affected most. This paper explores the experiences and perceptions of children working in tea estates, plantations, and farms, and of adults who formerly worked in these plantations during their childhood, who share their experiences and perceptions of child labour in Zimbabwe. Childhood theories that view children as apprentices, together with a human rights perspective, were employed to interrogate the concepts of childhood, child labour, and poverty alleviation strategies. A phenomenological research design was adopted to describe the experiences of children working in plantations and to interpret the meanings they attach to their work and livelihoods. The paper drew on 30 children from two plantations through semi-structured interviews, and on 15 key informant interviews with civil society organisations, the International Labour Organization, adults who formerly worked in the plantations, and plantation personnel.
The findings of the study revealed that children work on the farms as an alternative model of survival against economic challenges, while the majority cited that poverty compels them to work so that their fees and food are paid for. Civil society organisations were of the view that child rights are violated and that the welfare system of the country is dysfunctional. The majority of the children interviewed perceive the system on the plantations as better, which confirmed the socio-constructivist theory that views children as apprentices. The study recommended child-sensitive policies and a welfare regime that protects children from exploitation, together with policing and legal measures that secure child rights.
Keywords: child labour, child rights, phenomenology, poverty reduction
Procedia PDF Downloads 256
321 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia
Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu
Abstract:
Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real dataset, focusing on determining predictors of HIV patient survival. Methods: Parametric survival models with Exponential, Weibull, Log-normal, Log-logistic, Gompertz, and Generalized gamma distributions were considered. A simulation study was carried out with two different algorithms, using informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under highly active antiretroviral therapy in Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were female and 47.81% male. According to Kaplan-Meier survival estimates for the two sex groups, females showed better survival time than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 (27.81%) deaths and 231 (72.19%) censored individuals were registered. The average baseline cluster of differentiation 4 (CD4) cell count of HIV/AIDS patients was 126.01, but after three years of antiretroviral therapy follow-up the average CD4 cell count was 305.74, which was quite encouraging. Age, functional status, tuberculosis screening, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type, and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard error of every covariate in the Bayesian log-normal survival model was smaller than in the classical one.
Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when subjective data analysis was performed by incorporating expert opinion and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could thus be reduced through timely antiretroviral therapy with special attention to the potential factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patient survival.
Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models
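The Kaplan-Meier estimates referred to above can be sketched in a few lines; the snippet below is a minimal, from-scratch illustration on invented toy data (the times and event flags are hypothetical, not the study's records), not the authors' analysis code:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: observed follow-up times; events: 1 = death observed, 0 = censored."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    event_times = np.unique(times[events == 1])  # distinct times at which deaths occur
    surv, s = [], 1.0
    for t in event_times:
        n_at_risk = np.sum(times >= t)                 # subjects still under observation at t
        d = np.sum((times == t) & (events == 1))       # deaths at t
        s *= 1.0 - d / n_at_risk                       # product-limit update
        surv.append(s)
    return event_times, np.array(surv)

# Toy data: 6 subjects, two of them censored (events = 0).
t, s = kaplan_meier([1, 2, 2, 3, 4, 5], [1, 1, 0, 1, 0, 1])
```

In practice one would use an established implementation (e.g. a survival analysis library) rather than hand-rolled code, but the product-limit update above is the entire idea behind the curves compared in the study.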
Procedia PDF Downloads 196
320 The Connection between De Minimis Rule and the Effect on Trade
Authors: Pedro Mario Gonzalez Jimenez
Abstract:
The novelties introduced by the latest Notice on agreements of minor importance tighten the application of the ‘de minimis’ safe harbour in the European Union. At the same time, the undetermined legal concept of effect on trade between the Member States gains importance. Therefore, the analysis a jurist must currently carry out in the European Union to determine whether an agreement appreciably restricts competition under Article 101 of the Treaty on the Functioning of the European Union is twofold. Hence, it is necessary to know how to balance significance for competition against significance for the effect on trade between the Member States. This is a crucial issue because the negative delimitation of a restriction of competition affects the positive one. The methodology of this research is rather simple. Beginning with a historical approach to the de minimis rule, its main problems and uncertainties are identified. Then, after an analysis of normative documents and the case law of the Court of Justice of the European Union, some ‘lege ferenda’ proposals are offered. These proposals try to overcome the contradictions and questions that currently exist in the European Union as a consequence of the present legal regime for agreements of minor importance. The main findings of this research are the following. Firstly, the effect on trade is a way of analyzing the importance of an agreement distinct from the de minimis rule. In fact, this concept is singularly suited to agreements that have as their object the prevention, restriction, or distortion of competition, as observed in the best-known European Union case law. Thanks to the effect-on-trade test, as long as the proper requirements are met there is no restriction of competition under Article 101 of the Treaty on the Functioning of the European Union, even if the agreement has an anti-competitive object.
These requirements are an aggregate market share lower than 5% on any relevant market affected by the agreement and an aggregate turnover lower than 40 million euros. Secondly, as the Notice itself says, it ‘is also intended to give guidance to the courts and competition authorities of the Member States in their application of Article 101 of the Treaty, but it has no binding force for them’. This reality makes possible the existence of divergent positions among the Member States and a confusing perception of what a restriction of competition is. Ultimately, damage to trade between the Member States could be observed for this reason. The main conclusion is that a significant effect on trade between Member States is irrelevant for agreements that restrict competition by their effects but crucial for agreements that restrict competition by their object. Thus, the Member States should propose the incorporation of a similar concept into their legal orders in order to apply the content of the Notice. Otherwise, the significance of a restrictive agreement for competition would not be properly assessed.
Keywords: de minimis rule, effect on trade, minor importance agreements, safe harbour
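The two cumulative thresholds described above can be captured in a few lines of code; the function below is only an illustrative sketch of that test (the function name and argument conventions are our own, and it deliberately ignores the further conditions and nuances of the Notice), not a legal instrument:

```python
def within_trade_safe_harbour(aggregate_market_share: float,
                              aggregate_turnover_eur: float) -> bool:
    """Sketch of the cumulative thresholds discussed above: the agreement is
    presumed not to appreciably affect trade only if the parties' aggregate
    market share on any affected relevant market does not exceed 5% AND their
    aggregate turnover does not exceed EUR 40 million. Both must hold."""
    return aggregate_market_share <= 0.05 and aggregate_turnover_eur <= 40_000_000

# A 4% share with EUR 30m turnover stays inside; breaching either limit does not.
```

The point the code makes explicit is that the test is conjunctive: exceeding either the market-share limit or the turnover limit is enough to take the agreement outside the presumption.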
Procedia PDF Downloads 180
319 Use of Low-Cost Hydrated Hydrogen Sulphate-Based Protic Ionic Liquids for Extraction of Cellulose-Rich Materials from Common Wheat (Triticum Aestivum) Straw
Authors: Chris Miskelly, Eoin Cunningham, Beatrice Smyth, John. D. Holbrey, Gosia Swadzba-Kwasny, Emily L. Byrne, Yoan Delavoux, Mantian Li.
Abstract:
Recently, the use of ionic liquids (ILs) for the preparation of lignocellulose-derived cellulosic materials as alternatives to petrochemical feedstocks has been the focus of considerable research interest. While the technical viability of IL-based lignocellulose treatment methodologies has been well established, the high cost of reagents inhibits commercial feasibility. This work aimed to assess the techno-economic viability of preparing cellulose-rich materials (CRMs) using protic ionic liquids (PILs) synthesized from low-cost alkylamines and sulphuric acid. For this purpose, the tertiary alkylamines triethylamine and dimethylbutylamine were selected. The bulk-scale production cost of the synthesized PILs, triethylammonium hydrogen sulphate and dimethylbutylammonium hydrogen sulphate, was estimated at $0.78/kg to $1.24/kg. CRMs were prepared through the treatment of common wheat (Triticum aestivum) straw with these PILs. By controlling treatment parameters, CRMs with a cellulose content of ≥ 80 wt% were prepared. This was achieved using a T. aestivum straw to PIL loading ratio of 1:15 w/w, a treatment duration of 180 minutes, and ethanol as a cellulose antisolvent. Infrared spectral data and the decreased onset degradation temperature of CRMs (ΔTONSET ~ 70 °C) suggested the formation of cellulose sulphate esters during treatment. Such chemical derivatisation can aid the dispersion of prepared CRMs in non-polar polymer/composite matrices but acts as a barrier to thermal processing at temperatures above 150 °C. It was also shown that treatment increased the crystallinity of CRMs (ΔCrI ~ 40%) without altering the native crystalline structure or crystallite size (~ 2.6 nm) of cellulose; peaks associated with the cellulose I crystalline planes (110), (200), and (004) were observed at Bragg angles of 16.0°, 22.5°, and 35.0°, respectively.
This highlighted the inability of the assessed PILs to dissolve crystalline cellulose and was attributed to the high acidity (pKa ~ −1.92 to −6.42) of the sulphuric acid derived anions. Electron micrographs revealed that the stratified multilayer tissue structure of untreated T. aestivum straw was significantly modified during treatment. T. aestivum straw particles were disassembled during treatment, with prepared CRMs adopting a golden-brown, film-like appearance. This work demonstrated the degradation of the non-cellulosic fractions of lignocellulose without dissolution of cellulose. It is the first to report the derivatisation of cellulose during treatment with protic hydrogen sulphate ionic liquids, and the potential implications of this for biopolymer feedstock preparation.
Keywords: cellulose, extraction, protic ionic liquids, esterification, thermal stability, waste valorisation, biopolymer feedstock
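Crystallite sizes of the kind quoted above (~2.6 nm) are typically estimated from XRD peak broadening via the Scherrer equation, D = Kλ/(β cos θ). The snippet below is a generic sketch of that calculation, not the authors' code; the peak width in the example is an assumed illustrative value, and a Cu Kα wavelength with shape factor K = 0.9 is assumed:

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Scherrer estimate of crystallite size (nm) from a single XRD peak.
    two_theta_deg: peak position as 2-theta; fwhm_deg: peak width (FWHM), degrees.
    Defaults assume Cu K-alpha radiation and a shape factor of 0.9."""
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle in radians
    beta = math.radians(fwhm_deg)               # FWHM in radians
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustration: the (200) cellulose I peak near 22.5 deg 2-theta with an
# assumed FWHM of about 3.1 deg gives a size close to the ~2.6 nm reported.
size = scherrer_size_nm(22.5, 3.1)
```

Note that broader peaks imply smaller crystallites; the formula ignores instrumental and strain broadening, which a full analysis would subtract first.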
Procedia PDF Downloads 36
318 Downtime Estimation of Building Structures Using Fuzzy Logic
Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam
Abstract:
Community resilience has gained significant attention due to recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity of the process. The most challenging parameter in resilience assessment is the 'downtime'. Downtime is the time a system needs to recover its services following a disaster event. Estimating the exact downtime of a system requires many inputs and resources that are not always obtainable. The uncertainties in downtime estimation are usually handled with probabilistic methods, which necessitates acquiring large amounts of historical data. The estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimating the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk-down surveys, which collect data in linguistic or numerical form. The use of fuzzy logic permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building's vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g., year of construction, structural system, site seismicity). Then, fuzzy logic is implemented using a hierarchical scheme to determine the building's damageability, which is the main ingredient in estimating the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3).
In this work, DT1 is computed by relating the building damageability obtained from the visual screening to component repair times already defined in the literature. DT2 and DT3 are estimated using the REDi™ Guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also identifies the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work aims to improve the current methodology so as to move from the downtime to the resilience of buildings. This will provide a simple tool that the authorities can use for decision making.
Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment
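To illustrate the kind of fuzzy machinery described above, the snippet below sketches a deliberately tiny Mamdani-style inference step mapping a damageability score to a DT1-like repair time. The membership functions, the two-rule base, and all numeric ranges are invented for illustration and are not those of the paper:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def downtime_days(damageability):
    """Toy Mamdani sketch: two rules, 'low damage -> short downtime' and
    'high damage -> long downtime', on a damageability score in [0, 1].
    Rule strengths clip the output sets; the aggregate is defuzzified
    by its centroid. All shapes and ranges are illustrative assumptions."""
    dt = np.linspace(0.0, 365.0, 1000)              # downtime universe, days
    mu_low = tri(damageability, -0.01, 0.0, 0.6)    # membership in 'low damage'
    mu_high = tri(damageability, 0.4, 1.0, 1.01)    # membership in 'high damage'
    out = np.maximum(np.minimum(tri(dt, 0, 30, 120), mu_low),      # rule 1
                     np.minimum(tri(dt, 90, 270, 365), mu_high))   # rule 2
    return float(np.sum(dt * out) / np.sum(out))    # centroid defuzzification

# Lightly damaged buildings land near the short-downtime set,
# heavily damaged ones near the long-downtime set.
```

A real implementation would use the full hierarchical rule base of the method and a fuzzy toolbox; the sketch only shows why linguistic inputs from a walk-down survey can drive a numeric downtime estimate.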
Procedia PDF Downloads 160