128 Prospective Museum Visitor Management Based on Prospect Theory: A Pragmatic Approach
Authors: Athina Thanou, Eirini Eleni Tsiropoulou, Symeon Papavassiliou
Abstract:
The problem of museum visitor experience and congestion management – in its various forms – has come increasingly under the spotlight over the last few years, since overcrowding can significantly decrease the quality of visitors’ experience. Evidence suggests that on busy days the amount of time a visitor spends inside a crowded house museum can fall by up to 60% compared to a quiet mid-week day. In this paper we consider the aforementioned problem by treating museums as evolving social systems that induce constraints. However, in a cultural heritage space, as opposed to the majority of social environments, the momentum of the experience is primarily controlled by the visitor himself. Visitors typically behave selfishly regarding the maximization of their own Quality of Experience (QoE), commonly expressed through a utility function that takes several parameters into consideration, with crowd density and waiting/visiting time being among the key ones. In such a setting, congestion occurs either when the utility of one visitor decreases due to the behavior of other persons, or when the costs of undertaking an activity rise due to the presence of others. We initially investigate how visitors’ behavioral risk attitudes, as captured and represented by prospect theory, affect their decisions in resource sharing settings, where visitors’ decisions and experiences are strongly interdependent. Unlike the majority of existing studies, we highlight that visitors are not risk-neutral utility maximizers but demonstrate risk-aware behavior according to their personal risk characteristics.
In our work, exhibits are organized into two groups: a) “safe exhibits”, the less congested ones, where visitors receive guaranteed satisfaction in accordance with the visiting time invested, and b) common pool of resources (CPR) exhibits, the most popular exhibits, with possibly increased congestion and an uncertain outcome in terms of visitor satisfaction. A key difference is that visitor satisfaction from a CPR exhibit strongly depends not only on the invested time decision of a specific visitor, but also on that of the rest of the visitors. In the latter case, over-investment of time, or equivalently increased congestion, potentially leads to “exhibit failure”, meaning that visitors gain no satisfaction from observing the exhibit due to high congestion. We present a framework where each visitor determines, in a distributed manner, his time investment in safe or CPR exhibits to optimize his QoE. Based on this framework, we analyze and evaluate how visitors, acting as prospect-theoretic decision-makers, respond and react to the various pricing policies imposed by the museum curators. Based on detailed evaluation results and experiments, we present interesting observations regarding the impact of several parameters and characteristics, such as visitor heterogeneity and the use of alternative pricing policies, on scalability, user satisfaction, museum capacity, resource fragility, and operating point stability. Furthermore, we study and present the effectiveness of alternative pricing mechanisms, when used as implicit tools, to deal with the congestion management problem in museums and potentially decrease the exhibit failure probability (fragility), while considering visitor risk preferences.
Keywords: museum resource and visitor management, congestion management, prospect theory, cyber physical social systems
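The risk-aware time-allocation model described in the abstract can be sketched in a few lines. The value-function shape (concave for gains, convex and steeper for losses) follows Kahneman and Tversky's standard prospect-theory formulation, but the parameter values, the linear failure-probability rule, and all function names below are illustrative assumptions, not the authors' exact model.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theoretic value function: concave for gains,
    convex and steeper for losses (loss aversion lam > 1).
    Parameter values are the commonly cited illustrative ones."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

def cpr_failure_probability(total_time, capacity):
    """Illustrative fragility rule: the popular (CPR) exhibit fails
    with probability growing in total invested time beyond capacity."""
    if total_time <= capacity:
        return 0.0
    return min(1.0, (total_time - capacity) / capacity)

def expected_prospect_utility(own_time, others_time, budget, capacity,
                              safe_rate=1.0, cpr_rate=2.0):
    """A visitor splits a time budget between a 'safe' exhibit
    (guaranteed payoff) and a CPR exhibit (higher rate, may fail).
    Expected utility weighs success and failure by the fragility rule."""
    safe_time = budget - own_time
    p_fail = cpr_failure_probability(own_time + others_time, capacity)
    gain = safe_rate * safe_time + cpr_rate * own_time   # CPR succeeds
    loss = safe_rate * safe_time - own_time              # CPR fails: time wasted
    return (1 - p_fail) * prospect_value(gain) + p_fail * prospect_value(loss)
```

Evaluating the expected utility at increasing levels of the other visitors' investment shows the CPR exhibit becoming less attractive as congestion, and hence its failure probability, grows.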
Procedia PDF Downloads 184
127 Palynological Investigation and Quality Determination of Honeys from Some Apiaries in Northern Nigeria
Authors: Alebiosu Olugbenga Shadrak, Victor Victoria
Abstract:
Honey bees exhibit preferences in their foraging behaviour on pollen and nectar for food and honey production, respectively. Melissopalynology is the study of pollen in honey and other honey products. Several studies have been conducted on the palynology of honeys from the southern parts of Nigeria, but records from the northern region of the country are relatively scant. The present study aimed to reveal the plants favourably visited by honey bees, Apis mellifera var. adansonii, at some apiaries in Northern Nigeria, as well as to determine the quality of the honeys produced. Honeys were harvested and collected from four apiaries of the region, namely: Sarkin Dawa missionary bee farm, Taraba State; Eleeshuwa Bee Farm, Keffi, Nassarawa State; Bulus Beekeeper Apiaries, Kagarko, Kaduna State; and Mai Gwava Bee Farm, Kano State. These honeys were acetolysed for palynological microscopic analysis and subjected to standard treatment methods for the determination of their proximate composition and sugar profile. Fresh anthers of two dominantly represented plants in the honeys were then collected for the quantification of their pollen protein contents, using the micro-Kjeldahl procedure. A total of 30 pollen types were identified in the four honeys, and some of them were common to the honeys. A classification method for expressing pollen frequency classes was employed: Senna cf. siamea, Terminalia cf. catappa, Mangifera indica, Parinari curatellifolia, Vitellaria paradoxa, Elaeis guineensis, Parkia biglobosa, Phyllanthus muellerianus and Berlinia grandiflora were “Frequent” (16-45%), while the others were either Rare (3-15%) or Sporadic (less than 3%). Pollen protein levels of the two abundantly represented plants, Senna siamea (15.90 mg/ml) and Terminalia catappa (17.33 mg/ml), were found to be considerably low. The biochemical analyses revealed varying amounts of proximate composition, non-reducing sugar and total sugar levels in the honeys.
The results of this study indicate that pollen and nectar of the “Frequent” plants were preferentially foraged by honeybees in the apiaries. The estimated pollen protein contents of Senna siamea and Terminalia catappa were considerably low and thus not likely to have influenced their favourable visitation by honeybees. However, the relatively higher representation of Senna cf. siamea in the pollen spectrum might have resulted from its characteristic brightly coloured and well-scented flowers, aiding greater entomophily. Terminalia catappa, Mangifera indica, Elaeis guineensis, Vitellaria paradoxa, and Parkia biglobosa are typical food crops; hence they probably attracted the honeybees owing to the rich nutritional value of their fruits and seeds. Another possible reason for the greater entomophily of the favourably visited plants is certain nutritional constituents of their pollen and nectar, which were not investigated in this study. The nutritional composition of the honeys was observed to fall within the safe limits of international norms, as prescribed by the Codex Alimentarius Commission; thus they are good honeys for human consumption. It is therefore imperative to adopt strategic conservation steps to ensure that these favourably visited plants are protected from indiscriminate anthropogenic activities, and also to encourage apiarists in the country to establish their bee farms more proximally to the plants for optimal honey yield.
Keywords: honeybees, melissopalynology, preferentially foraged, nutritional, bee farms, proximally
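The frequency classes quoted above can be applied mechanically to any pollen count. The helper below is an illustrative sketch, not part of the study; the extra "Predominant" class for shares above 45% follows a common melissopalynological convention and is an assumption here.

```python
def pollen_frequency_class(percent):
    """Classify a pollen type by its share of the honey's pollen
    spectrum, using the classes quoted in the abstract
    (Frequent 16-45%, Rare 3-15%, Sporadic < 3%); the 'Predominant'
    class (> 45%) is added per common convention."""
    if not 0 <= percent <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if percent > 45:
        return "Predominant"
    if percent >= 16:
        return "Frequent"    # 16-45%
    if percent >= 3:
        return "Rare"        # 3-15%
    return "Sporadic"        # less than 3%
```

For example, a pollen type making up 20% of the spectrum falls in the "Frequent" class, matching the plants listed above.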
Procedia PDF Downloads 278
126 Examining the European Central Bank's Marginal Attention to Human Rights Concerns during the Eurozone Crisis through the Lens of Organizational Culture
Authors: Hila Levi
Abstract:
Respect for human rights is a fundamental element of the European Union's (EU) identity and law. Surprisingly, however, the protection of human rights has been significantly restricted in the austerity programs ordered by the International Monetary Fund (IMF), the European Central Bank (ECB) and the European Commission (EC) (often labeled 'the Troika') in return for financial aid to the crisis-hit countries. This paper focuses on the role of the ECB in the crisis management. While other international financial institutions, such as the IMF or the World Bank, may opt to neglect human rights obligations, one might expect greater respect for human rights from the ECB, which is bound by the EU Charter of Fundamental Rights. However, this paper argues that ECB officials made no significant effort to protect human rights or strike an adequate balance between competing financial and human rights needs while coping with the crisis. ECB officials were preoccupied with the need to stabilize the economy and prevent a collapse of the Eurozone, and paid only marginal attention to human rights concerns in the design and implementation of the Troika's programs. This paper explores the role of Organizational Culture (OC) in explaining this marginalization. While International Relations (IR) research on the behavior of Intergovernmental Organizations (IGOs) has traditionally focused on the external interests of powerful member states, and on national and economic considerations, this study focuses on particular institutions' internal factors and independent processes. OC characteristics have been identified in the OC literature as an important determinant of organizational behavior. This paper suggests that cultural characteristics are also vital for the examination of IGOs, and particularly for understanding the ECB's behavior during the crisis.
In order to assess the OC of the ECB and the impact it had on its policies and decisions during the Eurozone crisis, the paper uses the results of numerous qualitative interviews conducted with high-ranking officials and staff members of the ECB involved in the crisis management. It further reviews primary sources of the ECB (such as ECB statutes and the Memoranda of Understanding signed between the crisis countries and the Troika), and secondary sources (such as the report of the UN High Commissioner for Human Rights on austerity measures and economic, social, and cultural rights). It thus analyzes the interaction between the ECB's culture and the almost complete absence of human rights considerations in the Eurozone crisis resolution scheme. This paper highlights the importance and influence of internal ideational factors on IGO behavior. From a more practical perspective, this paper may contribute to understanding one of the obstacles in the process of human rights implementation in international organizations, and provide instruments for better protection of social and economic rights.
Keywords: European central bank, eurozone crisis, intergovernmental organizations, organizational culture
Procedia PDF Downloads 155
125 Bilingual Books in British Sign Language and English: The Development of E-Book
Authors: Katherine O'Grady-Bray
Abstract:
For some deaf children, reading books can be a challenge. Frank Barnes School (FBS) provides guided reading time with Teachers of the Deaf, in which they read books with deaf children using a bilingual approach. The vocabulary and context of the story are explained to deaf children in BSL so they develop skills bridging the English and BSL languages. However, the success of this practice is only achieved if the person is fluent in both languages. FBS piloted a scheme to convert an Oxford Reading Tree (ORT) book into an e-book that can be read using tablets. Deaf readers at FBS have access to both languages (BSL and English) during lessons and outside the classroom. The pupils receive guided reading sessions with a Teacher of the Deaf every morning; these one-to-one sessions give pupils the opportunity to learn how to bridge both languages, e.g. how to translate English to BSL and vice versa. Generally, because our pupils have limited access to incidental learning, their opportunities to gain new information about the world around them are limited. This highlights the importance of quality time to scaffold their language development. In some cases, there is a shortfall of parental support at home due to poor communication skills or an unawareness of how to interact with deaf children. Some families have a limited knowledge of sign language or simply don’t have the required learning environment and strategies needed for language development with deaf children. As the majority of our pupils’ preferred language is BSL, we use it to teach reading and writing English. If this is not mirrored at home, there is limited opportunity for joint reading sessions. Development of the e-book required planning and technical development. The overall production took time, as video footage needed to be shot and then edited individually for each page. There were various technical considerations, such as choosing an appropriate background colour so as not to draw attention away from the signer.
Appointing a signer with the required high level of BSL was essential. The language and pace of the sign language were important considerations, as they had to match the age and reading level of the book. When translating English text to BSL, careful consideration was given to the nonlinear nature of BSL and the differences in language structure and syntax. The e-book was produced using Apple’s ‘iBooks Author’ software, which allowed video footage of the signer to be embedded on pages opposite the text and illustration. This enabled BSL translation of the content of the text and the inferences of the story. An interpreter was used to directly ‘voice over’ the signer rather than read the actual text. The aim behind the structure and layout of the e-book is to allow parents to ‘read’ with their deaf child, which helps to develop both languages. From observations, the use of e-books has given pupils confidence and motivation in their reading, developing skills bridging both the BSL and English languages and enabling more effective reading time with parents.
Keywords: bilingual book, e-book, BSL and English, bilingual e-book
Procedia PDF Downloads 169
124 Information Pollution: Exploratory Analysis of Sub-Saharan African Media’s Capabilities to Combat Misinformation and Disinformation
Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi
Abstract:
The role of information in societal development and growth cannot be over-emphasized. Harnessing the flow of information has long been a strategy for building an egalitarian society. The same flow has also become a tool for throwing society into chaos and anarchy. It has been adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses. That is why some scholars posit that information can be deployed as a weapon to wreak “Mass Destruction” or to promote “Mass Development”. When used as a tool for destruction, its effect on society is like an atomic bomb which, when released, pollutes the air and suffocates the people. Technological advancement has further exposed the latent power of information, and many societies seem to be overwhelmed by its negative effects. While information remains part of the bedrock of democracy, the information ecosystem across the world is currently facing a more difficult battle than ever before due to information pluralism and technological advancement. The more the agents involved try to combat its menace, the more difficult and complex it proves to curb. In a region like Africa, where fragile democracies are enmeshed in the complexities of multiple religions, multiple cultures, inter-tribal tensions, and ongoing unresolved issues, it is important to pay critical attention to information disorder and find appropriate ways to curb or mitigate its effects. The media, being the middleman in the distribution of information, need to build capacities and capabilities to separate the chaff of misinformation and disinformation from the grains of truthful data. From quasi-statistical senses, it has been observed that the efforts aimed at fighting information pollution have not considered the built resilience of media organisations against this disorder.
Apparently, the efforts, resources and technologies adopted for the conception, production and spread of information pollution are much more sophisticated than the approaches used to suppress or even reduce its effects on society. Thus, this study seeks to interrogate the phenomenon of information pollution and the capabilities of select media organisations in Sub-Saharan Africa. In doing this, the following questions are probed: What actions are the media taking to curb the menace of information pollution? Which of these actions are working, and how effective are they? And which of the actions are not working, and why? Adopting quantitative and qualitative approaches and anchored in Dynamic Capability Theory, the study aims at digging up insights to further understand the complexities of information pollution, media capabilities and strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys and the use of questionnaires to get data from journalists on their understanding of misinformation/disinformation and their capabilities to gate-keep. A case analysis of select media and a content analysis of their strategic resources for managing misinformation and disinformation are adopted in the study, while the qualitative approach involves in-depth interviews to allow a more robust analysis. The study is critical in the fight against information pollution for a number of reasons. One, it is a novel attempt to document the level of media capabilities to fight the phenomenon of information disorder. Two, the study will enable the region to have a clear understanding of the capabilities of existing media organizations to combat misinformation and disinformation in the countries that make up the region.
Recommendations emanating from the study could be used to initiate, intensify or review existing approaches to combat the menace of information pollution in the region.
Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa
Procedia PDF Downloads 161
123 Glocalization of Journalism and Mass Communication Education: Best Practices from an International Collaboration on Curriculum Development
Authors: Bellarmine Ezumah, Michael Mawa
Abstract:
Glocalization is often defined as the practice of conducting business according to both local and global considerations – this epitomizes the curriculum co-development collaboration between a journalism and mass communications professor from a university in the United States and Uganda Martyrs University in Uganda, where a brand new journalism and mass communications program was recently co-developed. This paper presents the experiences and research results of this initiative, which was funded through the Institute of International Education (IIE) under the umbrella of the Carnegie African Diaspora Fellowship Program (CADFP). Vital international and national concerns were addressed. On a global level, scholars have questioned and criticized the Western model generally ingrained in journalism and mass communication curricula and proposed a decolonization of journalism curricula. Another major criticism is the practice of Western-based educators transplanting their curriculum verbatim to other regions of the world without paying greater attention to local needs. To address these two global concerns, an extensive assessment of local needs was conducted prior to the conceptualization of the new program. The assessment of needs adopted a participatory action model and captured the knowledge and narratives of both internal and external stakeholders. This involved reviewing pertinent documents, including the nation’s constitution, governmental briefs and promulgations; interviewing governmental officials, media and journalism educators, media practitioners, and students; and benchmarking the curricula of other tertiary institutions in the nation. The information gathered through this process served as a blueprint and frame of reference for all design decisions. In the area of local needs, four key factors were addressed. First, the realization that most media personnel in Uganda are both academically and professionally unqualified.
Second, the practitioners with academic training were found lacking in experience. Third, the current curricula offered at several tertiary institutions are not comprehensive and lack local relevance. The project addressed these problems thus: first, the program was designed to cater to both traditional and non-traditional students, offering opportunities for unqualified media practitioners to get formal training through evening and weekender programs. Secondly, the challenge of inexperienced graduates was mitigated by designing the program to adopt the experiential learning approach, which many refer to as the ‘Teaching Hospital Model’. This entails integrating practice with theory, similar to the way medical students engage in hands-on practice under the supervision of a mentor. The university drew up a Memorandum of Understanding (MoU) with reputable media houses for students and faculty to use their studios for hands-on experience and for seasoned media practitioners to guest-teach some courses. With the convergence of functions in today’s media industry, graduates should be trained to have adequate knowledge of other disciplines; therefore, the curriculum integrated cognate courses that would render graduates versatile. Ultimately, this research serves as a template for African colleges and universities to follow in their quest to glocalize their curricula. While the general concept of journalism may remain Western, journalism curriculum developers in Africa, through extensive assessment of needs and a focus on those needs and other societal particularities, can adjust the Western module to fit their local needs.
Keywords: curriculum co-development, glocalization of journalism education, international journalism, needs assessment
Procedia PDF Downloads 129
122 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise
Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou
Abstract:
Concern about the negative impacts of anthropogenic noise on the ocean’s ecosystems has increased over recent decades. This concern has led to a similarly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships, but also about how ship noise is distributed in time and space within the habitats of concern. Marine mammals, but also fish, sea turtles, larvae and invertebrates, are largely dependent on the sounds they use to hunt, feed, avoid predators, socialize and communicate during reproduction, or defend a territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, known as the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain a good environmental status of the marine environment. Ocean Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision support tool that enables them to anticipate and quantify the effectiveness of management measures in terms of reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 sites, harbors, etc.) or global (Particularly Sensitive Sea Area) scales, seasonal (regulation over a period of time) or permanent, partial (focused on some maritime activities) or complete (all maritime activities), etc.
Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitations are among the measures supported by the tool. Ocean Planner helps decide on the most effective measures to apply to maintain or restore the biodiversity and functioning of the ecosystems of the coastal seabed, maintain a good state of conservation of sensitive areas, and maintain or restore the populations of marine species.
Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction
Procedia PDF Downloads 122
121 A Text in Movement in the Totonac Flyers’ Dance: A Performance-Linguistic Theory
Authors: Luisa Villani
Abstract:
The proposal examines the connection between mind, body, society, and environment in the Flyers’ dance, a very well-known rotatory dance in Mexico, in order to understand how meanings are created and the apprehension of the world is made possible. The interaction among brain, mind, body, and environment, and the intersubjective relations among them, mean that the world creates and recreates social interaction. The purpose of this methodology, based on embodied cognition theory and named “A Performance-Embodied Theory”, is to find the principles and patterns that organize the culture and the rules of the apprehension of the environment by Totonac people while the dance is being performed. The analysis started by questioning how anthropologists can interpret the way Totonacs transform their unconscious knowledge into conscious knowledge, and how the scheme formation of imagination and their collective imagery is understood in the context of public-facing rituals, such as the Flyers’ dance. The problem is that, most of the time, researchers interpret elements in a separate way and not as a complex ritual dancing whole, which is the original contribution of this study. This theory, which accepts the fact that people are body-mind agents, aims to interpret the dance as a whole, where the different elements are joined into an integral interpretation. To understand incorporation, data were collected during prolonged periods of fieldwork, with participant observation and linguistic and extralinguistic data analysis. Laban’s notation for the description and analysis of gestures and movements in space was used first, but it was later transformed and extended, since it remains a linear and compositional method. Performance in a ritual is the actualization of a potential complex of meanings or cognitive domains among many others in a culture: one potential dimension becomes probable and then real because of the activation of specific meanings in a context.
Only what language permits can be thought, and the lexicon that is used depends on the individual culture. Only some parts of this knowledge can be activated at once, and these parts of knowledge are connected. Only in this way can the world be understood. It can be recognized that, just as languages geometrize the physical world through the body, so does ritual. In conclusion, the ritual behaves as an embodied grammar or a text in movement which, depending on the ritual phases and the words and sentences pronounced in the ritual, activates bits of encyclopedic knowledge that people have about the world. Gestures are not given by the performer but emerge from the intentional perception in which gestures are “understood” by the audio-spectator in an inter-corporeal way. The impact of this study lies in the possibility not only of disseminating knowledge effectively but also of generating a balance between the different parts of the world where knowledge is shared, rather than it being produced by academic institutions alone. This knowledge can be exchanged, so that indigenous communities and academies can jointly take part in its activation and in sharing it with the world.
Keywords: dance, flyers, performance, embodied, cognition
Procedia PDF Downloads 58
120 Energy Audit and Renovation Scenarios for a Historical Building in Rome: A Pilot Case Towards the Zero Emission Building Goal
Authors: Domenico Palladino, Nicolandrea Calabrese, Francesca Caffari, Giulia Centi, Francesca Margiotta, Giovanni Murano, Laura Ronchetti, Paolo Signoretti, Lisa Volpe, Silvia Di Turi
Abstract:
The aim of achieving a fully decarbonized building stock by 2050 stands as one of the most challenging issues within the spectrum of energy and climate objectives. Numerous strategies are imperative, particularly those emphasizing the reduction and optimization of energy demand. Ensuring the high energy performance of buildings emerges as a top priority, with measures aimed at cutting energy consumption. Concurrently, it is imperative to decrease greenhouse gas emissions by using renewable energy sources for on-site energy production, thereby striving for an energy balance leading towards zero-emission buildings. Italy's building stock predominantly comprises old buildings, many of which hold historical significance and are subject to stringent preservation and conservation regulations. Attaining high levels of energy efficiency and reducing CO2 emissions in such buildings poses a considerable challenge, given their unique characteristics and the imperative to adhere to principles of conservation and restoration. Additionally, conducting a meticulous analysis of these buildings' current state is crucial for accurately quantifying their energy performance and predicting the potential impacts of proposed renovation strategies on energy consumption reduction. Within this framework, the paper presents a pilot case in Rome, outlining a methodological approach for the renovation of historic buildings towards achieving the Zero Emission Building (ZEB) objective. The building has a mixed function, with offices, a conference hall, and an exposition area. The building envelope is made of historical and precious materials used as cladding, which must be preserved. A thorough understanding of the building's current condition serves as a prerequisite for analyzing its energy performance.
This involves conducting comprehensive archival research, undertaking on-site diagnostic examinations to characterize the building envelope and its systems, and evaluating actual energy usage data derived from energy bills. Energy simulations and an energy audit are the first step in the analysis, with the assessment of the energy performance of the current state. Subsequently, different renovation scenarios are proposed, encompassing advanced building techniques, to pinpoint the key actions necessary for improving mechanical systems, automation and control systems, and the integration of renewable energy production. These scenarios entail different levels of renovation, ranging from meeting minimum energy performance goals to achieving the highest possible energy efficiency level. The proposed interventions are meticulously analyzed and compared to ascertain the feasibility of attaining the Zero Emission Building objective. In conclusion, the paper provides valuable insights that can be extrapolated to inform a broader approach towards the energy-efficient refurbishment of historical buildings that may have limited potential for renovation of their building envelopes. By adopting a methodical and nuanced approach, it is possible to reconcile the imperative of preserving cultural heritage with the pressing need to transition towards a sustainable, low-carbon future.
Keywords: energy conservation and transition, energy efficiency in historical buildings, buildings energy performance, energy retrofitting, zero emission buildings, energy simulation
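The energy-balance idea behind the ZEB goal can be illustrated with a deliberately simplified annual operational balance. The functions, their parameter names, and the grid emission factor below are illustrative assumptions for comparing renovation scenarios, not figures or methods from the study.

```python
def annual_co2_balance(demand_kwh, onsite_renewable_kwh,
                       grid_emission_factor=0.25):
    """Simplified annual operational balance for a renovation scenario:
    net CO2 emissions (kg) from grid electricity imported after
    on-site renewable production is subtracted from annual demand.
    grid_emission_factor (kg CO2 per kWh) is an illustrative assumption."""
    net_import_kwh = max(0.0, demand_kwh - onsite_renewable_kwh)
    return net_import_kwh * grid_emission_factor

def is_zero_emission(demand_kwh, onsite_renewable_kwh):
    """A scenario approaches the ZEB goal when on-site renewable
    production covers the (reduced) annual operational demand."""
    return annual_co2_balance(demand_kwh, onsite_renewable_kwh) == 0.0
```

Comparing a baseline scenario (high demand, no on-site production) with a deep-renovation scenario (reduced demand plus photovoltaics) makes explicit how the two levers discussed in the paper, demand reduction and renewable production, combine towards the zero-emission target.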
Procedia PDF Downloads 68
119 Natural Monopolies and Their Regulation in Georgia
Authors: Marina Chavleishvili
Abstract:
Introduction: Today, the study of monopolies, including natural monopolies, remains topical. In real life, pure monopolies are natural monopolies. Natural monopolies are widespread and are regulated by the state; in particular, their prices and rates are regulated. The paper considers the problems associated with the operation of natural monopolies in Georgia, in particular their microeconomic analysis, pricing mechanisms, and the legal mechanisms of their operation. The analysis was carried out on the example of the power industry. The rates of natural monopolies in Georgia are controlled by the Georgian National Energy and Water Supply Regulation Commission. The paper analyzes the positive role and importance of the regulatory body and the issues of improving the legislative base that will support the efficient operation of the branch. Methodology: In order to highlight natural monopoly market tendencies, the domestic and international markets are studied. An analysis of monopolies is carried out based on the endogenous and exogenous factors that determine the condition of companies, as well as the strategies chosen by firms to increase market share. According to the productivity-based competitiveness assessment scheme, the segmentation opportunities, business environment, resources, and geographical location of monopolist companies are revealed. Main Findings: As a result of the analysis, certain assessments and conclusions were made. Natural monopolies are quite a complex and versatile economic element, and it is important to specify and duly control their framework conditions. It is important to determine the pricing policy of natural monopolies. The rates should be transparent, should reflect the standard of living in the country, and should correspond to incomes. The analysis confirmed the significance of the role of the Antimonopoly Service in the efficient management of natural monopolies.
The law should adapt to reality and should be applied only to regulate the market. The present-day differential electricity tariffs, which vary depending on the amount of electrical power consumed, need revision. The effects of electricity price discrimination are important, in particular segmentation across seasons. Consumers use more electricity in winter than in summer, which is associated with extra capacity and maintenance costs. If the price of electricity in winter is higher than in summer, electricity consumption in winter will decrease: consumers will start to use electricity more economically, which will allow the extra capacity to be reduced. Conclusion: Thus, the practical realization of the views given in the paper will contribute to the efficient operation of natural monopolies. Consequently, their activity will be oriented not towards reducing but towards increasing the gains of consumers and producers. Overall, optimal management of the given fields will help improve well-being throughout the country. In the article, conclusions are drawn, and recommendations are developed to deliver effective policies and regulations toward natural monopolies in Georgia.
Keywords: monopolies, natural monopolies, regulation, antimonopoly service
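The seasonal price-discrimination argument above can be made concrete with a small numerical sketch. This is purely illustrative and not from the paper: the constant price elasticity of demand and the baseline consumption figure are hypothetical values chosen for the example.

```python
# Illustrative sketch (not from the paper): how a seasonal winter tariff
# could shift consumption under a constant price elasticity of demand.
# The elasticity (-0.3) and baseline (1200 kWh) are hypothetical.

def demand_after_price_change(baseline_kwh, price_change_pct, elasticity=-0.3):
    """Approximate new demand after a percentage price change."""
    return baseline_kwh * (1 + elasticity * price_change_pct / 100)

winter_baseline = 1200.0  # kWh per household per season, hypothetical
new_winter_demand = demand_after_price_change(winter_baseline, price_change_pct=20)
print(round(new_winter_demand, 1))  # a 20% winter surcharge -> ~6% less consumption
```

Under these assumed numbers, a 20% winter surcharge would trim winter demand from 1200 to 1128 kWh, easing the extra capacity the abstract describes.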
Procedia PDF Downloads 86
118 Seismic Reinforcement of Existing Japanese Wooden Houses Using Folded Exterior Thin Steel Plates
Authors: Jiro Takagi
Abstract:
Approximately 90 percent of the casualties in the near-fault-type Kobe earthquake in 1995 resulted from the collapse of wooden houses, although a limited number of collapses of this type of building were reported in the more recent off-shore-type Tohoku Earthquake in 2011 (excluding direct damage by the tsunami). The Kumamoto earthquake in 2016 also revealed the vulnerability of old wooden houses in Japan. There are approximately 24.5 million wooden houses in Japan, and roughly 40 percent of them are considered to have inadequate seismic-resisting capacity. Therefore, seismic strengthening of these wooden houses is an urgent task. However, it has proceeded slowly for various reasons, including cost and inconvenience during the reinforcing work. Residents typically spend their money on improvements that more directly affect their daily housing environment (such as interior renovation, equipment renewal, and placement of thermal insulation) rather than on strengthening against extremely rare events such as large earthquakes. Considering this tendency of residents, a new approach to developing a seismic strengthening method for wooden houses is needed. The seismic reinforcement method developed in this research uses folded galvanized thin steel plates as both shear walls and the new exterior architectural finish. The existing finish is not removed. Because galvanized steel plates are aesthetic and durable, they are commonly used in modern Japanese buildings on roofs and walls. Residents can feel a physical change through the reinforcement, as the existing exterior walls are covered with steel plates. Also, this exterior reinforcement can be installed with only outdoor work, thereby reducing inconvenience for residents, since they are not required to move out temporarily during construction. The durability of the exterior is enhanced, and the reinforcing work can be done efficiently, since perfect water protection is not required for the new finish.
In this method, the entire exterior surface would function as shear walls, and thus the pull-out force induced by seismic lateral load would be significantly reduced compared with a typical reinforcement scheme of adding braces in selected frames. Consequently, the reinforcing details of anchors to the foundations would be less difficult. In order to attach the exterior galvanized thin steel plates to the houses, new wooden beams are placed next to the existing beams. In this research, steel connections between the existing and new beams are developed, which contain a gap for the existing finish between the two beams. The thin steel plates are screwed to the new beams and the connecting vertical members. The seismic-resisting performance of the shear walls with thin steel plates is experimentally verified for both the frames and the connections. It is confirmed that the performance is high enough for bracing typical wooden houses.
Keywords: experiment, seismic reinforcement, thin steel plates, wooden houses
Procedia PDF Downloads 226
117 The Vanishing Treasure: An Anthropological Study on Changing Social Relationships, Values, Belief System and Language Pattern of the Limbus in Kalimpong Sub-Division of the Darjeeling District in West Bengal, India
Authors: Biva Samadder, Samita Manna
Abstract:
India is a melting pot of races, tribes, castes and communities. The population of India can be roughly divided into the large majority of “Civilized” Indians of the plains and the minority tribal population of the hill areas and forests, which constitutes almost 16 percent of the total population of India. The Kirat community is composed of four ethnic tribes: Limbu, Lepcha, Dhimal, and Rai. The Kirat people were found to be rich in indigenous knowledge, skills and practices, especially concerning the use of medicinal plants and livelihood purposes. The “Mundhum” is the oral scripture or the “Bible of the Limbus”, which serves as the canon of the codes of Limbu socialization, their moral values and the very orientation of their lifestyle. From birth till death, the Limbus are disciplined in a life full of religious rituals, traditions and culture governed by community norms, with a rich legacy of indigenous knowledge and traditional practices. The present study has been conducted using both secondary and primary data, applying a social methodology consisting of social surveys, questionnaires, interviews and observations in the Kalimpong Block-I of Darjeeling District of West Bengal, India, which is a heterogeneous zone in terms of its ethnic composition and where the Limbus are predominantly concentrated. Due to their close contact with other castes and communities, the Limbus have adjusted to the changing situation by borrowing some cultural traits from other communities, and changes have taken place in their cultural practices, religious beliefs, economic aspects, languages, and social roles and relationships, which is bringing change in their material culture. The Limbu language is placed in the Tibeto-Burman language category.
However, due to the political and cultural domination of the educationally sound and numerically dominant Bengali race, the different communities in this area were forced to come under the one umbrella of the Nepali or Gorkhali nation (nation-people). Their respective identities had to be submerged in order to constitute a strong force to resist Nepali domination and ensure their common survival. As Nepali is the lingua franca of the area, knowing and speaking the Nepali language helps them in procuring economic and occupational facilities. Ironically, the present-day younger generation does not feel comfortable speaking in their own Limbu tongue. The traditional knowledge about medicinal plants, healing, and health culture is found to be wearing away due to the lack of interest of the young generation. Owing not only to poverty but also to exclusion resulting from policies, these traditions are in a phase of extinction, while the capabilities of the Limbus, who have a great cultural heritage of an oral tradition, are ignored and remain undocumented and unpreserved. Attempts have been made to discuss the persistence of and changes in the socioeconomic pattern of life in relation to social structure, material culture, cultural practices, social relationships, indigenous technology, ethos, and their values and belief system.
Keywords: changing social relationship, cultural transition, identity, indigenous knowledge, language
Procedia PDF Downloads 172
116 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques
Authors: Stefan K. Behfar
Abstract:
The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. 
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing
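The first two building blocks the abstract describes, a transaction graph with temporal edges and uniform probabilistic sampling, can be sketched as follows. This is our own minimal illustration, not the authors' code; the `TxGraph` class and its methods are hypothetical names.

```python
# Minimal sketch (assumed, not the authors' implementation) of a
# transaction graph where nodes are block transactions, edges encode
# temporal order, and a uniform random subset is drawn for analysis.
import random
from collections import defaultdict

class TxGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # tx hash -> later txs it precedes
        self.nodes = set()

    def add_tx(self, tx, prev_tx=None):
        self.nodes.add(tx)
        if prev_tx is not None:
            self.edges[prev_tx].append(tx)  # temporal edge prev -> tx

    def sample(self, fraction, seed=0):
        """Uniform random subset of transactions for scalable analysis."""
        rng = random.Random(seed)  # seeded for reproducible samples
        k = max(1, int(len(self.nodes) * fraction))
        return rng.sample(sorted(self.nodes), k)

g = TxGraph()
for i in range(100):
    g.add_tx(f"tx{i}", prev_tx=f"tx{i-1}" if i else None)
print(len(g.sample(0.1)))  # 10% sample of 100 transactions -> 10
```

A production pipeline would use stratified rather than uniform sampling to preserve network properties, but the interface would look much the same.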
Procedia PDF Downloads 76
115 Congruency of English Teachers’ Assessments Vis-à-Vis 21st Century Skills Assessment Standards
Authors: Mary Jane Suarez
Abstract:
A massive educational overhaul has taken place at the onset of the 21st century, addressing the mismatch between employability skills and the scholastic skills taught in schools. For a community to thrive in an ever-developing economy, the teaching of the skills necessary for job competencies should be realized by every educational institution. However, in harnessing 21st-century skills amongst learners, teachers, who often lack familiarity with and thorough insight into the emerging 21st-century skills, face the dual challenge of comprehending the characteristics of 21st-century skills learning and implementing the tenets of 21st-century skills teaching. With the endeavor to espouse 21st-century skills learning and teaching, a United States-based national coalition called the Partnership for 21st Century Skills (P21) has identified the four most important skills in 21st-century learning: critical thinking, communication, collaboration, and creativity and innovation, with an established framework for 21st-century skills standards. Assessment of skills is the lifeblood of every teaching and learning encounter. It is correspondingly crucial to look at the 21st-century standards and the assessment guides recognized by P21 to ensure that learners are 21st-century ready. This mixed-method study sought to discover and describe what classroom assessments were used by English teachers in a public secondary school in the Philippines with course offerings in science, technology, engineering, and mathematics (STEM). The research evaluated the assessment tools implemented by English teachers and how these assessment tools were congruent with the 21st-century assessment standards of P21. A convergent parallel design was used to analyze assessment tools and practices in four phases.
In the data-gathering phase, survey questionnaires, document reviews, interviews, and classroom observations were used to gather quantitative and qualitative data simultaneously, examining how assessment tools and practices were consistent with the P21 framework with the four Cs as its foci. In the analysis phase, the data were treated using mean, frequency, and percentage. In the merging and interpretation phases, a side-by-side comparison was used to identify convergent and divergent aspects of the results. In conclusion, the results revealed assessment tools and practices that were used inconsistently, if at all, by teachers. Findings showed inconsistencies in implementing authentic assessments, a scarcity of rubrics for critically assessing 21st-century skills in both language and literature subjects, incongruencies in using portfolio and self-reflective assessments, an exclusion of intercultural aspects in assessing the four Cs, and a lack of integration of collaboration into formative and summative assessments. As a recommendation, a harmonized assessment scheme for P21 skills was fashioned for teachers to plan, implement, and monitor classroom assessments of 21st-century skills, ensuring the alignment of such assessments to P21 standards for the furtherance of the institution’s thrust to effectively integrate 21st-century skills assessment standards into its curricula.
Keywords: 21st-century skills, 21st-century skills assessments, assessment standards, congruency, four Cs
Procedia PDF Downloads 193
114 Glasshouse Experiment to Improve Phytomanagement Solutions for Cu-Polluted Mine Soils
Authors: Marc Romero-Estonllo, Judith Ramos-Castro, Yaiza San Miguel, Beatriz Rodríguez-Garrido, Carmela Monterroso
Abstract:
Mining activity is among the main sources of trace and heavy metal(loid) pollution worldwide, which is a hazard to human and environmental health. That is why several projects have been emerging for the remediation of such polluted places. Phytomanagement strategies perform well and bring substantial side benefits. In this work, a glasshouse assay was set up with trace-element-polluted soils from an old Cu mine (NW Spain) which forms part of the PhytoSUDOE network of phytomanaged contaminated field sites (PhytoSUDOE Project (SOE1/P5/E0189)). The objective was to evaluate improvements induced by the following phytoremediation-related treatments. Three increasingly complex amendments were tested, alone or together with plant growth (Populus nigra L. alone and together with Trifolium repens L.), and three different rhizosphere bioinocula were applied (plant-growth-promoting bacteria (PGP), mycorrhiza (MYC), or mixed (PGP+MYC)). After 110 days of growth, plants were collected, biomass was weighed, and tree length was measured. Physicochemical analyses were carried out to determine pH, effective cation exchange capacity, carbon and nitrogen contents, bioavailable phosphorus (Olsen bicarbonate method), pseudo-total element content (microwave acid-digested fraction), EDTA-extractable metals (complexed fraction), and NH4NO3-extractable metals (easily bioavailable fraction). On plant material, nitrogen content and acid-digestion elements were determined. Amendment usage, plant growth, and bioinoculation were demonstrated to improve soil fertility and/or plant health within the time span of this study. In particular, pH levels increased from 3 (highly acidic) to 5 (acidic) in the worst-case scenario, even reaching 7 (neutrality) in the best plots. Increases in organic matter and pH were related to decreases in the bioavailability of the polluting metals. Plants grew better with both the most complex amendment and the middle one, with few differences due to bioinoculation.
With the least complex amendment (compost alone), the beneficial effects of the bioinoculants were more observable, although plants did not thrive very well. On unamended soils, plants neither sprouted nor bloomed. The scheme assayed in this study is suitable for the phytomanagement of these kinds of soils affected by mining activity. These findings should now be tested on a larger scale.
Keywords: aided phytoremediation, mine pollution, phytostabilization, soil pollution, trace elements
Procedia PDF Downloads 66
113 Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the Context of Agile, Lean and Hybrid Project Management Approaches
Authors: Maria Ledinskaya
Abstract:
This paper examines the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the context of Agile, Lean, and Hybrid project management. It is part case study and part literature review. To date, relatively little has been written about non-traditional project management approaches in heritage conservation. This paper seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation, by referencing their practical application on a recent museum-based conservation project. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre for Visual Arts by private collectors Michael and Joyce Morris. The first part introduces the chronological timeline and key elements of the project. It describes a medium-size conservation project of moderate complexity, which was planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown condition and materials, unconfirmed budget. The project was also impacted by the unknown unknowns of the COVID-19 pandemic, such as indeterminate lockdowns, and the need to accommodate social distancing and remote communications. The author, a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Collection Conservation Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. Subsequent sections examine the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. 
The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment, due to its over-reliance on prediction-based planning and its low tolerance for change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Collection Conservation Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, as well as the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics, particularly with respect to change management, bespoke ethics, shared decision-making, and value-based cost-benefit conservation strategy. The author concludes that the Morris Collection Conservation Project had multiple Agile and Lean features which were instrumental to the successful delivery of the project. These key features are identified as distributed decision making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point largely in favour of a Hybrid model which combines traditional and alternative project processes and tools to suit the specific needs of the project.
Keywords: project management, conservation, waterfall, agile, lean, hybrid
Procedia PDF Downloads 99
112 Salmonella Emerging Serotypes in Northwestern Italy: Genetic Characterization by Pulsed-Field Gel Electrophoresis
Authors: Clara Tramuta, Floris Irene, Daniela Manila Bianchi, Monica Pitti, Giulia Federica Cazzaniga, Lucia Decastelli
Abstract:
This work presents the results obtained by the Regional Reference Centre for Salmonella Typing (CeRTiS) in a retrospective study aimed at investigating, through Pulsed-Field Gel Electrophoresis (PFGE) analysis, the genetic relatedness of emerging Salmonella serotypes of human origin circulating in the North-West of Italy. A further goal of this work was to create a Regional database to facilitate foodborne outbreak investigation and to detect outbreaks at an earlier stage. A total of 112 strains, isolated from 2016 to 2018 in hospital laboratories, were included in this study. The isolates were previously identified as Salmonella according to standard microbiological techniques, and serotyping was performed according to ISO 6579-3 and the Kauffmann-White scheme using O and H antisera (Statens Serum Institut®). All strains were characterized by PFGE: analysis was conducted according to a standardized PulseNet protocol. The restriction enzyme XbaI was used to generate several distinguishable genomic fragments on the agarose gel. PFGE was performed on a CHEF Mapper system, separating large fragments and generating comparable genetic patterns. The agarose gel was then stained with GelRed® and photographed under ultraviolet transillumination. The PFGE patterns obtained from the 112 strains were compared using Bionumerics version 7.6 software with the Dice coefficient, with 2% band tolerance and 2% optimization. For each serotype, the data obtained with PFGE were compared according to the geographical origin and the year in which the strains were isolated. Salmonella strains were identified as follows: S. Derby n. 34; S. Infantis n. 38; S. Napoli n. 40. All the isolates had appreciable restriction digestion patterns ranging from approximately 40 to 1100 kb. In general, a fairly heterogeneous distribution of pulsotypes emerged in the different provinces. Cluster analysis indicated high genetic similarity (≥ 83%) among strains of S. Derby (n.
36; 95%) and S. Napoli (n. 38; 95%) circulating in north-western Italy. The study underlines the genomic similarities shared by the emerging Salmonella strains in Northwest Italy and made it possible to create a database to detect outbreaks at an early stage. The results therefore confirmed that PFGE is a powerful and discriminatory tool for investigating the genetic relationships among strains in order to monitor and control the spread of salmonellosis outbreaks. Pulsed-field gel electrophoresis (PFGE) still represents one of the most suitable approaches to characterize strains, in particular for laboratories where NGS techniques are not available.
Keywords: emerging Salmonella serotypes, genetic characterization, human strains, PFGE
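The Dice comparison with a 2% band tolerance described above can be sketched in a few lines. This is our own simplified illustration, not the CeRTiS/Bionumerics pipeline, and the band sizes used in the example are hypothetical.

```python
# Sketch (our assumptions, not the Bionumerics pipeline) of Dice
# similarity between PFGE band patterns, with a relative band tolerance
# so bands of nearly equal size (in kb) count as matches.

def matched_bands(bands_a, bands_b, tolerance=0.02):
    """Count bands in A that match an unused band in B within +/- tolerance."""
    used = set()
    matches = 0
    for a in bands_a:
        for i, b in enumerate(bands_b):
            if i not in used and abs(a - b) <= tolerance * max(a, b):
                used.add(i)
                matches += 1
                break
    return matches

def dice_similarity(bands_a, bands_b, tolerance=0.02):
    """Dice coefficient: 2 * matches / (bands in A + bands in B)."""
    m = matched_bands(bands_a, bands_b, tolerance)
    return 2 * m / (len(bands_a) + len(bands_b))

# Hypothetical band sizes (kb) for two strains:
strain1 = [45, 90, 180, 360, 700]
strain2 = [45, 91, 180, 355, 1100]
print(round(dice_similarity(strain1, strain2), 2))  # 4 of 5 bands match -> 0.8
```

Clustering then proceeds by building a pairwise similarity matrix and applying a linkage method (Bionumerics typically uses UPGMA), which is outside this sketch.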
Procedia PDF Downloads 105
111 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As an effect, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As an effect, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0, and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
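The core idea, recovering u0 by gradient descent on J using only stable forward and adjoint integrations, can be demonstrated on a much simpler problem than the 2D NSE. The sketch below is our own simplification for 1D periodic diffusion: because the explicit diffusion step is a symmetric linear operator, the adjoint transport of the cost gradient is itself a stable forward diffusion of the residual.

```python
# Toy sketch (our simplification, not the paper's 2D turbulence solver)
# of the adjoint gradient method for 1D periodic diffusion. Every AGM
# iteration performs only stable integrations, the property the
# abstract exploits for the (much harder) Navier-Stokes case.
import math

def step(u, nu_dt=0.1):
    """One explicit diffusion step, periodic boundaries (stable for nu_dt <= 0.5)."""
    n = len(u)
    return [u[i] + nu_dt * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
            for i in range(n)]

def forward(u0, n_steps):
    u = list(u0)
    for _ in range(n_steps):
        u = step(u)
    return u

def recover_initial(v1, n_steps, iters=400, lr=0.5):
    """Gradient descent on J(u0) = 0.5 * ||forward(u0) - v1||^2."""
    u0 = [0.0] * len(v1)
    for _ in range(iters):
        residual = [a - b for a, b in zip(forward(u0, n_steps), v1)]
        grad = forward(residual, n_steps)  # adjoint of a symmetric operator
        u0 = [a - lr * g for a, g in zip(u0, grad)]
    return u0

n = 32
true_u0 = [math.sin(2 * math.pi * i / n) for i in range(n)]
v1 = forward(true_u0, n_steps=10)          # "target field" at t = 1
u0_est = recover_initial(v1, n_steps=10)   # reversed integration via AGM
err = max(abs(a - b) for a, b in zip(forward(u0_est, 10), v1))
print(err < 1e-6)
```

For nonlinear, chaotic dynamics the cost landscape acquires the local minima discussed above, which is exactly why the paper introduces time segmentation and a regularizer on u0.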
Procedia PDF Downloads 224
110 Computational Code for Solving the Navier-Stokes Equations on Unstructured Meshes Applied to the Leading Edge of the Brazilian Hypersonic Scramjet 14-X
Authors: Jayme R. T. Silva, Paulo G. P. Toro, Angelo Passaro, Giannino P. Camillo, Antonio C. Oliveira
Abstract:
An in-house C++ code has been developed, at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies (Brazil), to estimate the aerothermodynamic properties around the Hypersonic Vehicle Integrated to the Scramjet. In the future, this code will be applied to the design of the Brazilian Scramjet Technological Demonstrator 14-X B. The first step towards accomplishing this objective is to apply the in-house C++ code to the leading edge of a flat plate, simulating the leading edge of the 14-X Hypersonic Vehicle and making it possible to analyze the wave phenomena of the oblique shock and the boundary layer. The development of modern hypersonic space vehicles requires knowledge regarding the characteristics of hypersonic flows in the vicinity of a leading edge of lifting surfaces. The strong interaction between a shock wave and a boundary layer, in a high supersonic Mach number 4 viscous flow, close to the leading edge of the plate, considering the no-slip condition, is numerically investigated. The small slip region is neglected. The study consists of solving the fluid flow equations on unstructured meshes, applying the SIMPLE algorithm within the Finite Volume Method. Unstructured meshes are generated by the in-house software ‘Modeler’, which was developed at the Virtual Engineering Laboratory of the Institute of Advanced Studies, initially for Finite Element problems and, in this work, adapted to the resolution of the Navier-Stokes equations based on the SIMPLE pressure-correction scheme for all-speed flows, within a Finite Volume formulation. The in-house C++ code is based on the two-dimensional Navier-Stokes equations considering non-steady flow, with no body forces, no volumetric heating, and no mass diffusion. Air is considered a calorically perfect gas, with constant Prandtl number and Sutherland's law for the viscosity.
Solutions of the flat-plate problem at Mach number 4 include pressure, temperature, density, and velocity profiles as well as 2-D contours. The boundary-layer thickness, boundary conditions, and mesh configurations are also presented. The same problem has been solved with an academic license of the software Ansys Fluent and with another in-house C++ code, which solves the fluid flow equations on structured meshes applying the MacCormack scheme within the Finite Difference Method, and the results will be compared.
Keywords: boundary layer, scramjet, SIMPLE algorithm, shock wave
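As a concrete illustration of the viscosity model named in the abstract, Sutherland's law with the standard reference constants for air can be written in a few lines. This is a generic Python sketch for illustration only, not the authors' C++ code:

```python
# Sutherland's law for the dynamic viscosity of air.
# Constants: mu_ref at T_ref = 273.15 K, Sutherland temperature S = 110.4 K.

def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity [Pa*s] of air at absolute temperature T [K]."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

mu_300 = sutherland_viscosity(300.0)  # ~1.85e-5 Pa*s near room temperature
```

In a compressible Navier-Stokes solver such as the one described, this relation closes the viscous stress terms, with thermal conductivity then following from the constant Prandtl number.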
Procedia PDF Downloads 491
109 Impacts of School-Wide Positive Behavioral Interventions and Supports on Student Academics, Behavior and Mental Health
Authors: Catherine Bradshaw
Abstract:
Educators often report difficulty managing behavior problems and other mental health concerns that students display at school. These concerns also interfere with the learning process and can create distractions for teachers and other students. As such, schools play an important role both in preventing and in intervening with students who experience these types of challenges. A number of models have been proposed to serve as frameworks for delivering prevention and early intervention services in schools. One such model is Positive Behavioral Interventions and Supports (PBIS), which has been scaled up to over 26,000 schools in the U.S. and many other countries worldwide. PBIS aims to improve a range of student outcomes through early detection of, and intervention related to, behavioral and mental health symptoms. PBIS blends and applies social learning, behavioral, and organizational theories to prevent disruptive behavior and enhance the school’s organizational health. PBIS focuses on creating and sustaining tier 1 (universal), tier 2 (selective), and tier 3 (individual) systems of support. Most schools using PBIS have focused on the core elements of the tier 1 supports, which include the following critical features. A PBIS team is formed within the school to lead implementation. A behavioral support ‘coach’ is identified and trained to serve as an on-site technical assistance provider; many of the individuals identified as PBIS coaches are also trained school psychologists or guidance counselors, typically have prior PBIS experience, and are trained to conduct functional behavioral assessments. The PBIS team also identifies a set of three to five positive behavioral expectations that are implemented for all students and by all staff school-wide (e.g., ‘be respectful, responsible, and ready to learn’); these expectations are posted in all settings across the school, including the classroom, cafeteria, and playground. 
All school staff define and teach the school-wide behavioral expectations to all students and review them regularly. PBIS schools also develop or adopt a school-wide system to reward or reinforce students who demonstrate those three to five positive behavioral expectations. Staff and administrators create an agreed-upon system for responding to behavioral violations that includes definitions of what constitutes a classroom-managed vs. an office-managed discipline problem. Finally, a formal system is developed to collect, analyze, and use disciplinary data (e.g., office discipline referrals) to inform decision-making. This presentation provides a brief overview of PBIS and reports findings from a series of four U.S.-based longitudinal randomized controlled trials (RCTs) documenting the impacts of PBIS on school climate, discipline problems, bullying, and academic achievement. The four RCTs include 80 elementary, 40 middle, and 58 high schools, and the results indicate a broad range of impacts on multiple student and school-wide outcomes. The session will highlight lessons learned regarding PBIS implementation and scale-up. We also review the ways in which PBIS can help educators and school leaders engage in data-based decision-making and share data with other decision-makers and stakeholders (e.g., students, parents, community members), with the overarching goal of increasing the use of evidence-based programs in schools.
Keywords: positive behavioral interventions and supports, mental health, randomized trials, school-based prevention
Procedia PDF Downloads 230
108 Development of the Integrated Quality Management System of Cooked Sausage Products
Authors: Liubov Lutsyshyn, Yaroslava Zhukova
Abstract:
Over the past twenty years, there has been a drastic change in the mode of nutrition in many countries, which has been reflected in the development of new products and production techniques and has also led to the expansion of sales markets for food products. Studies have shown that solving food safety problems is almost impossible without the active and systematic work of the organizations directly involved in the production, storage, and sale of food products, as well as without end-to-end traceability management and exchange of information. The aim of this research is the development of an integrated quality management and safety assurance system based on the principles of HACCP, traceability, and the system approach, with the creation of an algorithm for the identification and monitoring of the parameters of the technological process of cooked sausage manufacture. A methodology for implementing the integrated system, based on the principles of HACCP, traceability, and the system approach, during the manufacture of cooked sausage products has been developed to effectively provide for the defined properties of the finished product. As a result of the research, an evaluation technique and performance criteria for the implementation and operation of the HACCP-based quality management and safety assurance system have been developed and substantiated. The paper reveals regularities in how the application of HACCP principles, traceability, and the system approach influences the quality and safety parameters of the finished product, and in the identification of critical control points. 
The algorithm of functioning of the integrated quality management and safety assurance system has also been described, and key requirements have been defined for software allowing the prediction of the properties of the finished product, the timely correction of the technological process, and the traceability of manufacturing flows. Based on the obtained results, a typical scheme of the integrated quality management and safety assurance system, based on HACCP principles with elements of end-to-end traceability and the system approach, has been developed for the manufacture of cooked sausage products. Quantitative criteria for evaluating the performance of the quality management and safety assurance system have also been developed, along with a set of guidance documents for the implementation and evaluation of the integrated HACCP-based system in meat processing plants. The research demonstrated the effectiveness of continuous monitoring of the manufacturing process at the identified critical control points and substantiated the optimal number of critical control points for the manufacture of cooked sausage products. The main results of the research were appraised during 2013-2014 at seven enterprises of the meat processing industry and have been implemented at JSC «Kyiv meat processing plant».
Keywords: cooked sausage products, HACCP, quality management, safety assurance
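The continuous monitoring at critical control points described above can be illustrated with a minimal sketch. The parameter name and the 72 °C core-temperature limit below are hypothetical examples, not values from the described system:

```python
# Illustrative CCP monitoring check: each measured value is compared against
# the critical limits registered for its parameter; out-of-limit readings
# would trigger a corrective action in a real HACCP system.

CRITICAL_LIMITS = {"cooking_core_temp_c": (72.0, None)}  # (min, max); None = unbounded

def check_ccp(parameter, value):
    """Return True if the measured value lies within the critical limits."""
    low, high = CRITICAL_LIMITS[parameter]
    if low is not None and value < low:
        return False
    if high is not None and value > high:
        return False
    return True

readings = [("cooking_core_temp_c", 73.5), ("cooking_core_temp_c", 70.8)]
violations = [(p, v) for p, v in readings if not check_ccp(p, v)]
# violations -> [('cooking_core_temp_c', 70.8)]
```

A traceability layer would additionally log each reading with its batch identifier, which is what enables the end-to-end tracing the abstract refers to.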
Procedia PDF Downloads 247
107 The Effect of Zeolite and Fertilizers on Yield and Qualitative Characteristics of Cabbage in the Southeast of Kazakhstan
Authors: Tursunay Vassilina, Aigerim Shibikeyeva, Adilet Sakhbek
Abstract:
Research has been carried out to study the influence of modified zeolite fertilizers on the quantitative and qualitative indicators of the cabbage variety Nezhenka. The use of zeolite and mineral fertilizers had a positive effect on both the yield and the quality indicators of the studied crop. The maximum increase in yield from fertilizers was 16.5 t/ha. Application of both zeolite and fertilizer increased the dry matter, sugar, and vitamin C content of cabbage heads. It was established that the cabbage contains an amount of nitrates that is safe for human health. Among vegetable crops, cabbage has both food and feed value. Limiting factors in vegetable crop production include the degradation of soil fertility due to the depletion of nutrient reserves and erosion processes, along with non-compliance with fertilizer application technologies. Natural zeolites are used as additives to mineral fertilizers for field application, which makes it possible to reduce fertilizer doses to minimal quantities. Zeolites improve the agrophysical and agrochemical properties of the soil and the quality of plant products. The research was carried out in 2023 in a field experiment with three replications on dark chestnut soil. The soil of the experimental plot (pH = 7.2-7.3) is dark chestnut; the humus content in the arable layer is 2.15%, gross nitrogen 0.098%, and gross phosphorus and potassium 0.225% and 2.4%, respectively. The object of the study was the late cabbage variety Nezhenka. The fertilizer scheme for cabbage was: 1. Control (without fertilizers); 2. Zeolite, 2 t/ha; 3. N45P45K45; 4. N90P90K90; 5. Zeolite, 2 t/ha + N45P45K45; 6. Zeolite, 2 t/ha + N90P90K90. Yield accounting was carried out manually on a plot-by-plot basis. In plant samples, the following were determined: dry matter content by the thermostatic method (at 105ºC), sugar content by the Bertrand titration method, nitrate content with 1% diphenylamine solution, and vitamin C by the titrimetric method with acid solution. 
According to the results, the highest cabbage yield, 42.2 t/ha, was obtained in the treatment Zeolite, 2 t/ha + N90P90K90. When determining the biochemical composition of white cabbage, it was found that the dry matter content was 9.5% and increased in the fertilized treatments. The total sugar content increased slightly with the use of zeolite (5.1%) and modified zeolite fertilizer (5.5%); the vitamin C content ranged from 17.5 to 18.16%, compared with 17.21% in the control. The amount of nitrates in the produce increased with increasing doses of nitrogen fertilizers and decreased with the use of zeolite and modified zeolite fertilizer, but did not exceed the maximum permissible concentration. Based on the research conducted, it can be concluded that the application of zeolite and fertilizers leads to a significant increase in yield compared to the unfertilized treatment and contributes to the production of cabbage with good quality indicators.
Keywords: cabbage, dry matter, nitrates, total sugar, yield, vitamin C
Procedia PDF Downloads 73
106 Estimating Evapotranspiration of Irrigated Maize in Brazil Using a Hybrid Modelling Approach and Satellite Image Inputs
Authors: Ivo Zution Goncalves, Christopher M. U. Neale, Hiran Medeiros, Everardo Mantovani, Natalia Souza
Abstract:
Multispectral and thermal infrared imagery from satellite sensors, coupled with climate and soil datasets, were used to estimate evapotranspiration and biomass in center pivots planted to maize in Brazil during the 2016 season. The hybrid remote-sensing-based model named Spatial EvapoTranspiration Modelling Interface (SETMI) was applied using multispectral and thermal infrared imagery from the Landsat Thematic Mapper instrument. Field data collected by the IRRIGER center pivot management company included daily weather information, such as maximum and minimum temperature, precipitation, and relative humidity, for estimating reference evapotranspiration. In addition, soil water content data were obtained every 0.20 m in the soil profile down to 0.60 m depth throughout the season. Early-season soil samples were used to obtain water-holding capacity, wilting point, saturated hydraulic conductivity, initial volumetric soil water content, layer thickness, and saturated volumetric water content. Crop canopy development parameters and irrigation application depths were also inputs of the model. The modeling approach is based on the reflectance-based crop coefficient approach contained within the SETMI hybrid ET model, using relationships developed in Nebraska. The model was applied to several fields located in Minas Gerais State in Brazil, at approximate latitude -16.630434 and longitude -47.192876. The model provides estimates of actual crop evapotranspiration (ET), crop irrigation requirements, and all soil water balance outputs, including biomass estimation, using multi-temporal satellite image inputs. An interpolation scheme based on the growing degree-day concept was used to model the periods between satellite inputs, filling the gaps between image dates and obtaining daily data. Actual and accumulated ET, accumulated cold temperature and water stress, and crop water requirements estimated by the model were compared with data measured at the experimental fields. 
Results indicate that the SETMI modeling approach, using data assimilation, provided reliable daily ET and crop water requirement estimates for maize, interpolated between remote sensing observations, confirming the applicability of the SETMI model, with the relationships developed in Nebraska, for estimating ET and water requirements in Brazil under tropical conditions.
Keywords: basal crop coefficient, irrigation, remote sensing, SETMI
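The growing degree-day interpolation between satellite image dates described above can be sketched as follows. This is a minimal illustration with made-up numbers (function names, base temperature, and the two-image example are assumptions), not the SETMI implementation:

```python
# Interpolating the reflectance-based basal crop coefficient (Kcb) between
# satellite overpasses on a cumulative growing-degree-day (GDD) axis, so that
# gaps between image dates are filled with physiologically-scaled daily values.

def daily_gdd(tmax, tmin, tbase=10.0):
    """Growing degree-days accumulated in one day (simple averaging method)."""
    return max(0.0, (tmax + tmin) / 2.0 - tbase)

def interpolate_kcb(gdd_today, gdd_images, kcb_images):
    """Linearly interpolate Kcb at today's cumulative GDD.

    gdd_images: cumulative GDD at each satellite overpass (ascending)
    kcb_images: Kcb retrieved from reflectance at each overpass
    """
    if gdd_today <= gdd_images[0]:
        return kcb_images[0]
    if gdd_today >= gdd_images[-1]:
        return kcb_images[-1]
    for g0, g1, k0, k1 in zip(gdd_images, gdd_images[1:], kcb_images, kcb_images[1:]):
        if g0 <= gdd_today <= g1:
            return k0 + (gdd_today - g0) / (g1 - g0) * (k1 - k0)

todays_gdd = daily_gdd(30.0, 14.0)            # 12.0 degree-days with a 10 C base
kcb = interpolate_kcb(450.0, [300.0, 600.0], [0.4, 0.9])  # halfway in GDD -> 0.65
```

Scaling on GDD rather than calendar days means a cool spell slows the modeled canopy development between images, which is the point of the interpolation scheme mentioned in the abstract.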
Procedia PDF Downloads 140
105 The Securitization of the European Migrant Crisis (2015-2016): Applying the Insights of the Copenhagen School of Security Studies to a Comparative Analysis of Refugee Policies in Bulgaria and Hungary
Authors: Tatiana Rizova
Abstract:
The migrant crisis, which peaked in 2015-2016, posed an unprecedented challenge to the European Union’s (EU) newest member states, including Bulgaria and Hungary. Their governments had to formulate sound migration policies with expediency and sensitivity to the needs of millions of people fleeing violent conflicts in the Middle East and failed states in North Africa. Political leaders in post-communist countries had to carefully coordinate with other EU member states on joint policies and solutions while minimizing the risk of alienating their increasingly anti-migrant domestic constituents. Post-communist member states’ governments chose distinct policy responses to the crisis, which were dictated by factors such as their governments’ partisan stances on migration, their views of the European Union, and the decision to frame the crisis as a security or a humanitarian issue. This paper explores how two Bulgarian governments (Boyko Borisov’s second and third government formed during the 43rd and 44th Bulgarian National Assembly, respectively) navigated the processes of EU migration policy making and managing the expectations of their electorates. Based on a comparative analysis of refugee policies in Bulgaria and Hungary during the height of the crisis (2015-2016) and a temporal analysis of refugee policies in Bulgaria (2015-2018), the paper advances the following conclusions. Drawing on insights of the Copenhagen school of security studies, the paper argues that cultural concerns dominated domestic debates in both Bulgaria and Hungary; both governments framed the issue predominantly as a matter of security rather than humanitarian disaster. Regardless of the similarities in issue framing, however, the two governments sought different paths of tackling the crisis. 
While the Bulgarian government demonstrated its willingness to comply with EU decisions (such as the proposal for mandatory quotas for refugee relocation), the Hungarian government defied EU directives and became a leading voice of dissent inside the EU. The current Bulgarian government (April 2017 - present) appears to be committed to complying with EU decisions and accepts the strategy of EU burden-sharing, while the Hungarian government has continually snubbed the EU’s appeals for cooperation despite the risk of hefty financial penalties. Hungary’s refugee policies have been influenced by the parliamentary representation of the far right-wing party Movement for a Better Hungary (Jobbik), which has encouraged the majority party (FIDESZ) to adopt harsher anti-migrant rhetoric and more hostile policies toward refugees. Bulgaria’s current government is a coalition of the center-right Citizens for European Development of Bulgaria (GERB) and its far right-wing junior partners, the United Patriots (comprised of three nationalist political parties). Jobbik’s parliamentary presence has magnified the anti-migrant stance, rhetoric, and policies of Mr. Orbán’s Civic Alliance; we have yet to observe a comparable increase in anti-migrant rhetoric and policies in Bulgaria’s case. Analyzing responses to the migrant/refugee crisis is a critical opportunity to understand how issues of cultural identity and belonging, inclusion and exclusion, and regional integration and disintegration are debated and molded into policy in Europe’s youngest member states in the broader EU context.
Keywords: Copenhagen School, migrant crisis, refugees, security
Procedia PDF Downloads 121
104 The Role of Personality Traits and Self-Efficacy in Shaping Teaching Styles: Insights from Indian Higher Education Faculty
Authors: Pritha Niraj Arya
Abstract:
Education plays a crucial role in societal evolution by promoting economic expansion and creativity. The varied demands of students in India’s higher education setting call for inclusive and efficient teaching methods. The present study examined how teaching styles, self-efficacy, and personality traits interact among Indian higher education faculty members and how these factors collectively affect pedagogical practices. Specifically, the research explored differences in personality traits (agreeableness, conscientiousness, neuroticism, openness, and extraversion) between teachers with high and low self-efficacy and examined how these traits shape teaching strategies, whether student-focused or teacher-focused. Data collection took place over three months, ensuring confidentiality and ethical compliance. A total of 268 faculty members from Indian higher education institutions participated in this comparative study. An online questionnaire was used to gather data, in which participants completed three well-established tools: the Approaches to Teaching Inventory, which measures teaching styles; the Teacher Self-Efficacy Questionnaire, which measures self-efficacy levels; and the Big Five Inventory, which measures personality traits. The results showed that while teachers with low self-efficacy had higher levels of neuroticism, those with high self-efficacy scored much higher on traits such as agreeableness, conscientiousness, openness, and extraversion. Despite the traditional belief that high self-efficacy is associated only with student-focused teaching, the findings suggest that teachers with high self-efficacy have cognitive flexibility, which enables them to skillfully use both teacher-focused and student-focused approaches to cater to a wide range of classroom needs. Teachers with low self-efficacy, on the other hand, are less flexible and adopt fewer strategies in their teaching practice. 
The findings challenge simplistic associations between self-efficacy and teaching strategies, emphasising that high self-efficacy promotes adaptability rather than a fixed preference for specific teaching methods. This adaptability is crucial in India’s diverse educational settings, where teachers must balance standardised curricula with the varied learning needs of students. This study highlights the importance of integrating personality traits and self-efficacy into teacher training programs. By promoting self-efficacy and tailoring professional development to individual personality traits, institutions can enhance teachers’ flexibility, thereby improving student engagement and learning outcomes. These findings have practical implications for teacher education, suggesting that fostering cognitive flexibility among teachers can improve instructional quality and classroom dynamics. To gain a deeper understanding of how personality traits and self-efficacy shape teaching practices over time, future research should investigate causal relationships using longitudinal studies. Examining external factors such as institutional policies, availability of resources, and cultural settings would also help to clarify the dynamics at play. Furthermore, this study underscores the need to strike a balance between teacher-focused and student-focused approaches to provide a comprehensive education that covers both conceptual understanding and the delivery of key information. This study offers insights into how the Indian educational system is changing and how effective teaching techniques are becoming increasingly important for achieving global standards. By exploring the interaction of internal and external factors shaping teaching styles and providing practical policy and practice recommendations, it advances the larger objective of educational excellence.
Keywords: higher education, personality traits, self-efficacy, teaching styles
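The high- vs. low-self-efficacy trait comparison described above is, at its core, a two-group mean comparison. A minimal sketch using Welch's t statistic follows; the scores are toy numbers for illustration, not data from the study:

```python
# Welch's t statistic for two independent samples with unequal variances,
# of the kind used to compare trait scores between high- and low-self-efficacy
# teacher groups. A positive t here means the first group scored higher.
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic (does not assume equal group variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample (n-1) variances
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(va / na + vb / nb)

high_se_openness = [4.2, 4.5, 3.9, 4.8, 4.1]  # illustrative Likert-scale scores
low_se_openness  = [3.1, 3.4, 2.9, 3.6, 3.2]

t = welch_t(high_se_openness, low_se_openness)  # positive: higher openness in high-SE group
```

In practice the t value would be referred to a t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value; the sketch stops at the statistic itself.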
Procedia PDF Downloads 1
103 Marketing and Business Intelligence and Their Impact on Products and Services Through Understanding Based on Experiential Knowledge of Customers in Telecommunications Companies
Authors: Ali R. Alshawawreh, Francisco Liébana-Cabanillas, Francisco J. Blanco-Encomienda
Abstract:
Collaboration between marketing and business intelligence (BI) is crucial in today's ever-evolving business landscape. These two domains play pivotal roles in molding customers' experiential knowledge. Marketing insights offer valuable information regarding customer needs, preferences, and behaviors. Conversely, BI facilitates data-driven decision-making, leading to heightened operational efficiency, product quality, and customer satisfaction. Customer experiential knowledge (CEK) encompasses customers' implicit comprehension of consumption experiences influenced by diverse factors, including social and cultural influences. This study focuses primarily on telecommunications companies in Jordan, scrutinizing how customer experiential knowledge mediates the relationship between marketing intelligence and business intelligence. Drawing on theoretical frameworks such as the resource-based view (RBV) and service-dominant logic (SDL), the research aims to comprehend how organizations utilize their resources, particularly knowledge, to foster innovation. Employing a quantitative research approach, the study collected and analyzed primary data to test its hypotheses. Structural equation modeling (SEM), facilitated by SmartPLS software, evaluated the relationships between the constructs, followed by mediation analysis to assess the indirect associations in the model. The study findings offer insights into the intricate dynamics of organizational innovation, uncovering the interconnected relationships between business intelligence, customer experiential knowledge-driven innovation (CEK-DI), marketing intelligence (MI), and product and service innovation (PSI), underscoring the pivotal role of advanced intelligence capabilities in developing innovative practices rooted in a profound understanding of customer experiences. Furthermore, the positive impact of BI on PSI reaffirms the significance of data-driven decision-making in shaping the innovation landscape. 
The significant impact of CEK-DI on PSI highlights the critical role of customer experiences in driving organizational innovation. Companies that actively integrate customer insights into their opportunity creation processes are more likely to create offerings that match customer expectations, which drives higher levels of product and service sophistication. Additionally, the positive and significant impact of MI on CEK-DI underscores the critical role of market insights in shaping innovation strategies. While the relationship between MI and PSI is positive, the slightly weaker significance level indicates a subtle association, suggesting that while MI contributes to the development of ideas, its direct influence on product and service innovation is more modest. In conclusion, the study emphasizes the fundamental role of intelligence capabilities, especially artificial intelligence, and the need for organizations to leverage market and customer intelligence to achieve effective and competitive innovation practices. Collaborative efforts between marketing and business intelligence serve as pivotal drivers of development, influencing customer experiential knowledge and shaping organizational strategies and practices. Future research could adopt longitudinal designs and gather data from various sectors to offer broader insights. Additionally, the study focuses on the effects of marketing intelligence, business intelligence, customer experiential knowledge, and innovation, but other unexamined variables may also influence innovation processes. Future studies could investigate additional factors, mediators, or moderators, including the role of emerging technologies like AI and machine learning in driving innovation.
Keywords: marketing intelligence, business intelligence, product, customer experiential knowledge-driven innovation
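The mediation logic described in this abstract (an indirect effect of marketing intelligence on innovation through customer experiential knowledge) can be sketched with ordinary least squares. The toy data and variable names below are hypothetical; the study itself used SEM in SmartPLS, a more general estimator:

```python
# Simple mediation sketch: path a (X -> M), path b (M -> Y controlling for X),
# indirect effect = a * b. Pure-Python OLS via covariances; illustrative only.

def mean(v):
    return sum(v) / len(v)

def cov(x, y):
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

def simple_slope(x, y):
    """OLS slope of y on a single predictor x."""
    return cov(x, y) / cov(x, x)

def partial_slope(x1, x2, y):
    """OLS slope of y on x2, controlling for x1 (2-predictor normal equations)."""
    s11, s22, s12 = cov(x1, x1), cov(x2, x2), cov(x1, x2)
    det = s11 * s22 - s12 * s12
    return (s11 * cov(x2, y) - s12 * cov(x1, y)) / det

# Toy scores: marketing intelligence (X), customer experiential knowledge (M),
# product/service innovation (Y) -- hypothetical, not study data.
mi  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
cek = [1.6, 0.7, 3.1, 2.4, 4.9, 4.1]
psi = [1.16, 1.22, 2.56, 2.74, 4.34, 4.36]

a = simple_slope(mi, cek)        # path a: MI -> CEK
b = partial_slope(mi, cek, psi)  # path b: CEK -> PSI, holding MI constant
indirect = a * b                 # mediated effect of MI on PSI through CEK
```

A bootstrap over resampled rows would then give a confidence interval for the indirect effect, which is how mediation is usually tested in practice.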
Procedia PDF Downloads 32
102 Synthesis, Molecular Modeling and Study of 2-Substituted-4-(Benzo[D][1,3]Dioxol-5-Yl)-6-Phenylpyridazin-3(2H)-One Derivatives as Potential Analgesic and Anti-Inflammatory Agents
Authors: Jyoti Singh, Ranju Bansal
Abstract:
Fighting pain and inflammation is a common problem faced by physicians dealing with a wide variety of diseases. Since ancient times, nonsteroidal anti-inflammatory agents (NSAIDs) and opioids have been the cornerstone of treatment; however, the usefulness of both classes is limited by severe side effects. NSAIDs, which are mainly used to treat mild to moderate inflammatory pain, induce gastric irritation and nephrotoxicity, whereas opioids show an array of adverse reactions such as respiratory depression, sedation, and constipation. Moreover, repeated administration of these drugs induces tolerance to the analgesic effects and physical dependence. The subsequent discovery of selective COX-2 inhibitors (coxibs) suggested safety without ulcerogenic side effects; however, long-term use of these drugs resulted in kidney and hepatic toxicity along with an increased risk of secondary cardiovascular effects. The basic approaches to the treatment of inflammation and pain are constantly changing, and researchers are continuously trying to develop safer and more effective anti-inflammatory drug candidates for different inflammatory conditions such as osteoarthritis, rheumatoid arthritis, ankylosing spondylitis, psoriasis, and multiple sclerosis. Synthetic 3(2H)-pyridazinones constitute an important scaffold for drug discovery. Structure-activity relationship (SAR) studies on pyridazinones have shown that attachment of a lactam at N-2 of the pyridazinone ring through a methylene spacer significantly increases the anti-inflammatory and analgesic properties of the derivatives, and that introduction of a heterocyclic ring at the lactam nitrogen further improves the biological activities. Keeping these SAR studies in mind, a new series of compounds was synthesized as shown in scheme 1 and investigated for anti-inflammatory, analgesic, and anti-platelet activities, along with docking studies. 
The structures of the newly synthesized compounds have been established by various spectroscopic techniques. All the synthesized pyridazinone derivatives exhibited potent anti-inflammatory and analgesic activity. The homoveratryl-substituted derivative possessed the highest anti-inflammatory and analgesic activity, displaying 73.60% inhibition of edema at 40 mg/kg with no ulcerogenic activity, compared to the standard drug indomethacin. Moreover, the 2-substituted-4-(benzo[d][1,3]dioxol-5-yl)-6-phenylpyridazin-3(2H)-one derivatives did not produce significant changes in bleeding time and emerged as safe agents. Molecular docking studies also illustrated good binding interactions at the active site of the cyclooxygenase-2 (hCOX-2) enzyme.
Keywords: anti-inflammatory, analgesic, pyridazin-3(2H)-one, selective COX-2 inhibitors
Procedia PDF Downloads 200
101 Modern Technology for Strengthening Concrete Structures Makes Them Resistant to Earthquakes
Authors: Mohsen Abdelrazek Khorshid Ali Selim
Abstract:
Disadvantages and errors of current concrete reinforcement methods: Current concrete reinforcement methods, adopted in most parts of the world under various doctrines and names, rely on the so-called concrete slab system, in which the slabs are semi-independent, isolated from each other and from the surrounding concrete columns and beams: the reinforcing steel crosses from one slab to another, or from a slab into the adjacent columns and beams and vice versa, by only a few centimeters. The same applies to the concrete columns that support the building, where the reinforcing steel extends from the slabs or beams into the columns, or vice versa, by only a few centimeters, just as the reinforcing steel extends from the top of a column into the ceiling by only a few centimeters. The same is repeated in the concrete beams that connect the columns and separate the slabs, where the reinforcement crosses from one beam to another, or from a beam into the adjacent slabs and columns and vice versa, by only a few centimeters. As a result, the basic structural elements (columns, slabs, and beams) all work in isolation from each other and from their surroundings. This traditional method of reinforcement may be adequate and durable in geographical areas not exposed to earthquakes, where all the loads and tensile forces in the building are directed vertically downward by gravity and are borne directly by the vertical reinforcement of the building. 
During an earthquake, however, the loads and tensile forces in the building shift from the vertical toward the horizontal direction, at an angle of inclination that depends on the strength of the earthquake, and most of them are borne by the horizontal reinforcement running between the basic elements of the building, such as the columns, slabs, and beams. Since the reinforcement crossing between columns, slabs, and beams does not exceed several centimeters, the tensile strength, cohesion, and bonding between the various parts of the building are very weak, which causes buildings to disintegrate and collapse in the horrific manner seen in the Turkey-Syria earthquake of February 2023, which brought down tens of thousands of buildings in a few seconds and left more than 50,000 dead, hundreds of thousands injured, and millions displaced. Description of the new earthquake-resistant model: The new model for reinforcing concrete buildings is based on the theory we have formulated as follows: the tensile strength, cohesion, and bonding between the basic parts of a concrete building (columns, beams, and slabs) increase as the reinforcing steel bars are lengthened, extended, and branched, and as the different parts of the building share them. In other words, the strength, solidity, and cohesion of concrete buildings increase, and they become earthquake resistant, as the reinforcing bars grow longer, extend further, branch, and are shared among the building's columns, beams, and slabs. The reinforcing bars of the columns must therefore run uncut from one floor to the next until their ends. 
Likewise, the reinforcing bars of the beams must extend their full length, without cutting, crossing from one beam to another, with their ends resting at the bottom of the columns adjacent to the beams. The same applies to the reinforcing bars of the slabs: they must extend their full length, without cutting, crossing from one slab to another, with their ends resting either beneath the adjacent columns or inside the beams adjacent to the slabs, as follows. First, reinforcing the columns: the columns receive the lion's share of the reinforcing steel in this model, in both type and quantity, as the columns contain two types of bars. The first type is large-diameter bars that rise from the base of the building and form the main skeleton of the column; these bars must extend over their standard length of 12 meters or more, reaching a height of three floors. If further floors are desired, bars of the same diameter and length are added at the top after the second floor. The second type is smaller-diameter bars, the same ones used to reinforce the beams and slabs: the bars reinforcing the beams and slabs facing each column are bent down inside that column and run along its entire length. This requires a practice most engineers do not prefer, namely casting the columns and the roof in a single pour, but we prefer this method because it enables us to extend the reinforcing bars of both the beams and the slabs to the bottom of the columns, so that the entire building becomes one cohesive, earthquake-resistant concrete block. Second, reinforcing the beams: the reinforcing bars of the beams must likewise extend their full length of 12 meters or more without cutting, with the ends of the bars bent and dropped to the bottom of the column at the beginning of the beam.
The bars are then extended along the beam so that their other end falls beneath the facing column at the end of the beam. A bar may also cross over the head of one column, pass through another adjacent beam, and rest at the bottom of a third column, depending on the lengths of the bars and beams. Third, reinforcing the slabs: the reinforcing bars of the slabs must also extend their full length of 12 meters or more without cutting, and two cases are distinguished. The first case is bars facing the columns: their ends are dropped inside one of the columns, the bars then cross through the adjacent slab, and their other end falls beneath the opposite column. A bar may also cross over the head of the adjacent column, pass through another adjacent slab, and rest at the bottom of a third column, depending on the dimensions of the slabs and the lengths of the bars. The second case is bars facing the beams: their ends must be bent into a square or rectangle matching the width and height of the beam, and this square or rectangle is dropped inside the beam at the beginning of the slab, serving as reinforcement for that beam. The bars are then extended along the length of the slab, and at the end of the slab they are bent down to the bottom of the adjacent beam in the shape of the letter U; the bars are then extended into the adjacent slab, and the same is repeated inside the other adjacent beams until the end of the bar, which is finally bent down into a square or rectangle inside the beam, as at its beginning.
Keywords: earthquake resistant buildings, earthquake resistant concrete constructions, new technology for reinforcement of concrete buildings, new technology in concrete reinforcement
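The principle stated above, that the force a joint can transfer grows with the length of reinforcement shared across elements, can be illustrated with a toy anchorage calculation. The simple bond-stress model and every numeric value below (bar diameter, bond stress, yield strength) are our own illustrative assumptions, not figures from this paper:

```python
import math

def anchorage_force_kN(embed_len_m, bar_dia_m=0.016, bond_MPa=2.0, fy_MPa=500):
    """Toy model of the force a bar can transfer across a joint.

    Bond capacity = bond stress x bar perimeter x embedded length,
    capped by the bar's own yield force. All values are illustrative.
    """
    bond_kN = bond_MPa * 1e3 * math.pi * bar_dia_m * embed_len_m   # MPa -> kPa
    yield_kN = fy_MPa * 1e3 * math.pi * (bar_dia_m / 2) ** 2
    return min(bond_kN, yield_kN)

# A few centimeters of crossing vs a bar continuing through the joint:
short = anchorage_force_kN(0.05)   # 5 cm, as in the criticized practice
long_ = anchorage_force_kN(1.00)   # 1 m of continuous embedment
```

With these assumed values, a 5 cm crossing transfers only about 5 kN before pull-out, while a bar continuing roughly a meter into the adjacent element can develop close to its full yield force, which is the intuition behind extending bars without cutting.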
Procedia PDF Downloads 64
100 Green Architecture from the Thawing Arctic: Reconstructing Traditions for Future Resilience
Authors: Nancy Mackin
Abstract:
Historically, architects from Aalto to Gaudi to Wright have looked to the architectural knowledge of long-resident peoples for forms and structural principles specifically adapted to the regional climate, geology, materials availability, and culture. In this research, structures traditionally built by Inuit peoples in a remote region of the Canadian high Arctic provide a folio of architectural ideas that are increasingly relevant in these times of escalating carbon emissions and climate change. ‘Green Architecture from the Thawing Arctic’ researches, draws, models, and reconstructs traditional buildings of Inuit (Eskimo) peoples in three remote, often inaccessible Arctic communities. Structures verified in pre-contact oral history and early written history are first recorded in architectural drawings, then modeled and, with the participation of Inuit young people, local scientists, and Elders, reconstructed as emergency shelters. Three full-sized building types are constructed: a driftwood and turf-clad A-frame (spring/summer); a stone/bone/turf house with inwardly spiraling walls and a fan-shaped floor plan (autumn); and a parabolic/catenary arch-shaped dome of willow, turf, and skins (autumn/winter). Each reconstruction is filmed and featured in a short video.
Communities found that the reconstructed buildings, and the method of involving young people and Elders in the reconstructions, have ongoing usefulness, as follows: 1) The reconstructions provide emergency shelters, which are particularly needed as climate change worsens storms, floods, and freeze-thaw cycles, and scientists and food harvesters who must work out on the land become stranded more frequently; 2) People from the communities re-learned from their Elders how to use materials close at hand to construct impromptu shelters; 3) Forms from tradition, such as windbreaks at entrances and the use of levels to trap warmth within winter buildings, can be adapted and used in modern community buildings and housing; and 4) The project initiates much-needed educational and employment opportunities in the applied sciences (engineering and architecture), construction, and climate change monitoring, all offered in a culturally responsive way. Elders, architects, scientists, and young people added innovations to the traditions as they worked, thereby suggesting new sustainable, culturally meaningful building forms and material combinations that can be used for modern buildings. Adding to the growing interest in biomimicry, participants examined properties of Arctic and subarctic materials such as moss (insulation), shrub bark (waterproofing), and willow withes (parabolic and catenary arched forms). ‘Green Architecture from the Thawing Arctic’ demonstrates the effective, useful architectural oeuvre of a resilient northern people. The research parallels efforts elsewhere in the world to revitalize long-resident peoples’ architectural knowledge, in the interest of designing sustainable buildings that reflect culture, heritage, and identity.
Keywords: architectural culture and identity, climate change, forms from nature, Inuit architecture, locally sourced biodegradable materials, traditional architectural knowledge, traditional Inuit knowledge
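The catenary form noted for the willow-and-turf dome has a simple closed-form profile, y = rise - a(cosh(x/a) - 1). As a hypothetical sketch (the 4 m span and 2 m rise are our own example values, not measurements from the reconstructions), a rib profile can be generated as:

```python
import math

def catenary_rib(span_m, rise_m, n=9):
    """Height profile y(x) of a catenary arch rib: y = rise - a*(cosh(x/a) - 1).

    The parameter a is found by bisection so that the arch meets the
    ground at x = +/- span/2. Values are illustrative only.
    """
    half = span_m / 2
    lo, hi = 1e-3, 100.0
    for _ in range(200):                   # solve a*(cosh(half/a) - 1) = rise
        a = (lo + hi) / 2
        if a * (math.cosh(half / a) - 1) > rise_m:
            lo = a                         # too peaked: flatten (larger a)
        else:
            hi = a
    xs = [-half + span_m * i / (n - 1) for i in range(n)]
    return [(x, rise_m - a * (math.cosh(x / a) - 1)) for x in xs]

profile = catenary_rib(4.0, 2.0)           # hypothetical 4 m span, 2 m rise
```

The profile peaks at the chosen rise at mid-span and returns to ground level at the two ends, giving the bending template for an arched willow rib.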
Procedia PDF Downloads 523
99 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography
Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner
Abstract:
Nowadays, three-dimensional Cone Beam CT (CBCT) has become a widespread clinical routine imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a large number of 2D projections in order to reconstruct a 3D volume. However, the radiation dose accumulated through the repetitive use of CBCT during intraoperative procedures, as well as for daily pretreatment patient alignment in radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the radiation dose required for these interventional images. It is therefore desirable to find optimized source-detector trajectories with a reduced number of projections, which could in turn lead to dose reduction. In this study we investigate source-detector trajectories with optimal arbitrary orientations, chosen to maximize the performance of the reconstructed image at particular regions of interest. To this end, we developed a box phantom containing several small polytetrafluoroethylene target spheres placed at regular distances throughout the phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D Point Spread Function (PSF) as a measure of the performance of the reconstructed image, quantifying the spatial variance in terms of the Full Width at Half Maximum (FWHM) of the local PSF associated with each target. A lower FWHM indicates better spatial resolution of the reconstruction at the target area. One important feature of interventional radiology is that the imaging targets are very well known, since prior knowledge of the patient anatomy (e.g. a preoperative CT) is usually available for interventional imaging. We therefore use a CT scan of the box phantom as the prior knowledge and treat it as the digital phantom in our simulations to find the optimal trajectory for a specific target.
The simulation phase yields the optimal trajectory, which can then be applied on the device in a real acquisition. We consider a Philips Allura FD20 Xper C-arm geometry for both the simulations and the real data acquisition. Our experimental results, based on both simulated and real data, show that the proposed optimization scheme is capable of finding optimized trajectories that localize the targets with a minimal number of projections. The proposed optimized trajectories localize the targets as well as a standard circular trajectory while using only one third of the projections. Conclusion: We demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize radiation dose.
Keywords: CBCT, C-arm, reconstruction, trajectory optimization
Procedia PDF Downloads 132
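The FWHM-of-the-local-PSF metric used in the abstract above can be extracted from a sampled profile by locating the two half-maximum crossings with linear interpolation. This is a generic sketch of the metric, not the authors' implementation, and the Gaussian test profile is hypothetical:

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a sampled, single-peaked profile.

    Locates the half-maximum crossings on each side of the peak by linear
    interpolation between samples; returns the width in units of `spacing`
    (e.g. mm per voxel). A lower value means better spatial resolution.
    """
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]          # contiguous run around the peak
    left, right = above[0], above[-1]
    if left > 0:                            # interpolate the rising crossing
        x_left = left - 1 + (half - y[left - 1]) / (y[left] - y[left - 1])
    else:
        x_left = 0.0
    if right < len(y) - 1:                  # interpolate the falling crossing
        x_right = right + (half - y[right]) / (y[right + 1] - y[right])
    else:
        x_right = float(right)
    return (x_right - x_left) * spacing

# Hypothetical 1D PSF: a Gaussian with sigma = 5 samples.
x = np.arange(-30, 31)
psf = np.exp(-x**2 / (2 * 5.0**2))
width = fwhm(psf)   # close to 2*sqrt(2*ln 2)*5, i.e. about 11.77
```

In the paper's setting this measure would be evaluated on the local PSF around each target sphere, with `spacing` set to the reconstruction voxel size.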