Search results for: loyalty scheme
120 AI-Enabled Smart Contracts for Reliable Traceability in Industry 4.0
Authors: Harris Niavis, Dimitra Politaki
Abstract:
The manufacturing industry has been collecting vast amounts of data for monitoring product quality thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. At the same time, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the incoming data. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense-based autoencoder, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. 
The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the groundwork for the deployment of smart contracts and APIs that expose the functionality to end-users. The results of this work demonstrate that such a system can increase the quality of the end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify records about quality through the entire production chain and take advantage of the multitude of monitoring records in their databases.
Keywords: blockchain, data quality, Industry 4.0, product quality
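The two ingestion-side ideas above (screening sensor values before they reach the smart contracts, and anchoring only anonymized pointers on the ledger) can be sketched as follows. This is a minimal illustration rather than the paper's implementation: a simple z-score filter stands in for the ARIMA/LSTM/GAN detectors, a plain list and dict stand in for the Fabric ledger and off-chain store, and all function names are hypothetical.

```python
import hashlib
import json
import statistics

def is_point_anomaly(history, value, k=3.0):
    """Flag a reading whose deviation from the recent mean exceeds k standard deviations."""
    mean = statistics.mean(history)
    std = statistics.pstdev(history)
    if std == 0:
        return value != mean
    return abs(value - mean) / std > k

def anchor_reading(reading, ledger, off_chain_store):
    """Store the raw reading off-chain; put only an anonymized hash pointer on the ledger."""
    payload = json.dumps(reading, sort_keys=True).encode()
    pointer = hashlib.sha256(payload).hexdigest()
    off_chain_store[pointer] = reading   # sensitive data stays off-chain
    ledger.append(pointer)               # the ledger sees only the digest
    return pointer
```

Readings that fail the anomaly check would simply never be passed to `anchor_reading`, so the ledger only ever references screened data.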
Procedia PDF Downloads 189
119 Winter – Not Spring – Climate Drives Annual Adult Survival in Common Passerines: A Country-Wide, Multi-Species Modeling Exercise
Authors: Manon Ghislain, Timothée Bonnet, Olivier Gimenez, Olivier Dehorter, Pierre-Yves Henry
Abstract:
Climatic fluctuations affect the demography of animal populations, generating changes in population size, phenology, distribution and community assemblages. However, very few studies have identified the underlying demographic processes. For short-lived species, like common passerine birds, are these changes generated by changes in adult survival or in fecundity and recruitment? This study tests for an effect of annual climatic conditions (spring and winter) on annual, local adult survival at very large spatial (a country, 252 sites), temporal (25 years) and biological (25 species) scales. Constant Effort Site ringing has allowed the collection of capture-mark-recapture data for 100,000 adult individuals since 1989 across metropolitan France, thus documenting annual, local survival rates of the most common passerine birds. We specifically developed a set of multi-year, multi-species, multi-site Bayesian models describing variations in local survival and recapture probabilities. This method allows for a statistically powerful hierarchical assessment (global versus species-specific) of the effects of climate variables on survival. A major part of between-year variation in survival rate was common to all species (74% of between-year variance), whereas only 26% of temporal variation was species-specific. Although changing spring climate is commonly invoked as a cause of population size fluctuations, spring climatic anomalies (mean precipitation or temperature for March-August) do not impact adult survival: only 1% of between-year variation in species survival is explained by spring climatic anomalies. However, for sedentary birds, winter climatic anomalies (North Atlantic Oscillation) had a significant, quadratic effect on adult survival, with birds surviving less during intermediate years than during more extreme years. For migratory birds, we do not detect an effect of winter climatic anomalies (Sahel Rainfall). 
We will analyze the life history traits (migration, habitat, thermal range) that could explain differing sensitivities of species to winter climate anomalies. Overall, we conclude that changes in population sizes for passerine birds are unlikely to be the consequence of climate-driven mortality (or emigration) in spring but could be induced by other demographic parameters, like fecundity.
Keywords: Bayesian approach, capture-recapture, climate anomaly, constant effort sites scheme, passerine, seasons, survival
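The reported quadratic winter effect (lower survival in intermediate NAO years than in more extreme ones) can be written down as a survival model on the logit scale. This is a minimal sketch with purely illustrative coefficients, not the paper's estimates, and the function name is hypothetical:

```python
import math

def annual_survival(nao_anomaly, alpha=0.8, beta2=0.4):
    """Adult survival probability with a quadratic winter-NAO effect on the logit scale.

    A positive beta2 makes survival lowest near nao_anomaly = 0 (intermediate winters)
    and higher in more extreme winters, matching the reported quadratic shape.
    """
    logit = alpha + beta2 * nao_anomaly ** 2
    return 1.0 / (1.0 + math.exp(-logit))
```

With these placeholder coefficients, `annual_survival(0.0)` falls below both `annual_survival(2.0)` and `annual_survival(-2.0)`, reproducing the U-shaped pattern described above.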
Procedia PDF Downloads 303
118 Recognition of a Thinly Bedded Distal Turbidite: A Case Study from a Proterozoic Delta System, Chaossa Formation, Simla Group, Western Lesser Himalaya, India
Authors: Priyanka Mazumdar, Ananya Mukhopadhyay
Abstract:
A lot of progress has been achieved in the research of turbidites during the last decades. However, their relationship to delta systems still deserves further attention. This paper addresses an example of a fine-grained turbidite from a pro-deltaic deposit of a Proterozoic mixed-energy delta system exposed along the Chaossa-Baliana river section of the Chaossa Formation of the Simla Basin. Lithostratigraphic analysis of the Chaossa Formation reveals three major facies associations (prodelta deposit-FA1, delta slope deposit-FA2 and delta front deposit-FA3) based on lithofacies types, petrography and sedimentary structures. Detailed process-based facies and paleoenvironmental analysis of the study area has led to the identification of a more than 150 m thick coarsening-upward deltaic succession composed of fine-grained turbidites overlain by delta slope deposits. Erosional features are locally common at the base of turbidite beds and still more widespread at the top. The complete sequence has eight sub-divisions that are here termed T1 to T8. The basal subdivision (T1) comprises a massive graded unit with a sharp, scoured base, internal parallel-lamination and cross-lamination. The overlying sequence shows textural and compositional grading through alternating silt and mud laminae (T2). T2 is overlain by T3, which is characterized by climbing-ripple and cross-lamination. Parallel laminae are the predominant facies attribute of T4, which caps the T3 unit. T5 has a loaded scour base and is mainly characterized by laminated silt. The topmost three divisions are graded mud (T6), ungraded mud (T7) and laminated mud (T8). The proposed sequence is analogous to the Bouma (1962) structural scheme for sandy turbidites. Repetition of partial sequences represents deposition from different stages of evolution of a large, muddy, turbidity flow. 
Detailed facies analysis of the study area reveals that the sediments of the turbidites developed during normal regression at a stage of stable or marginally rising sea level. Thin-bedded turbidites were deposited predominantly by turbidity currents in the relatively shallower part of the Simla Basin. The fine-grained turbidites developed by resedimentation of delta-front sands and slumping of upper pro-delta muds.
Keywords: turbidites, prodelta, Proterozoic, Simla Basin, Bouma sequence
Procedia PDF Downloads 268
117 Television and Virtual Public Sphere: A Study on Malayali Tribes in Salem District, Tamil Nadu
Authors: P. Viduthalai, A. K. Divakar, V. Natarajan
Abstract:
Media is one of the powerful tools that shape the world in numerous aspects, especially as part of the communication process. For instance, the concept of the public sphere, which was earlier represented by landlords and elites, has now transformed into a virtual public sphere that is also represented by marginalized people. Unfortunately, this acquisition is still paradoxical. Though media proliferation and its effects are humongous, they have not been the same throughout the world. Inequality in access to media has created a technological divide among people. Finally, globalization and the government's approach towards using media for development communication have significantly changed the way in which the media reaches every nook and corner. Monarchy, oligarchy, republic and democracy together form the basis of most governments of the world, of which democracy is the one with the highest involvement and participation of the people. Ideally, it is the participation of the people that keeps a democracy running. A healthy democracy is possible only when people are able to access information that makes citizens responsible and serves to check the functioning of their elected representatives. On one side, the media consumption of people plays a crucial role in the formation of the public sphere; on the other side, big media conglomerates are a serious threat to community participation, which is a goal that the media should strive for in a country like India. How different people consume these different media differs greatly across the length and breadth of the country. Another aspect of this media consumption is that it is not passive. People's usage and consumption of media are related to the gratification that they derive from the particular medium. This aspect varies from person to person and from society to society according to both internal and external factors. 
This article sets out from the underlying observation that the Malayali Tribes have adopted television, that it has become a part of daily life, and that a day never passes without it, especially after the introduction of the Free Television Scheme by the previous state government. Though they live in hilly and socially isolated places, they too have started accessing media to understand the people of the plains and their culture, guided by their own interests. Many of these interests appear to have a social and psychological origin. The present research attempts to study how gratification of these needs leads the Malayali Tribes to form a virtual public sphere where they can communicate with people of the plains. Data were collected through the survey method from 300 respondents on 'Exposure towards Television and their perception'. Conventional anthropological methods like unstructured interviews were also used to supplement the data collection efforts in three taluks, namely Yercaud, Pethanayankkanpalayam and Panamaraththuppatty, in the Salem district of Tamil Nadu. The results highlight the role of television in gratifying the needs of the Malayali Tribes.
Keywords: democracy, gratification, Malayali Tribes and television, virtual public sphere
Procedia PDF Downloads 253
116 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model
Authors: Yew Mun Yip, Dawei Zhang
Abstract:
Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, the machinery malfunctions, leading to misfolding diseases. Although in vitro experiments are able to conclude that mutations of the amino acid sequence lead to incorrectly folded protein structures, these experiments are unable to decipher the folding process. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process, so that an improved understanding of folding will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding process of peptides. Secondary structures are formed via the hydrogen bonds formed between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during the MD simulation is accurate in order to direct the folding process to the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom attracts greater electron density from the less electronegative atom towards itself. This is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, the atomic charges of the two atoms should also vary based on the strength of the polarization effect. However, the fixed atomic charge scheme in force fields does not account for the polarization effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulation by updating the atomic charges of the backbone hydrogen bond atoms according to equations relating the amount of charge transferred to an atom to the length of the hydrogen bond, derived from quantum-mechanical calculations. 
Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time-step, yet maintains a dynamic update of atomic charges, thereby reducing computational cost and time while still accounting for the polarization effect. The PSBC model is applied to two different β-peptides: the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose folded structure has been studied in vitro by NMR, and the trpzip peptides, a double-stranded β-sheet for which a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
Keywords: hydrogen bond, polarization effect, protein folding, PSBC
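The charge-update idea can be sketched in code. The exponential distance law and all parameter values below are placeholders for the paper's quantum-mechanically fitted equations, and the function names are hypothetical; the sketch only shows the bookkeeping of shifting a bond-length-dependent amount of charge between the backbone H and O of each hydrogen bond while conserving total charge:

```python
import math

def charge_transfer(r, q0=0.05, r0=1.9, lam=0.3):
    """Hypothetical fitted relation: charge (in units of e) transferred across an
    H-bond of length r (angstrom), decaying for longer (weaker) bonds."""
    return q0 * math.exp(-(r - r0) / lam) if r >= r0 else q0

def update_backbone_charges(charges, hbonds):
    """Shift charge between the H and O atoms of each backbone hydrogen bond.

    charges: dict atom_id -> partial charge; hbonds: list of (h_id, o_id, length).
    """
    updated = dict(charges)
    for h_id, o_id, r in hbonds:
        dq = charge_transfer(r)
        updated[h_id] += dq   # H loses electron density, becoming more positive
        updated[o_id] -= dq   # O gains electron density, becoming more negative
    return updated
```

Because each `dq` is added to one atom and subtracted from its partner, the total charge of the system is unchanged, which is the constraint any such update scheme must respect.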
Procedia PDF Downloads 270
115 Thinking for Writing: Evidence of Language Transfer in Chinese ESL Learners’ Written Narratives
Abstract:
English as a second language (ESL) learners are often observed to transfer traits of their first languages (L1) and habits of using their L1s to their use of English (second language, L2), a phenomenon termed language transfer. In addition to the transfer of linguistic features (e.g., grammar, vocabulary, etc.), which are relatively easy to observe and quantify, many cross-cultural theorists have emphasized a subtler and more fundamental transfer existing at a higher conceptual level, referred to as conceptual transfer. Although a growing body of literature in linguistics has demonstrated evidence of L1 transfer in various discourse genres, very limited studies address the underlying conceptual transfer that happens along with the language transfer, especially in extended spontaneous discourse such as personal narrative. To address this issue, this study situates itself in the context of Chinese ESL learners' written narratives, examines evidence of L1 conceptual transfer in comparison with native English speakers' narratives, and provides discussion from the perspective of conceptual transfer. It is hypothesized that Chinese ESL learners' English narrative strategies are heavily influenced by the strategies that they use in Chinese, as a result of conceptual transfer. Understanding language transfer cognitively is of great significance in the realm of SLA, as it helps address challenges that ESL learners around the world are facing, allows native English speakers to develop a better understanding of how and why learners' English is different, and also sheds light on ESL pedagogy by providing linguistic and cultural expectations in native English-speaking countries. 
To achieve these goals, 40 college students were recruited (20 Chinese ESL learners and 20 native English speakers) in the United States, and their written narratives on the prompt 'The most frightening experience' were collected for quantitative discourse analysis. 40 written narratives (20 in Chinese and 20 in English) were collected from the Chinese ESL learners, and 20 written narratives were collected from the native English speakers. All written narratives were coded according to the coding scheme developed by the authors prior to data collection. Descriptive statistical analyses were conducted, and the preliminary results revealed that native English speakers included more narrative elements, such as events and explicit evaluation, than Chinese ESL students did in both their English and Chinese writings; the English group also utilized more evaluation devices (i.e., physical state expressions, indirectly reported speech, delineation) than Chinese ESL students did in both their English and Chinese writings. It was also observed that Chinese ESL students included more orientation elements (i.e., the introduction of time/place, the introduction of characters) in their Chinese and English writings than the native English-speaking participants. The findings suggest that a similar narrative strategy was observed in Chinese ESL learners' Chinese narratives and English narratives, which is considered evidence of conceptual transfer from Chinese (L1) to English (L2). The results also indicate that distinct narrative strategies were used by Chinese ESL learners and native English speakers as a result of cross-cultural differences.
Keywords: Chinese ESL learners, language transfer, thinking-for-speaking, written narratives
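Coded-narrative comparisons of this kind reduce to tallying element counts per narrative and averaging within each group. A minimal sketch, assuming narratives have already been coded into per-element counts (the element labels below are hypothetical stand-ins for the authors' coding scheme):

```python
from statistics import mean

def element_means(coded_narratives):
    """Mean count of each coded narrative element across one group's narratives.

    coded_narratives: list of dicts mapping element label -> count for one narrative.
    Missing elements in a narrative count as zero.
    """
    elements = {e for narrative in coded_narratives for e in narrative}
    return {e: mean(n.get(e, 0) for n in coded_narratives) for e in elements}
```

Running this once per group (e.g. native English speakers vs. Chinese ESL learners) yields the per-element group means on which the descriptive comparisons above rest.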
Procedia PDF Downloads 118
114 The Global Children’s Challenge Program: Pedometer Step Count in an Australian School
Authors: D. Hilton
Abstract:
The importance and significance of this research are based upon the fundamental knowledge, reported in the scientific literature, that physical activity is inversely associated with obesity. In addition, it is recognized that there is a global epidemic of sedentariness, while at the same time it is known that morbidity and mortality are associated with physical inactivity and with overweight or obesity. Hence this small study in school students is an important area of research in our community. An application submitted in 2005 for the inaugural Public Health Education Research Trust [PHERT] Post Graduate Research Scholarship scheme, organized by the Public Health Association of Australia [PHAA], was awarded 3rd place within Australia. The author and title were: D. Hilton, Methods to increase physical activity in school aged children [literature review, a trial using pedometers and a policy paper]. Third place is a good result; however, this did not secure funding for the project, as only first place received $5000 funding. Some years later, a program called the Global Children's Challenge [GCC] commenced within Australia. Given that details of the 2005 award above were included, an application submission prepared for Parkhill Primary School [PPS], which is located in Victoria, Australia, was successful. As a result, an excited combined grade 3/4 class at the school [27 students] became recipients of these free pedometers in 2012. Ambassadors for the program were Mrs Catherine Freeman [OAM], Olympic gold medalist at Sydney 2000 [400 meters], and Mr Colin Jackson [CBE], a Welsh former sprint and hurdling athlete. In terms of PPS and the other schools involved in 2012, website details show that the event started on 19th Sep 2012 and that students were to wear the pedometer every day for 50 days [at home and at school], aiming for the recommended 15,000 steps/day and recording steps taken in a booklet provided. 
After the finish, an analysis of the average step count for this school showed that the average steps taken per day was 14,003 [however, only a small percentage of students returned the booklets and units, as the dates for the program unfortunately coincided with school holidays, so some students either forgot or misplaced the units/booklets]. Funding for this program ceased in 2013; however, the lasting impact of the trial on students' knowledge and awareness remains, and it in fact becomes a good grounding for students in how to monitor basic daily physical activity using a method that is easy, fun, low cost and readily accessible.
Keywords: walking, physical activity, exercise, Australian school
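The headline figure above is a simple mean over the days actually recorded, with missing booklet entries excluded. A small sketch of that calculation (the function name and the handling of missing days as `None` are illustrative assumptions):

```python
def average_daily_steps(step_log):
    """Average steps per day over the recorded days, ignoring missing (None) entries."""
    recorded = [steps for steps in step_log if steps is not None]
    return sum(recorded) / len(recorded) if recorded else 0.0
```

Comparing the result against the recommended 15,000 steps/day target then shows how close the class came on the days that were actually logged.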
Procedia PDF Downloads 309
113 A Qualitative Study of Inclusive Growth through Microfinance in India
Authors: Amit Kumar Bardhan, Barnali Nag, Chandra Sekhar Mishra
Abstract:
Microfinance is considered one of the key drivers of financial inclusion and pro-poor financial growth. Microfinance in India became popular through the Self-Help Group (SHG) movement initiated by NABARD. In terms of outreach and loan portfolio, the SHG Bank Linkage Programme (SHG-BLP) has emerged as the largest microfinance initiative in the world. The success of financial inclusion lies in the successful implementation of SHG-BLP. SHGs are generally promoted by social welfare organisations like NGOs, welfare societies, government agencies, co-operatives, etc., and even banks are involved in SHG formation. Thus, the pro-poor implementation of the scheme largely depends on the credibility of the SHG Promoting Institutions (SHPIs). The rural poor lack education, skills and financial literacy and hence need continuous support and proper training right from planning to implementation. In this study, we have made an attempt to inspect the reasons behind the low penetration of SHG financing to the poorest of the poor from both demand- and supply-side perspectives. Banks, SHPIs and SHGs are the three essential stakeholders in SHG-BLP programmes, and all of them have a vital role in programme implementation. The objective of this paper is to find out the drivers and hurdles in the path of financial inclusion through SHG-BLP and the role of SHPIs in reaching out to the ultra poor. We try to address questions like 'What are the challenges faced by SHPIs in targeting the poor?' and 'What are the factors behind the low credit linkage of SHGs?' Our work is based on a qualitative study of SHG programmes in semi-urban towns in the states of West Bengal and Odisha in India. Data are collected through unstructured questionnaires and in-depth interviews with members of SHGs, SHPIs and the designated banks. The study provides some valuable insights about the programme and a comprehensive view of the problems and challenges faced by SHGs, SHPIs and banks. 
On the basis of our understanding from the survey, some findings and policy recommendations that seem relevant are as follows. Increasing levels of non-performing assets (NPAs) of commercial banks and wilful default in expectation of loan waivers and subsidies are the prime reasons behind the low rate of credit linkage of SHGs. Regular changes in SHG schemes and the lack of incentives for post-linkage follow-up result in dysfunctional SHGs. Government schemes are mostly focused on the creation of SHGs and less on livelihood promotion. As a result, in spite of the increasing (YoY) trend in the number of SHGs promoted, there is no real impact on welfare growth. Government and other SHPIs should focus on resource-based SHG promotion rather than only increasing the number of SHGs.
Keywords: financial inclusion, inclusive growth, microfinance, Self-Help Group (SHG), Self-Help Group Promoting Institution (SHPI)
Procedia PDF Downloads 215
112 Biorefinery as Extension to Sugar Mills: Sustainability and Social Upliftment in the Green Economy
Authors: Asfaw Gezae Daful, Mohsen Alimandagari, Kathleen Haigh, Somayeh Farzad, Eugene Van Rensburg, Johann F. Görgens
Abstract:
The sugar industry has to 're-invent' itself to ensure long-term economic survival and opportunities for job creation and enhanced community-level impacts, given increasing pressure from fluctuating and low global sugar prices, increasing energy prices and sustainability demands. We propose biorefineries annexed to existing sugar mills for the re-vitalisation of the sugar industry, using low-value lignocellulosic biomass (sugarcane bagasse, leaves and tops) to produce a spectrum of high-value platform chemicals along with biofuel, bioenergy and electricity. This presents an opportunity for greener products, to mitigate climate change and overcome economic challenges. Xylose from labile hemicellulose remains largely underutilized, and its conversion to value-added products is a major challenge. Insight is required on pretreatment and/or extraction to optimize production of cellulosic ethanol together with lactic acid, furfural or biopolymers from sugarcane bagasse, leaves and tops. Experimental conditions for alkaline and pressurized hot water extraction, as well as dilute acid and steam explosion pretreatment, of sugarcane bagasse and harvest residues were investigated to serve as a basis for developing various process scenarios under a sugarcane biorefinery scheme. Dilute acid and steam explosion pretreatment were optimized for maximum hemicellulose recovery, combined sugar yield and solids digestibility. An optimal range of conditions was established for alkaline and liquid hot water extraction of hemicellulosic biopolymers, as well as for acceptable enzymatic digestibility of the solid residue after such extraction. Using data from the above, a series of energy-efficient biorefinery scenarios is under development, modeled using Aspen Plus® software to simulate potential factories, better understand the biorefinery processes and estimate the CAPEX and OPEX, environmental impacts, and overall viability. 
A rigorous and detailed sustainability assessment methodology was formulated to address all pillars of sustainability. This work is ongoing and, to date, models have been developed for some of the processes, which can ultimately be combined into biorefinery scenarios. This will allow systematic comparison of a series of biorefinery scenarios to assess the potential to reduce negative impacts on, and maximize the benefits to, social, economic and environmental factors on a lifecycle basis.
Keywords: biomass, biorefinery, green economy, sustainability
Procedia PDF Downloads 514
111 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach
Authors: M. Bahari Mehrabani, Hua-Peng Chen
Abstract:
Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent, condition-based model associated with transition probabilities needs to be developed on the basis of the condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate the transition probability matrices. The deterioration process of the structure over the transition states is modelled as a Markov chain, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. 
The initial curves are then modified in order to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure of coastal flood defences.
Keywords: condition grading, flood defence, performance assessment, stochastic deterioration modelling
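The Markov-chain-plus-Monte-Carlo machinery described above can be sketched in a few lines. The five-grade annual transition matrix below is purely illustrative (the paper estimates its matrices from UK condition inspection data), and under the no-maintenance assumption grades can only worsen:

```python
import random

# Hypothetical annual transition matrix over five condition grades
# (state 0 = best grade, state 4 = failed); illustrative values only.
P = [
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # failed state is absorbing
]

def simulate_condition(start_state, years, rng):
    """One Markov-chain sample path of the condition grade over the given horizon."""
    state = start_state
    for _ in range(years):
        state = rng.choices(range(5), weights=P[state])[0]
    return state

def failure_probability(years, n_runs=10000, seed=1):
    """Monte Carlo estimate of the probability that a defence starting in the best
    grade reaches the failed grade within the horizon (no-maintenance scenario)."""
    rng = random.Random(seed)
    failures = sum(simulate_condition(0, years, rng) == 4 for _ in range(n_runs))
    return failures / n_runs
```

Because the failed state is absorbing, the estimated failure probability is non-decreasing in the horizon; a medium-maintenance scenario would be modelled by adding upward (repair) transitions to the matrix.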
Procedia PDF Downloads 233
110 Getting to Know the Enemy: Utilization of Phone Record Analysis Simulations to Uncover a Target’s Personal Life Attributes
Authors: David S. Byrne
Abstract:
The purpose of this paper is to understand how phone record analysis can enable identification of subjects in communication with the target of a terrorist plot. This study also sought to understand the advantages of implementing simulations to develop the skills of future intelligence analysts and enhance national security. Through the examination of phone reports, which in essence consist of the call traffic of incoming and outgoing numbers (and not of listening to calls or reading the content of text messages), patterns can be uncovered that point toward members of a criminal group and planned activities. Through temporal and frequency analysis, conclusions were drawn to offer insights into the identity of participants and the potential scheme being undertaken. The challenge lies in the accurate identification of the users of the phones in contact with the target. Often investigators rely on proprietary databases and open sources to accomplish this task; however, it is difficult to ascertain the accuracy of the information found. Thus, this paper poses two research questions: How effective are freely available web sources of information at determining the actual identification of callers? Secondly, does the identity of the callers enable an understanding of the lifestyle and habits of the target? The methodology for this research consisted of the analysis of the call detail records of the author's personal phone activity spanning the period of a year, combined with the hypothetical premise that the owner of said phone was the leader of a terrorist cell. The goal was to reveal the identity of his accomplices and understand how his personal attributes can further paint a picture of the target's intentions. The results of the study were interesting: nearly 80% of the calls were identified, with over a 75% accuracy rating, via data mining of open sources. 
The suspected terrorist's inner circle was recognized, including relatives and potential collaborators, as well as financial institutions [money laundering], restaurants [meetings], a sporting goods store [purchase of supplies], and airlines and hotels [travel itinerary]. The outcome of this research showed the benefits of cellphone analysis without more intrusive and time-consuming methodologies, though it may also be instrumental for potential surveillance, interviews, and developing probable cause for wiretaps. Furthermore, this research highlights the importance of building up the skills of future intelligence analysts through phone record analysis via simulations: hands-on learning in this case study emphasizes the development of the competencies necessary to improve investigations overall.
Keywords: hands-on learning, intelligence analysis, intelligence education, phone record analysis, simulations
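The temporal and frequency analysis applied to the call detail records can be sketched as follows. This is a simplified stand-in suitable for a classroom simulation, not the author's tooling; the record format and function names are assumptions:

```python
from collections import Counter
from datetime import datetime

def frequency_analysis(cdrs, top=3):
    """Most frequently contacted numbers in a call detail record (CDR) set.

    cdrs: list of (timestamp_iso, number, direction) tuples.
    """
    return Counter(number for _, number, _ in cdrs).most_common(top)

def hourly_pattern(cdrs):
    """Call counts by hour of day, to surface temporal patterns such as
    recurring late-night contacts."""
    return dict(Counter(datetime.fromisoformat(ts).hour for ts, _, _ in cdrs))
```

The top numbers from `frequency_analysis` are the candidates for open-source identification, while `hourly_pattern` hints at the routine (work hours, meetings, travel) behind the traffic.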
Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of brine pockets until they reach a local equilibrium state. In this process, passing heat through the medium does more than change the sensible heat of the ice and brine pockets; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer and partial solidification and melting together. Properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to reach a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice: one side is considered a thermally isolated surface, and the other side is assumed to be suddenly affected by a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen properly to maintain the most stable and fast solution. The variation of temperature, volume fraction of brine, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt.
The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller at portions closer to the colder side than at the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the start of the process; refining the intervals mitigates this instability. The analytical model, combined with a numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the process of freezing of salt water and ice accretion on cold structures.
Keywords: method of lines, brine-spongy ice, heat conduction, salt water
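A minimal Method of Lines sketch of the configuration described above, assuming constant thermal properties and omitting the latent-heat and salinity coupling that the actual model includes: space is semi-discretized, one side is insulated, and the other is suddenly set to a constant cold temperature. All numerical values are illustrative, not the paper's.

```python
import numpy as np

alpha = 1.0e-6            # thermal diffusivity, m^2/s (illustrative)
L, n = 0.1, 51            # slab thickness [m] and number of nodes
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha  # below the explicit-Euler stability limit of 0.5

T = np.full(n, -2.0)      # initial temperature, deg C
T_cold = -20.0            # suddenly applied boundary temperature

def step(T):
    """Advance one time step: interior nodes follow the semi-discrete
    heat equation dT/dt = alpha * d2T/dx2, then boundary conditions are
    applied (fixed temperature on one side, zero gradient on the other)."""
    Tn = T.copy()
    Tn[1:-1] += dt * alpha * (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    Tn[0] = T_cold          # suddenly cooled side
    Tn[-1] = Tn[-2]         # thermally isolated side
    return Tn

for _ in range(2000):
    T = step(T)
```

After these steps the cold front has penetrated most of the slab; the full model would additionally update brine volume fraction and salinity at each node from the local phase equilibrium.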
Examining the European Central Bank's Marginal Attention to Human Rights Concerns during the Eurozone Crisis through the Lens of Organizational Culture
Authors: Hila Levi
Abstract:
Respect for human rights is a fundamental element of the European Union's (EU) identity and law. Surprisingly, however, the protection of human rights has been significantly restricted in the austerity programs ordered by the International Monetary Fund (IMF), the European Central Bank (ECB), and the European Commission (EC) (often labeled 'the Troika') in return for financial aid to the crisis-hit countries. This paper focuses on the role of the ECB in crisis management. While other international financial institutions, such as the IMF or the World Bank, may opt to neglect human rights obligations, one might expect greater respect for human rights from the ECB, which is bound by the EU Charter of Fundamental Rights. However, this paper argues that ECB officials made no significant effort to protect human rights or strike an adequate balance between competing financial and human rights needs while coping with the crisis. ECB officials were preoccupied with the need to stabilize the economy and prevent a collapse of the Eurozone, and paid only marginal attention to human rights concerns in the design and implementation of the Troika's programs. This paper explores the role of Organizational Culture (OC) in explaining this marginalization. While International Relations (IR) research on the behavior of Intergovernmental Organizations (IGOs) has traditionally focused on the external interests of powerful member states and on national and economic considerations, this study focuses on particular institutions' internal factors and independent processes. OC characteristics have been identified in the OC literature as an important determinant of organizational behavior. This paper suggests that cultural characteristics are also vital for the examination of IGOs, and particularly for understanding the ECB's behavior during the crisis.
In order to assess the OC of the ECB and the impact it had on its policies and decisions during the Eurozone crisis, the paper uses the results of numerous qualitative interviews conducted with high-ranking officials and staff members of the ECB involved in the crisis management. It further reviews primary sources of the ECB (such as the ECB statutes and the Memoranda of Understanding signed between the crisis countries and the Troika) and secondary sources (such as the report of the UN High Commissioner for Human Rights on austerity measures and economic, social, and cultural rights). It thus analyzes the interaction between the ECB's culture and the almost complete absence of human rights considerations in the Eurozone crisis resolution scheme. This paper highlights the importance and influence of internal ideational factors on IGOs' behavior. From a more practical perspective, this paper may contribute to understanding one of the obstacles in the process of human rights implementation in international organizations, and provide instruments for better protection of social and economic rights.
Keywords: European Central Bank, Eurozone crisis, intergovernmental organizations, organizational culture
Bilingual Books in British Sign Language and English: The Development of E-Book
Authors: Katherine O'Grady-Bray
Abstract:
For some deaf children, reading books can be a challenge. Frank Barnes School (FBS) provides guided reading time with Teachers of the Deaf, in which they read books with deaf children using a bilingual approach. The vocabulary and context of the story are explained to deaf children in BSL so they develop skills bridging the English and BSL languages. However, the success of this practice is only achieved if the person is fluent in both languages. FBS piloted a scheme to convert an Oxford Reading Tree (ORT) book into an e-book that can be read using tablets. Deaf readers at FBS have access to both languages (BSL and English) during lessons and outside the classroom. The pupils receive guided reading sessions with a Teacher of the Deaf every morning; these one-to-one sessions give pupils the opportunity to learn how to bridge both languages, e.g., how to translate English to BSL and vice versa. Generally, due to our pupils' lack of access to incidental learning, their opportunities to gain new information about the world around them are limited. This highlights the importance of quality time to scaffold their language development. In some cases, there is a shortfall of parental support at home due to poor communication skills or an unawareness of how to interact with deaf children. Some families have a limited knowledge of sign language or simply do not have the learning environment and strategies needed for language development with deaf children. As the majority of our pupils' preferred language is BSL, we use it to teach reading and writing in English. If this is not mirrored at home, there is limited opportunity for joint reading sessions. Development of the e-book required planning and technical work. The overall production took time, as video footage needed to be shot and then edited individually for each page. There were various technical considerations, such as having an appropriate background colour so as not to draw attention away from the signer.
Appointing a signer with the required high level of BSL was essential. The language and pace of the sign language were important considerations, as they had to match the age and reading level of the book. When translating English text to BSL, careful consideration was given to the nonlinear nature of BSL and the differences in language structure and syntax. The e-book was produced using Apple's 'iBook Author' software, which allowed video footage of the signer to be embedded on pages opposite the text and illustration. This enabled BSL translation of the content of the text and the inferences of the story. An interpreter was used to directly 'voice over' the signer rather than the actual text. The aim behind the structure and layout of the e-book is to allow parents to 'read' with their deaf child, which helps to develop both languages. From observations, the use of e-books has given pupils confidence and motivation in their reading, developing skills bridging both the BSL and English languages and enabling more effective reading time with parents.
Keywords: bilingual book, e-book, BSL and English, bilingual e-book
Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise
Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou
Abstract:
Concern for the negative impacts of anthropogenic noise on the ocean's ecosystems has increased over recent decades. This concern has led to a similarly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships, but also about how ship noise is distributed in time and space within the habitats of concern. Marine mammals, but also fish, sea turtles, larvae, and invertebrates, depend largely on sound to hunt, feed, avoid predators, socialize and communicate during reproduction, and defend a territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, called the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain a good environmental status of the marine environment. Ocean Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision support tool that enables them to anticipate and quantify the effectiveness of management measures in terms of reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 sites, harbors, etc.) or global (Particularly Sensitive Sea Area) scales; seasonal (regulation over a period of time) or permanent; partial (focused on some maritime activities) or complete (all maritime activities); etc.
Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitations are among the measures supported by the tool. Ocean Planner helps decision-makers choose the most effective measure to apply in order to maintain or restore the biodiversity and functioning of the ecosystems of the coastal seabed, maintain a good state of conservation of sensitive areas, and maintain or restore the populations of marine species.
Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction
A Text in Movement in the Totonac Flyers' Dance: A Performance-Linguistic Theory
Authors: Luisa Villani
Abstract:
The proposal aims to express concerns about the connection between mind, body, society, and environment in the Flyers' dance, a very well-known rotatory dance in Mexico, in creating meanings and making the apprehension of the world possible. The interaction among brain, mind, body, and environment, and the intersubjective relations among them, mean that the world is created and recreated through social interaction. The purpose of this methodology, based on embodied cognition theory and named 'A Performance-Embodied Theory', is to find the principles and patterns that organize the culture and the rules of the apprehension of the environment by Totonac people while the dance is being performed. The analysis started by questioning how anthropologists can interpret the way Totonacs transform their unconscious knowledge into conscious knowledge, and how the scheme formation of imagination and their collective imagery is understood in the context of public-facing rituals, such as the Flyers' dance. The problem is that, most of the time, researchers interpret elements separately and not as a complex ritual dancing whole, which is the original contribution of this study. This theory, which accepts that people are body-mind agents, aims to interpret the dance as a whole, in which the different elements are joined into an integral interpretation. To understand incorporation, data were collected during prolonged periods of fieldwork, with participant observation and linguistic and extralinguistic data analysis. Laban's notation for the description and analysis of gestures and movements in space was used first, but it was later transformed and extended, since it remains a linear and compositional method. Performance in a ritual is the actualization of one potential complex of meanings or cognitive domains among many others in a culture: one potential dimension becomes probable and then real because of the activation of specific meanings in a context.
One can think only what language permits thinking, and the lexicon that is used depends on the individual culture. Only some parts of this knowledge can be activated at once, and these parts of knowledge are connected; only in this way can the world be understood. Just as languages geometrize the physical world through the body, so does ritual. In conclusion, the ritual behaves as an embodied grammar or a text in movement which, depending on the ritual phases and the words and sentences pronounced in the ritual, activates bits of the encyclopedic knowledge that people have about the world. Gestures are not simply given by the performer but emerge from intentional perception, in which gestures are 'understood' by the audio-spectator in an inter-corporeal way. The impact of this study lies in the possibility not only of disseminating knowledge effectively but also of generating a balance between different parts of the world where knowledge is shared, rather than its being received by academic institutions alone. This knowledge can be exchanged, so that indigenous communities and academies could together take part in the activation and sharing of this knowledge with the world.
Keywords: dance, flyers, performance, embodied, cognition
Natural Monopolies and Their Regulation in Georgia
Authors: Marina Chavleishvili
Abstract:
Introduction: Today, the study of monopolies, including natural monopolies, is topical. In real life, pure monopolies are natural monopolies. Natural monopolies are widespread and are regulated by the state; in particular, their prices and rates are regulated. The paper considers the problems associated with the operation of natural monopolies in Georgia, in particular their microeconomic analysis, pricing mechanisms, and the legal mechanisms of their operation. The analysis was carried out using the example of the power industry. The rates of natural monopolies in Georgia are controlled by the Georgian National Energy and Water Supply Regulation Commission. The paper analyzes the positive role and importance of the regulatory body and the issues of improving the legislative base that will support the efficient operation of the sector. Methodology: In order to highlight the market tendencies of natural monopolies, the domestic and international markets are studied. An analysis of monopolies is carried out based on the endogenous and exogenous factors that determine the condition of companies, as well as the strategies chosen by firms to increase their market share. According to the productivity-based competitiveness assessment scheme, the segmentation opportunities, business environment, resources, and geographical location of monopolist companies are revealed. Main Findings: As a result of the analysis, certain assessments and conclusions were made. Natural monopolies are a complex and versatile economic element, and it is important to specify and duly control their framework conditions. It is important to determine the pricing policy of natural monopolies: rates should be transparent, should reflect the standard of living in the country, and should correspond to incomes. The analysis confirmed the significant role of the Antimonopoly Service in the efficient management of natural monopolies.
The law should adapt to reality and should be applied only to regulate the market. The present-day differential electricity tariffs, which vary with the amount of power consumed, need revision. The effects of electricity price discrimination are important, in particular segmentation across seasons. Consumers use more electricity in winter than in summer, which is associated with extra capacity and maintenance costs. If the price of electricity in winter is higher than in summer, electricity consumption will decrease in winter: consumers will start to use electricity more economically, which will allow the extra capacity to be reduced. Conclusion: Thus, the practical realization of the views given in the paper will contribute to the efficient operation of natural monopolies. Consequently, their activity will be oriented not toward reducing but toward increasing the gains of consumers and producers. Overall, the optimal management of the given fields will improve well-being throughout the country. In the article, conclusions are drawn and recommendations are developed to deliver effective policies and regulations toward the natural monopolies in Georgia.
Keywords: monopolies, natural monopolies, regulation, antimonopoly service
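The seasonal price-discrimination argument can be illustrated with a constant-elasticity demand model; the elasticity value and the baseline tariff and consumption figures below are hypothetical, chosen only to show the direction of the effect, not taken from the paper.

```python
def demand_after_price_change(q0, p0, p1, elasticity=-0.3):
    """Constant-elasticity demand: Q1 = Q0 * (P1/P0)**elasticity.
    A negative elasticity means demand falls as the price rises."""
    return q0 * (p1 / p0) ** elasticity

winter_demand = 1000.0  # MWh consumed at the current flat winter tariff
# Raise the winter tariff from a hypothetical 0.10 to 0.12 per kWh:
raised = demand_after_price_change(winter_demand, p0=0.10, p1=0.12)
print(raised)  # winter consumption falls, easing peak capacity needs
```

Even a modest (and here assumed) elasticity implies lower winter consumption under a higher winter tariff, which is the mechanism behind the capacity-reduction claim above.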
Seismic Reinforcement of Existing Japanese Wooden Houses Using Folded Exterior Thin Steel Plates
Authors: Jiro Takagi
Abstract:
Approximately 90 percent of the casualties in the near-fault-type Kobe earthquake in 1995 resulted from the collapse of wooden houses, although a limited number of collapses of this type of building were reported in the more recent off-shore-type Tohoku earthquake in 2011 (excluding direct damage by the tsunami). The Kumamoto earthquake in 2016 also revealed the vulnerability of old wooden houses in Japan. There are approximately 24.5 million wooden houses in Japan, and roughly 40 percent of them are considered to have inadequate seismic-resisting capacity. Therefore, seismic strengthening of these wooden houses is an urgent task. However, it has not proceeded quickly for various reasons, including cost and inconvenience during the reinforcing work. Residents typically spend their money on improvements that more directly affect their daily housing environment (such as interior renovation, equipment renewal, and placement of thermal insulation) rather than on strengthening against extremely rare events such as large earthquakes. Considering this tendency of residents, a new approach to developing a seismic strengthening method for wooden houses is needed. The seismic reinforcement method developed in this research uses folded galvanized thin steel plates as both shear walls and the new exterior architectural finish. The existing finish is not removed. Because galvanized steel plates are aesthetic and durable, they are commonly used in modern Japanese buildings on roofs and walls. Residents could feel a physical change through the reinforcement, as the existing exterior walls are covered with steel plates. Also, this exterior reinforcement can be installed with only outdoor work, thereby reducing inconvenience for residents, since they would not be required to move out temporarily during construction. The durability of the exterior is enhanced, and the reinforcing work can be done efficiently, since perfect water protection is not required for the new finish.
In this method, the entire exterior surface would function as shear walls, and thus the pull-out force induced by seismic lateral load would be significantly reduced compared with a typical reinforcement scheme of adding braces in selected frames. Consequently, the reinforcing details of anchors to the foundations would be less difficult. In order to attach the exterior galvanized thin steel plates to the houses, new wooden beams are placed next to the existing beams. In this research, steel connections between the existing and new beams are developed, which contain a gap for the existing finish between the two beams. The thin steel plates are screwed to the new beams and the connecting vertical members. The seismic-resisting performance of the shear walls with thin steel plates is experimentally verified for both the frames and the connections. It is confirmed that the performance is high enough for bracing general wooden houses.
Keywords: experiment, seismic reinforcement, thin steel plates, wooden houses
Congruency of English Teachers' Assessments Vis-à-Vis 21st Century Skills Assessment Standards
Authors: Mary Jane Suarez
Abstract:
A massive educational overhaul has taken place at the onset of the 21st century, addressing the mismatch between employability skills and the scholastic skills taught in schools. For a community to thrive in an ever-developing economy, the teaching of the skills necessary for job competencies should be realized by every educational institution. However, in harnessing 21st-century skills amongst learners, teachers often lack familiarity with and thorough insight into the emerging 21st-century skills, yet they must both comprehend the characteristics of 21st-century skills learning and implement the tenets of 21st-century skills teaching. In the endeavor to espouse 21st-century skills learning and teaching, a United States-based national coalition, the Partnership for 21st Century Skills (P21), has identified the four most important skills in 21st-century learning: critical thinking, communication, collaboration, and creativity and innovation, with an established framework for 21st-century skills standards. Assessment of skills is the lifeblood of every teaching and learning encounter. It is correspondingly crucial to look at the 21st-century standards and the assessment guides recognized by P21 to ensure that learners are 21st-century ready. This mixed-method study sought to discover and describe the classroom assessments used by English teachers in a public secondary school in the Philippines with course offerings in science, technology, engineering, and mathematics (STEM). The research evaluated the assessment tools implemented by English teachers and how congruent these tools were with the 21st-century assessment standards of P21. A convergent parallel design was used to analyze assessment tools and practices in four phases.
In the data-gathering phase, survey questionnaires, document reviews, interviews, and classroom observations were used to gather quantitative and qualitative data simultaneously and to examine how assessment tools and practices were consistent with the P21 framework, with the four Cs as its foci. In the analysis phase, the data were treated using mean, frequency, and percentage. In the merging and interpretation phases, a side-by-side comparison was used to identify convergent and divergent aspects of the results. In conclusion, the results revealed assessment tools and practices that were inconsistently used, if used at all, by teachers. Findings showed inconsistencies in implementing authentic assessments, a scarcity of rubrics for critically assessing 21st-century skills in both language and literature subjects, incongruencies in using portfolio and self-reflective assessments, the exclusion of intercultural aspects in assessing the four Cs, and a lack of integration of collaboration in formative and summative assessments. As a recommendation, a harmonized assessment scheme for P21 skills was fashioned for teachers to plan, implement, and monitor classroom assessments of 21st-century skills, ensuring the alignment of such assessments with P21 standards in furtherance of the institution's thrust to effectively integrate 21st-century skills assessment standards into its curricula.
Keywords: 21st-century skills, 21st-century skills assessments, assessment standards, congruency, four Cs
Glasshouse Experiment to Improve Phytomanagement Solutions for Cu-Polluted Mine Soils
Authors: Marc Romero-Estonllo, Judith Ramos-Castro, Yaiza San Miguel, Beatriz Rodríguez-Garrido, Carmela Monterroso
Abstract:
Mining activity is among the main sources of trace and heavy metal(loid) pollution worldwide, which is a hazard to human and environmental health. Accordingly, several projects have emerged for the remediation of such polluted places. Phytomanagement strategies perform well and offer substantial side benefits. In this work, a glasshouse assay was set up with trace-element-polluted soils from an old Cu mine (NW Spain) which forms part of the PhytoSUDOE network of phytomanaged contaminated field sites (PhytoSUDOE Project (SOE1/P5/E0189)). The objective was to evaluate the improvements induced by the following phytoremediation-related treatments. Three increasingly complex amendments were tested, alone or together with plant growth (Populus nigra L. alone and together with Trifolium repens L.), and three different rhizosphere bioinocula were applied (Plant Growth Promoting Bacteria (PGP), mycorrhiza (MYC), or mixed (PGP+MYC)). After 110 days of growth, plants were collected, biomass was weighed, and tree height was measured. Physico-chemical analyses were carried out to determine pH, effective Cation Exchange Capacity, carbon and nitrogen contents, bioavailable phosphorus (Olsen bicarbonate method), pseudo-total element content (microwave acid-digested fraction), EDTA-extractable metals (complexed fraction), and NH4NO3-extractable metals (easily bioavailable fraction). In the plant material, nitrogen content and acid-digestion elements were determined. Amendment use, plant growth, and bioinoculation were demonstrated to improve soil fertility and/or plant health within the time span of this study. In particular, pH levels increased from 3 (highly acidic) to 5 (acidic) in the worst-case scenario, even reaching 7 (neutrality) in the best plots. Increases in organic matter and pH were related to decreases in the bioavailability of the polluting metals. Plants grew better with both the most complex amendment and the intermediate one, with few differences due to bioinoculation.
With the least complex amendment (compost alone), the beneficial effects of the bioinoculants were more observable, although the plants did not thrive well. On unamended soils, plants neither sprouted nor bloomed. The scheme assayed in this study is suitable for the phytomanagement of these kinds of soils affected by mining activity. These findings should now be tested on a larger scale.
Keywords: aided phytoremediation, mine pollution, phytostabilization, soil pollution, trace elements
Salmonella Emerging Serotypes in Northwestern Italy: Genetic Characterization by Pulsed-Field Gel Electrophoresis
Authors: Clara Tramuta, Floris Irene, Daniela Manila Bianchi, Monica Pitti, Giulia Federica Cazzaniga, Lucia Decastelli
Abstract:
This work presents the results obtained by the Regional Reference Centre for Salmonella Typing (CeRTiS) in a retrospective study aimed at investigating, through Pulsed-Field Gel Electrophoresis (PFGE) analysis, the genetic relatedness of emerging Salmonella serotypes of human origin circulating in the North-West of Italy. A further goal of this work was to create a regional database to facilitate foodborne outbreak investigation and to detect outbreaks at an earlier stage. A total of 112 strains, isolated from 2016 to 2018 in hospital laboratories, were included in this study. The isolates had previously been identified as Salmonella according to standard microbiological techniques, and serotyping was performed according to ISO 6579-3 and the Kauffmann-White scheme using O and H antisera (Statens Serum Institut®). All strains were characterized by PFGE; the analysis was conducted according to a standardized PulseNet protocol. The restriction enzyme XbaI was used to generate several distinguishable genomic fragments on the agarose gel. PFGE was performed on a CHEF Mapper system, separating large fragments and generating comparable genetic patterns. The agarose gel was then stained with GelRed® and photographed under ultraviolet transillumination. The PFGE patterns obtained from the 112 strains were compared using Bionumerics version 7.6 software with the Dice coefficient, with 2% band tolerance and 2% optimization. For each serotype, the data obtained with PFGE were compared according to the geographical origin and the year in which the strains were isolated. The Salmonella strains were identified as follows: S. Derby, n. 34; S. Infantis, n. 38; S. Napoli, n. 40. All the isolates had appreciable restriction digestion patterns ranging from approximately 40 to 1100 kb. In general, a fairly heterogeneous distribution of pulsotypes emerged in the different provinces. Cluster analysis indicated high genetic similarity (≥ 83%) among the strains of S. Derby (n. 30; 88%), S. Infantis (n. 36; 95%), and S. Napoli (n. 38; 95%) circulating in north-western Italy. The study underlines the genomic similarities shared by the emerging Salmonella strains in Northwest Italy and allowed the creation of a database to detect outbreaks at an early stage. The results therefore confirmed that PFGE is a powerful and discriminatory tool for investigating the genetic relationships among strains in order to monitor and control the spread of salmonellosis outbreaks. Pulsed-field gel electrophoresis (PFGE) still represents one of the most suitable approaches for characterizing strains, in particular for laboratories in which NGS techniques are not available.
Keywords: emerging Salmonella serotypes, genetic characterization, human strains, PFGE
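The Dice band-matching coefficient underlying the cluster analysis can be sketched as follows; the band positions and the fixed tolerance window are illustrative, and software such as Bionumerics applies a more elaborate position-tolerance and optimization model than this simplified version.

```python
def dice_coefficient(bands_a, bands_b, tolerance=0.02):
    """Dice similarity between two banding profiles:
    2 * matches / (len(a) + len(b)), where two bands match when their
    relative migration positions differ by less than the tolerance."""
    matched_b = set()
    matches = 0
    for a in bands_a:
        for i, b in enumerate(bands_b):
            if i not in matched_b and abs(a - b) < tolerance:
                matches += 1
                matched_b.add(i)
                break
    return 2 * matches / (len(bands_a) + len(bands_b))

# Two hypothetical profiles sharing 3 of 4 band positions
profile1 = [0.10, 0.25, 0.40, 0.70]
profile2 = [0.10, 0.26, 0.40, 0.90]
print(dice_coefficient(profile1, profile2))  # 0.75
```

Profiles scoring at or above a similarity threshold (here, ≥ 83%) are then grouped into the clusters reported above.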
Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem for the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of whether an implicit time integration scheme is used. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
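The iterative structure of the AGM can be sketched on a linear toy problem. Below, a single explicit diffusion step M stands in for the forward NSE integration, so the gradient of J(u0) = ½||M u0 − v1||² is computed exactly by the adjoint (transpose) operator; the grid size, step length and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical 1-D toy standing in for the NSE: forward heat equation
# u1 = M u0, with M one explicit Euler step of diffusion.
n, dt, nu = 64, 0.1, 0.05
D = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))        # discrete Laplacian (Dirichlet)
M = np.eye(n) + dt * nu * D                # forward-in-time operator

u_true = np.sin(np.linspace(0, np.pi, n))  # initial condition to recover
v1 = M @ u_true                            # target field at t = 1

# AGM: minimise J(u0) = 0.5 * ||M u0 - v1||^2.  The gradient
# M^T (M u0 - v1) is what the adjoint (backward) solve delivers
# in the full nonlinear NSE setting.
u0 = np.zeros(n)
for _ in range(500):
    residual = M @ u0 - v1    # forward solve + mismatch at t = 1
    grad = M.T @ residual     # adjoint transport of J back to t = 0
    u0 -= 0.5 * grad          # gradient-descent update of u0

print(np.max(np.abs(u0 - u_true)))  # converges to the true initial field
```

In the full problem the forward map is the nonlinear NSE and the adjoint equation is integrated backwards in time, but the update loop has exactly this shape.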
Procedia PDF Downloads 224
98 Computational Code for Solving the Navier-Stokes Equations on Unstructured Meshes Applied to the Leading Edge of the Brazilian Hypersonic Scramjet 14-X
Authors: Jayme R. T. Silva, Paulo G. P. Toro, Angelo Passaro, Giannino P. Camillo, Antonio C. Oliveira
Abstract:
An in-house C++ code has been developed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies (Brazil) to estimate the aerothermodynamic properties around the Hypersonic Vehicle Integrated to the Scramjet. In the future, this code will be applied to the design of the Brazilian Scramjet Technological Demonstrator 14-X B. The first step towards accomplishing this objective is to apply the in-house C++ code to the leading edge of a flat plate, simulating the leading edge of the 14-X Hypersonic Vehicle and making it possible to analyze the oblique shock wave and boundary-layer phenomena. The development of modern hypersonic space vehicles requires knowledge of the characteristics of hypersonic flows in the vicinity of the leading edge of lifting surfaces. The strong interaction between a shock wave and a boundary layer in a viscous Mach 4 flow, close to the leading edge of the plate and under a no-slip condition, is numerically investigated; the small slip region is neglected. The study consists of solving the fluid flow equations on unstructured meshes, applying the SIMPLE algorithm within the Finite Volume Method. Unstructured meshes are generated by the in-house software ‘Modeler’, developed at Virtual’s Engineering Laboratory of the Institute of Advanced Studies, initially for Finite Element problems and, in this work, adapted to solve the Navier-Stokes equations with a Finite-Volume, SIMPLE pressure-correction scheme for all-speed flows. The in-house C++ code is based on the two-dimensional Navier-Stokes equations for unsteady flow, with no body forces, no volumetric heating, and no mass diffusion. Air is considered a calorically perfect gas, with constant Prandtl number and Sutherland's law for the viscosity.
Solutions of the flat plate problem for Mach number 4 include pressure, temperature, density and velocity profiles as well as 2-D contours. The boundary layer thickness, boundary conditions, and mesh configurations are also presented. The same problem has been solved with an academic license of the software Ansys Fluent and by another in-house C++ code, which solves the fluid flow equations on structured meshes applying the MacCormack Finite Difference method, and the results will be compared.
Keywords: boundary-layer, scramjet, SIMPLE algorithm, shock wave
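The pressure-correction idea at the heart of the SIMPLE scheme can be illustrated on a toy problem: an intermediate velocity field is projected onto a discretely divergence-free one. The sketch below uses a periodic structured grid with a spectral Poisson solve, which are simplifying assumptions for illustration only; the paper's solver works on unstructured finite-volume meshes with iterated under-relaxed corrections.

```python
import numpy as np

n, h, dt = 32, 1.0 / 32, 0.01
rng = np.random.default_rng(1)
u = rng.standard_normal((n, n))   # intermediate x-velocity u*
v = rng.standard_normal((n, n))   # intermediate y-velocity v*

def div(u, v):
    # forward-difference divergence on the periodic grid
    return (np.roll(u, -1, 1) - u + np.roll(v, -1, 0) - v) / h

# Pressure-correction (Poisson) equation: lap(p') = div(u*) / dt.
# On this periodic grid the discrete Laplacian is diagonal in Fourier
# space, so the solve is exact.
k = 2 * np.pi * np.fft.fftfreq(n, d=h)
lap = (2 * np.cos(k[None, :] * h) - 2
       + 2 * np.cos(k[:, None] * h) - 2) / h**2
lap[0, 0] = 1.0                   # pressure is defined up to a constant
p = np.real(np.fft.ifft2(np.fft.fft2(div(u, v) / dt) / lap))

# Correct velocities with the backward-difference pressure gradient:
# the corrected field is discretely divergence-free.
u -= dt * (p - np.roll(p, 1, 1)) / h
v -= dt * (p - np.roll(p, 1, 0)) / h
print(np.max(np.abs(div(u, v))))  # divergence drops to round-off level
```

The forward/backward difference pairing makes the composite operator exactly the discrete Laplacian used in the Poisson solve, so the corrected divergence vanishes to machine precision.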
Procedia PDF Downloads 490
97 Development of the Integrated Quality Management System of Cooked Sausage Products
Authors: Liubov Lutsyshyn, Yaroslava Zhukova
Abstract:
Over the past twenty years, there has been a drastic change in the mode of nutrition in many countries, which has been reflected in the development of new products and production techniques and has led to the expansion of sales markets for food products. Studies have shown that solving food safety problems is almost impossible without the active and systematic involvement of the organizations directly engaged in the production, storage and sale of food products, as well as without end-to-end traceability management and exchange of information. The aim of this research is the development of an integrated quality management and safety assurance system based on the principles of HACCP, traceability and a system approach, with the creation of an algorithm for the identification and monitoring of the parameters of the technological process of cooked sausage manufacture. A methodology has been developed for implementing the integrated system, based on the principles of HACCP, traceability and a system approach, during the manufacture of cooked sausage products so as to effectively ensure the defined properties of the finished product. As a result of the research, an evaluation technique and performance criteria for the implementation and operation of the HACCP-based quality management and safety assurance system have been developed and substantiated. The paper reveals how the application of HACCP principles, traceability and the system approach influences the quality and safety parameters of the finished product. Regularities in the identification of critical control points have also been determined.
The operation algorithm of the integrated quality management and safety assurance system has also been described, and key requirements have been defined for software allowing the prediction of finished-product properties, the timely correction of the technological process, and the traceability of manufacturing flows. Based on the obtained results, a typical scheme of the integrated quality management and safety assurance system, based on HACCP principles with elements of end-to-end traceability and a system approach for the manufacture of cooked sausage products, has been developed. Quantitative criteria for evaluating the performance of the system have been developed, along with a set of guidance documents for the implementation and evaluation of the integrated HACCP-based system in meat processing plants. The research has demonstrated the effectiveness of continuous monitoring of the manufacturing process through control at the identified critical control points, and the optimal number of critical control points for the manufacture of cooked sausage products has been substantiated. The main results of the research were appraised during 2013-2014 at seven enterprises of the meat processing industry and have been implemented at JSC «Kyiv meat processing plant».
Keywords: cooked sausage products, HACCP, quality management, safety assurance
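The continuous monitoring at critical control points described above can be sketched as a simple limit check that flags corrective action. The control point, critical limits and reading below are hypothetical illustrations, not values substantiated in the study.

```python
from dataclasses import dataclass

# Hypothetical CCP monitoring sketch for cooked sausage production.
@dataclass
class CriticalControlPoint:
    name: str
    low: float    # lower critical limit
    high: float   # upper critical limit

    def check(self, value):
        """Return (in_control, message); an out-of-limit reading would
        trigger a corrective action in a full HACCP implementation."""
        if self.low <= value <= self.high:
            return True, f"{self.name}: {value} within limits"
        return False, f"{self.name}: {value} OUT OF LIMITS -> corrective action"

# Illustrative critical limits for core cooking temperature (assumed).
ccp_cook = CriticalControlPoint("core cooking temperature, C", 72.0, 85.0)
ok, msg = ccp_cook.check(69.5)
print(msg)   # the low reading is flagged for corrective action
```

A full system would append each reading to a traceability record keyed to the production batch, so that out-of-limit events remain auditable end to end.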
Procedia PDF Downloads 247
96 The Effect of Zeolite and Fertilizers on Yield and Qualitative Characteristics of Cabbage in the Southeast of Kazakhstan
Authors: Tursunay Vassilina, Aigerim Shibikeyeva, Adilet Sakhbek
Abstract:
Research has been carried out to study the influence of modified zeolite fertilizers on the quantitative and qualitative indicators of the cabbage variety Nezhenka. The use of zeolite and mineral fertilizers had a positive effect on both the yield and the quality indicators of the studied crop. The maximum increase in yield from fertilizers was 16.5 t/ha. Application of both zeolite and fertilizer increased the dry matter, sugar and vitamin C content of cabbage heads. It was established that the cabbage contains an amount of nitrates that is safe for human health. Among vegetable crops, cabbage has both food and feed value. One of the limiting factors in the sale of vegetable crops is the degradation of soil fertility due to the depletion of nutrient reserves, erosion processes, and non-compliance with fertilizer application technologies. Natural zeolites are used as additives to mineral fertilizers for field application, which makes it possible to reduce fertilizer doses to minimal quantities. Zeolites improve the agrophysical and agrochemical properties of the soil and the quality of plant products. The research was carried out as a field experiment with three replications on dark chestnut soil in 2023. The soil (pH = 7.2-7.3) of the experimental plot is dark chestnut; the humus content in the arable layer is 2.15%, gross nitrogen 0.098%, and phosphorus and potassium 0.225 and 2.4%, respectively. The object of the study was the late cabbage variety Nezhenka. The fertilizer scheme for cabbage was: 1. Control (without fertilizers); 2. Zeolite, 2 t/ha; 3. N45P45K45; 4. N90P90K90; 5. Zeolite, 2 t/ha + N45P45K45; 6. Zeolite, 2 t/ha + N90P90K90. Yield accounting was carried out manually on a plot-by-plot basis. In plant samples, the following were determined: dry matter content by the thermostatic method (at 105ºC), sugar content by the Bertrand titration method, nitrate content with 1% diphenylamine solution, and vitamin C by the titrimetric method with acid solution.
According to the results, the yield of cabbage was high, 42.2 t/ha, in the treatment Zeolite, 2 t/ha + N90P90K90. When determining the biochemical composition of white cabbage, it was found that the dry matter content was 9.5% and increased in the fertilized treatments. The total sugar content increased slightly with the use of zeolite (5.1%) and modified zeolite fertilizer (5.5%); the vitamin C content ranged from 17.5 to 18.16%, while in the control it was 17.21%. The amount of nitrates in the produce increased with increasing doses of nitrogen fertilizers and decreased with the use of zeolite and modified zeolite fertilizer, but did not exceed the maximum permissible concentration. Based on the research conducted, it can be concluded that the application of zeolite and fertilizers leads to a significant increase in yield compared to the unfertilized treatment and contributes to the production of cabbage with good and high quality indicators.
Keywords: cabbage, dry matter, nitrates, total sugar, yield, vitamin C
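For orientation, if the reported maximum increase of 16.5 t/ha is read as relative to the unfertilized control and the best treatment yielded 42.2 t/ha, the implied control yield and relative gain follow directly. This back-calculation is our reading of the abstract, not a figure stated by the authors.

```python
# Figures from the abstract; the subtraction is our interpretation.
best_yield = 42.2   # t/ha, Zeolite 2 t/ha + N90P90K90
increase = 16.5     # t/ha, reported maximum increase from fertilizers

control_yield = best_yield - increase       # implied unfertilized yield
gain_pct = 100 * increase / control_yield   # relative gain over control
print(f"control ~ {control_yield:.1f} t/ha, gain ~ {gain_pct:.0f} %")
```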
Procedia PDF Downloads 72
95 Estimating Evapotranspiration Irrigated Maize in Brazil Using a Hybrid Modelling Approach and Satellite Image Inputs
Authors: Ivo Zution Goncalves, Christopher M. U. Neale, Hiran Medeiros, Everardo Mantovani, Natalia Souza
Abstract:
Multispectral and thermal infrared imagery from satellite sensors, coupled with climate and soil datasets, were used to estimate evapotranspiration and biomass in center pivots planted with maize in Brazil during the 2016 season. The hybrid remote-sensing-based model named Spatial EvapoTranspiration Modelling Interface (SETMI) was applied using multispectral and thermal infrared imagery from the Landsat Thematic Mapper instrument. Field data collected by the IRRIGER center pivot management company included daily weather information such as maximum and minimum temperature, precipitation, and relative humidity for estimating reference evapotranspiration. In addition, soil water content data were obtained every 0.20 m in the soil profile down to 0.60 m depth throughout the season. Early-season soil samples were used to obtain water-holding capacity, wilting point, saturated hydraulic conductivity, initial volumetric soil water content, layer thickness, and saturated volumetric water content. Crop canopy development parameters and irrigation application depths were also inputs to the model. The modeling approach is based on the reflectance-based crop coefficient approach contained within the SETMI hybrid ET model, using relationships developed in Nebraska. The model was applied to several fields located in Minas Gerais State in Brazil, at approximate latitude -16.630434 and longitude -47.192876. The model provides estimates of actual crop evapotranspiration (ET), crop irrigation requirements and all soil water balance outputs, including biomass estimation, using multi-temporal satellite image inputs. An interpolation scheme based on the growing degree-day concept was used to model the periods between satellite inputs, filling the gaps between image dates and obtaining daily data. Actual and accumulated ET, accumulated cold temperature and water stress, and crop water requirements estimated by the model were compared with data measured at the experimental fields.
Results indicate that the SETMI modeling approach using data assimilation produced reliable daily ET and crop water requirements for maize, interpolated between remote sensing observations, confirming the applicability of the SETMI model, with the relationships developed in Nebraska, mainly for estimating ET and water requirements under tropical conditions in Brazil.
Keywords: basal crop coefficient, irrigation, remote sensing, SETMI
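The growing-degree-day interpolation between image dates can be sketched as follows: the reflectance-based basal crop coefficient (Kcb) retrieved on two satellite overpasses is interpolated linearly in cumulative GDD rather than calendar time, so daily Kcb tracks thermal development. The base temperature, daily temperatures and Kcb values below are illustrative assumptions, not data from the study.

```python
import numpy as np

t_base = 10.0   # maize base temperature, deg C (assumed)
tmax = np.array([30, 31, 29, 33, 32, 30, 31])  # daily max, deg C (illustrative)
tmin = np.array([16, 17, 15, 18, 17, 16, 17])  # daily min, deg C (illustrative)

# Daily growing degree days and their running total
gdd = np.maximum((tmax + tmin) / 2 - t_base, 0.0)
cum_gdd = np.cumsum(gdd)

# Kcb retrieved from reflectance on two image dates (day indices 0 and 6)
img_days = [0, 6]
kcb_img = [0.45, 0.78]

# Daily Kcb: linear in cumulative GDD between the two image dates
kcb_daily = np.interp(cum_gdd, [cum_gdd[img_days[0]], cum_gdd[img_days[1]]],
                      kcb_img)
print(np.round(kcb_daily, 3))
```

Warm spells thus advance Kcb faster than cool ones, which is the point of interpolating in GDD space instead of calendar days.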
Procedia PDF Downloads 140
94 Synthesis, Molecular Modeling and Study of 2-Substituted-4-(Benzo[D][1,3]Dioxol-5-Yl)-6-Phenylpyridazin-3(2H)-One Derivatives as Potential Analgesic and Anti-Inflammatory Agents
Authors: Jyoti Singh, Ranju Bansal
Abstract:
Fighting pain and inflammation is a common problem faced by physicians dealing with a wide variety of diseases. Since ancient times, nonsteroidal anti-inflammatory agents (NSAIDs) and opioids have been the cornerstone of treatment; however, the usefulness of both classes is limited by severe side effects. NSAIDs, which are mainly used to treat mild to moderate inflammatory pain, induce gastric irritation and nephrotoxicity, whereas opioids show an array of adverse reactions such as respiratory depression, sedation, and constipation. Moreover, repeated administration of these drugs induces tolerance to the analgesic effects and physical dependence. The later discovery of selective COX-2 inhibitors (coxibs) suggested safety without ulcerogenic side effects; however, long-term use of these drugs resulted in kidney and hepatic toxicity along with an increased risk of secondary cardiovascular effects. The basic approaches to inflammation and pain treatment are constantly changing, and researchers are continuously trying to develop safer and more effective anti-inflammatory drug candidates for different inflammatory conditions such as osteoarthritis, rheumatoid arthritis, ankylosing spondylitis, psoriasis and multiple sclerosis. Synthetic 3(2H)-pyridazinones constitute an important scaffold for drug discovery. Structure-activity relationship (SAR) studies on pyridazinones have shown that attachment of a lactam at N-2 of the pyridazinone ring through a methylene spacer significantly increases the anti-inflammatory and analgesic properties of the derivatives, and that introduction of a heterocyclic ring at the lactam nitrogen further improves the biological activities. Keeping these SAR studies in mind, a new series of compounds was synthesized as shown in scheme 1 and investigated for anti-inflammatory, analgesic and anti-platelet activities, together with docking studies.
The structures of the newly synthesized compounds have been established by various spectroscopic techniques. All the synthesized pyridazinone derivatives exhibited potent anti-inflammatory and analgesic activity. The homoveratryl-substituted derivative possessed the highest anti-inflammatory and analgesic activity, displaying 73.60% inhibition of edema at 40 mg/kg with no ulcerogenic activity, compared to the standard drug indomethacin. Moreover, the 2-substituted-4-benzo[d][1,3]dioxole-6-phenylpyridazin-3(2H)-one derivatives did not produce significant changes in bleeding time and emerged as safe agents. Molecular docking studies also illustrated good binding interactions at the active site of the cyclooxygenase-2 (hCOX-2) enzyme.
Keywords: anti-inflammatory, analgesic, pyridazin-3(2H)-one, selective COX-2 inhibitors
Procedia PDF Downloads 200
93 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography
Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner
Abstract:
Nowadays, three-dimensional Cone Beam CT (CBCT) has become a widespread clinical routine imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as for daily pretreatment patient alignment in radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the radiation dose required for these interventional images. Thus, it is desirable to find optimized source-detector trajectories with a reduced number of projections, which could therefore lead to dose reduction. In this study we investigate source-detector trajectories with optimal arbitrary orientations that maximize the performance of the reconstructed image at particular regions of interest. To this end, we developed a box phantom consisting of several small polytetrafluoroethylene target spheres at regular distances throughout the phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D Point Spread Function (PSF) as a measure to evaluate the performance of the reconstructed image. We measured the spatial variance in terms of the Full-Width-at-Half-Maximum (FWHM) of the local PSFs, each related to a particular target. A lower FWHM value indicates better spatial resolution of the reconstruction at the target area. One important feature of interventional radiology is that the imaging targets are very well known, since prior knowledge of the patient anatomy (e.g. a preoperative CT) is usually available. Therefore, we use a CT scan of the box phantom as the prior knowledge and consider it the digital phantom in our simulations to find the optimal trajectory for a specific target.
The optimal trajectory obtained in the simulation phase can then be applied on the device in a real acquisition. We consider a Philips Allura FD20 Xper C-arm geometry to perform the simulations and the real data acquisition. Our experimental results, based on both simulation and real data, show that the proposed optimization scheme has the capacity to find optimized trajectories with a minimal number of projections in order to localize the targets. The proposed optimized trajectories are able to localize the targets as well as a standard circular trajectory while using just one third of the number of projections. Conclusion: we demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize the radiation dose.
Keywords: CBCT, C-arm, reconstruction, trajectory optimization
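The FWHM figure of merit used to score a trajectory can be sketched on a 1-D profile through a local PSF: find the width of the profile at half its peak. The Gaussian profile, sigma and grid below are synthetic illustrations, not measurements from the phantom study.

```python
import numpy as np

# Synthetic 1-D profile through the local PSF of one target sphere
x = np.linspace(-5, 5, 1001)
sigma = 0.8
psf = np.exp(-x**2 / (2 * sigma**2))

def fwhm(x, y):
    """Width of profile y(x) at half its maximum (grid-resolution accurate)."""
    half = y.max() / 2
    above = np.where(y >= half)[0]   # contiguous samples above half maximum
    return x[above[-1]] - x[above[0]]

# For a Gaussian the analytic FWHM is 2*sqrt(2*ln 2)*sigma ~ 2.3548*sigma
print(fwhm(x, psf))   # ~ 1.88 for sigma = 0.8
```

A narrower FWHM at a target sphere means better local spatial resolution, so trajectories are ranked by the FWHM of the PSFs at the regions of interest.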
Procedia PDF Downloads 132
92 Cultural Heritage, Urban Planning and the Smart City in Indian Context
Authors: Paritosh Goel
Abstract:
In recent years, the conservation of historic buildings and historic centres has become fully encompassed in the planning of built-up areas and their management in the face of climate change. In the Indian context, the restoration community's approach to integrated urban regeneration, and its strategic potential for smarter, more sustainable and socially inclusive urban development, introduces the theme of sustainability for urban transformations in general (historical centres and otherwise). From this viewpoint, it envisages, as a primary objective, a real “green, ecological or environmental” requalification of the city through interventions within the main categories of sustainability: mobility, energy efficiency, use of renewable energy sources, urban metabolism (waste, water, territory, etc.) and the natural environment. This also introduces the concept of a “resilient city”, which can adapt through progressive transformations to situations of change that may not be predictable, a behavior that the historical city has always been able to express. Urban planning, on the other hand, has increasingly focused on analyses oriented towards the taxonomic description of social/economic and perceptive parameters. It is connected with human behavior, mobility and the characterization of the consumption of resources, in terms of quantity even before quality, to inform the city design process, which for ancient fabrics mainly affects the public space, also in its social dimension.
An exact definition of the term “smart city” remains essentially elusive, since three dimensions can be attributed to the term: a) that of a virtual city, evolved on the basis of digital and web networks; b) that of a physical construction determined by urban planning based on infrastructural innovation, which in the case of historic centres implies regeneration that stimulates and sometimes changes the existing fabric; c) that of a political and social/economic project guided by a dynamic process that provides new behavior and requirements of the city communities and orients the future planning of cities, also through participation in their management. This paper is a preliminary research into the connections between these three dimensions applied to the specific case of the fabric of ancient cities, with the aim of obtaining a scientific theory and methodology to apply to the regeneration of Indian historical centres. The Smart City scheme, if contextualized with the heritage of the city, can be an initiative that provides a transdisciplinary approach among various research networks (natural sciences, socio-economic sciences and humanities, technological disciplines, digital infrastructures), united in order to improve the design, livability and understanding of the urban environment and to achieve high historical/cultural performance levels.
Keywords: historical cities regeneration, sustainable restoration, urban planning, smart cities, cultural heritage development strategies
Procedia PDF Downloads 281
91 Budgetary Performance Model for Managing Pavement Maintenance
Authors: Vivek Hokam, Vishrut Landge
Abstract:
An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower and equipment, it is not possible to carry out maintenance on all the needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. In priority setting, difficult decisions must be made between repairing a section with poor functional condition, such as an uncomfortable ride, and one with poor structural condition, i.e. a section in danger of becoming structurally unsound. It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of the section. Maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimum decision is one that meets a specified objective of management, subject to various constraints and restrictions. The objective here is mainly the minimization of the maintenance cost of roads in the industrial area. In order to formulate the objective function for the distress model, realistic data must be fed into the formulation.
Each type of repair is quantified per stretch, taking 1000 m as one stretch; the section considered under study is 3750 m long. These quantities enter an objective function for maximizing the number of repairs in a stretch. The distresses observed in this section are potholes, surface cracks, rutting and ravelling; the distress data are measured manually by observing each distress level on a stretch of 1000 m. The maintenance and rehabilitation measures currently followed are based on subjective judgment; hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine the pavement performance and deterioration prediction relationship more accurately, together with the economic benefits to the road network in terms of vehicle operating cost. The road network infrastructure should deliver the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
Keywords: budget, maintenance, deterioration, priority
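The budget-constrained selection can be written as a small integer program: maximize the total number of repairs on the 3750 m section subject to a cost budget and the repair quantities needed per distress type. The unit costs, repair needs and budget below are hypothetical placeholders (the paper does not state them); at this toy size exhaustive enumeration solves the program exactly, where a practical model would use an LP/ILP solver.

```python
from itertools import product

# Hypothetical inputs: cost per repair (thousand currency units) and the
# number of repairs needed for each distress type on the section.
cost = {"pothole": 12.0, "surface crack": 8.0, "rutting": 20.0, "ravelling": 15.0}
need = {"pothole": 4, "surface crack": 4, "rutting": 3, "ravelling": 2}
budget = 100.0   # assumed maintenance budget

types = list(cost)
best_count, best_plan = -1, None
# Enumerate all feasible integer repair plans and keep the one that
# performs the most repairs within budget (exact for this toy size).
for plan in product(*(range(need[t] + 1) for t in types)):
    total = sum(n * cost[t] for n, t in zip(plan, types))
    if total <= budget and sum(plan) > best_count:
        best_count = sum(plan)
        best_plan = dict(zip(types, plan))

print(best_count, best_plan)
```

With these placeholder numbers the cheapest ten repairs would cost 110, so the optimum carries out nine repairs within the budget of 100; changing the budget or unit costs shifts the selection accordingly.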
Procedia PDF Downloads 207