Search results for: real estate prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7268

998 In the Conundrum between Tradition and Modernity: A Socio-Cultural Study to Understand Crib Death in Malda, West Bengal

Authors: Prama Mukhopadhyay, Rishika Mukhopadhyay

Abstract:

The twentieth century saw the world divided into three distinct blocks, created by the proponents of the mainstream developmental discourse. India, which has now gained the label of a ‘developing nation’, stands in between these groups as it constantly tries to ‘catch up’ with and emulate the developmental standards of the ‘West’. In this endeavour, the country has tried hard to replicate the health care infrastructures of the ‘first world’ without evaluating the ground reality. In such a situation, the sudden outbreak of child deaths in the district of Malda, West Bengal, poses obvious questions about the kind of development our country has engaged in since its postcolonial inception. Through this paper, we try to understand the harsh reality of the health care facilities that exist in rural Bengal and thereby challenge the conventional notion of ‘health care’ as normally discussed in the mainstream developmental discourse. Grounding our research in detailed ethnography and drawing on questionnaires, interviews and focus group discussions with local government officials (BDOs), health workers (ICDS and ASHA workers, ANHM and BMOHs) and members of families with experiences of child death, we have tried to identify the real, human factors behind the sudden rise of reported infant deaths in the district, issues which are normally neglected when discussing and evaluating IMR in mainstream studies on health care and planning in our nation. The main aim of this paper is therefore to look at child death from a wider perspective, through an eye not bounded by the common registers of caste, class and religion.
This paper would thus be an eye-opener of sorts, bringing in stories from the rural belt of the country, where people are regularly torn between the binaries of the developing, shining modernity of ‘India’, which now readies itself to run the last lap and gain the status of a ‘developed nation’ by 2020, and the staggering, dark, traditional ‘Bharat’, which lags behind.

Keywords: child mortality, development discourse, health care, tradition and modernity

Procedia PDF Downloads 391
997 Synthesis of Fluorescent PET-Type “Turn-Off” Triazolyl Coumarin Based Chemosensors for the Sensitive and Selective Sensing of Fe³⁺ Ions in Aqueous Solutions

Authors: Aidan Battison, Neliswa Mama

Abstract:

Environmental pollution by ionic species has been identified as one of the biggest challenges to the sustainable development of communities. The widespread use of organic and inorganic chemical products and the release of toxic chemical species from industrial waste have resulted in a need for advanced monitoring technologies for environmental protection, remediation and restoration. Disadvantages of conventional sensing methods include expensive instrumentation, tightly controlled experimental conditions, time-consuming procedures and sometimes complicated sample preparation. By contrast, the development of fluorescent chemosensors for the biological and environmental detection of metal ions has attracted a great deal of attention due to their simplicity, high selectivity, eidetic recognition, rapid response and real-life monitoring. Coumarin derivatives S1 and S2 (Scheme 1) containing 1,2,3-triazole moieties at position -3- were designed and synthesized from azide and alkyne derivatives by CuAAC “click” reactions for the detection of metal ions. These compounds displayed a strong preference for Fe³⁺ ions, with complexation resulting in fluorescence quenching through photo-induced electron transfer (PET) by the “sphere of action” static quenching model. The tested metal ions included Cd²⁺, Pb²⁺, Ag⁺, Na⁺, Ca²⁺, Cr³⁺, Fe³⁺, Al³⁺, Ba²⁺, Cu²⁺, Co²⁺, Hg²⁺, Zn²⁺ and Ni²⁺. The detection limits of S1 and S2 were determined to be 4.1 and 5.1 µM, respectively. Compound S1 displayed the greatest selectivity towards Fe³⁺ in the presence of competing metal cations, and could also be used for the detection of Fe³⁺ in a CH₃CN/H₂O mixture. The binding stoichiometry between S1 and Fe³⁺ was determined using both Job's plot and Benesi-Hildebrand analysis, and binding was shown to occur in a 1:1 sensor-to-cation ratio. Reversibility studies between S1 and Fe³⁺ were conducted using EDTA.
The binding site of Fe³⁺ on S1 was determined using ¹³C NMR and molecular modelling studies. Complexation was suggested to occur between the lone pair of electrons of the coumarin carbonyl and the triazole carbon double bond.
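Detection limits of the kind reported above (4.1 and 5.1 µM) are conventionally derived from the 3σ/slope criterion applied to a linear calibration. The sketch below illustrates that calculation only; the calibration points, blank standard deviation and quenching slope are invented for illustration and are not data from this study.

```python
import numpy as np

# Hypothetical linear calibration of fluorescence intensity vs. [Fe3+] (µM);
# these numbers are illustrative, not measured values from the paper.
conc_uM = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
intensity = np.array([100.0, 92.1, 84.0, 76.2, 68.1, 60.0])

# Least-squares fit: quenching slope in intensity units per µM
slope, intercept = np.polyfit(conc_uM, intensity, 1)

sigma_blank = 1.4  # assumed standard deviation of repeated blank readings
lod_uM = 3 * sigma_blank / abs(slope)  # 3-sigma detection-limit criterion
```

With the assumed numbers the slope is about -4.0 per µM, giving a detection limit near 1 µM; real values depend entirely on the measured calibration and blank noise.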

Keywords: chemosensor, "click" chemistry, coumarin, fluorescence, static quenching, triazole

Procedia PDF Downloads 163
996 Malate Dehydrogenase Enabled ZnO Nanowires as an Optical Tool for Malic Acid Detection in Horticultural Products

Authors: Rana Tabassum, Ravi Kant, Banshi D. Gupta

Abstract:

Malic acid is an organic acid distributed, in minute amounts, across numerous horticultural products, where it contributes significantly to taste by balancing the sugar and acid fractions. An elevated concentration of malic acid is used as an indicator of fruit maturity. In addition, malic acid is a crucial constituent of several cosmetic and pharmaceutical products. An efficient detection and quantification protocol for malic acid is thus in high demand. In this study, we report a novel detection scheme for malic acid that synergistically combines fiber optic surface plasmon resonance (FOSPR) with the distinctive features of nanomaterials favorable for sensing applications. The design blueprint involves the deposition of malate dehydrogenase enzyme entrapped in ZnO nanowires, forming the sensing layer over the silver-coated central unclad core region of an optical fiber. The formation and subsequent decomposition of the enzyme-analyte complex on exposure of the sensing layer to malic acid solutions of diverse concentrations modifies the dielectric function of the sensing layer, which is manifested as a shift in the resonance wavelength. Experimental variables such as the enzyme concentration entrapped in the ZnO nanowires, the dip time of the probe for deposition of the sensing layer and the working pH range of the sensing probe were optimized through SPR measurements. The optimized sensing probe displays high sensitivity, a broad working range and a low limit of detection, and has been successfully tested for malic acid determination in real samples of fruit juices. The current work presents a novel perspective on malic acid determination, as the cooperative combination of FOSPR and nanomaterials provides myriad advantages such as enhanced sensitivity, specificity and compactness, together with the possibility of online monitoring and remote sensing.

Keywords: surface plasmon resonance, optical fiber, sensor, malic acid

Procedia PDF Downloads 380
995 Data Management System for Environmental Remediation

Authors: Elizaveta Petelina, Anton Sizo

Abstract:

Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, the execution of remediation activities, and environmental monitoring. Appropriate data management is therefore a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and the assessment of remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist in environmental remediation, project planning and evaluation, and the environmental monitoring of mine sites. EDMS consists of seven main components: a Geodatabase, a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a Field Data Collection component that contains tools for field work; a Quality Assurance (QA)/Quality Control (QC) component that combines operational procedures for QA and measures for QC; a Data Import and Export component that includes tools and templates to support project data flow; a Lab Data component that provides a connection between EDMS and laboratory information management systems; and a Reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada. The EDMS has effectively facilitated integrated decision making for CLEANS project managers and transparency amongst stakeholders.

Keywords: data management, environmental remediation, geographic information system, GIS, decision making

Procedia PDF Downloads 161
994 Ergonomic Adaptations in Visually Impaired Workers - A Literature Review

Authors: Kamila Troper, Pedro Mestre, Maria Lurdes Menano, Joana Mendonça, Maria João Costa, Sandra Demel

Abstract:

Introduction: Visual impairment affects hundreds of thousands of people all over the world. Although it is possible for a visually impaired person to do most jobs, the right training, technological assistance, and emotional support are essential. Ergonomics can solve many of these problems with relative ease through the positioning, lighting and design of the workplace: a little forethought can make a tremendous difference to the ease with which a person with an impairment functions. Objectives: To review the main ergonomic adaptation measures reported in the literature in order to promote better working conditions and safety measures for the visually impaired. Methodology: This was an exploratory-descriptive, qualitative, systematic literature review. The main databases used were PubMed, BIREME and LILACS, covering articles and studies published between 2000 and 2021. Results: Based on the theoretical principles of ergonomic analysis of work, the main restructurings of the physical space of the workstations were: accessibility facilities and assistive technologies; screen readers that capture information from a computer and send it in real time to a speech synthesizer or Braille terminal; installation of software with voice recognition; monitors with enlarged screens; magnification software; and adequate lighting and magnifying lenses, in addition to recommendations regarding signage and the clearance of the areas through which the visually impaired pass. Conclusions: Employability rates for people with visual impairments (both those who are blind and those who have low vision) are low, and the topic continues to be a concern worldwide and a subject of international research interest. Although numerous authors have identified barriers to employment and proposed strategies to remediate or circumvent those barriers, people with visual impairments continue to experience high rates of unemployment.

Keywords: ergonomic adaptations, visual impairments, ergonomic analysis of work, systematic review

Procedia PDF Downloads 182
993 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for the detection of facial expressions and emotions by automatically extracting features. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm for face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on the extraction of a different type of coarse feature with fine-grained detail, breaking the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We extend this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model toward over-fitting, because it cannot determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic, rather than static, input tensor shape in the SoftMax layer with a specified soft margin; in effect, the margin acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss aims to compact same-class labels and separate different class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors, i.e., assigning more weight to classes that lie close to one another (the “hard labels to learn”).
In doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on addressing the weak convergence of the Adam optimizer on a non-convex problem. Our optimizer works by an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight decay method to drastically reduce the learning rate near optima and reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013 (a 16% improvement over the first rank after 10 years), 90.73% on RAF-DB, and 100% k-fold average accuracy on CK+, providing performance comparable to or better than that of other networks, which require much larger training datasets.
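The margin idea described above can be illustrated with a minimal NumPy sketch of an additive-margin SoftMax cross-entropy. This is not the authors' implementation: the fixed `margin` value, function name and toy logits are assumptions, and the true-class logit is simply reduced by the margin before normalization, so the loss keeps pushing correct classes to win by a gap even after plain SoftMax would have saturated.

```python
import numpy as np

def margin_softmax_loss(logits, labels, margin=0.0):
    """Cross-entropy where the true-class logit is reduced by `margin`
    before the SoftMax, so the correct class must win by at least that
    margin before the loss flattens out (illustrative sketch)."""
    z = logits.astype(float).copy()
    rows = np.arange(len(labels))
    z[rows, labels] -= margin                      # penalize the true class
    z -= z.max(axis=1, keepdims=True)              # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()

logits = np.array([[4.0, 1.0, 0.5],
                   [0.2, 3.0, 0.1]])
labels = np.array([0, 1])
plain = margin_softmax_loss(logits, labels, margin=0.0)
hard = margin_softmax_loss(logits, labels, margin=0.5)
assert hard > plain  # the margin keeps gradients alive on easy examples
```

A "dynamic" variant in the spirit of the abstract would adjust `margin` per batch or per class during training rather than keeping it fixed.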

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 74
992 Representation of History in Cinema: Comparative Analysis of Turkish Films Based on the Conquest of Istanbul

Authors: Dilara Balcı Gulpinar

Abstract:

History, which can be defined as the narrative of the past, is a process of reproduction that takes place in the present. The scientific status of historiography is controversial because the historian makes choices and comments; even the choice of subject distances him or her from objectivity. Historians may draw on current values, may not be able to afford to contradict society, and/or may face pressure from dominant groups. In addition, where documentation is lacking, interpretation and fiction are used to integrate historical events that seem disconnected. In this respect, there are views that relate history to the narrative arts rather than the positive sciences. Popular historical films, which are visual historical representations, appeal to wider audiences by taking advantage of visuality, dramatic fictional narrative, various effects, music, stars, and other populist elements. The historical film, which does not claim to be scientific and even has the freedom to distort historical reality, can be perceived as reality itself and becomes an indispensable resource for individual and social memory. The ideological discourse of popular films is not only impressive and manipulative but also changeable: socio-cultural and political changes can transform the representation of history in films sharply and rapidly. In line with the above hypothesis, this study examines Turkish historical films about the conquest of Istanbul using methods of historical and social analysis. İstanbul’un Fethi (Conquest of Istanbul, Aydin Arakon, 1953), Kuşatma Altında Aşk (Love Under Siege, Ersin Pertan, 1997) and Fetih 1453 (Conquest 1453, Faruk Aksoy, 2012) are the only three films in Turkish cinema that revolve around the conquest, and therefore constitute the sample of this study.
It has been determined that the real and fictional events, as well as the characters, both foregrounded and ignored, differ from one film to another. Such significant differences in the dramatic and cinematographic structure of these three films, shot in the 50s, 90s, and 2010s respectively, show that the representation of history in popular cinema has altered over the years, losing its claim to objectivity.

Keywords: cinema, conquest of Istanbul, historical film, representation

Procedia PDF Downloads 135
991 Combining Corpus Linguistics and Critical Discourse Analysis to Study Power Relations in Hindi Newspapers

Authors: Vandana Mishra, Niladri Sekhar Dash, Jayshree Charkraborty

Abstract:

This paper focuses on the application of corpus linguistics techniques to the critical discourse analysis (CDA) of Hindi newspapers. While corpus linguistics is the study of language as expressed in corpora (samples) of 'real world' text, CDA is an interdisciplinary approach to the study of discourse that views language as a form of social practice. CDA has mainly been studied from a qualitative perspective; however, recent studies have begun combining corpus linguistics with CDA to analyze large volumes of text for the study of existing power relations in society. The corpus under study here is also sizable (1 million words of Hindi newspaper text), and its analysis requires an alternative analytical procedure. We have therefore combined the quantitative approach, i.e. the use of corpus techniques, with CDA's traditional qualitative analysis. In this context, we have focused on keyword analysis, sorting the concordance lines of the selected keywords, and calculating the collocates of the keywords, using the WordSmith Tools software for all these analyses. The analysis starts by identifying the keywords in the political news corpus when compared with the main news corpus. Keywords are extracted from the corpus based on their keyness, calculated through statistical tests such as the chi-squared test and the log-likelihood test on the frequent words of the corpus. Some of the top-occurring keywords are मोदी (Modi), भाजपा (BJP), कांग्रेस (Congress), सरकार (Government) and पार्टी (Political party). This is followed by concordance analysis of these keywords, which generates thousands of lines, of which we select and examine a few in line with our objective. We have also calculated the collocates of the keywords based on their Mutual Information (MI) score. Both concordance and collocation help to identify lexical patterns in the political texts.
Finally, all the quantitative results derived from the corpus techniques are interpreted qualitatively in accordance with CDA theory to examine the ways in which political news discourse produces social and political inequality, power abuse, or domination.
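The keyness computation described above can be sketched as follows, using Dunning's log-likelihood ratio to compare one word's frequency in a study subcorpus against a reference corpus. The frequencies below are invented for illustration (the study's actual scores come from WordSmith Tools); a score above the 1-degree-of-freedom chi-square critical value of 6.63 indicates keyness at p < 0.01.

```python
import math

def log_likelihood(freq_study, freq_ref, size_study, size_ref):
    """Dunning's log-likelihood keyness score for one word, comparing
    its observed frequencies against expected frequencies derived from
    the combined corpora."""
    total_freq = freq_study + freq_ref
    total_size = size_study + size_ref
    e1 = size_study * total_freq / total_size  # expected count, study corpus
    e2 = size_ref * total_freq / total_size    # expected count, reference
    ll = 0.0
    if freq_study:
        ll += freq_study * math.log(freq_study / e1)
    if freq_ref:
        ll += freq_ref * math.log(freq_ref / e2)
    return 2 * ll

# Illustrative: a word appearing 120 times in a 100k-word political
# subcorpus vs. 200 times in a 1M-word reference corpus.
score = log_likelihood(120, 200, 100_000, 1_000_000)
assert score > 6.63  # key at p < 0.01 (chi-square, 1 df)
```

The same two-by-two contingency logic underlies the chi-squared alternative mentioned in the abstract; log-likelihood is generally preferred for the low expected counts typical of corpus word frequencies.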

Keywords: critical discourse analysis, corpus linguistics, Hindi newspapers, power relations

Procedia PDF Downloads 224
990 The Evaluation of the Re-Construction Project Hamamönü, Ankara in Turkey as a Case from Socio-Cultural Perspective

Authors: Tuğçe Kök, Gözen Güner Aktaş, Nur Ayalp

Abstract:

In a globalized world, social and cultural sustainability are subjects that have gained significant importance in recent years. The concept of sustainability was first included in a document of the World Conservation Union (IUCN), the World Charter for Nature, adopted in 1982; merged with urban sustainability, a new phenomenon has since emerged. Sustainability is an essential fact, and this study discusses it from the socio-cultural field. Together with central government and local authorities, conservation activities have intensified around the protection of values at the area scale. Today, local authorities play an important role in urban historic site rehabilitation and the re-construction of traditional houses in Ankara, Turkey, and many conservation acts have occurred since the 1980s. A remarkable example of the conservation of traditional Turkish houses is the Hamamönü, Ankara Re-Construction Project, carried out in one of the historical districts that had suffered from deterioration and unplanned urban development. In this region, the pre-existing but unused historic fabric of the site has been revised, and based on the results of this case study the relationship between users and re-construction is discussed. Most of the houses were re-constructed in order to build a new tourist attraction area. This study discusses the socio-cultural relations between the new built environment and its visitors from the point of view of cultural sustainability, and questions the transmission of cultural stimuli. A case study was conducted to explore visitors' perception of the cultural aspects of the site. The relationship between the real cultural identities and those existing after the re-construction project, as transmitted to the visitors and users of those spaces, is discussed.
The aim of the study is to analyze the relation between the users and the cultural identities that the re-construction project has tried to protect. The purposes of this study are to evaluate the implementations of Altındağ Municipality in Hamamönü and to examine socio-cultural sustainability through user responses. Following the assessment of the implementation in terms of socio-cultural sustainability, some proposals for the future of Hamamönü are introduced.

Keywords: social sustainability, cultural sustainability, Hamamönü, Turkey, re-construction

Procedia PDF Downloads 479
989 Psychometric Properties of Several New Positive Psychology Measures

Authors: Lauren Benyo Linford, Jared Warren, Jeremy Bekker, Gus Salazar

Abstract:

In order to accurately identify areas needing improvement and track growth, the availability of valid and reliable measures of different facets of well-being is vital. Because no specific measures currently exist for many facets of well-being, the purpose of this study was to construct and validate measures of the following constructs: Purpose, Values, Mindfulness, Savoring, Gratitude, Optimism, Supportive Relationships, Interconnectedness, Compassion, Community, Contribution, Engaged Living, Personal Growth, Flow Experiences, Self-Compassion, Exercise, Meditation, and an overall measure of subjective well-being, the Survey on Flourishing (SURF). To assess their psychometric properties, each measure was examined for internal consistency, and items with poor item-test correlations were dropped. Additionally, the convergent validity of SURF was assessed: total score correlations between SURF and other commonly used measures of well-being, such as the Positive and Negative Affect Schedule (PANAS), the Satisfaction with Life Scale (SWLS), and the PERMA Profiler (a measure of Positive Emotion, Engagement, Relationships, Meaning, and Achievement), were examined. The Kessler Psychological Distress Scale (K6) was also included to determine the divergent validity of the SURF measure, and three-week test-retest reliability was assessed. Normative data from general population samples were collected for both the Self-Compassion and SURF measures. The purpose of this study is to introduce each of these measures, present the psychometric findings, and explore additional psychometric properties of the SURF measure in particular. This study will highlight how these measures can be used in future research exploring these positive psychology constructs.
Additionally, this study will discuss the utility of these measures in guiding individuals in their use of the online, self-directed, self-administered My Best Self 101 positive psychology resources developed by the researchers. The goal of My Best Self 101 is to disseminate research-based measures and tools to individuals seeking to increase their well-being.
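The internal consistency estimates referred to above are typically Cronbach's alpha. A minimal sketch on simulated item responses (the five-item scale, sample size and noise level are invented, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulate 200 respondents answering 5 items that all tap one trait,
# each with independent measurement noise (illustrative parameters).
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
items = trait + 0.5 * rng.normal(size=(200, 5))
alpha = cronbach_alpha(items)
assert 0.9 < alpha < 1.0  # strongly inter-correlated items → high alpha
```

Dropping an item with a poor item-test correlation, as described in the abstract, would typically raise alpha for the remaining items.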

Keywords: measurement, psychometrics, test validation, well-being

Procedia PDF Downloads 188
988 Immune Modulation and Cytomegalovirus Reactivation in Sepsis-Induced Immunosuppression

Authors: G. Lambe, D. Mansukhani, A. Shetty, S. Khodaiji, C. Rodrigues, F. Kapadia

Abstract:

Introduction: Sepsis is known to impair both innate and adaptive immunity. It involves an early uncontrolled inflammatory response followed by a protracted immunosuppression phase, which includes decreased expression of cell receptors, T cell anergy and exhaustion, and impaired cytokine production, and which may create a high risk of secondary infections due to a reduced response to antigens. Although human cytomegalovirus (CMV) is widely recognized as a serious viral pathogen in sepsis and in immunocompromised patients, the incidence of CMV reactivation in patients with sepsis lacking strong evidence of immunosuppression is not well defined. It is therefore important to determine the association between CMV reactivation and sepsis-induced immunosuppression. Aim: To determine the association over time between the incidence of CMV reactivation and immune modulation in sepsis-induced immunosuppression. Material and Methods: Ten CMV-seropositive adult patients with severe sepsis were included in this study. Blood samples were collected on Day 0 and then weekly up to Day 21. CMV load was quantified in plasma by real-time PCR. The expression of the immunosuppression markers HLA-DR, PD-1, and regulatory T cells was determined by flow cytometry using whole blood. Results: At Day 0, no CMV reactivation was observed in 6/10 patients; in these patients, the median time to reactivation was 14 days (range, 7-14 days). The remaining four patients had, at Day 0, a mean viral load of 1802 ± 2599 copies/ml, which increased with time. At Day 21, the mean viral load for all 10 patients was 60,949 ± 179,700 copies/ml, indicating that viremia increased with the length of hospital stay. HLA-DR expression on monocytes increased significantly from Day 0 to Day 7 (p = 0.001), after which no significant change was observed until Day 21 in all patients except three.
In these three patients, HLA-DR expression on monocytes decreased at elevated viral loads (>5000 copies/ml), indicating immune suppression. The other markers, PD-1 and regulatory T cells, did not show any significant changes. Conclusion: These preliminary findings suggest that CMV reactivation can occur in patients with severe sepsis; indeed, the viral load continued to increase with the length of hospital stay. Immune suppression, indicated by decreased expression of HLA-DR alone, was observed in three patients with elevated viral loads.

Keywords: CMV reactivation, immune suppression, sepsis immune modulation, CMV viral load

Procedia PDF Downloads 150
987 Automated Manual Handling Risk Assessments: Practitioner Experienced Determinants of Automated Risk Analysis and Reporting Being a Benefit or Distraction

Authors: S. Cowley, M. Lawrance, D. Bick, R. McCord

Abstract:

Technology that automates manual handling (musculoskeletal disorder, or MSD) risk assessments is increasingly available to ergonomists, engineers, and generalist health and safety practitioners alike. The risk assessment process is generally based on the use of wearable motion sensors that capture information about worker movements for real-time or post-hoc analysis. Traditionally, MSD risk assessment is undertaken with the assistance of a checklist, such as that in the Safe Work Australia code of practice, with an expert assessor observing the task and, ideally, engaging the worker in a discussion about the detail. Automation enables non-experts to complete assessments and does not always require the assessor to be present. This clearly has cost and time benefits for the practitioner, but is it an improvement on assessment by a human? Human risk assessments draw on the knowledge and expertise of the assessor but, like all risk assessments, are highly subjective. The complexity of the checklists and models used in the process can be off-putting and can lead to the assessment becoming the focus and the end rather than a means to an end, so that the focus on risk control is lost. Automated risk assessment handles the complexity of the assessment for the assessor and delivers a simple risk score that enables decision making about risk control. Being machine-based, it is objective and will deliver the same result each time it assesses an identical task. However, the WHS professional needs to know whether this emergent technology asks the right questions and delivers the right answers, and whether it improves the risk assessment process and its results or simply distances the professional from the task and the worker. They need clarity as to whether the automation of manual task risk analysis and reporting leads to risk control or away from a focus on the worker.
Critically, they need evidence as to whether automation in this area of hazard management leads to better risk control or just to a bigger collection of assessments. This study of the practitioner-experienced determinants of automated manual task risk analysis and reporting being a benefit or a distraction will address an understanding of emergent risk assessment technology, its use, and the things to consider when making decisions about adopting and applying these technologies.

Keywords: automated, manual-handling, risk-assessment, machine-based

Procedia PDF Downloads 119
986 Development of an Integrated System for the Treatment of Rural Domestic Wastewater: Emphasis on Nutrient Removal

Authors: Prangya Ranjan Rout, Puspendu Bhunia, Rajesh Roshan Dash

Abstract:

In a developing country like India, providing reliable and affordable wastewater treatment facilities in rural areas is a huge challenge. With the aim of enhancing nutrient removal from rural domestic wastewater while reducing the cost of the treatment process, a novel integrated treatment system consisting of a multistage bio-filter with drop aeration and a post-positioned attached-growth carbonaceous denitrifying bioreactor was designed and developed in this work. The bio-filter was packed with ‘dolochar’, a sponge iron industry waste, as an adsorbent mainly for phosphate removal through a physicochemical approach. The denitrifying bioreactor was packed with waste organic solid substances (WOSS) as carbon sources and substrates for biomass attachment, mainly to remove nitrate through biological denitrification. The performance of the modular system, treating real domestic wastewater, was monitored for a period of about 60 days, and the average removal efficiencies during this period were as follows: phosphate, 97.37%; nitrate, 85.91%; ammonia, 87.85%, with mean final effluent concentrations of 0.73, 9.86, and 9.46 mg/L, respectively. The multistage bio-filter played an important role in ammonium oxidation and phosphate adsorption. The multilevel drop aeration, with increasing oxygenation, and the special media used, consisting of certain oxides, were likely beneficial for nitrification and phosphorus removal, respectively, whereas nitrate was effectively reduced by biological denitrification in the carbonaceous bioreactor. This treatment system would allow multipurpose reuse of the final effluent. Moreover, the saturated dolochar can be used as a nutrient supplier in agricultural practices, and the partially degraded carbonaceous substances can be subjected to composting and subsequently used as an organic fertilizer. Thus, the system displays immense potential for treating domestic wastewater, significantly decreasing the concentrations of nutrients and, more importantly, facilitating the conversion of waste materials into usable ones.
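As a quick consistency check on the figures above, the influent concentrations implied by the reported effluent values can be back-calculated, assuming the removal efficiency is defined as (C_in − C_out)/C_in — an assumption, since the abstract does not state the definition used:

```python
def influent_mg_per_l(effluent_mg_per_l, removal_efficiency):
    """Back-calculate the influent concentration from the effluent value,
    assuming efficiency = (C_in - C_out) / C_in (fractional)."""
    return effluent_mg_per_l / (1.0 - removal_efficiency)

# reported mean effluent concentrations (mg/L) and removal efficiencies
reported = {
    "phosphate": (0.73, 0.9737),
    "nitrate": (9.86, 0.8591),
    "ammonia": (9.46, 0.8785),
}
influent = {k: influent_mg_per_l(c, e) for k, (c, e) in reported.items()}
# implied influent: phosphate ≈ 27.8, nitrate ≈ 70.0, ammonia ≈ 77.9 mg/L
```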

Keywords: nutrient removal, denitrifying bioreactor, multi-stage bio-filter, dolochar, waste organic solid substances

Procedia PDF Downloads 381
985 Introduction of an Approach of Complex Virtual Devices to Achieve Device Interoperability in Smart Building Systems

Authors: Thomas Meier

Abstract:

One of the major challenges for sustainable smart building systems is to support device interoperability, i.e., connecting sensor or actuator devices from different vendors and presenting their functionality to external applications. Furthermore, smart building systems are supposed to connect with devices that are not available yet, i.e., devices that become available on the market sometime later. It is of vital importance that a sustainable smart building platform provides an appropriate external interface that can be leveraged by external applications and smart services. An external platform interface must be stable and independent of specific devices and should support flexible and scalable usage scenarios. A typical approach applied in smart home systems is based on a generic device interface used within the smart building platform. Device functions, even of rather complex devices, are mapped to that generic base-type interface by means of specific device drivers. Our new approach, presented in this work, extends that approach by using the smart building system’s rule engine to create complex virtual devices that can represent the most diverse properties of real devices. We examined and evaluated both approaches by means of a practical case study using a smart building system that we have developed. We show that the solution we present allows the highest degree of flexibility without affecting external application interface stability and scalability. In contrast to other systems, our approach supports complex virtual device configuration at the application layer (e.g., by administration users) instead of device configuration at the platform layer (e.g., by platform operators). Based on our work, we show that our approach supports almost arbitrarily flexible use case scenarios without affecting the stability of the external application interface. However, the cost of this approach is additional configuration overhead and additional resource consumption at the IoT platform level that must be considered by platform operators. We conclude that the concept of complex virtual devices presented in this work can significantly improve the usability and device interoperability of sustainable intelligent building systems.
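A minimal sketch of the complex-virtual-device idea described above: a rule maps the states of several concrete devices onto one virtual device exposed to applications. All device names, property names, and the rule itself are hypothetical illustrations, not the paper's actual platform API:

```python
# A rule engine composes vendor-specific device states into one virtual
# device whose interface stays stable even if the underlying devices change.

class VirtualDevice:
    def __init__(self, name, rule):
        self.name = name
        self.rule = rule      # function: raw device states -> virtual state
        self.state = {}

    def refresh(self, device_states):
        """Re-evaluate the composing rule against current device states."""
        self.state = self.rule(device_states)
        return self.state

# hypothetical rule composing two vendor sensors into a "room climate" device
def room_climate_rule(states):
    return {
        "comfortable": 20.0 <= states["acme_thermo"]["temp_c"] <= 24.0
                       and states["vendorx_hygro"]["rh_pct"] <= 60,
    }

climate = VirtualDevice("room_climate", room_climate_rule)
result = climate.refresh({
    "acme_thermo": {"temp_c": 22.5},
    "vendorx_hygro": {"rh_pct": 45},
})
# result == {"comfortable": True}
```

Because the rule lives at the application layer, an administrator can redefine `room_climate_rule` without touching the platform's device drivers — the configuration flexibility the abstract argues for.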

Keywords: Internet of Things, smart building, device interoperability, device integration, smart home

Procedia PDF Downloads 271
984 Four-Electron Auger Process for Hollow Ions

Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola

Abstract:

A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrödinger equation in imaginary time using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory requirements needed to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagation of the time-dependent close-coupled equations in real time using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can be extended to study electron-impact triple ionization of atoms or ions. This work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
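The "relaxation of the Schrödinger equation in imaginary time" mentioned above can be illustrated on a toy one-electron problem: propagating an arbitrary initial state in imaginary time on a uniform grid exponentially damps excited components, so the state converges to the ground state. The sketch below does this for a 1D harmonic oscillator in atomic units — a deliberately simplified stand-in for the paper's four-dimensional coupled-channel grids:

```python
import numpy as np

# 1D uniform grid (atomic units), harmonic-oscillator potential as a toy case
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2

def apply_H(psi):
    """H·psi with a 3-point finite-difference Laplacian (psi ~ 0 at edges)."""
    lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / dx**2
    return -0.5 * lap + V * psi

# imaginary-time relaxation: psi <- psi - dt*H·psi, renormalizing each step
dt = 1.0e-3
psi = np.exp(-x**2)                      # arbitrary initial guess
psi /= np.sqrt(np.sum(psi**2) * dx)
for _ in range(20000):
    psi -= dt * apply_H(psi)
    psi /= np.sqrt(np.sum(psi**2) * dx)

E0 = np.sum(psi * apply_H(psi)) * dx     # energy expectation, ≈ 0.5 hartree
```

In the actual method, excited initial states additionally require Schmidt orthogonalization against the lower (interior-subshell) states during the relaxation, which this single-state sketch omits.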

Keywords: hollow atoms, autoionization, Auger rates, time-dependent close-coupling method

Procedia PDF Downloads 153
983 Optimizing Fermented Paper Production Using Spirogyra sp. Interpolating with Banana Pulp

Authors: Hadiatullah, T. S. D. Desak Ketut, A. A. Ayu, A. N. Isna, D. P. Ririn

Abstract:

Spirogyra sp. is a genus of microalgae with a high carbohydrate content, which makes it a good medium for bacterial fermentation to produce cellulose. This study aims to determine the effect of banana pulp in the fermented paper production process using Spirogyra sp. and to characterize the paper product. The method includes the production of bacterial cellulose, an assay of the effect of fermented paper interpolated with banana pulp using Spirogyra sp., and assays of paper characteristics, including paper grammage, water absorption, thickness, tensile strength, tear resistance, density, and organoleptic properties. Experiments were carried out in a completely randomized design with varying concentrations of treated waste in the fermented paper production interpolated with banana pulp using Spirogyra sp. Data for each parameter were analyzed by ANOVA, followed by a significant difference test with an error rate of 5%, using SPSS. The nata production results indicate that the different carbon sources (glucose and sugar) did not show any significant differences in the cellulose parameters assayed; significantly different results were found only for the control treatment. Although not significantly different from the other carbon source, sugar showed a higher potential to produce cellulose. The paper characterization showed that the grammage of the control treatment, without interpolation of a carbon source and banana pulp, was higher than with banana pulp interpolation: the control grammage of 260 gsm is suited to cardboard, while the grammage of paper produced with banana pulp interpolation, about 120-200 gsm, is suited to magazine paper and art paper. Based on the density, weight, water absorption, and organoleptic assays, the highest results were obtained for the banana pulp interpolation with sugar as the carbon source: 14.28 g/m², 0.02 g, and 0.041 g/cm²·min, respectively. We conclude that paper made from nata material interpolated with sugar and banana pulp is a potential formulation for producing high-quality paper.

Keywords: cellulose, fermentation, grammage, paper, Spirogyra sp.

Procedia PDF Downloads 333
982 Proposal for a Framework for Teaching Entrepreneurship and Innovation Using the Methods and Current Methodologies

Authors: Marcelo T. Okano, Jaqueline C. Bueno, Oduvaldo Vendrametto, Osmildo S. Santos, Marcelo E. Fernandes, Heide Landi

Abstract:

Developing countries are increasingly finding that entrepreneurship and innovation are the ways to speed up their development and initiate or encourage technological progress. Educational institutions such as universities, colleges, and colleges of technology have two main roles in this process: to guide and train entrepreneurs, and to provide technological knowledge and encourage innovation, thus completing the triple helix model of innovation involving universities, government, and industry. But the teaching of entrepreneurship and innovation cannot rely only on the traditional model of blackboard, chalk, and classroom. New methods and methodologies such as Canvas, elevator pitching, and design thinking require students to get involved and to experience business simulations, expressing their ideas and discussing them. The objective of this research project is to identify the main methods and methodologies used for teaching entrepreneurship and innovation, to propose a framework, to test it, and to present a case study. To achieve this objective, we first surveyed the literature on entrepreneurship and innovation, business modeling, business planning, the Canvas business model, design thinking, and related subjects. Secondly, we developed the framework for teaching entrepreneurship and innovation based on this bibliographic research. Thirdly, we tested the framework in a higher-education IT management class for a semester. Finally, we detail the results in a case study of an IT management course. Among the important results, students improved their level of understanding of business administration, preparing them to manage their own ventures. Methods such as Canvas and the business plan helped students to plan and shape their ideas and businesses. Pitching to entrepreneurs and investors from the market brought a dose of reality to the students, and prototyping allowed the student groups to develop their projects. The proposed framework allows entrepreneurship and innovation education to leave the classroom, bringing the reality of business to the university through roundtables with investors and real entrepreneurs.

Keywords: entrepreneurship, innovation, Canvas, traditional model

Procedia PDF Downloads 576
981 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management

Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, adding to a perennial global concern. Forest fires often lead to devastating consequences, ranging from the loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and the ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and for mitigating the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire management. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across the more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.

Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities

Procedia PDF Downloads 72
980 Addressing Stigma on the Child and Adolescent Psychiatry Consultation Service Through Use of Video

Authors: Rachel Talbot, Nasuh Malas

Abstract:

Stigma in child and adolescent psychiatry continues to be a significant barrier for youth to receive much-needed psychiatric care. Parents’ misperceptions regarding mental health may interfere with their child’s care and negatively influence their child’s view of mental health. For some children, their first experience with psychiatry may occur during a medical hospitalization, when they are seen by the Psychiatry Consultation-Liaison (C/L) Service. Despite this unique role, there is limited data on how to address mental health stigma with patients and families within the context of child and adolescent C/L psychiatry. This study explores the use of a brief introductory video with messages from the psychiatry C/L team, families who have accessed mental health consultation in the hospital, and clips of family and C/L team interactions to address parental stigma around psychiatry. Common stigmatized concerns shared by parents include concerns about confidentiality, later ramifications of mental healthcare, outsider status, and parental self-blame. There are also stigmatized concerns about psychiatric medication use, including overmedication, sedation, long-term effects, medicating ‘real problems’, and personality blunting. Each of these is addressed in the video parents will see, with the intent of reducing negative parental perceptions relating to mental healthcare. For this study, families are given a survey highlighting these concerns prior to and after watching the video. Pre- and post-video responses are compared, with the hypothesis that watching the video will effectively reduce parental stigma about psychiatric care. Data collection is currently underway and will be completed by the end of November 2017, with data analysis completed by January 2018. This study will also give vital information about demographic differences in perceptions of stigma so that future interventions can be targeted towards those with higher perceived stigma. This study posits that the use of an introductory video is an effective strategy to combat stigma and to help educate and empower families. In this way, we will reduce further barriers for patients and families seeking out the mental health resources and supports that are often desperately needed by these youths.

Keywords: child and adolescent psychiatry, consult-liaison psychiatry, media, stigma

Procedia PDF Downloads 192
979 Multiphase Flow Regime Detection Algorithm for Gas-Liquid Interface Using Ultrasonic Pulse-Echo Technique

Authors: Serkan Solmaz, Jean-Baptiste Gouriet, Nicolas Van de Wyer, Christophe Schram

Abstract:

The efficiency of the cooling process for cryogenic propellant boiling in engine cooling channels in space applications is strongly affected by the phase change that occurs during boiling. The effectiveness of the cooling process strongly depends on the type of boiling regime, such as nucleate or film boiling. Geometric constraints, such as a non-transparent cooling channel, preclude the use of visualization methods. The ultrasonic (US) technique, as a non-destructive testing (NDT) method, has therefore been applied in almost every engineering field for different purposes. Discontinuities emerge at boundaries between media, such as the interfaces between different phases. The sound wave emitted by the US transducer is both transmitted and reflected at a gas-liquid interface, which makes it possible to detect the different phases. Due to thermal and structural concerns, it is impractical to sustain direct contact between the US transducer and the working fluid. Hence, the transducer should be located outside of the cooling channel, which results in additional interfaces and creates ambiguities about the applicability of the present method. In this work, exploratory research was conducted to determine the detection ability and applicability of the US technique for a cryogenic boiling process in a cooling cycle where the US transducer is placed outside the channel. Boiling of cryogenics is a complex phenomenon whose thermal properties pose several hindrances for an experimental protocol; thus, substitute materials were purposefully selected on the basis of such parameters to simplify the experiments. Aside from that, the nucleate and film boiling regimes emerging during the boiling process were simply simulated using non-deformable stainless steel balls, air-bubble injection apparatuses, and air clearances instead of conducting a real-time boiling process. A versatile detection algorithm was then developed on the basis of these exploratory studies. With the developed algorithm, the phases can be distinguished with 99% accuracy as no-phase, air-bubble, and air-film presences. The results demonstrate the detection ability and applicability of the US technique for this exploratory purpose.
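The three-class distinction can be illustrated with a toy amplitude-threshold rule: a gas film at the wall reflects nearly all of the incident acoustic energy, bubbles give intermediate echoes, and pure liquid transmits most of it. The thresholds below are illustrative placeholders, not the calibrated values of the study's algorithm:

```python
def classify_regime(echo_amplitude, bubble_threshold=0.3, film_threshold=0.8):
    """Classify the flow regime from the normalized amplitude of the echo
    reflected at the channel-wall/fluid interface.

    An air film reflects almost all energy (amplitude near 1), bubbles give
    intermediate reflections, and pure liquid transmits most of the wave.
    Thresholds are hypothetical, not the study's calibrated values."""
    if echo_amplitude >= film_threshold:
        return "air-film"
    if echo_amplitude >= bubble_threshold:
        return "air-bubble"
    return "no-phase"

# example echoes from three simulated conditions
labels = [classify_regime(a) for a in (0.95, 0.5, 0.05)]
# labels == ["air-film", "air-bubble", "no-phase"]
```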

Keywords: ultrasound, ultrasonic, multiphase flow, boiling, cryogenics, detection algorithm

Procedia PDF Downloads 170
978 The Development of a Digitally Connected Factory Architecture to Enable Product Lifecycle Management for the Assembly of Aerostructures

Authors: Nicky Wilson, Graeme Ralph

Abstract:

Legacy aerostructure assembly is defined by large components, low build rates, and manual assembly methods. With an increasing demand for commercial aircraft and emerging markets such as the eVTOL (electric vertical take-off and landing) market, current methods of manufacturing are not capable of efficiently hitting these higher-rate demands. This project will look at how legacy manufacturing processes can be rate-enabled by taking a holistic view of data usage, focusing on how data can be collected to enable fully integrated digital factories and supply chains. The study will focus on how data is flowed both up and down the supply chain to create a digital thread specific to each part and assembly while enabling machine learning through real-time, closed-loop feedback systems. The study will also develop a bespoke architecture to enable connectivity both within the factory and the wider PLM (product lifecycle management) system, moving away from the traditional point-to-point systems used to connect IO devices to a hub-and-spoke architecture that will exploit report-by-exception principles. This paper outlines the key issues facing legacy aircraft manufacturers, focusing on what future manufacturing will look like from adopting Industry 4.0 principles. The research also defines the data architecture of a PLM system to enable the transfer and control of a digital thread within the supply chain and proposes a standardised communications protocol to enable a scalable solution to connect IO devices within a production environment. This research comes at a critical time for aerospace manufacturers, who are seeing a shift towards the integration of digital technologies within legacy production environments, while also seeing build rates continue to grow. It is vital that manufacturing processes become more efficient in order to meet these demands while also securing future work for many manufacturers.
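The report-by-exception principle mentioned above can be sketched as a deadband filter: a device value is forwarded to the hub only when it has moved by more than a configured deadband since the last report, cutting traffic compared with periodic polling. This is a generic illustration of the principle, not the project's actual protocol:

```python
class ReportByException:
    """Forward a sensor value only when it changes by more than a deadband.

    Generic sketch of report-by-exception publishing; the deadband value
    and usage below are illustrative, not the paper's specification."""

    def __init__(self, deadband):
        self.deadband = deadband
        self.last = None          # last value actually reported

    def update(self, value):
        """Return the value if it should be published, else None."""
        if self.last is None or abs(value - self.last) > self.deadband:
            self.last = value
            return value          # exception: report it
        return None               # within deadband: suppress

# a temperature channel that only reports moves larger than 0.5 units
channel = ReportByException(deadband=0.5)
published = [channel.update(v) for v in (20.0, 20.2, 20.4, 21.0)]
# published == [20.0, None, None, 21.0]
```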

Keywords: Industry 4.0, digital transformation, IoT, PLM, automated assembly, connected factories

Procedia PDF Downloads 78
977 Experimental Investigation of the Out-of-Plane Dynamic Behavior of Adhesively Bonded Composite Joints at High Strain Rates

Authors: Sonia Sassi, Mostapha Tarfaoui, Hamza Ben Yahia

Abstract:

In this investigation, an experimental technique is presented in which the dynamic response, damage kinetics, and heat dissipation of adhesively bonded joint materials are measured simultaneously at high strain rates. The material used in this study is widely used in the design of structures for military applications. It was composed of a 45° bi-axial fiber-glass mat of 0.286 mm thickness in a polyester resin matrix. For the adhesive bonding, a NORPOL polyvinylester of 1 mm thickness was used to assemble the composite substrates. The experimental setup consists of a compression split Hopkinson pressure bar (SHPB), a high-speed infrared camera, and a high-speed Fastcam rapid camera. For the dynamic compression tests, 13 mm x 13 mm x 9 mm samples were considered for out-of-plane tests at strain rates from 372 to 1030 s⁻¹. The specimen surface is controlled and monitored in situ and in real time using the high-speed camera, which captures the progressive damage in the specimens, and the infrared camera, which provides thermal images in time sequence. Preliminary compressive stress-strain versus strain rate data show that the dynamic material strength increases with increasing strain rate. Damage investigations have revealed that failure mainly occurred at the adhesive/adherent interface because of the brittle nature of the polymeric adhesive. The results show the dependency of the dynamic parameters on the strain rate. A significant temperature rise was observed in the dynamic compression tests. The experimental results show that the temperature change depends on the strain rate and the damage mode, and its maximum exceeds 100 °C. The dependence of these results on strain rate indicates that there exists a strong correlation between damage rate sensitivity and heat dissipation, which might be useful when developing damage models under dynamic loading taking into account the energy balance of adhesively bonded joints.
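For context, the standard one-wave SHPB (Kolsky) reduction converts the reflected and transmitted bar strain signals into specimen stress, strain rate, and strain. The sketch below uses the 13 mm × 13 mm × 9 mm specimen geometry from the abstract, but the bar properties and strain signals are placeholders, not the study's actual instrumentation constants:

```python
import numpy as np

def shpb_one_wave(eps_refl, eps_trans, dt, c0, E_bar, A_bar, A_spec, L_spec):
    """One-wave SHPB analysis (Kolsky equations).
    eps_refl, eps_trans: reflected/transmitted strain histories (arrays),
    dt: sampling step (s), c0: bar wave speed (m/s), E_bar: bar modulus (Pa),
    A_bar, A_spec: bar and specimen cross sections (m^2),
    L_spec: specimen gauge length (m)."""
    stress = E_bar * (A_bar / A_spec) * eps_trans    # specimen stress
    strain_rate = -2.0 * c0 / L_spec * eps_refl      # specimen strain rate
    strain = np.cumsum(strain_rate) * dt             # integrate in time
    return stress, strain_rate, strain

# placeholder signals: constant reflected/transmitted pulses
n, dt = 100, 1e-6
eps_r = np.full(n, -1.0e-3)
eps_t = np.full(n, 5.0e-4)
sigma, eps_dot, eps = shpb_one_wave(
    eps_r, eps_t, dt, c0=5000.0, E_bar=200e9,
    A_bar=4.9e-4,          # hypothetical 25 mm bar
    A_spec=13e-3 * 13e-3,  # 13 mm x 13 mm specimen face
    L_spec=9e-3)           # 9 mm specimen length
# eps_dot ≈ 1111 s^-1, the same order as the 372-1030 s^-1 range reported
```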

Keywords: adhesive bonded joints, Hopkinson bars, out-of-plane tests, dynamic compression properties, damage mechanisms, heat dissipation

Procedia PDF Downloads 212
976 Finite Element Modeling of Global Ti-6Al-4V Mechanical Behavior in Relationship with Microstructural Parameters

Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vedal, Farhad Rezai-Aria, Christine Boher

Abstract:

The global mechanical behavior of materials is strongly linked to their microstructure, especially their crystallographic texture and grain morphology. These material aspects determine the character of the mechanical fields (heterogeneous or homogeneous); thus, they give the global behavior a degree of anisotropy according to the initial microstructure. For these reasons, the prediction of the global behavior of materials in relationship with the microstructure must be performed with a multi-scale approach, and multi-scale modeling in the context of crystal plasticity is widely used. In the present contribution, a phenomenological elasto-viscoplastic model developed in the crystal plasticity context and the finite element method are used to investigate the effects of crystallographic texture and grain size on the global behavior of a polycrystalline equiaxed Ti-6Al-4V alloy. The constitutive equations of this model are written at the local scale for each slip system within each grain, while the strain and stress mechanical fields are investigated at the global scale via a finite element scale transition. The beta phase of the modeled Ti-6Al-4V alloy is negligible; its fraction is less than 10%. Three families of slip systems of the alpha phase are considered: the basal and prismatic families, with an ⟨a⟩ Burgers vector, and the pyramidal family, with a ⟨c+a⟩ Burgers vector. The twinning mechanism of plastic strain is not observed in Ti-6Al-4V and is therefore not considered in the present modeling. Nine representative elementary volumes (REVs) are generated with Voronoi tessellations. For each individual equiaxed grain, its own crystallographic orientation with respect to the loading is taken into account. The meshing strategy is optimized in a way that eliminates meshing effects and at the same time allows calculating the individual grain size. The stress and strain fields are determined at each Gauss point of the mesh elements. A post-treatment is used to calculate the local behavior (in each grain), and then, by appropriate homogenization, the macroscopic behavior is calculated. The developed model is validated by comparing the numerical simulation results with experimental data reported in the literature. It is observed that the present model is able to predict the global mechanical behavior of the Ti-6Al-4V alloy and to investigate the effects of the microstructural parameters. According to the simulations performed on the generated volumes (REVs), the macroscopic mechanical behavior of Ti-6Al-4V is strongly linked to the active slip system family (prismatic, basal, or pyramidal). The crystallographic texture determines which family of slip systems can be activated; it therefore gives the plastic strain a heterogeneous character and thus an anisotropic macroscopic mechanical behavior. The average grain size also influences the Ti-6Al-4V mechanical properties, especially the yield stress: as the average grain size decreases, the yield strength increases according to the Hall-Petch relationship. The grain size distribution gives considerable heterogeneity to the strain fields. With increasing grain size, scattering in the localization of plastic strain is observed; thus, in certain areas the stress concentrations are stronger than in other regions.
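The Hall-Petch trend invoked above can be written as σ_y = σ_0 + k_y/√d, where d is the average grain size. A small numerical sketch follows; the friction stress σ_0 and coefficient k_y below are placeholder values, not fitted Ti-6Al-4V constants:

```python
import math

def hall_petch_yield(d_um, sigma0_mpa=700.0, ky=12.0):
    """Yield stress (MPa) from average grain size d (micrometres):
    sigma_y = sigma_0 + k_y / sqrt(d).
    sigma0_mpa and ky are illustrative placeholders, not fitted constants."""
    return sigma0_mpa + ky / math.sqrt(d_um)

# refining the grain size raises the predicted yield stress,
# consistent with the trend described in the abstract
fine, coarse = hall_petch_yield(2.0), hall_petch_yield(8.0)
# fine > coarse
```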

Keywords: microstructural parameters, multi-scale modeling, crystal plasticity, Ti-6Al-4V alloy

Procedia PDF Downloads 126
975 Absolute Quantification of the Bexsero Vaccine Component Factor H Binding Protein (fHbp) by Selected Reaction Monitoring: The Contribution of Mass Spectrometry in Vaccinology

Authors: Massimiliano Biagini, Marco Spinsanti, Gabriella De Angelis, Sara Tomei, Ilaria Ferlenghi, Maria Scarselli, Alessia Biolchi, Alessandro Muzzi, Brunella Brunelli, Silvana Savino, Marzia M. Giuliani, Isabel Delany, Paolo Costantino, Rino Rappuoli, Vega Masignani, Nathalie Norais

Abstract:

The gram-negative bacterium Neisseria meningitidis serogroup B (MenB) is an exclusively human pathogen representing the major cause of meningitis and severe sepsis in infants and children, but also in young adults. This pathogen is carried by about 30% of the healthy population, who act as a reservoir, spreading it through saliva and respiratory fluids during coughing, sneezing, and kissing. Among the surface-exposed protein components of this diplococcus, factor H binding protein (fHbp) is a lipoprotein proven to be a protective antigen and used as a component of the recently licensed Bexsero vaccine. fHbp is a highly variable meningococcal protein: to reflect its remarkable sequence variability, it has been classified into three variants (or two subfamilies), with poor cross-protection among the different variants. Furthermore, the level of fHbp expression varies significantly among strains, and this has also been considered an important factor for predicting MenB strain susceptibility to anti-fHbp antisera. Different methods have been used to assess fHbp expression in meningococcal strains; however, all these methods use anti-fHbp antibodies, and for this reason the results are affected by the different affinities that antibodies can have for the different antigenic variants. To overcome the limitations of an antibody-based quantification, we developed a quantitative mass spectrometry (MS) approach. Selected reaction monitoring (SRM) has recently emerged as a powerful MS tool for detecting and quantifying proteins in complex mixtures. SRM is based on the targeted detection of proteotypic peptides (PTPs), which are unique signatures of a protein that can be easily detected and quantified by MS. This approach, proven to be highly sensitive, quantitatively accurate, and highly reproducible, was used to quantify the absolute amount of the fHbp antigen in total extracts derived from 105 clinical isolates, evenly distributed among the three main variant groups and selected to be representative of the fHbp subvariants circulating around the world. We extended the study to the genetic level, investigating the correlation between the differential levels of expression and the polymorphisms present within the genes and their promoter sequences. The implications of fHbp expression for the susceptibility of strains to killing by anti-fHbp antisera are also presented. To date, this is the first comprehensive fHbp expression profiling in a large panel of Neisseria meningitidis clinical isolates driven by an antibody-independent, MS-based methodology, opening the door to new applications in vaccine coverage prediction and reinforcing the molecular understanding of licensed vaccines.
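Absolute quantification by SRM is commonly performed by spiking a known amount of a heavy-isotope-labelled version of each proteotypic peptide and taking the light/heavy peak-area ratio. The sketch below illustrates that generic stable-isotope-dilution scheme; it is an assumption here, since the abstract does not detail the standardization strategy actually used:

```python
def endogenous_amount_fmol(area_light, area_heavy, spiked_heavy_fmol):
    """Stable-isotope-dilution SRM: the endogenous (light) peptide amount
    equals the light/heavy transition peak-area ratio multiplied by the
    known spiked amount of the heavy-labelled standard.
    Generic sketch, not the exact protocol of the study."""
    return (area_light / area_heavy) * spiked_heavy_fmol

# example: a light peak twice as intense as a 50 fmol heavy spike
amount = endogenous_amount_fmol(2.0e6, 1.0e6, 50.0)
# amount == 100.0 fmol
```

Because the heavy standard co-elutes and fragments identically to the endogenous peptide, the ratio cancels instrument response factors, which is what makes the quantification antibody-independent.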

Keywords: quantitative mass spectrometry, Neisseria meningitidis, vaccines, bexsero, molecular epidemiology

Procedia PDF Downloads 312
974 Two-Dimensional Hardy Type Inequalities on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities are at the core of mathematical analysis and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities compiled by Hardy, Littlewood, and Pólya in 1934 were the first significant systematic treatment of the subject; that work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators: in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation; they were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving Copson and Hardy inequalities have appeared on time scales, yielding new special versions of them. A time scale is defined as an arbitrary nonempty closed subset of the real numbers. Inequalities in the time-scale setting have received a lot of attention and form a major field in both pure and applied mathematics. There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale inequalities of Copson type driven by the Steklov operator. They will be applied in the solution of the Cauchy problem for the wave equation. The proofs are carried out by introducing restrictions on the operator in several cases, and the obtained inequalities are established using concepts of the time-scale calculus such as Fubini's theorem and Hölder's inequality.
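For reference, the classical integral forms of the two inequalities named above, for p > 1 and nonnegative measurable f, read:

```latex
% Hardy's inequality
\int_0^{\infty}\left(\frac{1}{x}\int_0^{x} f(t)\,dt\right)^{p}dx
  \;\le\; \left(\frac{p}{p-1}\right)^{p}\int_0^{\infty} f(x)^{p}\,dx,
% Copson's inequality (dual form)
\int_0^{\infty}\left(\int_x^{\infty}\frac{f(t)}{t}\,dt\right)^{p}dx
  \;\le\; p^{p}\int_0^{\infty} f(x)^{p}\,dx .
```

Time-scale versions replace these integrals with delta (or nabla) integrals over an arbitrary nonempty closed subset of the reals and recover the classical forms when the time scale is the whole real line.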

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 76
973 Integrating Technology into Foreign Language Teaching: A Closer Look at Arabic Language Instruction at the Australian National University

Authors: Kinda Alsamara

Abstract:

Foreign language education is a complex endeavor that often presents educators with a range of challenges and difficulties. This study sheds light on the specific challenges encountered in the context of teaching Arabic as a foreign language at the Australian National University (ANU). Drawing from real-world experiences and insights, we explore the multifaceted nature of these challenges and discuss strategies that educators have employed to address them. The challenges in teaching the Arabic language encompass various dimensions, including linguistic intricacies, cultural nuances, and diverse learner backgrounds. The complex Arabic script, grammatical structures, and pronunciation patterns pose unique obstacles for learners. Moreover, the cultural context embedded within the language demands a nuanced understanding of cultural norms and practices. The diverse backgrounds of learners further contribute to the challenge of tailoring instruction to meet individual needs and proficiency levels. This study also underscores the importance of technology in tackling these challenges. Technological tools and platforms offer innovative solutions to enhance language acquisition and engagement. Online resources, interactive applications, and multimedia content can provide learners with immersive experiences, aiding in overcoming barriers posed by traditional teaching methods. Furthermore, this study addresses the role of instructors in mitigating challenges. Educators often find themselves adapting teaching approaches to accommodate different learning styles, abilities, and motivations. Establishing a supportive learning environment and fostering a sense of community can contribute significantly to overcoming challenges related to learner diversity. In conclusion, this study provides a comprehensive overview of the challenges faced in teaching Arabic as a foreign language at ANU.
By recognizing these challenges and embracing technological and pedagogical advancements, educators can create more effective and engaging learning experiences for students pursuing Arabic language proficiency.

Keywords: Arabic, Arabic online, blended learning, teaching and learning, Arabic language, educational aids, technology

Procedia PDF Downloads 63
972 Environmental Conditions Simulation Device for Evaluating Fungal Growth on Wooden Surfaces

Authors: Riccardo Cacciotti, Jiri Frankl, Benjamin Wolf, Michael Machacek

Abstract:

Moisture fluctuations govern the occurrence of fungi-related problems in buildings, which may impose significant health risks for users and even lead to structural failures. Several numerical engineering models attempt to capture the complexity of mold growth on building materials. Real-life observations show that in cases with suppressed daily variations of boundary conditions, e.g. in crawlspaces, mold growth model predictions correspond well with the observed mold growth. On the other hand, in cases with substantial diurnal variations of boundary conditions, e.g. in the ventilated cavity of a cold flat roof, mold growth predicted by the models is significantly overestimated. This study, funded by the Grant Agency of the Czech Republic (GAČR 20-12941S), aims at gaining a better understanding of mold growth behavior on solid wood under varying boundary conditions. In particular, the experimental investigation focuses on the response of mold to changing conditions in the boundary layer and its influence on heat and moisture transfer across the surface. The main results include the design and construction, at the facilities of ITAM (Prague, Czech Republic), of an innovative device allowing for the simulation of changing environmental conditions in buildings. It consists of a closed circuit of square cross section, with overall dimensions of roughly 200 × 180 cm and a cross section of roughly 30 × 30 cm. The circuit is thermally insulated and equipped with an electric fan to control air flow inside the tunnel and a heat and humidity exchange unit to control the internal RH and variations in temperature. Several measuring points, including an anemometer, a temperature and humidity sensor, and a load cell in the test section for recording mass changes, are provided to monitor the variations of parameters during the experiments. The research is ongoing, and the final results of the experimental investigation are expected at the end of 2022.

Keywords: moisture, mold growth, testing, wood

Procedia PDF Downloads 133
971 Bayesian Networks Scoping the Climate Change Impact on Winter Wheat Freezing Injury Disasters in Hebei Province, China

Authors: Xiping Wang, Shuran Yao, Liqin Dai

Abstract:

Many studies report that winters are getting warmer and that minimum air temperatures are rising markedly, both important pieces of evidence of climate warming. Another important consequence of recent climate change is exacerbated air temperature fluctuation, which tends to bring more severe weather variation and has induced more disasters for crop growth in certain regions. Hebei Province is an important winter wheat growing province in northern China that has recently endured more winter freezing injury, affecting local winter wheat crop management. A Bayesian network framework for winter wheat freezing injury assessment was established with the objectives of estimating, assessing and predicting winter wheat freezing disasters in Hebei Province. In this framework, freezing disasters are classified into three severity degrees (SI) across the three types of freezing: freezing caused by severe cold at any time in the winter, freezing caused by a long extremely cold spell in the winter, and freeze-after-thaw early in the season after winter. The factors influencing winter wheat freezing SI include the time of freezing occurrence, the growth status of seedlings, soil moisture, winter wheat variety, the longitude of the target region and, most variable of all, the climate factors. The climate factors included in this framework are the daily mean and range of air temperature, the extreme minimum temperature and number of days during a severe cold weather process, the number of days with temperature below the critical values, and the accumulated negative temperature in a potential freezing event. The Bayesian network model was evaluated using actual weather data and crop records at selected sites in Hebei Province. Given the multi-stage influences of the various factors, the forecast and assessment of the event-based target variables, freezing injury occurrence and its damage to winter wheat production, were shown to be better scoped by the Bayesian network model.
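The kind of multi-factor network described above can be illustrated with a minimal, purely hypothetical sketch: one three-level severity node SI with two parent factors (a severe cold spell and seedling growth status), queried by summing out the unobserved parent. All probability tables here are invented for illustration and are not the study's estimates:

```python
# Toy network inspired by the framework above; all probabilities invented.
P_S = {"good": 0.6, "poor": 0.4}                     # P(seedling status S)
P_SI = {                                             # P(SI | C, S)
    (True,  "poor"): {1: 0.20, 2: 0.30, 3: 0.50},
    (True,  "good"): {1: 0.50, 2: 0.30, 3: 0.20},
    (False, "poor"): {1: 0.70, 2: 0.20, 3: 0.10},
    (False, "good"): {1: 0.90, 2: 0.08, 3: 0.02},
}

def severity_given_cold(cold_spell):
    """P(SI | C) obtained by summing out the seedling-status parent S."""
    dist = {1: 0.0, 2: 0.0, 3: 0.0}
    for s, p_s in P_S.items():
        for si, p in P_SI[(cold_spell, s)].items():
            dist[si] += p_s * p
    return dist

print(severity_given_cold(True))    # severity mass shifts upward under a cold spell
print(severity_given_cold(False))
```

A full model of the kind the abstract describes would add further parent nodes (soil moisture, variety, timing) and learn the conditional tables from weather and crop records.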

Keywords: Bayesian networks, climate change, freezing injury, winter wheat

Procedia PDF Downloads 408
970 Trade Policy Incentives and Economic Growth in Nigeria

Authors: Emmanuel Dele Balogun

Abstract:

This paper analyzes, using descriptive statistics and econometric methods, data spanning the period 1981 to 2014 to gauge the effects of trade policy incentives on economic growth in Nigeria. It argues that the incentives provided penalized economic growth during the pre-trade-liberalization era but stimulated a rapid increase in total factor productivity during the post-liberalization period of 2000 to 2014. The trend analysis shows that Nigeria maintained high tariff walls in the economic-regulation era, which were lowered in the post-liberalization era. The protections favored infant industries, which were mainly appendages of multinationals, but worked against imports of competing food and finished consumer products. The trade openness index confirms the undue exposure of Nigeria's economy to the vagaries of international market shocks, while banking sector recapitalization and new listings of telecommunications companies deepened the financial markets in the post-liberalization era. The structure of economic incentives was biased in favor of construction, trade and services, but against the real sector, despite protectionist policies. Total Factor Productivity (TFP) estimates show that the Nigerian economy suffered stagnation in the pre-liberalization era but experienced rapid growth rates in the post-liberalization era. The regression results relating trade policy incentives to TFP growth yielded a significant but negative intercept, suggesting that a non-interventionist policy could be detrimental to economic progress, while protective tariffs that limit imports of competing products could spur productivity gains in domestic import substitutes beyond factor growth under market liberalization. The main constraint on the effectiveness of trade policy incentives is the failure of benefiting industries to leverage the domestic factor endowments of the nation.
This paper concludes that there is an urgent need to review the current economic transformation strategies, with a view to providing policymakers with a better understanding of the most viable options for rapid success.
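TFP estimates of the kind discussed above are conventionally computed as a Solow residual, subtracting factor-weighted input growth from output growth. A minimal sketch with purely illustrative growth rates and an assumed capital share (not figures from the paper):

```python
def tfp_growth(g_output, g_capital, g_labor, alpha=0.35):
    """Solow residual: TFP growth = g_Y - alpha*g_K - (1 - alpha)*g_L,
    where alpha is the (assumed) capital share of income."""
    return g_output - alpha * g_capital - (1 - alpha) * g_labor

# Illustrative numbers only: a stagnant era versus a faster-growing one.
pre  = tfp_growth(0.02, 0.03, 0.025)   # residual can turn negative
post = tfp_growth(0.06, 0.04, 0.020)   # residual positive when output outpaces inputs
print(f"pre: {pre:.4f}, post: {post:.4f}")
```

The residual interpretation is what lets a study of this kind attribute growth beyond factor accumulation to productivity change.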

Keywords: economic growth, macroeconomic incentives, total factor productivity, trade policies

Procedia PDF Downloads 322
969 The MicroRNA-2110 Suppressed Cell Proliferation and Migration Capacity in Hepatocellular Carcinoma Cells

Authors: Pelin Balcik Ercin

Abstract:

Introduction: ZEB2, a member of the ZEB transcription factor family, has a role in epithelial to mesenchymal transition during development and metastasis. Altered expression of circulating extracellular miRNAs is observed in disease, and extracellular miRNAs have an important role in the cancer cell microenvironment. In a ChIP-Seq study, the expression of miR-2110 was found to be regulated by ZEB2. In this study, the effects of miR-2110 on the proliferation and migration of hepatocellular carcinoma (HCC) cells were examined. Material and Methods: SNU398 cells were transfected with a miR-2110 mimic (20 nM) (HMI0375, Sigma-Aldrich) or a negative control miR (HMC0002, Sigma-Aldrich). MicroRNA isolation was accomplished with the miRVANA isolation kit according to the manufacturer's instructions, and cDNA synthesis was performed. The real-time quantitative PCR (RT-qPCR) reaction was performed using the TaqMan Fast Advanced Master Mix (Thermo Sci.), with expression calibrated against the Ct of the controls. Ct values of miR-2110 were normalized to the intracellular reference genes miR-186-5p and miR-16-5p. Cell proliferation was analyzed with the xCELLigence RTCA System. The wound healing assay was analyzed with the ImageJ program, and relative fold change was calculated. Results: The mimic-miR-2110-transfected SNU398 cells expressed nearly nine-fold (log2) more miR-2110 than the negative-control-transfected cells. Proliferation of the mimic-miR-2110-transfected HCC cells was significantly inhibited compared to the negative control cells. Furthermore, the migration capacity of miR-2110-SNU398 cells was decreased roughly four-fold relative to negative-control-miR-SNU398 cells. Conclusion: Our results suggest that miR-2110 inhibited cell proliferation and negatively affected cell migration compared to the control groups in HCC cells. These data illustrate the complexity of the regulation between microRNAs and EMT transcription factors, and these initial results point to the potential of miR-2110 as a predictive biomarker in HCC.
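Normalizing a target Ct to reference miRNAs and calibrating against a control sample, as described in the methods above, is commonly done with the 2^-ΔΔCt approach. A minimal sketch with illustrative Ct values (not the study's measurements), chosen so the numbers happen to give a nine-fold log2 increase:

```python
import math

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: the target Ct is first
    normalized to the reference Ct within each sample (dCt), then the
    treated dCt is calibrated against the control dCt (ddCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values only (lower Ct means more abundant transcript):
fc = ddct_fold_change(18.0, 21.0, 27.0, 21.0)
print(math.log2(fc))   # 9.0
```

In practice the reference Ct would be an average over the reference miRNAs (here, miR-186-5p and miR-16-5p) measured in the same sample.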

Keywords: epithelial to mesenchymal transition, EMT, hepatocellular carcinoma cells, micro-RNA-2110, ZEB2

Procedia PDF Downloads 125