Search results for: insulated concrete form
660 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed during several previous earthquakes and documented in numerous comprehensive reports. One of the main causes of damage to buried pipelines during earthquakes is liquefaction, whose necessary conditions are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipelines differ structurally from other constructions (being long and light in mass), comparison of results from previous earthquakes with those for other structures shows that the liquefaction hazard for buried pipelines is not high unless the governing parameters, such as earthquake intensity and soil looseness, are severe. Recent liquefaction research on buried pipelines includes experimental and theoretical studies as well as damage investigations during actual earthquakes. According to damage statistics from past severe earthquakes, these investigations have revealed that the pipeline damage ratio (number/km) is much larger in liquefied ground than in shaken ground without liquefaction, and that damage to joints and to pipelines connected to manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, using the 2013 Dashti (Iran) earthquake as a case study. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were severely damaged by liquefaction. The model consists of a polyethylene pipeline, 100 meters long and 0.8 meters in diameter, covered by light sandy soil at a burial depth of 2.5 meters from the surface. Since the finite element method has been used relatively successfully to solve geotechnical problems, it was adopted for the numerical analysis. Evaluating this case requires geotechnical information, classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional finite element modeling of the interaction between soil and pipeline. The results indicate that the effect of liquefaction is a function of pipe diameter, soil type, and peak ground acceleration, and that the percentage of damage clearly increases with liquefaction severity. They also indicate that although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined "failures" consist of failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, some retrofit suggestions are given to decrease the liquefaction risk for buried pipelines.
Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
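As a rough illustration of the soil-pipeline interaction problem the abstract describes, the sketch below idealizes the 100 m polyethylene pipe as a beam on a Winkler foundation whose subgrade stiffness collapses over a liquefied segment, solved by finite differences. The wall thickness, modulus, subgrade values, loads, and extent of the liquefied zone are illustrative assumptions, not parameters from the study, which used full three-dimensional finite element modeling.

```python
import numpy as np

# Minimal sketch: buried pipe as a beam on a Winkler foundation, with the
# subgrade modulus sharply reduced over a liquefied segment. All material
# and soil parameters below are illustrative assumptions.

L = 100.0            # pipe length (m), as in the abstract
Do, t = 0.8, 0.05    # outer diameter (m) from the abstract; wall thickness assumed
E = 1.0e9            # Young's modulus of polyethylene (Pa), assumed
I = np.pi / 64.0 * (Do**4 - (Do - 2 * t)**4)  # second moment of area

n = 501
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

k = np.full(n, 5.0e6)              # subgrade modulus (N/m^2), assumed
liq = (x > 40.0) & (x < 60.0)      # assumed 20 m liquefied zone
k[liq] = 5.0e4                     # ~99% stiffness loss under liquefaction

q = np.full(n, -2.0e3)             # net downward load (N/m), assumed
q[liq] = +3.0e3                    # net buoyant uplift in liquefied soil, assumed

# Assemble EI * w'''' + k(x) * w = q(x) with pinned ends (w = 0, w'' = 0).
c = E * I / h**4
m = n - 2                          # unknowns w[1..n-2]
A = np.zeros((m, m))
for i in range(1, n - 1):
    r = i - 1
    A[r, r] += 6.0 * c + k[i]
    for off, coef in ((-2, c), (-1, -4.0 * c), (1, -4.0 * c), (2, c)):
        j = i + off
        if 1 <= j <= n - 2:
            A[r, j - 1] += coef
        elif j == -1:              # ghost node from w''(0)=0: w[-1] = -w[1]
            A[r, 0] -= coef
        elif j == n:               # ghost node from w''(L)=0: w[n] = -w[n-2]
            A[r, m - 1] -= coef
        # j == 0 or j == n-1: w = 0 there, contributes nothing

w = np.zeros(n)
w[1:-1] = np.linalg.solve(A, q[1:-1])

curvature = np.gradient(np.gradient(w, x), x)
strain = 0.5 * Do * np.abs(curvature)          # outer-fiber bending strain
print(f"max deflection: {w[np.argmax(np.abs(w))] * 1000:.1f} mm")
print(f"max bending strain: {strain.max():.2e}")
```

Raising the peak ground acceleration or widening the liquefied zone in such a model increases the uplift demand and the bending strain at the zone boundaries, which is consistent with the abstract's observation that damage grows with liquefaction severity.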
Procedia PDF Downloads 513
659 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India
Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar
Abstract:
The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are the highest among diagnostic radiology practices, it is of great significance to be aware of the patient's radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate CTDIvol values for a great number of patients during the most frequent CT examinations, to compare CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and scanner models. The output CT dose index measurements were carried out on single- and multislice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. 100 CT scanners were involved in this study. Data on 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals: 5,000 head, 5,000 chest, and 5,000 abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. In this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the selected head, chest, and abdomen procedures, respectively, for the protocols mentioned above. This investigation thus revealed an observable change in CT practices, with a much wider range of studies currently being performed in South India, reflecting the improved capacity of CT scanners to scan longer lengths at finer resolutions, as permitted by helical and multislice technology. Some CT scanners also used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This increases the patient radiation dose as well as the measured CTDIvol, so it is suggested that such CT scanners select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If the routine scan parameters for head, chest, and abdomen procedures are optimized, the dose indices will be optimal and CT doses will be lowered. In the South Indian region, all CT machines are routinely tested for QA once a year as per AERB requirements.
Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose
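The divergence statistic the abstract reports can be illustrated in a few lines. The sketch below assumes divergence is expressed as the percentage difference of the phantom-measured CTDIvol from the console-displayed value (the abstract does not define the convention), and the numbers are invented stand-ins for the study's pooled measurements.

```python
import numpy as np

# Toy illustration of the divergence between phantom-measured and
# console-displayed CTDIvol. Values are hypothetical; the sign convention
# (measured minus displayed, as % of displayed) is an assumption.

displayed = np.array([55.0, 52.3, 60.1])   # console CTDIvol (mGy), hypothetical
measured  = np.array([57.9, 54.9, 63.5])   # phantom CTDIvol (mGy), hypothetical

divergence = 100.0 * (measured - displayed) / displayed
print(f"per-scanner divergence (%): {np.round(divergence, 1)}")
print(f"mean divergence for this protocol: {divergence.mean():.1f} %")
```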
Procedia PDF Downloads 257
658 Modulation of Receptor-Activation Due to Hydrogen Bond Formation
Authors: Sourav Ray, Christoph Stein, Marcus Weber
Abstract:
A new class of drug candidates, initially derived from mathematical modeling of ligand-receptor interactions, activates the μ-opioid receptor (MOR) preferentially at acidic extracellular pH levels, as present in injured tissues. This is of commercial interest because it may preclude the adverse effects of conventional MOR agonists like fentanyl, which include, but are not limited to, addiction, constipation, sedation, and apnea. Animal studies indicate the importance of taking the pH value of the chemical environment of the MOR into account when designing new drugs. Hydrogen bonds (HBs) play a crucial role in stabilizing protein secondary structure and molecular interactions, such as ligand-protein interactions, and these bonds may depend on the pH value of the chemical environment. For the MOR, the antagonist naloxone and the agonist [D-Ala2,N-Me-Phe4,Gly5-ol]-enkephalin (DAMGO) form HBs with the ionizable residue HIS 297 at physiological pH to modulate signaling; such interactions are markedly reduced at acidic pH. Although fentanyl-induced signaling is also diminished at acidic pH, HBs with the HIS 297 residue are not observed at either acidic or physiological pH for this strong MOR agonist. Molecular dynamics (MD) simulations can provide greater insight into the interaction between the ligand of interest and the HIS 297 residue. Amino acid protonation states were adjusted to model the difference in system acidity. Unbiased and unrestrained MD simulations were performed with the ligand in the proximity of the HIS 297 residue, and the ligand-receptor complexes were embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) bilayer to mimic the membrane environment. The occurrence of HBs between the different ligands and the HIS 297 residue of the MOR at acidic and physiological pH values was tracked across the various simulation trajectories. No HB formation was observed between fentanyl and HIS 297 at either acidic or physiological pH. Naloxone formed some HBs with HIS 297 at pH 5, but no such HBs were noted at pH 7. Interestingly, DAMGO displayed an opposite, yet more pronounced, HB formation trend compared to naloxone: whereas only a marginal number of HBs could be observed at pH 5, HBs with HIS 297 were more stable and widely present at pH 7. HB formation thus plays no role in the interaction of fentanyl, and only a marginal role in the interaction of naloxone, with the HIS 297 residue of the MOR, but a significant role in the DAMGO-HIS 297 interaction. After DAMGO administration, these HBs might be crucial for the remediation of opioid tolerance and the restoration of opioid sensitivity. Although experimental studies concur with our observations regarding the influence of HB formation on the fentanyl and DAMGO interactions with HIS 297, the same could not be conclusively stated for naloxone. Therefore, some other supplementary interactions might be responsible for the modulation of MOR activity by naloxone binding at pH 7 but not at pH 5. Further elucidation of the mechanism of naloxone action on the MOR could assist in the formulation of cost-effective naloxone-based treatments for opioid overdose or opioid-induced side effects.
Keywords: effect of system acidity, hydrogen bond formation, opioid action, receptor activation
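A sketch of the kind of trajectory post-processing the abstract describes is given below: a geometric hydrogen-bond criterion (donor-acceptor distance plus donor-hydrogen-acceptor angle) applied to atomic coordinates from one frame. The cutoffs (3.5 Å, 150°) are common choices and the coordinates are hypothetical; they are not the study's actual analysis settings.

```python
import numpy as np

# Minimal sketch of a geometric hydrogen-bond criterion for MD frames:
# donor-acceptor distance below a cutoff and a near-linear D-H...A angle.
# In practice one would loop over trajectory frames and over candidate
# ligand/HIS 297 donor-acceptor pairs. Cutoffs are common illustrative
# choices, not values quoted from the study.

def is_hbond(donor, hydrogen, acceptor, d_cut=3.5, angle_cut=150.0):
    d = np.linalg.norm(acceptor - donor)          # donor-acceptor distance (A)
    v1 = donor - hydrogen
    v2 = acceptor - hydrogen
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return d <= d_cut and angle >= angle_cut

# Hypothetical coordinates (Angstrom) for one ligand N-H ... HIS 297 N pair:
donor    = np.array([0.00, 0.00, 0.00])
hydrogen = np.array([0.95, 0.10, 0.00])
acceptor = np.array([2.90, 0.30, 0.10])
print("H-bond present:", is_hbond(donor, hydrogen, acceptor))
```

Counting frames in which this predicate holds, separately for the pH 5 and pH 7 trajectories, yields exactly the kind of occupancy comparison the abstract reports for fentanyl, naloxone, and DAMGO.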
Procedia PDF Downloads 175
657 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing
Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares
Abstract:
In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes (3D printing). These robots have advantages, such as speed and lightness, that make them suitable for improving the efficiency and productivity of these processes; consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of the linear delta robot for additive manufacturing. Firstly, a methodology based on structured product development processes, with phases of informational design, conceptual design, and detailed design, is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to elicit a set of technical requirements and to define the form, functions, and features of the robot. b) In the conceptual design phase, functional modeling of the system through an IDEF0 diagram is performed, and solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic, and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software; a list of commercial and manufactured parts is detailed; tolerances and adjustments are defined for some parts of the robot structure; and the necessary manufacturing processes and tools are listed, including milling, turning, and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important key factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies, and the possible interferences between these bodies. The objective function is based on verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem: it generates a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud, and the evolution of the population provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented; the dimensional synthesis enabled the design of the delta robot mechanism as a function of the prescribed workspace. Finally, the implementation of the robotic platform, based on a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms
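To make the point-cloud verification step concrete, the sketch below samples a cylindrical workspace, runs a simplified inverse kinematics for a linear (vertical-rail) delta at every point, and folds the result into a GA-style fitness that heavily penalizes unreachable points while rewarding small dimensions. The rail layout, the IK simplifications, and all dimensions are assumptions for illustration, not the study's kinematic model, and a GA library or hand-rolled evolutionary loop would evolve the candidate parameters.

```python
import numpy as np

# Minimal sketch of the workspace feasibility check at the core of the
# dimensional synthesis. Simplified linear-delta IK: three vertical rails at
# 120 degrees; each carriage height q follows from the rod-length constraint.
# All geometry below is assumed for illustration.

ANGLES = np.radians([90.0, 210.0, 330.0])     # rail placement, assumed

def reachable(p, R, r, rod, travel):
    """True if effector position p is reachable by all three chains."""
    for a in ANGLES:
        rail = R * np.array([np.cos(a), np.sin(a)])            # rail xy
        joint = p[:2] + r * np.array([np.cos(a), np.sin(a)])   # effector joint xy
        d2 = rod**2 - np.sum((joint - rail)**2)
        if d2 < 0.0:                        # rod cannot span the xy offset
            return False
        q = p[2] + np.sqrt(d2)              # carriage height on the rail
        if not 0.0 <= q <= travel:
            return False
    return True

def fitness(params, cloud):
    """GA objective: small mechanism plus full coverage of the point cloud."""
    R, r, rod, travel = params
    misses = sum(not reachable(p, R, r, rod, travel) for p in cloud)
    return (R + rod + travel) + 1e3 * misses   # size + infeasibility penalty

# Cylindrical workspace cloud: radius 150 mm, height 300 mm (assumed target).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
rad = 150.0 * np.sqrt(rng.uniform(0, 1, 500))
z = rng.uniform(0.0, 300.0, 500)
cloud = np.column_stack([rad * np.cos(theta), rad * np.sin(theta), z])

candidate = (220.0, 40.0, 360.0, 700.0)        # (R, r, rod, travel) in mm, assumed
print("fitness:", fitness(candidate, cloud))
```

A GA then minimizes this fitness over the population of candidate parameter tuples, so the surviving individuals are the smallest mechanisms that still cover the prescribed cylinder.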
Procedia PDF Downloads 190
656 An Exploration of the Experiences of Women in Polygamous Marriages: A Case Study of Matizha Village, Masvingo, Zimbabwe
Authors: Flora Takayindisa, Tsoaledi Thobejane, Thizwilondi Mudau
Abstract:
This study highlights what people in polygamous marriages face on a daily basis. It argues that women in polygamous marriages face more disadvantages than their counterparts in monogamous relationships. The study further suggests that the patriarchal power structure appears to play a powerful and decisive role in polygamous marriages in our societies, particularly in Zimbabwe, where this study took place. The study explored the intricacies of polygamous marriages and how these forms of dominance can be addressed. The research is therefore presented through the 'lived realities' of affected women in polygamous marriages in Gutu District, located in Masvingo Province of Zimbabwe. Polygamous marriages are practised in different societies. Some women living a polygamous lifestyle are emotionally and physically abused in their relationships. Evidence also suggests that children from polygamous marriages suffer psychologically when their fathers take other wives. Relationships within the family are very difficult because of the husband's seeming favouritism towards one wife; children are particularly affected by disputes between co-wives, and they often lack quality time with their fathers. There are mixed feelings about polygamous marriages: some people condemn the practice as inhumane, but consideration must be given to what it might mean for people who have no choice of any other form of marriage. It should also be noted that polygamous marriages are not always negative, and some positive outcomes result from them. The study was conducted in a village called Matizha. A qualitative research approach was employed to stimulate awareness of the social, cultural, and religious factors, and the effect of economic factors, in polygamous marriages. This approach facilitates a unique understanding of the experiences of women in polygamous marriages, both negative and positive, and it enabled the respondents to be open-minded when answering questions. The researcher employed the feminist theory in the study and used guided interviews to acquire information from the participants. The methodology covers the participants who took part in the study, how they were selected, ethical considerations, data collection, the interview process, and the research instruments. The data were obtained using a guided interview with respondents of all ages who are in polygamous marriages. The researcher presented the demographic information of the participants and then other aspects of the data, such as social factors, economic factors, and religious affiliation. The conclusions and recommendations are drawn from the four main themes that emerged from the discussions: recommendations for the women, for the policies and laws affecting women, and for future research. It is believed that the overall objectives of the study have been met and that the research questions have been answered on the basis of the findings discussed.
Keywords: co-wives, egalitarianism, experiences, polyandry, polygamy, woman
Procedia PDF Downloads 262
655 Attitudes of Nursing Students Towards Caring Nurse-Patient Interaction
Authors: Şefika Dilek Güven, Gülden Küçükakça
Abstract:
Objective: Learning how to interact with patients occurs within the process of nursing education. For this reason, assessing the attitudes of nursing students towards caring nurse-patient interaction provides an opportunity for questioning and rearranging nursing education programs. Method: This descriptive study was conducted in order to assess the attitudes of nursing students towards caring nurse-patient interaction. It was carried out with 318 students who were studying at the nursing department of Semra and Vefa Küçük Health High School, Nevşehir Hacı Bektaş Veli University, in the 2015-2016 academic year and agreed to participate. A "Personal Information Form", prepared by the researchers using the literature, and the "Caring Nurse-Patient Interaction Scale (CNPIS)", whose Turkish validity and reliability were established by Atar and Aştı, were used in the study. The Cronbach α coefficient of the CNPIS was 0.973 in this study. Permissions from the institution and participants were received before the study began. The significance test of the difference between two means, analysis of variance, and correlation analysis were used to assess the data. Results: The average age of the nursing students participating in the study was 20.72±1.91; 74.8% were female, and 28.0% were fourth-year students. 52.5% of the nursing students stated that they chose the nursing profession willingly, 80.2% did not have difficulty in their interactions with patients, and 84.6% did not have difficulty in their social relationships. The CNPIS total mean score of the nursing students was 295.31±40.95. When the correlation between the total CNPIS mean score and some variables was examined, a significant positive correlation was found between the students' ages and the total CNPIS mean score (r=0.184, p=0.001). The CNPIS total mean score was higher in female students than in male students, in third-year students compared to students in other years, in those who chose their profession willingly compared to those who chose it unwillingly, in those without difficulty in patient relations compared to those with difficulty, and in those without difficulty in social relationships compared to those with difficulty. A significant difference was found between CNPIS total mean scores in terms of year of study and difficulty in social relationships (p<0.005). Conclusion: Nursing students had positive attitudes towards caring nurse-patient interactions, and the attitudes of students who were female, were studying in the third year, chose the nursing profession willingly, and had no difficulty in patient relations or social relationships were more positive. In line with these results, it can be recommended to organize activities introducing the nursing profession to young people preparing for university, to use methods that further develop communication skills during nursing education, to support students in terms of communication skills, and to include activities that strengthen their social relationships.
Keywords: nurse-patient interaction, nursing student, patient, communication
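For reference, the internal-consistency coefficient reported for the CNPIS (Cronbach α = 0.973) is computed as sketched below; the random item-response matrix merely stands in for the real 318-respondent data, which is not available here, and the item count is illustrative.

```python
import numpy as np

# Minimal sketch of Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item
# variances) / variance of total scores). Simulated correlated items stand
# in for the actual CNPIS responses.

def cronbach_alpha(items):
    """items: (respondents x items) matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(318, 1))                 # shared attitude factor
data = latent + 0.5 * rng.normal(size=(318, 10))   # 10 correlated items, assumed
print(f"Cronbach's alpha: {cronbach_alpha(data):.3f}")
```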
Procedia PDF Downloads 223
654 The Relationship between Basic Human Needs and Opportunity Based on Social Progress Index
Authors: Ebru Ozgur Guler, Huseyin Guler, Sera Sanli
Abstract:
The Social Progress Index (SPI), whose foundations were laid at the World Economic Forum, is an index that aims to form a systematic basis for guiding strategy for inclusive growth, which requires achieving both economic and social progress. This research aims to determine the relations between the "Basic Human Needs" (BHN) dimension (comprising the four variables 'Nutrition and Basic Medical Care', 'Water and Sanitation', 'Shelter', and 'Personal Safety') and the "Opportunity" (OPT) dimension (composed of the 'Personal Rights', 'Personal Freedom and Choice', 'Tolerance and Inclusion', and 'Access to Advanced Education' components) of the 2016 SPI for the 138 countries listed on the Social Progress Imperative website, by carrying out canonical correlation analysis (CCA), a data reduction technique that operates so as to maximize the correlation between two variable sets. In interpreting the results, the first pair of canonical variates, which shows the highest canonical correlation, was taken into account. The first canonical correlation coefficient was found to be 0.880, indicating a high relationship between the BHN and OPT variable sets. Wilks' Lambda revealed that an overall effect of 0.809 is large enough for the full model to be counted as statistically significant (with a p-value of 0.000). According to the standardized canonical coefficients, the largest contribution to the BHN set came from the 'Shelter' variable, and the most effective variable in the OPT set was 'Access to Advanced Education'. Findings based on canonical loadings confirmed these results with respect to the contributions to the first canonical variates. When canonical cross-loadings (structure coefficients) are examined for the first pair of canonical variates, the largest contributions were again provided by the 'Shelter' and 'Access to Advanced Education' variables. Since the signs of the structure coefficients were negative for all variables, all OPT variables are positively related to all BHN variables. When canonical communality coefficients (the sums of the squares of the structure coefficients across all interpretable functions) are taken as the basis, the 'Personal Rights' and 'Tolerance and Inclusion' variables, with coefficients of 0.318721 and 0.341722 respectively, can be said not to be useful in the model. On the other hand, while the redundancy index for the BHN set was found to be 0.615, the OPT set had a lower redundancy index of 0.475; high redundancy implies high predictive ability. The proportion of the total variation in the BHN set explained by all the opposite canonical variates was calculated as 63%, and the proportion of the total variation in the OPT set explained by all the canonical variates of the BHN set was 50.4%, a large part of which belongs to the first pair. The results suggest that there is a high and statistically significant relationship between BHN and OPT, accounted for mainly by 'Shelter' and 'Access to Advanced Education'.
Keywords: canonical communality coefficient, canonical correlation analysis, redundancy index, social progress index
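The core CCA computation behind these results can be sketched compactly: standardize both blocks, whiten each, and take the SVD of the cross-covariance. Random data below stand in for the real 138-country BHN and OPT indicator matrices; the structure coefficients and a first-variate redundancy are computed as in the abstract, but the printed numbers are not the study's.

```python
import numpy as np

# Minimal sketch of canonical correlation analysis between a 4-indicator BHN
# block and a 4-indicator OPT block, with simulated data standing in for the
# real 2016 SPI indicators for 138 countries.

def cca(X, Y):
    X = (X - X.mean(0)) / X.std(0, ddof=1)
    Y = (Y - Y.mean(0)) / Y.std(0, ddof=1)
    n = len(X)
    Sxx, Syy, Sxy = X.T @ X / (n - 1), Y.T @ Y / (n - 1), X.T @ Y / (n - 1)
    # whiten each block via Cholesky factors, then SVD the cross-covariance
    iLx = np.linalg.inv(np.linalg.cholesky(Sxx))
    iLy = np.linalg.inv(np.linalg.cholesky(Syy))
    U, s, Vt = np.linalg.svd(iLx @ Sxy @ iLy.T)
    A = iLx.T @ U           # canonical weights for X
    B = iLy.T @ Vt.T        # canonical weights for Y
    return s, X @ A, Y @ B  # canonical correlations and unit-variance variates

rng = np.random.default_rng(2)
common = rng.normal(size=(138, 1))
X = common + 0.6 * rng.normal(size=(138, 4))   # stand-in for BHN indicators
Y = common + 0.6 * rng.normal(size=(138, 4))   # stand-in for OPT indicators

rho, Ux, Vy = cca(X, Y)
print("canonical correlations:", np.round(rho, 3))
# structure coefficients: correlations of original variables with own variate
loadX = np.corrcoef(np.column_stack([X, Ux[:, :1]]), rowvar=False)[:4, -1]
print("BHN loadings on first variate:", np.round(loadX, 3))
# redundancy of X given the first Y variate: mean squared cross-loading
crossX = np.corrcoef(np.column_stack([X, Vy[:, :1]]), rowvar=False)[:4, -1]
print("redundancy (X | first OPT variate):", round(float(np.mean(crossX**2)), 3))
```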
Procedia PDF Downloads 218
653 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing
Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang
Abstract:
Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft industry, whose forerunners saw themselves manifested in their crafts and treasured a sense of pride in the creativity of their designs, into what is these days largely an anonymous, collective, mass-produced output. Although a very conservative industry, there is great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry, in general, has been slow to adopt new design methodologies using artificial-intelligence-based and rule-based generative design. This tardiness has meant the loss of potential to enhance its capability to produce sustainable, flexible, and mass-customizable 'right first time' designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies can enable the economic creation of previously unachievable structures, which traditionally would not have been commercially economic to manufacture. The integration of these technologies with the computing power of generative design provides the tools for practitioners to create concepts well beyond the insight of even the most accomplished traditional design teams. This paper addresses the problem by introducing generative design methodologies employing the Autodesk Fusion 360 platform; examining alternative methods for its use has the potential to significantly reduce the estimated 80% contribution to environmental impact made at the initial design phase. Though predominantly a design methodology, generative design combined with RBAM has the potential to leverage many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case study of a furniture artifact, the results will be compared to a traditionally designed and manufactured product using the Ecochain Mobius product life cycle assessment (LCA) platform. This will highlight the benefits of both generative design and robot-based additive manufacturing from the standpoints of environmental impact and manufacturing efficiency. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of a pre-industrial model of manufacturing, with the global demand for a circular economy and bespoke sustainable design at its heart.
Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture
Procedia PDF Downloads 141
652 Environmental and Formal Conditions for the Development of Blue-green Infrastructure (BGI) in the Cities of Central Europe on the Example of Poland
Authors: Magdalena Biela, Marta Weber-Siwirska, Edyta Sierka
Abstract:
The trend currently observed in Central European countries, as in other regions of the world, is for people to migrate to cities; as a result, the urban population is expected to reach 70% of the total by 2050. Due to this tendency, and taking into account high real estate prices and the limited reserves of city green areas, the greenery and agricultural soil adjacent to cities are being devoted to housing projects, while city centres are expected to undergo partial depopulation. Urban heat islands and phenomena such as torrential rains may cause serious damage and may even endanger the life and health of the inhabitants. Due to these tangible effects of climate change, residents expect local governments to take action to develop green infrastructure (GI). The main purpose of our research has been to assess the degree of readiness of local governments in Poland to develop blue-green infrastructure (BGI). A questionnaire using the CAWI method was prepared, and a survey was carried out. The target group were town hall employees in all 380 powiat cities and towns (county centres) in Poland. The form contained 14 questions covering, among others, actions taken to support the development of GI and ways of motivating residents to take such actions. 224 respondents replied. The results show that 52% of the cities and towns have taken or intend to take measures favouring the development of green spaces. Currently, the installation of green roofs and living walls is carried out by only six Polish cities, and a few more are at the stage of preparing appropriate regulations. The problem of rainwater retention is much more widespread: among the municipalities declaring any activities for the benefit of GI, approximately 42% have decided to work on this problem. Over 19% of the respondents are planning an increase in the surface occupied by green areas, 14% the installation of green roofs, and 12% the redevelopment of city greenery. It is encouraging that 67% of the respondents are willing to acquire knowledge about BGI by taking part in educational activities at both the national and international levels. There are many ways to support GI development: the most common type of support in the cities and towns surveyed is co-financing (35%), followed by full financing of projects (11%), while about 15% of the cities declare only advisory support. Thus, GI in Central European cities is at an initial stage of development and requires advanced measures and the implementation of solutions proven in other European and world countries, using the concept of Nature-based Solutions.
Keywords: city/town, blue-green infrastructure, green roofs, climate change adaptation
Procedia PDF Downloads 212
651 Working Memory and Phonological Short-Term Memory in the Acquisition of Academic Formulaic Language
Authors: Zhicheng Han
Abstract:
This study examines the correlation between knowledge of formulaic language, working memory (WM), and phonological short-term memory (PSTM) in Chinese L2 learners of English, and investigates whether WM and PSTM correlate differently with the acquisition of formulaic language, which is relevant to the debate around the conceptualization of formulas. Connectionist approaches have led scholars to argue that formulas are form-meaning connections stored whole, making PSTM significant in the acquisition process insofar as it pertains to the storage and retrieval of chunk information. Generativist scholars, on the other hand, have argued for the active participation of interlanguage grammar in the acquisition and use of formulaic language, where formulas are represented in the mind but retain an internal structure built around a lexical core. This would make WM, especially its processing component, an important cognitive factor, since it plays a role in processing and holding information for further analysis and manipulation. The current study asked L1 Chinese learners of English enrolled in graduate programs in China to complete a preference ranking task in which they ranked formulas, grammatical non-formulaic expressions, and ungrammatical phrases with and without the lexical core in academic contexts. Participants were asked to rank the options by how likely they would be to encounter these phrases in the test sentences within academic contexts. Participants' syntactic proficiency was controlled with a cloze test and a grammar test. Regression analysis found a significant relationship between the processing component of WM and preference for formulaic expressions in the ranking task, while no significant correlation was found for PSTM or syntactic proficiency. The correlational analysis found that WM, PSTM, and the two proficiency test scores covary significantly; however, WM and PSTM have different predictive values for participants' preference for formulaic language. Both the storage and processing components of WM are significantly correlated with the preference for formulaic expressions, while PSTM is not. These findings favor the role of interlanguage grammar and syntactic knowledge in the acquisition of formulaic expressions, and the differing effects of WM and PSTM suggest that selective attention to, and processing of, the input beyond simple retention plays a key role in successfully acquiring formulaic language. Similar correlational patterns were found for preferring ungrammatical phrases containing the lexical core of the formula over those without it, attesting to learners' awareness of the lexical core around which formulas are constructed. These findings support the view that formulaic phrases retain internal syntactic structures that are recognized and processed by learners.
Keywords: formulaic language, working memory, phonological short-term memory, academic language
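A minimal sketch of the reported regression design follows, with simulated data standing in for the real scores. The simulated effect is placed on the WM processing component only to mirror the reported pattern, and the predictor names are stand-ins, not the study's variables.

```python
import numpy as np
import statsmodels.api as sm

# Minimal sketch: preference scores regressed on WM processing, WM storage,
# PSTM, and syntactic proficiency. All data are simulated; the effect size
# and sample size are illustrative assumptions.

rng = np.random.default_rng(3)
n = 80
wm_processing = rng.normal(size=n)
wm_storage = rng.normal(size=n)
pstm = rng.normal(size=n)
proficiency = rng.normal(size=n)
preference = 0.6 * wm_processing + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([wm_processing, wm_storage, pstm, proficiency]))
fit = sm.OLS(preference, X).fit()
print(fit.summary(xname=["const", "WM_proc", "WM_store", "PSTM", "proficiency"]))
```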
Procedia PDF Downloads 63
650 Honneth, Feenberg, and the Redemption of Critical Theory of Technology
Authors: David Schafer
Abstract:
Critical Theory is in sore need of a workable account of technology. It had one in the writings of Herbert Marcuse, or so it seemed until Jürgen Habermas mounted a critique in 'Technology and Science as Ideology' (Habermas, 1970) that decisively put it away. Ever since, Marcuse's work has been regarded as outdated, a 'philosophy of consciousness' no longer seriously tenable. But with Marcuse's view has gone the important insight that technology is no norm-free system (as Habermas portrays it) but can be laden with social bias. Andrew Feenberg is among the few serious scholars who have perceived this problem in post-Habermasian critical theory and has sought to revive a basically Marcusean account of technology. On his view, while the so-called 'technical elements' that physically make up technologies are neutral with regard to social interests, there is a sense in which we may speak of a normative grammar or 'technical code' built into technology that can be socially biased in favor of certain groups over others (Feenberg, 2002). According to Feenberg, perspectives on technology are reified when they consider technology only in terms of its technical elements, to the neglect of its technical codes. Nevertheless, Feenberg's account fails to explain what is normatively problematic about such reified views of technology. His plausible claim that they represent false perspectives on technology does not by itself explain how such views may be oppressive, even though Feenberg surely wants to be doing that stronger level of normative theorizing. Perceiving this deficit in his own account of reification, he tries to adopt Habermas's version of systems theory to ground his own critical theory of technology (Feenberg, 1999). But this is a curious move in light of Feenberg's own legitimate critiques of Habermas's portrayal of technology as reified or 'norm-free.' This paper argues that a better foundation may be found in Axel Honneth's recent text, Freedom's Right (Honneth, 2014). Though Honneth there says little explicitly about technology, he offers an implicit account of reification formulated in opposition to Habermas's systems-theoretic approach. On this 'normative functionalist' account of reification, social spheres are reified when participants prioritize individualist ideals of freedom (moral and legal freedom) to the neglect of an intersubjective form of freedom-through-recognition that Honneth calls 'social freedom.' Such misprioritization is ultimately problematic because it is unsustainable: individual freedom is philosophically and institutionally dependent upon social freedom. The main difficulty in adopting Honneth's social theory for the purposes of a theory of technology, however, is that the notion of social freedom is predicable only of social institutions, whereas it appears difficult to conceive of technology as an institution. Nevertheless, in light of Feenberg's work, the idea that technology includes within itself a normative grammar (technical code) takes on much plausibility. To the extent that this normative grammar may be understood through the category of social freedom, Honneth's dialectical account of the relationship between individual and social forms of freedom provides a more solid basis from which to ground the normative claims of Feenberg's sociological account of technology than Habermas's systems theory.
Keywords: Habermas, Honneth, technology, Feenberg
Procedia PDF Downloads 198
649 Nano-MFC (Nano Microbial Fuel Cell): Utilization of Carbon Nano Tube to Increase Efficiency of Microbial Fuel Cell Power as an Effective, Efficient and Environmentally Friendly Alternative Energy Sources
Authors: Annisa Ulfah Pristya, Andi Setiawan
Abstract:
Electricity is the primary requirement of today's world, including Indonesia, because it is a flexible form of energy to use. Fossil fuels are the major energy source used in power plants. Unfortunately, this conversion process depletes fossil fuel reserves and increases the amount of CO2 in the atmosphere, harming health and contributing to ozone depletion and the greenhouse effect. Solutions that have been applied include solar cells, ocean wave power, wind, and water power. However, low efficiency and complicated maintenance mean that most people and industry in Indonesia still use fossil fuels. In response, the fuel cell was developed. Fuel cells are an electrochemical technology that continuously converts the chemical energy of a fuel and an oxidizer into electrical energy, with an efficiency (40-60%) considerably higher than that of previous sources of electrical energy. However, fuel cells still have some weaknesses, including the use of an expensive platinum catalyst, which is limited in supply and not environmentally friendly. A source of electrical energy that is both continuous and environmentally friendly is therefore required. On the other hand, Indonesia is a country rich in marine sediments and organic content that is never exhausted. This accumulating organic matter can serve as an alternative energy source, and a continued development of the fuel cell is the Microbial Fuel Cell. A Microbial Fuel Cell (MFC) is a device that uses bacteria to generate electricity from organic and non-organic compounds. An MFC, like a conventional fuel cell, is composed of an anode, a cathode, and an electrolyte. Its main advantages are that the catalyst in a microbial fuel cell is a microorganism and that the working conditions are a neutral solution and low temperatures, making it more environmentally friendly than previous (chemical) fuel cells. However, compared to chemical fuel cells, MFCs have an efficiency of only 40%. Therefore, the authors propose a solution in the form of the Nano-MFC (Nano Microbial Fuel Cell): utilization of carbon nanotubes to increase the power efficiency of the microbial fuel cell as an effective, efficient, and environmentally friendly alternative energy source. The Nano-MFC has the advantages of being effective, highly efficient, cheap, and environmentally friendly. Relevant stakeholders include government ministers, especially the Energy Minister, research institutes, and industry as production facilitators. The strategic steps to achieve this begin with preliminary research, followed by lab-scale testing, dissemination and the building of cooperation with related parties (MOU), final research and field application, and then licensing, production of the Nano-MFC on an industrial scale, and publication to the public.
Keywords: CNT, efficiency, electric, microorganisms, sediment
Procedia PDF Downloads 409
648 Using True Life Situations in a Systems Theory Perspective as Sources of Creativity: A Case Study of how to use Everyday Happenings to produce Creative Outcomes in Novel and Screenplay Writing
Authors: Rune Bjerke
Abstract:
Psychologists tend to see creativity as a mental and psychological process. However, creativity is also the result of cultural and social interactions; it is not a product of individuals in isolation, but of social systems. Creative people get ideas from the influence of others and the immediate cultural environment, a space of knowledge, situations, and practices. In this study, we therefore apply systems theory in practice to activate creativity processes in the production of our novel and screenplay writing. As storytellers, we actively seek to get into situations in our everyday lives, our systems, in order to generate ideas. Within our personal systems, we have the potential to induce situations that yield ideas for our texts, which may be accepted by our gatekeepers and become socially validated. This is our method of writing: get into situations, get ideas for texts, and test them with family and friends in our social systems. An example of novel text produced by this method is as follows: “Is it a matter of obviousness or had I read it somewhere, that the one who increases his knowledge increases his pain? And also, the other way around, with increased pain, knowledge increases, I thought. Perhaps such a chain of effects explains why the rebel August Strindberg wrote seven plays in ten months after the divorce with Siri von Essen. Shortly after, he tried painting. Neither the seven theatre plays were shown, nor the paintings were exhibited. I was standing in front of Munch's painting Women in Three Stages with chaotic mental images of myself crumpled in a church and a laughing x-girlfriend watching my suffering. My stomach was turning at unpredictable intervals and the subsequent vomiting almost suffocated me. Love grief at the worst. Was it this pain Strindberg felt? Despite the failure of his first plays, the pain must have triggered a form of creative energy that turned pain into ideas. Suffering, thoughts, feelings, words, text, and then, the reader experience. Maybe this negative force can be transformed into something positive, I asked myself. The question eased my pain. At that moment, I forgot the damp, humid air in the Munch Museum. Is it the similar type of Strindberg-pain that could explain the recurring, depressive themes in Munch's paintings? Illness, death, love and jealousy. As a beginning art student at the master's level, I had decided to find the answer. Was it the same with Munch's pain, as with Strindberg - a woman behind? There had to be women in the case of Munch - therefore, the painting “Women in Three Stages”? Who are they, what personality types are they – the women in red, black and white dresses from left to the right?” We, the writers, use persons, situations, and elements in our systems, in a systems theory perspective, to prompt creative ideas. A conceptual model is provided to advance creativity theory.
Keywords: creativity theory, systems theory, novel writing, screenplay writing, sources of creativity in social systems
Procedia PDF Downloads 120
647 The Effectiveness of Congressional Redistricting Commissions: A Comparative Approach Investigating the Ability of Commissions to Reduce Gerrymandering with the Wilcoxon Signed-Rank Test
Authors: Arvind Salem
Abstract:
Voters across the country are transferring the power of redistricting from state legislatures to commissions in order to secure "fairer" districts by curbing the influence of gerrymandering on redistricting. Gerrymandering, the intentional drawing of distorted districts to achieve political advantage, has become extremely prevalent, generating widespread voter dissatisfaction and leading states to adopt commissions for redistricting. However, the efficacy of these commissions is dubious: some argue that they constitute a panacea for gerrymandering, while others contend that commissions have relatively little effect on it. A result showing that commissions are effective would allay these fears, supplying ammunition for activists across the country to advocate for commissions in their states and reducing the influence of gerrymandering across the nation; a result against commissions might reaffirm doubts about them and pressure lawmakers to improve commissions or even abandon the commission system entirely. Additionally, these commissions are publicly funded, so voters have a financial interest in, and a responsibility to know, whether they are effective. Currently, nine states place commissions in charge of redistricting: Arizona, California, Colorado, Michigan, Idaho, Montana, Washington, and New Jersey (Hawaii also has a commission but was excluded for the reason mentioned below). This study compares the degree of gerrymandering in the 2022 election ("after") to the election in which voters decided to adopt commissions ("before"). The "before" election provides a valuable benchmark for assessing the efficacy of commissions, since voters in those elections clearly found the districts unfair; comparing the current election to that one is therefore a good way to determine whether commissions have improved the situation. (At the time Hawaii adopted commissions, it comprised a single at-large district, so its "before" metrics could not be calculated, and it was excluded.) The study uses three methods to quantify the degree of gerrymandering: the efficiency gap, the difference between the percentage of seats and the percentage of votes, and the mean-median difference. Each of these metrics has unique advantages and disadvantages, but together they form a balanced approach to quantifying gerrymandering. The study uses a Wilcoxon signed-rank test with a null hypothesis that the metric values after the election are greater than or equal to those before, and an alternative hypothesis that the values are greater before the election than after, at a 0.05 significance level with an expected difference of 0. Accepting the alternative hypothesis would constitute evidence that commissions reduce gerrymandering to a statistically significant degree. However, this study could not conclude that commissions are effective: the p-values obtained for all three metrics (p=0.42 for the efficiency gap, p=0.94 for the seats-votes difference, and p=0.47 for the mean-median difference) were far too high to conclude that commissions are effective. These results temper optimism about commissions and should spur serious discussion about their effectiveness and about ways to change them so that they can accomplish their goal of generating fairer districts.
Keywords: commissions, elections, gerrymandering, redistricting
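The paired design can be made concrete with a short sketch: compute the efficiency gap per state before and after commission adoption, then run a one-sided Wilcoxon signed-rank test of the hypothesis that the "before" values are larger. The per-state gap figures below are invented; the study applied the same test to real returns and to the two other metrics as well.

```python
import numpy as np
from scipy.stats import wilcoxon

# Minimal sketch of the paired test design. The efficiency gap is computed
# from district-level two-party votes: wasted votes are all of the loser's
# votes plus the winner's votes beyond the threshold needed to win; the
# absolute net difference over total votes is used here as a magnitude of
# distortion (the signed version attributes the gap to a party).

def efficiency_gap(votes_a, votes_b):
    votes_a, votes_b = np.asarray(votes_a), np.asarray(votes_b)
    total = votes_a + votes_b
    need = total // 2 + 1                    # votes needed to win a district
    a_wins = votes_a > votes_b
    wasted_a = np.where(a_wins, votes_a - need, votes_a)
    wasted_b = np.where(a_wins, votes_b, votes_b - need)
    return abs(wasted_a.sum() - wasted_b.sum()) / total.sum()

# hypothetical per-state efficiency gaps (8 commission states, before vs after)
before = np.array([0.12, 0.09, 0.15, 0.07, 0.11, 0.10, 0.08, 0.13])
after  = np.array([0.08, 0.10, 0.09, 0.06, 0.12, 0.07, 0.09, 0.05])

stat, p = wilcoxon(before, after, alternative="greater")  # H1: before > after
print(f"W = {stat}, one-sided p = {p:.3f}")
```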
Procedia PDF Downloads 73
646 In-Situ Formation of Particle Reinforced Aluminium Matrix Composites by Laser Powder Bed Fusion of Fe₂O₃/AlSi12 Powder Mixture Using Consecutive Laser Melting+Remelting Strategy
Authors: Qimin Shi, Yi Sun, Constantinus Politis, Shoufeng Yang
Abstract:
In-situ preparation of particle-reinforced aluminium matrix composites (PRAMCs) by laser powder bed fusion (LPBF) additive manufacturing is a promising strategy for strengthening traditional Al-based alloys. The laser-driven thermite reaction can be a practical mechanism for in-situ synthesis of PRAMCs. However, introducing oxygen through the addition of Fe₂O₃ makes the powder mixture highly prone to porosity and Al₂O₃ film formation during LPBF, making it challenging to produce dense Al-based materials. This work therefore develops a processing strategy that combines consecutive high-energy laser melting scanning with low-energy laser remelting scanning to prepare PRAMCs from a Fe₂O₃/AlSi12 powder mixture consisting of 5 wt% Fe₂O₃ and the remainder AlSi12 powder; the 5 wt% Fe₂O₃ addition aims to achieve balanced strength and ductility. A high relative density (98.2 ± 0.55 %) was obtained by optimizing the laser melting surface energy density (Emelting) and the laser remelting surface energy density (Eremelting) to Emelting = 35 J/mm² and Eremelting = 5 J/mm². The results further reveal the necessity of increasing Emelting to improve the spreading and wetting of the liquid metal by breaking up the Al₂O₃ films surrounding the molten pools; however, high-energy laser melting produced considerable porosity, including H₂-, O₂- and keyhole-induced pores. The subsequent low-energy laser remelting closed the resulting internal pores, backfilled open gaps, and smoothed the solidified surfaces. As a result, the material was densified by repeating laser melting and laser remelting layer by layer. Even with two laser scans per layer, the microstructure still shows fine cellular Si networks with Al grains inside (grain size of about 370 nm) and in-situ nano-precipitates (Al₂O₃, Si, and Al-Fe(-Si) intermetallics). Finally, the fine microstructure, nano-structured dispersion strengthening, and high degree of densification strengthened the in-situ PRAMCs, which reached a yield strength of 426 ± 4 MPa and a tensile strength of 473 ± 6 MPa. The results can be expected to provide valuable information for processing other powder mixtures with a strong tendency towards porosity and oxide-film formation, given the demonstrated contribution of the laser melting/remelting strategy to densifying the material and obtaining good mechanical properties during LPBF.
Keywords: densification, laser powder bed fusion, metal matrix composites, microstructures, mechanical properties
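Assuming the common definition of surface energy density for LPBF, E = P/(v·h) (laser power over the product of scan speed and hatch spacing), the reported 35 and 5 J/mm² can be reproduced with illustrative parameter combinations as sketched below; these power, speed, and hatch values are assumptions, not the study's actual process parameters.

```python
# Back-of-envelope sketch of surface energy density, assuming E = P / (v * h).
# The parameter combinations below are illustrative choices that reproduce
# the reported 35 and 5 J/mm2, not the study's actual settings.

def surface_energy_density(power_w, speed_mm_s, hatch_mm):
    return power_w / (speed_mm_s * hatch_mm)   # J/mm^2

melting = surface_energy_density(power_w=350.0, speed_mm_s=100.0, hatch_mm=0.10)
remelting = surface_energy_density(power_w=250.0, speed_mm_s=500.0, hatch_mm=0.10)
print(f"E_melting   = {melting:.0f} J/mm^2")    # -> 35 J/mm^2
print(f"E_remelting = {remelting:.0f} J/mm^2")  # -> 5 J/mm^2
```

The factor-of-seven gap between the two scans reflects the strategy itself: the first scan must carry enough energy to break up oxide films and wet the substrate, while the second only needs to re-fuse the near-surface region to close pores without re-introducing keyholing.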
Procedia PDF Downloads 155
645 Acrylamide Concentration in Cakes with Different Caloric Sweeteners
Authors: L. García, N. Cobas, M. López
Abstract:
Acrylamide, a probable carcinogen, is formed in food processed at high temperature (>120 ºC) when the free amino acid asparagine reacts with reducing sugars, mainly glucose and fructose. The repeated heating of cane juices could potentially form acrylamide during brown sugar production. This study aims to determine whether using panela in yogurt cake preparation increases acrylamide formation. A secondary aim is to analyze the acrylamide concentration in four cake formulations with different caloric sweetener ingredients: beet sugar (BS), cane sugar (CS), panela (P), and a panela and chocolate mix (PC). The doughs were obtained by combining ingredients in a planetary mixer. A model system made up of flour (25%), caloric sweetener (25%), eggs (23%), yogurt (15.7%), sunflower oil (9.4%), and brewer's yeast (2%) was applied to the BS, CS, and P cakes. The ingredients of the PC cakes varied: flour (21.5%), panela chocolate (21.5%), eggs (25.9%), yogurt (18%), sunflower oil (10.8%), and brewer's yeast (2.3%). The preparations were baked for 45 minutes at 180 ºC. Moisture was estimated following AOAC methods. Protein was determined by the Kjeldahl method. The ash percentage was calculated from weight loss after pyrolysis (≈600 °C). Fat content was measured using liquid-solid extraction in hydrolyzed raw ingredients and final products. Carbohydrates were determined by difference, and total sugars by the Luff-Schoorl method, based on the iodometric determination of copper ions. Finally, acrylamide content was determined by LC-MS with an isocratic system (phase A: 97.5% water with 0.1% formic acid; phase B: 2.5% methanol), using an internal standard procedure. Statistical analysis was performed using SPSS v.23: one-way analysis of variance determined differences in acrylamide content and compositional analysis, with the caloric sweetener as the fixed effect, and significance levels were determined by applying Duncan's t-test (p<0.05). P cakes showed a lower energy value than the other baked products; their sugar content was similar to that of BS and CS cakes, with 6.1% mean crude protein. The acrylamide content of the caloric sweeteners was similar to previously reported values; however, P and PC showed significantly higher concentrations, probably explained by the production procedure applied. Acrylamide formation depends on the concentration and availability of both reducing sugars and asparagine. Beet sugar samples did not present acrylamide concentrations within the limits of detection and quantification; however, the highest acrylamide content was measured in the BS cakes, which may be due to the higher concentration of reducing sugars and asparagine in the other raw ingredients. The cakes made with panela, cane sugar, or panela with chocolate did not differ in acrylamide content. The lack of asparagine measurements constitutes a limitation. Overall, cakes made with panela showed lower acrylamide formation than products made with beet or cane sugar.
Keywords: beet sugar, cane sugar, panela, yogurt cake
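The fixed-effect comparison the abstract describes amounts to a one-way ANOVA across the four sweetener groups, as sketched below with invented replicate values; the study's post-hoc grouping used Duncan's t-test, which is not reproduced here.

```python
import numpy as np
from scipy.stats import f_oneway

# Minimal sketch of the one-way ANOVA with sweetener type as the fixed
# effect. All acrylamide concentrations below are hypothetical replicates,
# not measurements from the study.

bs = np.array([82.0, 88.0, 85.0])   # beet-sugar cakes, hypothetical (ug/kg)
cs = np.array([61.0, 58.0, 64.0])   # cane-sugar cakes, hypothetical
p_ = np.array([59.0, 63.0, 60.0])   # panela cakes, hypothetical
pc = np.array([62.0, 57.0, 61.0])   # panela-chocolate cakes, hypothetical

F, p_value = f_oneway(bs, cs, p_, pc)
print(f"F = {F:.2f}, p = {p_value:.4f}")
```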
Procedia PDF Downloads 66
644 A Public Health Perspective on Deradicalisation: Re-Conceptualising Deradicalisation Approaches
Authors: Erin Lawlor
Abstract:
In 2008, Time magazine named terrorist rehabilitation one of the best ideas of the year, and the term deradicalisation has since become synonymous with rehabilitation within security discourse. The allure of a "quick fix" for managing terrorist populations (particularly within prisons) has led to a focus on prescriptive programmes with a distinct lack of exploration into the drivers for a person to disengage or deradicalise from violence. It has been argued that, in tackling a snowballing issue, interventions have moved too quickly for both theory development and methodological structure. This overly quick acceptance of a term that lacks rigorous testing, measuring, and monitoring means that there is a distinct lack of an evidence base for deradicalisation being a genuine process or phenomenon, leading academics to retrospectively design frameworks and interventions around a concept that is not truly understood. The UK Home Office has openly acknowledged the lack of empirical data on this subject, and this lack of evidence has a direct impact on policy and intervention development. Extremism and deradicalisation are issues that affect public health outcomes on a global scale, to the point that terrorism has now been added to the list of causes of trauma, both directly, for victims of attacks, and indirectly, for witnesses, children, and ordinary citizens who live in daily fear. This study critiques current deradicalisation discourses to establish whether public health approaches offer opportunities for development. The research begins by exploring the theoretical constructs of both deradicalisation and public health issues, asking: What does deradicalisation involve? Is there an evidential base on which deradicalisation theory has established itself? From what theory are public health interventions devised? What does success look like in both fields? From this base, current deradicalisation practices are explored through examples of work already being carried out. The critiques can be broken into three discussion points: language, the difficulties of conducting empirical studies, and the issues around outcome measurement that deradicalisation interventions face. This study argues that a public health approach to deradicalisation offers the opportunity to bring clarity to the definitions of radicalisation, to identify what could be modified through intervention, and to offer insights into the evaluation of interventions. Instead of focusing on one element of deradicalisation and analysing it in isolation, a public health approach allows for what the literature has pointed out is missing: a comprehensive analysis of current interventions and information on creating efficacy monitoring systems. Interventions, policies, guidance, and practices in both the UK and Australia are compared and contrasted, owing to the joint nature of this research between Sheffield Hallam University and La Trobe University, Melbourne.
Keywords: radicalisation, deradicalisation, violent extremism, public health
Procedia PDF Downloads 66
643 SLAPP Suits: An Encroachment On Human Rights Of A Global Proportion And What Can Be Done About It
Authors: Laura Lee Prather
Abstract:
A functioning democracy is defined by various characteristics, including freedom of speech, equality, human rights, the rule of law, and many more. Lawsuits brought to intimidate speakers, drain the resources of community members, and silence journalists and others who speak out in support of matters of public concern are an abuse of the legal system and an encroachment on human rights. Their impact can have a broad chilling effect, deterring others from speaking out against abuse. This article aims to suggest ways to address this form of judicial harassment. In 1988, University of Denver professors George Pring and Penelope Canan coined the term "SLAPP" when they brought to light a troubling trend of people being sued for speaking out about matters of public concern. Their research demonstrated that thousands of people engaging in public debate and citizen involvement in government have been, and will be, the targets of multi-million-dollar lawsuits filed for the purpose of silencing them and dissuading others from speaking out in the future. SLAPP actions chill information and harm the public at large. Professors Pring and Canan catalogued a tsunami of SLAPP suits filed by public officials, real estate developers, and businessmen against environmentalists, consumers, women's rights advocates, and more. SLAPPs are now seen in every region of the world as a means of intimidating people into silence and are viewed as a global affront to human rights. Anti-SLAPP laws are the antidote to SLAPP suits and, while commonplace in the United States, are only recently being considered in the EU and the UK. This researcher studied more than thirty years of anti-SLAPP legislative policy in the US; the call for evidence and the resultant EU Commission Anti-SLAPP Directive and Member State recommendations; the call for evidence by the UK Ministry of Justice, the response, and the model anti-SLAPP law presented to the UK Parliament; and conducted dozens of interviews with NGOs throughout the EU, UK, and US to identify varying approaches to SLAPP lawsuits, public policy, and support for SLAPP victims. This paper identifies best practices taken from the US, EU, and UK that can be implemented globally to help combat SLAPPs by: (1) raising awareness about SLAPPs, how to identify them, and how to recognize habitual abusers of the court system; (2) engaging governments in the policy discussion on combatting SLAPPs and supporting SLAPP victims; (3) educating judges in recognizing SLAPPs and providing general training on encroachments on human rights; and (4) holding lawyers accountable for undermining the rule of law.
Keywords: Anti-SLAPP Laws and Policy, Comparative media law and policy, EU Anti-SLAPP Directive and Member States Recommendations, International Human Rights of Freedom of Expression
Procedia PDF Downloads 68
642 Impact of Emotional Intelligence and Cognitive Intelligence on Radio Presenters' Performance in All India Radio, Kolkata, India
Authors: Soumya Dutta
Abstract:
This research paper investigates the impact of emotional intelligence and cognitive intelligence on radio presenters' performance at All India Radio, Kolkata (India's public service broadcaster). The classical concept of productivity is the ratio of what is produced to what is required to produce it, but the father of modern management, Peter F. Drucker (1909-2005), defined the productivity of knowledge work and knowledge workers in a new form. The concept of emotional intelligence (EI), on the other hand, originated in the 1920s, when Thorndike (1920) first divided intelligence into three dimensions, i.e., abstract intelligence, mechanical intelligence, and social intelligence. The contribution of Salovey and Mayer (1990) is substantive, as they proposed a model for emotional intelligence, defining EI as the part of social intelligence that measures the ability of an individual to regulate his/her own and others' emotions and feelings. Cognitive intelligence illustrates the specialization of general intelligence in the domain of cognition, drawing on experience and learning about cognitive processes such as memory. The outcomes of past research on emotional intelligence show that it has a positive effect on the socio-mental factors of human resources; positive effects on leaders and followers in terms of performance, results, work and satisfaction; and a positive and significant relationship with teachers' job performance. In this paper, we constructed a conceptual framework based on the theory of emotional intelligence proposed by Salovey and Mayer (1989-1990) and on the compensatory model of emotional intelligence, cognitive intelligence, and job performance proposed by Stéphane Côté and Christopher T. H. Miners (2006). To investigate the impact of emotional intelligence and cognitive intelligence on radio presenters' performance, the sample comprised 59 radio presenters (covering gender, academic qualification, instructional mode, age group, etc.) from the All India Radio, Kolkata station. Questionnaires were prepared based on cognitive items (henceforth called C-based and represented by C1, C2, ..., C5) as well as emotional intelligence items (henceforth called E-based and represented by E1, E2, ..., E20), and were sent to the 59 respondents (presenters) for their responses. Performance scores were collected from the reports of the programme executive of All India Radio, Kolkata. A linear regression was carried out using all the E-based and C-based variables as predictor variables. The possible problem of autocorrelation was tested with the Durbin-Watson (DW) statistic; values of this statistic, almost all within the range of 1.80-2.20, indicate the absence of any significant autocorrelation problem. The possible problem of multicollinearity was tested with the Variance Inflation Factor (VIF); values of around 2 indicate the absence of any significant multicollinearity problem. It is inferred that the performance scores can be linearly regressed on the E-based and C-based scores, which explain 74.50% of the variation in performance.
Keywords: cognitive intelligence, emotional intelligence, performance, productivity
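The diagnostics described in this abstract (an OLS regression followed by Durbin-Watson and VIF checks) can be illustrated with a short, hypothetical sketch. The synthetic data and the variable names E1-E20 and C1-C5 below are assumptions for demonstration only; they mirror the described workflow rather than the study's actual data.

```python
# Hypothetical sketch: regress performance on E-based and C-based scores,
# then check Durbin-Watson (autocorrelation) and VIF (multicollinearity).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 59  # number of presenters in the study

# Synthetic stand-ins for the questionnaire scores
X = pd.DataFrame(
    {f"E{i}": rng.normal(size=n) for i in range(1, 21)}
    | {f"C{i}": rng.normal(size=n) for i in range(1, 6)}
)
y = X.sum(axis=1) + rng.normal(scale=2.0, size=n)  # synthetic performance

Xc = sm.add_constant(X)
model = sm.OLS(y, Xc).fit()

print(f"R-squared: {model.rsquared:.4f}")                  # cf. 74.50% in the paper
print(f"Durbin-Watson: {durbin_watson(model.resid):.2f}")  # ~1.80-2.20 indicates no issue

# VIF per predictor (values around 2 suggest no serious multicollinearity)
for i, col in enumerate(Xc.columns[1:], start=1):
    print(col, round(variance_inflation_factor(Xc.values, i), 2))
```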
Procedia PDF Downloads 164
641 Health Counseling in the Republic of Estonia through Magazines (1930–1940): Striving for a European Lifestyle
Authors: Merle Talvik, Taimi Tulva, Kristi Puusepp, Ulle Ernits
Abstract:
Background: This is a study in the field of the health humanities. The 1930s were years of rapid cultural and economic development in Europe and in Estonia. The urban way of life and a glamorous lifestyle gained popularity, although Estonian society in the 1930s had traditionally been agrarian. People's free time increased and needed to be filled with activities either at home or outside it; therefore, the number of popular magazines aimed at housewives increased. More than 200 magazines and bulletins were published in the Republic of Estonia before the Second World War (in 1934, the population of Estonia was 1,126,000). In the 1930s, the Republic of Estonia faced several challenges in healthcare. Infectious diseases, alcoholism, prostitution and child mortality had to be dealt with, and healers without medical education operated in the villages. For the average person, medical care was quite expensive, and despite efforts, by 1940 only 20% of the population was covered by health insurance. Advice published in popular family magazines provided help in understanding, solving and preventing health problems. Aim: The aim of the study is to analyse health counseling through magazines during the Republic of Estonia (1930-1940) in its historical and cultural context. Method: In total, 420 magazine issues were processed. An extensive textual analysis, as well as an analysis of photographs and illustrations from the perspective of health advice, was carried out to achieve the research objective. Results: Health counseling was written by well-known doctors of the time, leaders of the abstinence movement, and others. Advice covered various areas: prevention of infectious and non-infectious diseases and their treatment with simple methods, first aid, combating sexually transmitted diseases, women's and children's health, mental health, folk medicine techniques, abstinence, healthy eating, skin care, hygiene, and the introduction of pharmacy products. Advice was offered in both written and visual form; photos and illustrations helped to reinforce the health advice. The counseling relied on folk heritage and the health knowledge of the time, and popularized a scientific point of view. Aspirations towards a European lifestyle were reflected in articles and illustrations. Contribution: The article takes an ethnological approach, and its impact lies in understanding the history of health care in its socio-cultural context. The health counseling topics of the 1930s are also applicable in today's health education and research. Health counseling builds on the legacy of the past, and it helps to understand that the past carries into the future and that the main principles of health counseling arise from our history and background.
Keywords: Estonian Republic, health counseling, lifestyle, magazines, media
Procedia PDF Downloads 64
640 Polymer Nanocomposite Containing Silver Nanoparticles for Wound Healing
Authors: Patrícia Severino, Luciana Nalone, Daniele Martins, Marco Chaud, Classius Ferreira, Cristiane Bani, Ricardo Albuquerque
Abstract:
Hydrogels produced from polymers have been used in the development of dressings for wound treatment and tissue revitalization. Our study of a polymer nanocomposite containing silver nanoparticles shows antimicrobial activity and applications in wound healing. The effects are linked to slow oxidation and the release of Ag⁺ ions into the biological environment; penetration of the bacterial cell membrane and metabolic disruption through cell cycle disarrangement also contribute to microbial cell death. The antimicrobial activity of silver has been known for many years, and previous reports show that low silver concentrations are safe for human use. This work aims to develop a hydrogel using natural polymers (sodium alginate and gelatin) combined with silver nanoparticles for wound healing, with antimicrobial properties against cutaneous lesions. The hydrogel development utilized different sodium alginate to gelatin proportions (20:80, 50:50 and 80:20), and the incorporation of silver nanoparticles was evaluated at concentrations of 1.0, 2.0 and 4.0 mM. The physico-chemical properties of the formulation were evaluated using ultraviolet-visible (UV-Vis) absorption spectroscopy, Fourier transform infrared (FTIR) spectroscopy, differential scanning calorimetry (DSC), and thermogravimetric (TG) analysis. The morphological characterization was made using transmission electron microscopy (TEM). A human fibroblast (L929) viability assay was performed, along with a minimum inhibitory concentration (MIC) assessment and an in vivo cicatrizant test. The results suggested that sodium alginate and gelatin in the 80:20 proportion with 4 mM AgNO₃ gave the best hydrogel formulation in the UV-Vis analysis. The nanoparticle absorption spectra showed a maximum band around 430-450 nm, which suggests a spheroidal form. The TG curve exhibited two weight-loss events, and DSC indicated one endothermic peak at 230-250 °C due to sample fusion. The polymers acted as stabilizers of the nanoparticles, defining their size and shape. The L929 human fibroblast viability assay gave 105% cell viability for the negative control, while gelatin presented 96% viability, alginate:gelatin (80:20) 96.66%, and alginate 100.33%. The sodium alginate:gelatin (80:20) formulation exhibited significant antimicrobial activity, with minimal bacterial growth at 1.06 mg·mL⁻¹ for Pseudomonas aeruginosa and 0.53 mg·mL⁻¹ for Staphylococcus aureus. The in vivo results showed a significant reduction in wound surface area: on the seventh day, the hydrogel-nanoparticle formulation had reduced the total area of injury by 81.14%, while the control reached a 45.66% reduction. These results suggest that the silver-hydrogel nanoformulation exhibits potential for wound dressing therapeutics.
Keywords: nanocomposite, wound healing, hydrogel, silver nanoparticle
Procedia PDF Downloads 101
639 Estimating the Efficiency of a Meta-Cognitive Intervention Program to Reduce the Risk Factors of Teenage Drivers with Attention Deficit Hyperactivity Disorder While Driving
Authors: Navah Z. Ratzon, Talia Glick, Iris Manor
Abstract:
Attention Deficit Hyperactivity Disorder (ADHD) is a chronic disorder that affects the sufferer's functioning throughout life and in various spheres of activity, including driving. Difficulties in cognitive functioning and executive functions are often part and parcel of the ADHD diagnosis and thus form a risk factor in driving. Studies examining the effectiveness of intervention programs for improving and rehabilitating driving in typical teenagers have been relatively few, while studies on similar programs for teenagers with ADHD have been especially scarce. The aim of the present study was to examine the effectiveness of a metacognitive occupational therapy intervention program for reducing risk factors in driving among teenagers with ADHD. The study included 37 teenagers aged 17 to 19: 23 teenagers with ADHD, divided into an experimental group (11) and a control group (12), as well as 14 non-ADHD teenagers forming a second control group. All teenagers taking part in the study were examined in the Tel Aviv University driving lab and underwent cognitive diagnostics and a driving simulator test. Every subject in the intervention group took part in three assessment meetings and two metacognitive treatment meetings; the control groups took part in two assessment meetings with a follow-up meeting three months later. In all of the study's groups, the treatment's effectiveness was tested by comparing monitoring results on the driving simulator at the first and second evaluations. In addition, the driving of five subjects from the intervention group was monitored continuously from a month prior to the start of the intervention, through the month of the intervention, and for a further month until the end of the study. In the ADHD control group, the driving of four subjects was monitored for a period of three months from the end of the first evaluation. The study's findings were affected by the fact that the ADHD control group differed from the two other groups, exhibiting ADHD characteristics manifested by impaired executive functions and lower metacognitive abilities relative to their peers. The study found partial, moderate, non-significant correlations between driving skills and cognitive functions, executive functions, and perceptions of and attitudes towards driving. According to the driving simulator test results and the limited sampling of actual driving, a metacognitive occupational therapy intervention may be effective in reducing risk factors in driving among teenagers with ADHD relative to their peers with and without ADHD. In summary, the results of the present study point in a positive direction that speaks to the viability of a metacognitive occupational therapy intervention program for reducing risk factors in driving. A further study is required with a larger number of subjects, more hours of actual driving monitoring, and random assignment of subjects to the various groups.
Keywords: ADHD, driving, driving monitoring, metacognitive intervention, occupational therapy, simulator, teenagers
Procedia PDF Downloads 306
638 VISMA: A Method for System Analysis in Early Lifecycle Phases
Authors: Walter Sebron, Hans Tschürtz, Peter Krebs
Abstract:
The choice of applicable analysis methods in safety or systems engineering depends on the depth of knowledge about a system and on the respective lifecycle phase. However, the analysis method chain still shows gaps, as it should support system analysis throughout the lifecycle of a system, from a rough concept in the pre-project phase until end-of-life. This paper's goal is to discuss an analysis method, the VISSE Shell Model Analysis (VISMA) method, which aims at closing the gap in the early system lifecycle phases, such as the conceptual, pre-project, or project start phase. It was originally developed to aid in the definition of the system boundary of electronic system parts, e.g., a control unit for a pump motor, but it can also be applied to non-electronic system parts. The VISMA method is a graphical, sketch-like method that stratifies a system and its parts into inner and outer shells, like the layers of an onion. It analyses a system in a two-step approach, from the innermost to the outermost components, followed by the reverse direction. To ensure a complete view of a system and its environment, the VISMA should be performed by (multifunctional) development teams. To introduce the method, a set of rules and guidelines has been defined in order to enable a proper shell build-up. In the first step, the innermost system, named the system under consideration (SUC), is selected as the focus of the subsequent analysis. Then its directly adjacent components, responsible for providing input to and receiving output from the SUC, are identified; these components form the first shell around the SUC. Next, the input and output components of the components in the first shell are identified and form the second shell around the first one. Continuing this way, shell after shell is added with its respective parts until the border of the complete system (the external border) is reached. Finally, two external shells are added to complete the system view: the environment shell and the use-case shell. This system view is also stored for future use. In the second step, the shells are examined in the reverse direction (outside to inside) in order to remove superfluous components or subsystems. Input chains to the SUC, as well as output chains from the SUC, are described graphically via arrows to highlight functional chains through the system. As a result, this method offers a clear and graphical description and overview of a system, its main parts and its environment, while the focus remains on a specific SUC. It helps to identify the interfaces and interfacing components of the SUC, as well as important external interfaces of the overall system. It supports the identification of the first internal and external hazard causes and causal chains. Additionally, the method promotes a holistic picture and cross-functional understanding of a system, its contributing parts, internal relationships and possible dangers within a multidisciplinary development team.
Keywords: analysis methods, functional safety, hazard identification, system and safety engineering, system boundary definition, system safety
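The shell build-up of step one can be read as a breadth-first stratification of a component graph around the SUC. The sketch below illustrates this reading under assumptions: the component names, the graph representation, and the function build_shells are illustrative inventions, not part of the published method.

```python
# A minimal sketch of the VISMA shell build-up, assuming components and
# their input/output relations are modelled as a directed graph.
from collections import defaultdict

def build_shells(edges, suc):
    """Stratify components into shells around the system under
    consideration (SUC): shell 1 holds its direct input/output
    partners, shell 2 their partners, and so on (step one of VISMA)."""
    neighbours = defaultdict(set)
    for src, dst in edges:        # src provides input to dst
        neighbours[src].add(dst)  # output direction
        neighbours[dst].add(src)  # input direction
    shells, visited, frontier = [], {suc}, {suc}
    while frontier:
        frontier = {n for c in frontier for n in neighbours[c]} - visited
        if frontier:
            shells.append(sorted(frontier))
            visited |= frontier
    return shells

# Toy pump-motor example with the control unit as SUC
edges = [
    ("sensor", "control_unit"), ("control_unit", "motor_driver"),
    ("motor_driver", "pump_motor"), ("power_supply", "motor_driver"),
    ("pump_motor", "pump"),
]
for i, shell in enumerate(build_shells(edges, "control_unit"), start=1):
    print(f"shell {i}: {shell}")
```

Running this prints shell 1 as the direct partners of the control unit (sensor, motor driver), shell 2 as their partners (pump motor, power supply), and shell 3 as the pump, i.e., the onion-layer stratification the method describes; the environment and use-case shells of the method would be appended manually after the external border is reached.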
Procedia PDF Downloads 225
637 The Role of Interest Groups in Foreign Policy: Assessing the Influence of the 'Pro-Jakarta Lobby' in Australia and Indonesia's Bilateral Relations
Authors: Bec Strating
Abstract:
This paper examines the ways that domestic politics and pressure, generated through lobbying, public diplomacy campaigns and other tools of soft power, contribute to the formation of the short-term and long-term national interests, priorities and strategies of states in their international relations. It primarily addresses the conceptual problems regarding the kinds of influence that lobby groups wield in foreign policy and how this influence might be assessed. Scholarly attention has been paid to influential foreign policy lobbies and interest groups, particularly in the area of US foreign policy; less attention has been paid to how lobby groups might influence the foreign policy of a middle power such as Australia. This paper examines some of the methodological complexities in developing and conducting a research project that can measure the nature and influence of lobbies on foreign affairs priorities and activities. It uses Australian foreign policy, in the context of Australia's historical bilateral relationship with Indonesia, as a case study for considering the broader issue of domestic influences on foreign policy. Specifically, this paper uses the so-called 'pro-Jakarta lobby' as an example of an interest group. The term 'pro-Jakarta lobby' is used in media commentary and scholarship to describe an amorphous collection of individuals who have sought to influence Australian foreign policy in favour of Indonesia. The term was originally applied to a group of Indonesia experts at the Australian National University in the 1980s but expanded to include journalists, think tanks and key diplomats. The concept of the 'pro-Jakarta lobby' was developed largely through criticisms of Australia's support for Indonesian sovereignty over East Timor and West Papua; pro-independence supporters were integral to creating the 'lobby' through their rhetoric and criticisms of its influence on Australian foreign policy. In these critical narratives, the 'pro-Jakarta lobby' supported a realist approach to relations with Indonesia during the years of President Suharto's regime, one that saw appeasement of Indonesia as taking precedence over the values of democracy and human rights. The lobby was viewed as integral in embedding a form of 'foreign policy exceptionalism' towards Indonesia in Australian policy-making circles. However, little critical and scholarly attention has been paid to the nature, aims, strategies and activities of the 'pro-Jakarta lobby'. This paper engages with methodological issues of foreign policy analysis: What was the 'pro-Jakarta lobby'? Why was it considered more successful than other activist groups in shaping policy? And how can its influence on Australia's approach to Indonesia be tested against other contingent factors shaping policy? In addressing these questions, this case study will assist in addressing a broader scholarly concern about the capacities of collectives or individuals to shape and direct the foreign policies of states.
Keywords: foreign policy, interest groups, Australia, Indonesia
Procedia PDF Downloads 343
636 Emergence of Neurodiversity and Awareness of Autism Among School Teachers: A Preliminary Survey
Authors: Tanvi Rajesh Sanghavi
Abstract:
Introduction: Neurodiversity is a concept which captures the different ways in which every brain functions and treats these differences as part of normal variation. It is a strength-based approach which focuses on an individual's strengths and capabilities and believes in providing support wherever necessary. In many parts of the world, those diagnosed with autism spectrum disorder (ASD) have been ostracized and ridiculed due to their sensory and communication differences. Hence, it becomes important for teachers to have knowledge about autism and understand the needs of children with autism. Need: India is rich in cultural, linguistic and religious diversity, and it is important to study neurodiversity in such a population for a better understanding of neurodiverse individuals and appropriate intervention. Aim and objectives: This study examines teachers' knowledge of the causes, traits and educational requirements of children with ASD. It also aims to find out whether mainstream schools actually provide training programs for teachers to manage such children, along with the necessary accommodations. Method: The current study was a cross-sectional study conducted among school teachers. A total of 30 school teachers took part in the study and were enrolled after informed consent. The participants were directed to a Google form consisting of objective questions. The first part of the questionnaire elicited information about school, teaching experience, qualification, etc., with specific questions on attending or conducting sensitization and professional programs in regard to the care of autistic children. The second part of the questionnaire consisted of basic questions on the teachers' understanding of the diagnosis, traits, causes and road to recovery, and on the educational and communication needs of autistic children from the teacher's perspective. The responses were tabulated and analyzed descriptively. Results: Most of the teachers had 5-10 years of teaching experience. The majority of the teachers used the term "special child" for autistic children. Around 54.8% of the teachers (17 teachers) felt that the parents of autistic children should teach their children adaptive skills, and 41.9% felt that they should seek medical intervention. About 50% of the teachers felt that the cause of autism is related to prenatal maternal factors, and about 40% felt that the cause is genetic. Only a small percentage of teachers felt that they were trained to manage children with autism, and more than 50% mentioned that their schools do not conduct training programs for managing these children. Discussion and conclusion: In this study, the knowledge and perspectives of teachers on children with ASD were studied. The most widely held contemporary belief is that genetic factors play a major part in the development of ASD, although the existing evidence is muddled, with numerous opposing perspectives on the nature of this mechanism. It is worth noting that any culture's level of humanity is mirrored in how that society "treats" its vulnerable population.
Keywords: autism, neurodiversity, awareness, education
Procedia PDF Downloads 17
635 Performance Improvement of a Single-Flash Geothermal Power Plant Design in Iran: Combining with Gas Turbines and CHP Systems
Authors: Morteza Sharifhasan, Davoud Hosseini, Mohammad. R. Salimpour
Abstract:
Geothermal energy has come to be considered an important renewable energy source worldwide in recent years due to rising environmental pollution concerns. Low- and medium-grade geothermal heat (< 200 ºC) is commonly employed for space heating and domestic hot water supply; however, there is also much interest in converting this abundant low- and medium-grade heat into electrical power. The Iranian Ministry of Power, through the Iran Renewable Energy Organization (SUNA), is going to build the first geothermal power plant (GPP) in Iran, in the Sabalan area in the northwest of the country. This project is a 5.5 MWe single-flash steam condensing power plant. The efficiency of GPPs is low due to the relatively low pressure and temperature of the saturated steam. In addition to GPPs, gas turbines (GTs) are also known for their relatively low efficiency. The Iranian Ministry of Power is trying to increase the efficiency of these GTs by adding bottoming steam cycles to form what is known as a combined gas/steam cycle. One of the most effective methods for increasing efficiency is combined heat and power (CHP). This paper investigates the feasibility of superheating the saturated steam that enters the steam turbine of the Sabalan GPP (SGPP-1) to improve the energy efficiency and power output of the GPP. This is achieved by combining the GPP with two 3.5 MWe GTs, whose hot exhaust gases are utilized in a superheater similar to that used in the heat recovery steam generator of a combined gas/steam cycle. Moreover, the brine separated in the separator, the hot gases leaving the GTs, and the superheater are used for the supply of domestic hot water (in this paper, the cycle combining GTs and CHP systems is named the modified SGPP-1). In this research, based on the heat balance presented in the basic design documents of the SGPP-1, a mathematical/numerical model of the power plant is developed together with the mentioned GTs and CHP systems. Based on the hot water demand, the amount of hot gas passing directly through the CHP section can be adjusted. For example, during summer, when less hot water is required, the hot gases leaving both GTs pass through the superheater and then the CHP systems. On the contrary, to supply the hot water required during winter, the hot gases of one of the GTs enter the CHP section directly, without passing through the superheater. The results show an increase in thermal efficiency of up to 40% with the modified SGPP-1; since the gross efficiency of SGPP-1 is 9.6%, this increase is significant. The power output of SGPP-1 increases by up to 40% in summer (from 5.5 MW to 7.7 MW) while the GTs' power output remains almost unchanged. Meanwhile, the combined output increases from the 12.5 MW [5.5 + (2 × 3.5)] of the two separate plants to 14.7 MW [7.7 + (2 × 3.5)] for the modified SGPP-1, more than 17% above the output of the two separate plants. The modified SGPP-1 is capable of producing 215 t/hr of hot water (90 ºC) for domestic use in the winter months.
Keywords: combined cycle, CHP, efficiency, gas turbine, geothermal power plant, power output
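The quoted gains can be verified with a few lines of arithmetic. The sketch below only recomputes the figures stated in the abstract; no new data is assumed.

```python
# Worked check of the output figures quoted in the abstract.
gpp_base = 5.5      # MWe, SGPP-1 single-flash output
gpp_modified = 7.7  # MWe, SGPP-1 output with superheated steam (summer)
gt = 3.5            # MWe, each of the two gas turbines

separate = gpp_base + 2 * gt      # 12.5 MWe as stand-alone plants
combined = gpp_modified + 2 * gt  # 14.7 MWe as the modified SGPP-1

print(f"GPP output gain: {gpp_modified / gpp_base - 1:.0%}")    # 40%
print(f"Combined-cycle gain: {combined / separate - 1:.1%}")    # 17.6%
```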
Procedia PDF Downloads 322
634 Role of Institutional Quality as a Key Determinant of FDI Flows in Developing Asian Economies
Authors: Bikash Ranjan Mishra, Lopamudra D. Satpathy
Abstract:
In the wake of the phenomenal surge in international business over recent decades, both the developed and developing economies around the world are in fierce competition to attract more and more FDI flows. While the developed countries have marched ahead in the race, the developing countries, especially the Asian economies, have followed at a rapid pace. While most previous studies have analysed the role of institutional quality in the promotion of FDI flows to developing countries, very few have taken an integrated approach, examining the comprehensive impact of institutional quality, globalization pattern and domestic financial development on FDI flows. In this context, the paper contributes to the literature in two important ways. Firstly, two composite indices, of institutional quality and of domestic financial development, are constructed for the Asian countries, in contrast to earlier studies that resort to a single variable to indicate institutional quality and domestic financial development. Secondly, the impact of these variables on FDI flows is investigated through their interaction with geographical region. The study uses panel data covering 1996 to 2012 for twenty Asian developing countries, selected with an emphasis on the quality of institutions, from the geographical regions of eastern, south-eastern, southern and western Asia. Control of corruption, rule of law, regulatory quality, government effectiveness, political stability, and voice and accountability are used as indicators of institutional quality. Besides these, the study takes into account domestic credit to the public and private sectors and in stock markets as domestic financial indicators. In the model specification, a factor analysis is first performed to reduce the large set of highly intercorrelated determinants to a manageable size. Afterwards, a reduced version of the model is estimated with the extracted factors, in the form of indices, as independent variables, along with a set of control variables. It is found that the institutional quality index and the index of globalization exert a significant effect on the FDI inflows of the host countries; in contrast, the domestic financial index does not appear to play a significant role. Finally, some robustness tests are performed to make sure that the results are not sensitive to temporal and spatial unobserved heterogeneity. On the basis of this study, one general policy inference can be drawn: the governments of these developing countries should strengthen their domestic institutions, both financial and non-financial. In addition, welfare policies should also target rapid globalization. If the financial and non-financial institutions of these developing countries become sound and the countries grow more globalized in the economic, social and political domains, they can attract greater FDI inflows, which will subsequently result in the advancement of these economies.
Keywords: Asian developing economies, FDI, institutional quality, panel data
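The two-stage strategy described above (factor reduction of correlated indicators into composite indices, followed by a panel regression of FDI on those indices) can be sketched as follows. Everything here is an assumption for illustration: the synthetic panel, the column names, the use of PCA as a stand-in for the paper's factor analysis, and dummy-variable fixed effects as one standard way to absorb country and year heterogeneity.

```python
# Hedged sketch: composite index via PCA, then fixed-effects panel OLS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
countries = [f"c{i}" for i in range(20)]   # 20 Asian developing countries
years = range(1996, 2013)                  # 1996-2012 panel
panel = pd.DataFrame(
    [(c, y) for c in countries for y in years], columns=["country", "year"]
)

# Synthetic stand-ins for the six institutional-quality indicators
inst_cols = ["corruption", "rule_of_law", "reg_quality",
             "gov_effect", "stability", "voice"]
for col in inst_cols:
    panel[col] = rng.normal(size=len(panel))

# Stage 1: first principal component as the institutional-quality index
panel["inst_index"] = PCA(n_components=1).fit_transform(panel[inst_cols]).ravel()
panel["fdi"] = 0.5 * panel["inst_index"] + rng.normal(size=len(panel))

# Stage 2: regression with country and year fixed effects via dummies
fit = smf.ols("fdi ~ inst_index + C(country) + C(year)", data=panel).fit()
print(fit.params["inst_index"], fit.pvalues["inst_index"])
```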
Procedia PDF Downloads 314
633 Connecting the Dots: Bridging Academia and National Community Partnerships When Delivering Healthy Relationships Programming
Authors: Nicole Vlasman, Karamjeet Dhillon
Abstract:
Over the past four years, the Healthy Relationships Program has been delivered in community organizations and schools across Canada: more than 240 groups have been facilitated in collaboration with 33 organizations, engaging 2,157 youth in the programming. The purpose and scope of the Healthy Relationships Program are to offer sustainable, evidence-based skills through small-group implementation in order to prevent violence and promote positive, healthy relationships among youth. The program's development has included extensive networking at regional and national levels. The Healthy Relationships Program is currently being implemented, adapted, and researched within the Resilience and Inclusion through Strengthening and Enhancing Relationships (RISE-R) project. Alongside the project's research objectives, the RISE-R team has worked to share the project's ongoing findings virtually through a slow ontology approach. Slow ontology is a practice integrated into project systems and structures whereby slowing the pace and volume of outputs offers creative opportunities; creative production reveals different layers of success and complements the project, providing building blocks for sustainability. As a result of integrating a slow ontology approach, the RISE-R team has developed a Geographic Information System (GIS) that documents local landscapes through a Story Map feature and, more specifically, video installations. Video installations capture the cartography of space and place within the context of singular, diverse community spaces (case studies). By documenting spaces via human connections, the project captures narratives, which further enhance the voices and faces of the community within the larger project scope. This GIS project aims to create a visual and interactive flow of information that complements the project's mixed-methods research approach. In conclusion, creative project development in the form of a geographic information system can provide learning and engagement opportunities at many levels (i.e., within community organizations and educational spaces, or with the general public). In each of these disconnected spaces, fragmented stories are connected through a visual display of project outputs. A slow ontology practice within the context of the RISE-R project documents activities on the fringes and within internal structures, primarily by documenting project successes as further contributions to the Centre for School Mental Health framework (philosophy, recruitment techniques, allocation of resources and time, and a shared commitment to evidence-based products).
Keywords: community programming, geographic information system, project development, project management, qualitative, slow ontology
Procedia PDF Downloads 155
632 Evaluation of the Cytotoxicity and Cellular Uptake of a Cyclodextrin-Based Drug Delivery System for Cancer Therapy
Authors: Caroline Mendes, Mary McNamara, Orla Howe
Abstract:
Drug delivery systems are proposed for use in cancer treatment to specifically target cancer cells and deliver a therapeutic dose without affecting normal cells. For that purpose, the use of folate receptors (FRs) can be considered a key strategy, since they are commonly over-expressed in cancer cells. In this study, cyclodextrins (CDs) have been used as vehicles to target FRs and deliver the chemotherapeutic drug methotrexate (MTX). CDs have the ability to form inclusion complexes, in which molecules of suitable dimensions are included within their cavities. Here, β-CD has been modified using folic acid so as to specifically target the FR; thus, this drug delivery system consists of β-CD, folic acid and MTX (CDEnFA:MTX). Cellular uptake of folic acid is mediated with high affinity by folate receptors, while the cellular uptake of antifolates such as MTX is mediated with high affinity by the reduced folate carriers (RFCs). This study addresses the gene (mRNA) and protein expression levels of FRs and RFCs in the cancer cell lines CaCo-2, SKOV-3, HeLa, MCF-7 and A549 and the normal cell line BEAS-2B, quantified by real-time polymerase chain reaction (real-time PCR) and flow cytometry, respectively. From that, four cell lines with different levels of FRs were chosen for cytotoxicity assays of MTX and CDEnFA:MTX using the MTT assay. Real-time PCR and flow cytometry data demonstrated that all cell lines ubiquitously express moderate levels of RFC. These experiments also showed that levels of FR protein are high in CaCo-2 cells, moderate in SKOV-3, HeLa and MCF-7 cells, and low in A549 and BEAS-2B cells; FRs are thus more highly expressed in all the cancer cell lines analysed than in the normal cell line BEAS-2B. The cell lines CaCo-2, MCF-7, A549 and BEAS-2B were used in the cell viability assays. Forty-eight hours of treatment with the free drug and the complex resulted in IC50 values of 93.9 ± 15.2 µM and 56.0 ± 4.0 µM for CaCo-2 for free MTX and CDEnFA:MTX, respectively; 118.2 ± 16.8 µM and 97.8 ± 12.3 µM for MCF-7; 36.4 ± 6.9 µM and 75.0 ± 10.5 µM for A549; and 132.6 ± 16.1 µM and 288.1 ± 26.3 µM for BEAS-2B. These results demonstrate that free MTX is more toxic towards cell lines expressing low levels of FR, such as BEAS-2B. More importantly, the inclusion complex CDEnFA:MTX showed greater cytotoxicity than the free drug towards the high-FR-expressing CaCo-2 cells, indicating that it has the potential to target this receptor, enhancing the specificity and efficiency of the drug. Cell imaging by confocal microscopy has allowed visualisation of FR targeting in cancer cells, as well as identification of the internalisation pathway of the drug. Hence, the cellular uptake and internalisation process of this drug delivery system is being addressed.
Keywords: cancer treatment, cyclodextrins, drug delivery, folate receptors, reduced folate carriers
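The abstract does not state how the IC50 values were extracted from the MTT readings, but a common approach is to fit a four-parameter logistic dose-response curve to the viability data. The sketch below illustrates that approach under assumptions: the dose and viability numbers are synthetic, and the choice of fitting procedure is not taken from the paper.

```python
# Minimal sketch: estimate an IC50 by fitting a four-parameter logistic
# curve to (synthetic) MTT viability data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic: viability as a function of dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose = np.array([1, 3, 10, 30, 100, 300], dtype=float)       # uM, synthetic
viability = np.array([98, 95, 85, 62, 38, 15], dtype=float)  # % of control

params, _ = curve_fit(four_pl, dose, viability,
                      p0=[0.0, 100.0, 50.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Fitted IC50: {ic50:.1f} uM (Hill slope {hill:.2f})")
```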
Procedia PDF Downloads 310
631 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring
Authors: Zheng Wang, Zhenhong Li, Jon Mills
Abstract:
Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g., landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR, with its fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high-temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), delays in producing displacement maps, and the loss of temporal evolution information. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline-subset concept and processes continuous GBSAR images unit by unit: the images within a window form a basic unit. With this strategy, the RAM requirement is reduced to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain keeps temporarily coherent pixels, which are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of continuous data, so the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on the stack of images in each single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. The temporally averaged images are then processed by a dedicated interferometry procedure integrated with advanced interferometric SAR algorithms, such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the sub-millimetre level are achieved in several applications (e.g., a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring across a wide range of scientific and practical applications.
Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring
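The unit-by-unit strategy of the continuous chain can be sketched as a sliding window over the image stream, with only one window of single-look complex (SLC) images held in memory at a time. The sketch below is an illustration under assumptions: the loader, the wavelength, the master-image choice, and the omission of phase unwrapping and coherence masking are all simplifications not taken from the paper.

```python
# Hedged sketch of unit-by-unit continuous GBSAR processing: only one
# window ("unit") of SLC images is in RAM at a time.
import numpy as np

UNIT_SIZE = 10       # images per unit (window); an assumption
WAVELENGTH = 0.0175  # m, e.g. Ku-band; an assumption

def load_slc(index):
    """Placeholder loader; would read one complex SLC image from disk."""
    rng = np.random.default_rng(index)
    return np.exp(1j * rng.normal(scale=0.1, size=(64, 64)))

def process_unit(indices):
    """Form interferograms against the unit's first image and convert
    the interferometric phase to line-of-sight displacement."""
    master = load_slc(indices[0])
    displacements = []
    for i in indices[1:]:
        interferogram = load_slc(i) * np.conj(master)
        phase = np.angle(interferogram)  # unwrapping/masking omitted here
        displacements.append(-phase * WAVELENGTH / (4 * np.pi))
    return displacements

# Slide over a (potentially unbounded) acquisition stream unit by unit;
# RAM holds only the current unit's images at any moment.
for start in range(0, 100, UNIT_SIZE):
    unit_indices = list(range(start, start + UNIT_SIZE))
    displacement_maps = process_unit(unit_indices)
```

Because each unit is processed and released before the next is loaded, the memory footprint stays constant regardless of how long the acquisition runs, which is the property the abstract attributes to the continuous chain.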
Procedia PDF Downloads 161