151 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)
Authors: Ahmad Kayvani Fard, Yehia Manawi
Abstract:
Qatar’s primary source of fresh water is seawater desalination. Among the major processes commercially available on the market, the most common large-scale techniques are Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and Reverse Osmosis (RO). Although commonly used, these three processes are highly expensive owing to high energy input requirements and high operating costs associated with maintenance and the stress induced on the systems in harsh alkaline media. Beyond cost, the environmental footprint of these desalination techniques is significant: damage to the marine ecosystem, extensive land use, and the discharge of tons of greenhouse gases with a large carbon footprint. One less energy-intensive technique based on membrane separation, pursued to reduce both the carbon footprint and operating costs, is membrane distillation (MD). Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted growing attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared to other commercially available technologies (MSF and MED), and especially RO, are the reduction of membrane and module stress due to the absence of trans-membrane pressure, less impact of contaminant fouling on the distillate since only water vapor is transferred, the ability to utilize low-grade or waste heat from the oil and gas industries to heat the feed to the required temperature difference across the membrane, superior water quality, and relatively lower capital and operating costs. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested.
The objective of this study is to analyze the characteristics and morphology of membranes suitable for DCMD through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and laboratory data are used to compare DCMD distillate quality with that of other desalination techniques and standards. SEM analysis showed that the PTFE membrane used in the study has a contact angle of 127° and a highly porous surface, supported by a less porous, larger-pore-size PP membrane. A study of the effect of feed salinity and temperature on distillate water quality, based on ICP and IC analysis, showed that at any salinity and across different feed temperatures (up to 70 °C) the electrical conductivity of the distillate is less than 5 μS/cm, corresponding to 99.99% salt rejection. DCMD proved to be a feasible and effective process capable of consistently producing high-quality distillate from very high-salinity feed solutions (i.e., 100,000 mg/L TDS), with a substantial quality difference compared to other desalination methods such as RO and MSF.
Keywords: membrane distillation, waste heat, seawater desalination, membrane, freshwater, direct contact membrane distillation
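The 99.99% salt rejection figure follows directly from the feed and distillate concentrations. A minimal sketch (the 10 mg/L distillate TDS below is an illustrative assumption consistent with the reported <5 μS/cm conductivity, not a measured value):

```python
# Sketch: salt rejection from feed and distillate concentrations.
# Feed TDS of 100,000 mg/L comes from the abstract; the distillate
# TDS of 10 mg/L is an assumed illustrative value.

def salt_rejection(feed_tds_mg_l: float, distillate_tds_mg_l: float) -> float:
    """Percentage salt rejection R = (1 - C_distillate / C_feed) * 100."""
    return (1.0 - distillate_tds_mg_l / feed_tds_mg_l) * 100.0

rejection = salt_rejection(100_000.0, 10.0)
print(f"salt rejection: {rejection:.2f}%")  # 99.99%
```

The same relation is what a 99.99% rejection claim implies: the distillate carries only one part in ten thousand of the feed's dissolved solids.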
Procedia PDF Downloads 227
150 Financing the Welfare State in the United States: The Recent American Economic and Ideological Challenges
Authors: Rafat Fazeli, Reza Fazeli
Abstract:
This paper focuses on the study of the welfare state and social wage in the leading liberal economy of the United States. The welfare state acquired broad acceptance as a major socioeconomic achievement of liberal democracy in the Western industrialized countries during the postwar boom period. The modern and modified vision of capitalist democracy offered, on the one hand, the possibility of a high growth rate and, on the other, the possibility of continued progression of a comprehensive system of social support for a wider population. The economic crises of the 1970s provided the ground for a great shift in economic policy and ideology in several Western countries, most notably the United States and the United Kingdom (and to a lesser extent Canada under Prime Minister Brian Mulroney). In the 1980s, the free-market-oriented reforms undertaken under Reagan and Thatcher greatly affected the economic outlook not only of the United States and the United Kingdom but of the whole Western world. The movement behind this shift in policy is often called neo-conservatism. The neoconservatives blamed the transfer programs for the decline in economic performance during the 1970s and argued that cuts in spending were required to return to the golden age of full employment. The agenda of both the Reagan and Thatcher administrations was rolling back the welfare state, and their budgets included a wide range of cuts to social programs. The question is how successful Reagan's and Thatcher's efforts to achieve retrenchment were. The paper involves an empirical study of the distributive role of the welfare state in the two countries. Other studies have often concentrated on the redistributive effect of fiscal policy on different income brackets. This study examines the net benefit/burden position of the working population with respect to state expenditures and taxes in the postwar period.
This measurement will enable us to find out whether the working population has received a net gain (or net social wage). This study will discuss how the expansion of social expenditures and the trend of the ‘net social wage’ can be linked to distinct forms of economic and social organization. This study provides an empirical foundation for analyzing the growing significance of the ‘social wage’, or the collectivization of consumption, and the share of social or collective consumption in the total consumption of the working population in recent decades. The paper addresses three other major questions. The first question is whether the expansion of social expenditures has posed any drag on capital accumulation and economic growth. The findings of this study provide an analytical foundation for evaluating the neoconservative claim that the welfare state is itself the source of economic stagnation that leads to the crisis of the welfare state. The second question is whether the increasing ideological challenges from the right and the competitive pressures of globalization have led to retrenchment of the American welfare state in recent decades. The third question is how social policies have performed in the presence of rising inequalities in recent decades.
Keywords: the welfare state, social wage, the United States, limits to growth
Procedia PDF Downloads 209
149 Supplementing Aerial-Roving Surveys with Autonomous Optical Cameras: A High Temporal Resolution Approach to Monitoring and Estimating Effort within a Recreational Salmon Fishery in British Columbia, Canada
Authors: Ben Morrow, Patrick O'Hara, Natalie Ban, Tunai Marques, Molly Fraser, Christopher Bone
Abstract:
Relative to commercial fisheries, recreational fisheries are often poorly understood and pose various challenges for monitoring frameworks. In British Columbia (BC), Canada, Pacific salmon are heavily targeted by recreational fishers while also being a key source of nutrient flow and crucial prey for a variety of marine and terrestrial fauna, including endangered Southern Resident killer whales (Orcinus orca). Although commercial fisheries were historically responsible for the majority of salmon retention, recreational fishing now comprises both greater effort and retention. The current monitoring scheme for recreational salmon fisheries involves aerial-roving creel surveys. However, this method has been identified as costly and having low predictive power as it is often limited to sampling fragments of fluid and temporally dynamic fisheries. This study used imagery from two shore-based autonomous cameras in a highly active recreational fishery around Sooke, BC, and evaluated their efficacy in supplementing existing aerial-roving surveys for monitoring a recreational salmon fishery. This study involved continuous monitoring and high temporal resolution (over one million images analyzed in a single fishing season), using a deep learning-based vessel detection algorithm and a custom image annotation tool to efficiently thin datasets. This allowed for the quantification of peak-season effort from a busy harbour, species-specific retention estimates, high levels of detected fishing events at a nearby popular fishing location, as well as the proportion of the fishery management area represented by cameras. Then, this study demonstrated how it could substantially enhance the temporal resolution of a fishery through diel activity pattern analyses, scaled monthly to visualize clusters of activity. This work also highlighted considerable off-season fishing detection, currently unaccounted for in the existing monitoring framework. 
These results demonstrate several distinct applications of autonomous cameras for providing enhanced detail unavailable in the current monitoring framework, each of which has important considerations for the managerial allocation of resources. Further, the approach and methodology can benefit other studies that apply shore-based camera monitoring, supplement aerial-roving creel surveys to improve fine-scale temporal understanding, inform the optimal timing of creel surveys, and improve the predictive power of recreational stock assessments to preserve important and endangered fish species.
Keywords: cameras, monitoring, recreational fishing, stock assessment
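The diel activity pattern analysis described above amounts to binning timestamped vessel detections by hour of day. A minimal sketch, assuming a list of detection timestamps (the values below are invented, not study data):

```python
# Sketch: hour-of-day (diel) activity profile from timestamped vessel
# detections, as one might build from a detection model's output.
# The timestamps are illustrative assumptions.
from collections import Counter
from datetime import datetime

detections = [
    datetime(2020, 7, 1, 5, 42),
    datetime(2020, 7, 1, 6, 10),
    datetime(2020, 7, 1, 6, 55),
    datetime(2020, 7, 2, 18, 3),
]

# Count detections per hour of day across all days
diel_profile = Counter(ts.hour for ts in detections)
for hour in sorted(diel_profile):
    print(f"{hour:02d}:00  {diel_profile[hour]} detection(s)")
```

Scaled monthly, the same aggregation yields the clusters of activity the study visualizes.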
Procedia PDF Downloads 122
148 Comparative Study of Outcome of Patients with Wilms Tumor Treated with Upfront Chemotherapy and Upfront Surgery in Alexandria University Hospitals
Authors: Golson Mohamed, Yasmine Gamasy, Khaled EL-Khatib, Anas Al-Natour, Shady Fadel, Haytham Rashwan, Haytham Badawy, Nadia Farghaly
Abstract:
Introduction: Wilms tumor is the most common malignant renal tumor in children. Much progress has been made in the management of patients with this malignancy over the last three decades. Today, treatments are based on several trials and studies conducted by the International Society of Pediatric Oncology (SIOP) in Europe and the National Wilms Tumor Study Group (NWTS) in the USA. It is necessary for us to understand why we follow either of the protocols: NWTS, which follows the upfront-surgery principle, or SIOP, which follows the upfront-chemotherapy principle in all stages of the disease. Objective: The aim is to assess outcomes in patients treated with preoperative chemotherapy and patients treated with upfront surgery, comparing their effect on overall survival. Study design: To decide which protocol to follow, a study was carried out on the records of patients aged 1 day to 18 years suffering from Wilms tumor who were admitted to the Alexandria University Hospital pediatric oncology, pediatric urology, and pediatric surgery departments, using a retrospective survey of records from 2010 to 2015. The data transfer sheet was designed and edited following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow. Data were fed to the computer and analyzed using the IBM SPSS software package, version 20.0. Qualitative data were described using number and percent. Quantitative data were described using range (minimum and maximum), mean, standard deviation, and median. Comparison between groups regarding categorical variables was tested using the chi-square test. When more than 20% of the cells had an expected count less than 5, correction for chi-square was conducted using Fisher’s exact test or Monte Carlo correction. The distributions of quantitative variables were tested for normality using the Kolmogorov-Smirnov, Shapiro-Wilk, and D'Agostino tests; where these revealed a normal distribution, parametric tests were applied.
If the data were abnormally distributed, non-parametric tests were used. For normally distributed data, comparison between two independent populations was done using the independent t-test; for abnormally distributed data, the Mann-Whitney test was used. Significance of the obtained results was judged at the 5% level. Results: A statistically significant difference in survival was observed between the two studied groups, favoring upfront chemotherapy (86.4%) over upfront surgery (59.3%), where P = 0.009. As regards complications, 20 cases (74.1%) out of 27 were complicated in the group treated with upfront surgery, while 30 cases (68.2%) out of 44 had complications among patients treated with upfront chemotherapy. The incidence of intraoperative complication (rupture) was also lower in the upfront-chemotherapy group than in the upfront-surgery group. Conclusion: Upfront chemotherapy has superiority over upfront surgery: patients who started with upfront chemotherapy showed a higher survival rate, a lower complication rate, less need for radiotherapy, and a lower recurrence rate.
Keywords: Wilms tumor, renal tumor, chemotherapy, surgery
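The decision rule described in the methods (chi-square, falling back to Fisher's exact test when over 20% of expected cell counts are below 5) can be sketched as follows. The 2x2 table is reconstructed from the reported group sizes and survival percentages and is an assumption, not the study's raw data:

```python
# Sketch of the categorical comparison: chi-square with a Fisher's-exact
# fallback when expected counts are sparse. Counts below are reconstructed
# from the reported n and survival rates (86.4% of 44, 59.3% of 27) and
# are illustrative, not the study's raw data.
from scipy import stats

# rows: upfront chemotherapy (n=44), upfront surgery (n=27)
# cols: survived, died
table = [[38, 6],   # 38/44 ~ 86.4%
         [16, 11]]  # 16/27 ~ 59.3%

chi2, p, dof, expected = stats.chi2_contingency(table)
# If more than 20% of expected counts fall below 5, use Fisher's exact test
if (expected < 5).mean() > 0.20:
    odds_ratio, p = stats.fisher_exact(table)

print(f"p = {p:.3f}")
```

With these reconstructed counts all expected cells exceed 5, so the chi-square result stands; the significant difference mirrors the P = 0.009 reported in the abstract.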
Procedia PDF Downloads 317
147 A Classical Caesarean Section with Peripartum Hysterectomy at 27+3 Weeks Gestation for Placenta Accreta
Authors: Huda Abdelrhman Osman Ahmed, Paul Feyi Waboso
Abstract:
Introduction: Placenta accreta spectrum (PAS) disorders present a significant challenge in obstetric management due to the high risk of hemorrhage and potential complications at delivery. This case describes a pregnancy at 27+3 weeks gestation in a patient with placenta accreta managed with classical cesarean section and peripartum hysterectomy. Case Description: A gravida 4, para 3 patient presented at 27+3 weeks gestation with painless, unprovoked vaginal bleeding and an estimated blood loss (EBL) of 300 mL. At the 20+5 week anomaly scan, an anterior placenta previa was identified, covering the os and containing lacunae, with signs of myometrial thinning. At a 24+1 week scan conducted at a tertiary center, further imaging indicated placenta increta with invasion into the myometrium and potential areas of placenta percreta. The patient’s past obstetric history included three previous cesarean sections, with no significant medical or surgical history. Social history revealed heavy smoking but no alcohol use. No drug allergies were reported. Given the risks associated with PAS, a management plan was formulated, including an MRI at a later stage and cesarean delivery with a possible hysterectomy between 34 and 36 weeks. However, at 27+3 weeks, the patient experienced another episode of vaginal bleeding (EBL 500 mL), necessitating immediate intervention. Management: As the patient was unstable, she was not transferred to the tertiary center. Full informed consent was obtained. MDT planning included group-and-crossmatching of 4 units, uterotonics, tranexamic acid, blood products, cryoprecipitate, cell salvage, two obstetric consultants and an anesthetic consultant, blood bank and hematologist awareness, and HDU bed and ITU availability. The authors assisted in performing a classical Caesarean section, during which the urologist inserted JJ ureteric stents, and subsequently in a total abdominal hysterectomy with conservation of the ovaries. 4 units of RBC and 1 unit of FFP were transfused.
The total blood loss was 2.3 L. Outcome: The procedure successfully achieved hemostasis, and the neonate was delivered and subsequently transferred to a neonatal intensive care unit for management. The patient’s postoperative course was monitored closely, with no immediate complications. Discussion: This case highlights the complexity and urgency of managing placenta accreta spectrum disorders, particularly with the added challenges posed by a remote location and limited tertiary support. The need for rapid decision-making and interdisciplinary coordination is emphasized in such high-risk obstetric cases. The case also underscores the potential for surgical intervention and the importance of family involvement in emergent care decisions. Conclusion: Placenta accreta spectrum disorders demand meticulous planning and timely intervention. This case contributes to the understanding of PAS management at earlier gestational ages and provides insights into the challenges posed by access to tertiary care, especially in urgent situations.
Keywords: accreta, hysterectomy, MDT, prematurity
Procedia PDF Downloads 10
146 MEIOSIS: Museum Specimens Shed Light on Biodiversity Shrinkage
Authors: Zografou Konstantina, Anagnostellis Konstantinos, Brokaki Marina, Kaltsouni Eleftheria, Dimaki Maria, Kati Vassiliki
Abstract:
Body size is crucial to ecology, influencing everything from individual reproductive success to the dynamics of communities and ecosystems. Understanding how temperature affects variations in body size is vital for both theoretical and practical purposes, as changes in size can modify trophic interactions by altering predator-prey size ratios and changing the distribution and transfer of biomass, which ultimately impacts food web stability and ecosystem functioning. Notably, a decrease in body size is frequently mentioned as the third "universal" response to climate warming, alongside shifts in distribution and changes in phenology. This trend is backed by ecological theories like the temperature-size rule (TSR) and Bergmann's rule, which have been observed in numerous species, indicating that many species are likely to shrink in size as temperatures rise. However, the thermal responses related to body size are still contradictory, and further exploration is needed. To tackle this challenge, we developed the MEIOSIS project, aimed at providing valuable insights into the relationship between the body size of species, species’ traits, environmental factors, and their response to climate change. We combined a digitized collection of butterflies from the Swiss Federal Institute of Technology in Zürich with our newly digitized butterfly collection from the Goulandris Natural History Museum in Greece to analyse trends over time. For a total of 23868 images, the length of the right forewing was measured using ImageJ software. Each forewing was measured from the point at which the wing meets the thorax to the apex of the wing. The forewing length of museum specimens has been shown to have a strong correlation with wing surface area and has been utilized in prior studies as a proxy for overall body size. Temperature data corresponding to the years of collection were also incorporated into the datasets.
A second dataset was generated when a custom computer vision tool was implemented for the automated morphological measurement of samples from the digitized collection in Zürich. Using this second dataset, we corrected the manual ImageJ measurements, and a final dataset containing 31922 samples was used for analysis. Setting time as a smoothed term, species identity as a random factor, and the length of the right wing (a proxy for body size) as the response variable, we ran a global model covering a maximum period of 110 years (1900–2010). We then investigated functional variability between different terrestrial biomes in a second model. Both models confirmed our initial hypothesis, showing a decreasing trend in body size over the years. We expect that this first output can serve as baseline data for the next challenge, i.e., to identify the ecological traits that influence species' temperature-size responses, enabling us to predict the direction and intensity of a species' reaction to rising temperatures more accurately.
Keywords: butterflies, shrinking body size, museum specimens, climate change
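The trend model above (time as a smoothed term, species as a random factor) can be approximated, purely for illustration, by per-species centering followed by an ordinary least-squares slope of forewing length on year. The records below are invented, not museum data:

```python
# Hedged sketch: a crude stand-in for the study's smoothed mixed model.
# Per-species centering removes species-level mean size, so the fitted
# slope reflects within-species change over time. Records are invented.
import numpy as np

# (species, collection year, forewing length in mm) -- illustrative only
records = [
    ("P. machaon", 1900, 36.0), ("P. machaon", 1950, 35.4), ("P. machaon", 2010, 34.9),
    ("V. atalanta", 1905, 29.8), ("V. atalanta", 1960, 29.3), ("V. atalanta", 2008, 28.8),
]

species = np.array([r[0] for r in records])
years = np.array([r[1] for r in records], dtype=float)
lengths = np.array([r[2] for r in records], dtype=float)

# Center lengths within each species
centered = lengths.copy()
for sp in np.unique(species):
    mask = species == sp
    centered[mask] -= lengths[mask].mean()

slope, intercept = np.polyfit(years, centered, 1)
print(f"size change per century: {slope * 100:.2f} mm")  # negative => shrinking
```

A real reanalysis would use a generalized additive mixed model as in the study; this sketch only shows the direction of the within-species trend.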
Procedia PDF Downloads 10
145 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication
Authors: Farhan A. Alenizi
Abstract:
Digital watermarking has evolved over the past years into an important means of data authentication and ownership protection. Image and video watermarking are well established in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and video, where frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it remains difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has proven useful in hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object.
An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process, and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices’ norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimizing approaches were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection was evaluated against the probability of false-positive detection. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, which confirmed robustness in this respect. 3D watermarking is still a new field, but a promising one.
Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing
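The core idea of displacing vertices relative to the object's center can be sketched on a toy point set. This is a hedged simplification, not the paper's algorithm (which modifies the variances of binned vertex norms rather than applying a uniform scaling):

```python
# Hedged sketch of the norm-displacement principle: embed one watermark
# bit by nudging vertex distances (norms) from the mesh center outward
# (bit 1) or inward (bit 0). Toy data; NOT the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)
vertices = rng.normal(size=(100, 3))      # toy "mesh" vertices
center = vertices.mean(axis=0)
offsets = vertices - center
norms = np.linalg.norm(offsets, axis=1)

bit = 1                                    # one watermark bit
alpha = 0.02                               # embedding strength (small => low distortion)
scaled = norms * (1 + alpha) if bit else norms * (1 - alpha)
watermarked = center + offsets * (scaled / norms)[:, None]

# Blind-style detection: compare the mean norm against the original
delta = np.linalg.norm(watermarked - center, axis=1).mean() - norms.mean()
print("decoded bit:", int(delta > 0))  # 1
```

In the actual technique the embedding is spread over statistical bins of the norm distribution, which is what gives robustness to noise, smoothing, and cropping.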
Procedia PDF Downloads 160
144 Effects of the Exit from Budget Support on Good Governance: Findings from Four Sub-Saharan Countries
Authors: Magdalena Orth, Gunnar Gotz
Abstract:
Background: Domestic accountability, budget transparency, and public financial management (PFM) are considered vital components of good governance in developing countries. The aid modality budget support (BS) promotes these governance functions in developing countries. BS engages in political decision-making and provides financial and technical support to the poverty reduction strategies of partner countries. Nevertheless, many donors have withdrawn their support from this modality due to cases of corruption, fraud, or human rights violations. This exit from BS is leaving a finance and governance vacuum in the countries. The evaluation team analyzed the consequences of terminating the use of this modality and found particularly negative effects for good governance outcomes. Methodology: The evaluation uses a qualitative (theory-based) approach consisting of a comparative case study design, complemented by a process-tracing approach. For the case studies, the team conducted over 100 semi-structured interviews in Malawi, Uganda, Rwanda, and Zambia and used four country-specific, tailor-made budget analyses. In combination with a previous DEval evaluation synthesis on the effects of BS, the team was able to create a before-and-after comparison that yields causal effects. Main Findings: In all four countries, domestic accountability and budget transparency declined where other forms of pressure did not replace BS's mutual accountability mechanisms. In Malawi, a fraud scandal created pressure from society and from donors, so that accountability was improved. In the other countries, these pressure mechanisms were absent, so domestic accountability declined. BS enables donors to participate actively in the political processes of the partner country, as donors transfer funds into the treasury of the partner country and conduct a high-level political dialogue.
The results confirm that the exit from BS created a governance vacuum that, if not compensated through external or internal pressure, leads to a deterioration of good governance. For example, in the case of highly aid-dependent Malawi, the possibility of a relaunch of BS provided sufficient incentives to push for governance reforms. Overall, the results show that the three good governance areas are negatively affected by the exit from BS. This stands in contrast to the positive effects found before the exit. The team concludes that the relationship is causal, because the before-and-after comparison coherently shows that the presence of BS correlates with positive effects and its absence with negative effects. Conclusion: These findings strongly suggest that BS is an effective modality to promote governance, and that its abolishment is likely to cause governance disruptions. Donors and partner governments should find ways to re-engage in closely coordinated policy-based aid modalities. In addition, a coordinated and carefully managed exit strategy should be in place before an exit from similar modalities is considered. In particular, a continued framework of mutual accountability and a high-level political dialogue should be maintained to provide the pressure and oversight required to achieve good governance.
Keywords: budget support, domestic accountability, public financial management and budget transparency, Sub-Saharan Africa
Procedia PDF Downloads 151
143 The Disease That 'Has a Woman's Face': Feminization of HIV/AIDS in Nagaland, North-East India
Authors: Kitoholi V. Zhimo
Abstract:
Unlike the cases of homosexuals, haemophiliacs, and drug users in the USA, France, Africa, and other countries, in India the first case of HIV/AIDS was detected in heterosexual female sex workers (FSW) in Chennai in 1986. This image played an important role in shaping the understanding of the HIV/AIDS scenario in the country. Similar to popular and dominant metaphors for HIV/AIDS around the world, such as ‘gay plague’, ‘new cancer’, ‘lethal disease’, ‘slim disease’, ‘foreign disease’, and ‘junkie disease’, the social construction of the virus in India was largely attributed to women. It was established that women, particularly sex workers, were ‘carriers’ and ‘transmitters’ of the virus, and they were categorised as High Risk Groups (HRGs) alongside homosexuals, transgender persons, and injecting drug users. Recent literature reveals a growing rate of HIV infection among housewives since 1997, which revolutionised the public health scenario in India. This marks a shift from high risk groups to the general public through a ‘bridge population’ encompassing long-distance truckers and migrant labourers who, by the nature of their work and mobility, come into contact with HRGs and transmit the virus to the general public, especially women who are confined to the domestic space. As the HIV epidemic expands, married women in monogamous relationships/marriages stand highly susceptible to infection, with limited control, rights, and access concerning their sexual and reproductive health and planning. In the context of Nagaland, a small state in the north-eastern part of India, HIV/AIDS transmission through injecting drug use dominated the early scene of the epidemic. However, a paradigm shift occurred with the declining trend of HIV prevalence among injecting drug users (IDUs) over the past years, following the introduction of Opioid Substitution Therapy (OST) and easy access to and availability of syringes and injecting needles.
Statistical data reveal that, out of 36 states and union territories in India, Nagaland's position in HIV prevalence among IDUs has dropped significantly, from 6th in 2003 to 16th in 2017. The present face of the virus in Nagaland is defined by (hetero)sexual transmission, which accounts for about 91% of cases as reported by the Nagaland State AIDS Control Society (NSACS) in 2016, wherein young and married women were found to be most affected, leading to the feminization of the HIV/AIDS epidemic in the state. Thus, not only has the HIV epidemic been feminised, but infected women have also emerged as victims of domestic violence, which is more often accepted as a normal part of heterosexual relationships. In light of this understanding, the present paper, based on ethnographic fieldwork, explores the plight, lived experiences, and images of HIV+ve women with regard to sexual and reproductive rights against the backdrop of the patriarchal system in Nagaland.
Keywords: HIV/AIDS, monogamy, Nagaland, sex worker disease, women
Procedia PDF Downloads 161
142 Embryonic Aneuploidy – Morphokinetic Behaviors as a Potential Diagnostic Biomarker
Authors: Banafsheh Nikmehr, Mohsen Bahrami, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Mallory Pitts, Tolga B. Mesen, Tamer M. Yalcinkaya
Abstract:
The number of people who receive in vitro fertilization (IVF) treatment has increased on a startling trajectory over the past two decades. Despite advances in this field, particularly the introduction of intracytoplasmic sperm injection (ICSI) and preimplantation genetic screening (PGS), IVF success remains low. A major factor contributing to IVF failure is embryonic aneuploidy (abnormal chromosome content), which often results in miscarriage and birth defects. Although PGS is often used as the standard diagnostic tool to identify aneuploid embryos, it is an invasive approach that could affect embryo development, and it remains inaccessible to many patients due to its high cost. As such, there is a clear need for a non-invasive, cost-effective approach to identify euploid embryos for single embryo transfer (SET). The reported differences between the morphokinetic behaviors of aneuploid and euploid embryos have shown promise in addressing this need. However, the current literature is inconclusive, and further research is urgently needed to translate current findings into clinical diagnostics. In this ongoing study, we found significant differences between the morphokinetic behaviors of euploid and aneuploid embryos that provide important insights and reaffirm the promise of such behaviors for developing non-invasive methodologies. Methodology—A total of 242 embryos (euploid: 149, aneuploid: 93) from 74 patients who underwent IVF treatment in Carolinas Fertility Clinics in Winston-Salem, NC, were analyzed. All embryos were incubated in an EmbryoScope incubator. The patients were randomly selected from January 2019 to June 2021, with most patients having both euploid and aneuploid embryos. All embryos reached the blastocyst stage and had known PGS outcomes. The ploidy assessment was done by a third-party testing laboratory on day 5-7 embryo biopsies.
The morphokinetic variables of each embryo were measured by the EmbryoViewer software (Unisense FertiliTech) on time-lapse images using 7 focal depths. We compared the time to: pronuclei fading (tPNf), division to 2, 3, …, 9 cells (t2, t3, …, t9), start of embryo compaction (tSC), morula formation (tM), start of blastocyst formation (tSB), blastocyst formation (tB), and blastocyst expansion (tEB), as well as the intervals between them (e.g., c23 = t3 – t2). We used a mixed regression method for our statistical analyses to account for the correlation between multiple embryos per patient. Major Findings—The average age of the patients was 35.04 yrs. The average patient age associated with euploid and aneuploid embryos was not different (P = 0.6454). We found a significant difference in c45 = t5 – t4 (P = 0.0298). Our results indicated this interval on average lasts significantly longer for aneuploid embryos - c45(aneuploid) = 11.93 hr vs c45(euploid) = 7.97 hr. In a separate analysis limited to embryos from the same patients (patients = 47, total embryos = 200, euploid = 112, aneuploid = 88), we obtained the same results (P = 0.0316). The statistical power for this analysis exceeded 87%. No other variable was different between the two groups. Conclusion—Our results demonstrate the importance of morphokinetic variables as potential biomarkers that could aid in non-invasively characterizing euploid and aneuploid embryos. We seek to study a larger population of embryos and incorporate embryo quality in future studies. Keywords: IVF, embryo, euploidy, aneuploidy, morphokinetic
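The interval bookkeeping described above (e.g., c45 = t5 – t4, compared between ploidy groups) can be sketched in a few lines. The records and timing values below are purely illustrative, not study data, and the study's actual analysis used mixed regression to handle the within-patient correlation that a plain group mean ignores.

```python
# Hypothetical time-lapse annotations (hours post insemination); values are invented.
embryos = [
    {"ploidy": "euploid",   "t4": 37.5, "t5": 45.2},
    {"ploidy": "euploid",   "t4": 38.0, "t5": 46.5},
    {"ploidy": "aneuploid", "t4": 39.1, "t5": 51.3},
    {"ploidy": "aneuploid", "t4": 40.0, "t5": 51.6},
]

def c45(embryo):
    """Interval between reaching the 4-cell and 5-cell stages."""
    return embryo["t5"] - embryo["t4"]

def group_mean(ploidy):
    """Average c45 interval for one ploidy group."""
    vals = [c45(e) for e in embryos if e["ploidy"] == ploidy]
    return sum(vals) / len(vals)
```

With these toy numbers the aneuploid group shows the longer c45 interval, mirroring the direction of the reported finding.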
Procedia PDF Downloads 88
141 Training Hearing Parents in SmiLE Therapy Supports the Maintenance and Generalisation of Deaf Children's Social Communication Skills
Authors: Martina Curtin, Rosalind Herman
Abstract:
Background: Deaf children can experience difficulties with understanding how social interaction works, particularly when communicating with unfamiliar hearing people. Deaf children often struggle with integrating into mainstream, hearing environments. These negative experiences can lead to social isolation, depression and other mental health difficulties later in life. smiLE Therapy (Schamroth, 2015) is a video-based social communication intervention that aims to teach deaf children the skills to confidently communicate with unfamiliar hearing people. Although two previous studies have reported improvements in communication skills immediately post intervention, evidence for maintenance of gains or generalisation of skills (i.e., the transfer of newly learnt skills to untrained situations) has not to date been demonstrated. Parental involvement has been shown to support deaf children’s therapy outcomes. Therefore, this study added parent training to the therapy children received, to investigate the benefits for the generalisation of children’s skills. Parents were also invited to present their perspective on the training they received. Aims: (1) To assess pupils’ progress from pre- to post-intervention in trained and untrained tasks, (2) to investigate whether training parents improved (a) their understanding of their child’s needs and (b) their skills in supporting their child appropriately in smiLE Therapy tasks, (3) to assess whether parent training had an impact on the pupil’s ability to (a) maintain their skills in trained tasks post-therapy, and (b) generalise their skills to untrained, community tasks. Methods: This was a mixed-methods, repeated-measures study. 31 deaf pupils (aged between 7 and 14) received an hour of smiLE Therapy per week, for 6 weeks. Communication skills were assessed pre-, post- and 3 months post-intervention using the Communication Skills Checklist.
Parents were then invited to attend two training sessions and asked to bring a video of their child communicating in a shop or café. These videos were used to assess whether, after parent training, the child was able to generalise their skills to a new situation. Finally, parents attended a focus group to discuss the effectiveness of the therapy, particularly its wider impact, i.e., more child participation within the hearing community. Results: All children significantly improved their scores following smiLE Therapy and maintained these skills to a high level. Children generalised a high percentage of their newly learnt skills to an untrained situation. Parents reported improved understanding of their child’s needs and potential, and of how to support them in real-life situations. Parents observed that their children were more confident and independent when carrying out communication tasks with unfamiliar hearing people. Parents realised they needed to ‘let go’, embrace their child’s independence and provide more opportunities for them to participate in their community. Conclusions: This study adds to the evidence base on smiLE Therapy; it is an effective intervention that develops deaf children’s ability to interact competently with unfamiliar hearing communication partners. It also provides preliminary evidence of the benefits of parent training in helping children to generalise their skills to other situations. These findings will be of value to therapists wishing to develop deaf children’s communication skills beyond the therapy setting. Keywords: deaf children, generalisation, parent involvement, social communication
Procedia PDF Downloads 139
140 Potential of Dredged Material for CSEB in Building Structure
Authors: BoSheng Liu
Abstract:
The research goal is to re-imagine a locally-sourced waste product as a building material. The author aims to contribute to compressed stabilized earth block (CSEB) research by investigating the promising role of dredged material as an alternative building ingredient in the production of bricks and tiles. Dredged material comes from the sediment deposited near the shore or downstream, where the water current velocity decreases. This sediment needs to be dredged to allow water transportation; thus, there are mounds of dredged material stored at bay. It is the interest of this research to reduce the filtered inorganic soil in the production of CSEB and replace it with material locally dredged from the Atchafalaya River in Morgan City, Louisiana. Technology and mechanical innovations have evolved the traditional adobe production method, which mixes soil and natural fiber into molded bricks, into chemically stabilized CSEB made by compressing the clay mixture and stabilizer in a compression chamber under particular loads. In the case of dredged material CSEB (DM-CSEB), cement plays an essential role as the binding agent, contributing to the unit strength while stabilizing the filtered inorganic soil. Each DM-CSEB unit is made in a compression chamber at 580 psi (i.e., 4 MPa). The research studied cement contents from 5% to 10% along with a range of dredged material mixtures, which differed from 20% to 80%. The material mixture content affected the DM-CSEB's strength and workability during and after its compression. Results indicated two optimal workable mixtures: 27% fine clay content and 63% dredged material with 10% cement, or 28% fine clay content and 67% dredged material with 5% cement. The final product of DM-CSEB emitted between 10 and 13 times less carbon compared to a conventional fired masonry structure. DM-CSEB satisfied the strength requirements given by the ASTM C62 and ASTM C34 standards for construction material.
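As a quick sanity check on the figures above, the compaction pressure conversion and the reported mixture proportions can be verified numerically. The conversion constant is the standard 1 psi = 6894.757 Pa; the percentages are taken directly from the text.

```python
PSI_TO_PA = 6894.757  # standard pressure conversion factor

# 580 psi compression load, expressed in MPa (text states "i.e., 4 MPa")
pressure_mpa = 580 * PSI_TO_PA / 1e6

# the two optimal workable mixtures reported in the text (percent of total mix)
mix_a = {"fine clay": 27, "dredged material": 63, "cement": 10}
mix_b = {"fine clay": 28, "dredged material": 67, "cement": 5}
```

Both mixtures sum to 100%, and 580 psi comes out at 3.999 MPa, confirming the rounded 4 MPa figure.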
One of the final evaluations tested and validated the material performance by designing and constructing an architectural, conical tile-vault prototype measuring 28" by 40" by 24". The vault utilized a computational form-finding approach to generate its geometry, which optimized the correlation between the vault geometry and the structural load distribution. A series of scaffolding was deployed to create the framework for the tile-vault construction. The final tile-vault structure was made from 2 layers of DM-CSEB tiles joined by mortar, and its construction used over 110 tiles. The tile-vault prototype was capable of carrying over 400 lbs of live load, which further demonstrated the feasibility of dredged material as a construction material. The presented case study of Dredged Material Compressed Stabilized Earth Block (DM-CSEB) provides a first impression of dredged material in terms of the clayey mixture process, structural performance, and construction practice. Overall, the approach of integrating dredged material into building material can be feasible, regionally sourced, cost-effective, and environment-friendly. Keywords: dredged material, compressed stabilized earth block, tile-vault, regionally sourced, environment-friendly
Procedia PDF Downloads 115
139 Active Learning through a Game Format: Implementation of a Nutrition Board Game in Diabetes Training for Healthcare Professionals
Authors: Li Jiuen Ong, Magdalin Cheong, Sri Rahayu, Lek Alexander, Pei Ting Tan
Abstract:
Background: Previous programme evaluations from the diabetes training programme conducted in Changi General Hospital revealed that healthcare professionals (HCPs) are keen to receive advanced diabetes training and education, specifically in medical nutrition therapy. HCPs also expressed a preference for interactive activities over didactic teaching methods to enhance their learning. Since the War on Diabetes was initiated by MOH in 2016, HCPs have been challenged to be actively involved in continuous education so as to be better equipped to reduce the growing burden of diabetes. Hence, streamlining training to incorporate an element of fun is of utmost importance. Aim: The nutrition programme incorporates game play using an interactive board game that aims to provide a more conducive and less stressful environment for learning. The board game could be adapted for the training of community HCPs, health ambassadors or caregivers to cope with the increasing demand of diabetes care in the hospital and community setting. Methodology: Stages for a game’s conception (Jaffe, 2001) were adopted in the development of the interactive board game ‘Sweet Score™’. Nutrition concepts and topics in diabetes self-management are embedded into the game elements at varying levels of difficulty (‘Easy,’ ‘Medium,’ ‘Hard’), including activities such as a) drawing/sculpting (Pictionary-like), b) facts/knowledge (MCQs, true or false, word definitions), and c) performing (charades). To study the effects of game play on knowledge acquisition and perceived experiences, participants were randomised into two groups, i.e., a lecture group (control) and a game group (intervention), to test the difference. Results: Participants in both groups (control group, n = 14; intervention group, n = 13) attempted a pre- and post-workshop quiz to assess the effectiveness of knowledge acquisition. The scores were analysed using a paired t-test.
There was an improvement in quiz scores after attending the game play (mean difference: 4.3, SD: 2.0, P<0.001) and the lecture (mean difference: 3.4, SD: 2.1, P<0.001). However, there was no significant difference in the improvement of quiz scores between gameplay and lecture (mean difference: 0.9, 95%CI: -0.8 to 2.5, P=0.280). This suggests that gameplay may be as effective as a lecture in terms of knowledge transfer. All 13 HCPs who participated in the game rated 4 out of 5 on the Likert scale for a favourable learning experience and the relevance of learning to their job, whereas only 8 out of 14 HCPs in the lecture reported a high rating in both aspects. Conclusion: There is no known board game currently designed for diabetes training for HCPs. Evaluative data from future training can provide insights and direction to improve the game format and cover other aspects of diabetes management such as self-care, exercise, medications and insulin management. Further testing of the board game to ensure learning objectives are met is important and can assist in the development of a well-designed digital game as an alternative training approach during the COVID-19 pandemic. Learning through gameplay increases opportunities for HCPs to bond, interact and learn in a relaxed social setting and potentially brings more joy to the workplace. Keywords: active learning, game, diabetes, nutrition
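The paired t-test used to compare each participant's pre- and post-workshop quiz scores can be sketched directly; the scores below are invented for illustration only, not the study's data, and a real analysis would compare the t statistic against the t-distribution with n − 1 degrees of freedom to obtain the P value.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Mean within-participant difference and the paired t statistic."""
    diffs = [b - a for a, b in zip(pre, post)]  # per-participant change
    n = len(diffs)
    se = stdev(diffs) / sqrt(n)                 # standard error of the mean diff
    return mean(diffs), mean(diffs) / se

# illustrative pre/post quiz scores for five hypothetical participants
pre_scores  = [5, 6, 4, 7, 5]
post_scores = [9, 10, 8, 12, 9]
d, t = paired_t(pre_scores, post_scores)
```

The pairing matters: the test uses the variability of per-participant changes, not of raw scores, which is why the same mean difference can be significant here but not in an unpaired comparison.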
Procedia PDF Downloads 174
138 Development of Adaptive Proportional-Integral-Derivative Feeding Mechanism for Robotic Additive Manufacturing System
Authors: Andy Alubaidy
Abstract:
In this work, a robotic additive manufacturing system (RAMS) that is capable of three-dimensional (3D) printing in six degrees of freedom (DOF) with very high accuracy and virtually on any surface has been designed and built. One of the major shortcomings in existing 3D printer technology is the limitation to three DOF, which results in prolonged fabrication time. Depending on the techniques used, it usually takes at least two hours to print small objects and several hours for larger objects. Another drawback is the size of the printed objects, which is constrained by the physical dimensions of most low-cost 3D printers, which are typically small. In such cases, large objects are produced by dividing them into smaller components that fit the printer’s workable area. They are then glued, bonded or otherwise attached to create the required object. Another shortcoming is material constraints and the need to fabricate a single part using different materials. With the flexibility of a six-DOF robot, the RAMS has been designed to overcome these problems. A feeding mechanism using an adaptive Proportional-Integral-Derivative (PID) controller is utilized along with a National Instruments CompactRIO (NI cRIO), an ABB robot, and off-the-shelf sensors. The RAMS has the ability to 3D print virtually anywhere in six degrees of freedom with very high accuracy. It is equipped with an ABB IRB 120 robot to achieve this level of accuracy. In order to convert computer-aided design (CAD) files to a digital format that is acceptable to the robot, Hypertherm Robotic Software Inc.’s state-of-the-art slicing software called “ADDMAN” is used. ADDMAN is capable of converting any CAD file into RAPID code (the programming language for ABB robots). The robot uses the generated code to perform the 3D printing. To control the entire process, a National Instruments (NI) CompactRIO (cRIO 9074) is connected to and communicates with the robot and a feeding mechanism that was designed and fabricated.
The feeding mechanism consists of two major parts, the cold-end and the hot-end. The cold-end consists of what is conventionally known as an extruder. Typically, a stepper motor is used to control the push on the material; however, for optimum control, a DC motor is used instead. The hot-end consists of a melt-zone, nozzle, and heat-brake. The melt-zone ensures a thorough melting effect and consistent output from the nozzle. Nozzles are made of brass for its thermal conductivity, while the melt-zone is comprised of a heating block and a ceramic heating cartridge to transfer heat to the block. The heat-brake ensures that there is no heat creep-up effect, as this would swell the material and prevent consistent extrusion. A control system embedded in the cRIO is developed using NI LabVIEW, which utilizes adaptive PID to govern the heating cartridge in conjunction with a thermistor. The thermistor sends temperature feedback to the cRIO, which will increase or decrease heating based on the system output. Since different materials have different melting points, our system will allow us to adjust the temperature and vary the material. Keywords: robotic, additive manufacturing, PID controller, cRIO, 3D printing
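The temperature control loop described above (thermistor feedback, PID governing the heating cartridge, output clamping) can be sketched as a toy simulation. This is not the authors' LabVIEW/cRIO implementation: the plant constants (heater gain, heat-loss coefficient, ambient temperature) are invented for illustration, and the "adaptive" element is reduced to a simple gain-scheduling rule on the error magnitude.

```python
class AdaptivePID:
    """PID with output clamping, anti-windup, and toy gain scheduling."""

    def __init__(self, kp, ki, kd, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured, dt):
        err = setpoint - measured
        # toy gain scheduling: push harder while the error is large
        kp_eff = self.kp * (1.0 + min(abs(err) / 100.0, 1.0))
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        u = kp_eff * err + self.ki * self.integral + self.kd * deriv
        if u > self.out_max:
            u = self.out_max          # saturated high: freeze integral (anti-windup)
        elif u < self.out_min:
            u = self.out_min          # saturated low: freeze integral
        else:
            self.integral += err * dt # integrate only when unsaturated
        return u

def simulate(seconds=600.0, dt=0.1, setpoint=200.0):
    """First-order thermal plant with invented constants, driven by the PID."""
    pid = AdaptivePID(kp=0.02, ki=0.005, kd=0.01)
    temp = 25.0  # start at ambient
    for _ in range(int(seconds / dt)):
        u = pid.update(setpoint, temp, dt)          # heater duty cycle in [0, 1]
        temp += (50.0 * u - 0.05 * (temp - 25.0)) * dt  # heating minus losses
    return temp
```

The integral term removes the steady-state offset that a purely proportional loop would leave, and the anti-windup clause prevents overshoot after the long saturated warm-up phase.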
Procedia PDF Downloads 217
137 Tales of Two Cities: 'Motor City' Detroit and 'King Cotton' Manchester: Transatlantic Transmissions and Transformations, Flows of Communications, Commercial and Cultural Connections
Authors: Dominic Sagar
Abstract:
Manchester, ‘King Cotton’, the first truly industrial city of the nineteenth century, passed the baton to Detroit, ‘Motor City’, the first truly modern city. We explore the tales of the two cities, their rise, fall and subsequent post-industrial decline, and their transitions and transformations, whilst paralleling their corresponding commercial, cultural, industrial and even agricultural, artistic and musical transactions and connections. The paper will briefly contextualize how the technologies of the industrial age and the modern age have been instrumental in the development of these cities and other similar cities, including New York. However, the main focus of the study will be the present and, more importantly, the future: how globalisation and the advancement of digital technologies and industries have shaped the cities’ development, from Alan Turing and the making of the first programmable computer to the effect of digitalisation and digital initiatives. Manchester now has a thriving creative digital infrastructure of Digilabs, FabLabs, MadLabs and hubs; the study will reference the Smart Project and the Manchester Digital Development Association, whilst paralleling similar digital and creative industrial initiatives now starting to happen in Detroit. The paper will explore other topics, including the need to allow for zones of experimentation, areas to play, think and create, in order to develop and instigate new initiatives and ideas of production, carrying on the tradition of influential inventions throughout the history of these key cities. Other topics will be briefly touched on, such as urban farming, citing the Biospheric Foundation in Manchester and other similar projects in Detroit. However, the main thread will focus on the music industries and how they are contributing to the regeneration of cities.
Musically and artistically, Manchester and Detroit have been closely connected by the flow and transmission of information and the transfer of ideas via ‘cars and trains and boats and planes’ through to the new ‘superhighway’. From Detroit to Manchester, often via New York and Liverpool, and back again, these musical and artistic connections and flows have greatly affected and influenced both cities, and advancements in technology are still connecting them. In summary, these are two hugely important industrial cities that subsequently both experienced a massive decline in fortunes, having had their large industrial hearts ripped out and ravaged, leaving dying industrial carcasses and car crashes of despair, dereliction, desolation and post-industrial wastelands vacated by a massive exodus of the cities’ inhabitants. The paper examines the affinity, similarity and differences between Manchester and Detroit, from their industrial importance to their post-industrial decline and their current transmutations, transformations and transient transgressions as cities in transition, contrasting how they have dealt with these problems and how they can learn from each other. The aim is to frame these topics with regard to how various communities have shaped these cities and how the creative industries and design [the new cotton/car manufacturing industries] are reinventing post-industrial cities, and to speculate on the future development of these themes in relation to globalisation, digitalisation and how cities can function to develop solutions to communal living in cities of the future. Keywords: cultural capital, digital developments, musical initiatives, zones of experimentation
Procedia PDF Downloads 194
136 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage
Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng
Abstract:
Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archeology fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and a limitation of using feature design and manual extraction methods is the loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image learning and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used.
The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using the different models by sex. A total of 2600 patients (training and validation sets, mean age = 45.19 years ± 14.20 [SD]; test set, mean age = 46.57 ± 9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results showed that DL with the ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE. Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning
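The evaluation protocol above (five-fold cross-validation with a 4:1 train/validation split, MAE as the primary metric) is generic and can be sketched independently of the network itself. This is a schematic helper, not the authors' code; the 2500-sample count matches the study's training/validation cohort.

```python
import random

def five_fold_split(n_samples, seed=0):
    """Yield (train_idx, val_idx) pairs for 5-fold cross-validation (4:1 split)."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # deterministic shuffle for reproducibility
    folds = [idx[i::5] for i in range(5)]     # five disjoint folds of equal size
    for k in range(5):
        val = folds[k]
        train = [i for j, f in enumerate(folds) if j != k for i in f]
        yield train, val

def mae(true_ages, predicted_ages):
    """Mean absolute error, the study's primary performance metric."""
    return sum(abs(t - p) for t, p in zip(true_ages, predicted_ages)) / len(true_ages)
```

Each of the five folds serves as the validation set exactly once, so every sample contributes to both training and validation across the procedure.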
Procedia PDF Downloads 73
135 The New Waterfront: Examining the Impact of Planning on Waterfront Regeneration in Da Nang
Authors: Ngoc Thao Linh Dang
Abstract:
Urban waterfront redevelopment is a global phenomenon, and thousands of schemes are being carried out in large metropoles, medium-sized cities, and even small towns all over the world. This opportunity brings the city back to the river and rediscovers waterfront revitalization as a unique opportunity for cities to reconnect with their historical and cultural image. Redevelopment can encourage economic investment, serve as a social platform for public interactions, and allow dwellers to express their rights to the city. Many coastal cities have, through years of redevelopment initiatives, effectively transformed the perception of waterfront areas that had been neglected for over a century. However, this process has never been easy due to the particular complexity of the space: local culture, history, and market-led development. Moreover, municipal governments must work out the balance of diverse stakeholder interests, especially when repurposing high-profile, redundant spaces that form the core of urban economic investment while also accommodating present and future generations in sustainable environments. Urban critics consistently grapple with the effectiveness of the planning process on the new waterfront, where public spaces are criticized for offering little opportunity for actual public participation, due to privatization and authoritarian governance, while no longer doing what they are ‘meant to’: such criticisms arise in reaction to the perceived failure of these places to meet expectations. The planning culture and the decision-making context determine the level of public involvement in the planning process; however, in a context where competing market forces and commercial interests dominate cities’ planning agendas, planning for public space on urban waterfronts tends to serve economic gain rather than supporting residents' social needs.
These newly pleasing settings satisfy the cluster of middle-class individuals, new communities living along the waterfront, and tourists. A trend of public participatory exclusion is primarily determined by the nature of the planning being undertaken and the decision-making context in which it is embedded. Starting from this context, the research investigates the influence of planning on waterfront regeneration and the role of participation in this process. The research aims to look specifically at the characteristics of the planning process of the waterfront in Da Nang and its impact on the regeneration of the place, to regain the city’s historical value and enhance local cultural identity and images. Vietnam runs a top-down planning system in which municipal governments have control over what happens in their city, following the approved planning from the national government. The community has never been excluded from development; however, their participation is still marginalized. In order to ensure social equality, a proposed "bottom-up" approach should be considered and implemented alongside the traditional "top-down" process to provide a balance of perspectives, as it allows the voices of the most underprivileged social groups involved in a planning project to be heard rather than ignored. The research provides new insights into the influence of the planning process on waterfront regeneration in the context of Da Nang. Keywords: planning process, public participation, top-down planning, waterfront regeneration
Procedia PDF Downloads 71
134 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Differential Time-Domain Simulation
Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang
Abstract:
Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue displacement in response to mechanical waves. The estimated metrics of tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by finite-difference time-domain (FDTD) simulation are used for network training, and a U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training has the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects were simulated, and their corresponding wave images were generated.
The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error, or RMSE, divided by the averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the shear modulus estimated by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than the 78.20%±1.11% achieved by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method’s performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method shows much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency. Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation
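The comparison metric described above (RMSE between the true and estimated shear modulus maps, normalized by the mean true value) is simple to state explicitly. The arrays below are toy values for illustration, not simulation outputs from the study.

```python
from math import sqrt

def relative_error(true_map, est_map):
    """RMSE between true and estimated shear modulus, divided by the mean true value."""
    n = len(true_map)
    rmse = sqrt(sum((t - e) ** 2 for t, e in zip(true_map, est_map)) / n)
    return rmse / (sum(true_map) / n)

# toy example: true stiffness 10 kPa everywhere, estimates off by ±1 kPa
rel = relative_error([10.0, 10.0, 10.0, 10.0], [11.0, 9.0, 11.0, 9.0])
```

Normalizing by the mean true modulus makes the metric comparable across maps whose stiffness ranges differ, which matters given the study's 1-15 kPa range.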
Procedia PDF Downloads 68
133 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation
Authors: Matthias Leitner, Gernot Pottlacher
Abstract:
Experimental determination of critical point data such as critical temperature, critical pressure, critical volume and critical compressibility of high-melting metals such as niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions could be diamond anvil devices, two-stage gas guns or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressures would be another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting, metals. However, the critical point can also be estimated by extrapolating the liquid-phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much as possible of the liquid phase and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine thermal volume expansion, and from that the density of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed. The second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm.
To increase the accuracy of temperature deduction, spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles that are calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the start of pulse-heating, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, volume expansion is then converted into density data. The so-obtained liquid density behavior is compared to existing literature data and provides another independent source of experimental data. In this work, the newly determined off-critical liquid phase density was in a second step utilized as input data for the estimation of niobium’s critical point. The approach used heuristically takes into account the crossover from mean field to Ising behavior, as well as the non-linearity of the phase diagram’s diameter.
Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion
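The density deduction described above reduces to a simple relation: with longitudinal expansion inhibited, the volume ratio is V(T)/V₀ = (d(T)/d₀)², so ρ(T) = ρ₀ · (d₀/d(T))². A minimal Python sketch of this step (the function names, the FWHM routine, and the example values are illustrative assumptions, not the authors' code):

```python
import numpy as np

def fwhm(profile, pixel_size):
    """Full width at half maximum of a 1-D intensity profile, in length units.

    The shadowgraph of the wire produces a cup-shaped dip; here we assume the
    profile has already been inverted so the wire appears as a peak.
    """
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0] + 1) * pixel_size

def density_from_diameter(rho_room, d0, d_T):
    """With longitudinal expansion inhibited, V(T)/V0 = (d(T)/d0)**2,
    so rho(T) = rho_room * (d0 / d(T))**2."""
    return rho_room * (d0 / d_T) ** 2
```

For example, with niobium's room-temperature density of about 8570 kg/m³ and the 0.5 mm initial wire diameter, a measured hot diameter of 0.55 mm would imply a liquid density of roughly 7083 kg/m³.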
Procedia PDF Downloads 219
132 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Binder Hans
Abstract:
Cancer is no longer seen as solely a genetic disease where genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns were successfully used to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as genome, transcriptome and epigenome is indispensable for a mechanistic understanding of molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to discover the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to elucidate cancer genesis, progression and heterogeneity. Basic challenges and tasks arise ‘beyond sequencing’ because of the big size of the data, their complexity, the need to search for hidden structures in the data, for knowledge mining to discover biological function and also systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOM) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables recognizing complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options.
Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the study of complex diseases such as gliomas, melanomas and colon cancer on the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic ones. The integrative-omics portrayal approach is based on the joint training of the data, and it provides separate personalized data portraits for each patient and data type which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view on the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
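As a rough illustration of the portrayal idea, the sketch below trains a toy self-organizing map on a sample-by-feature matrix and produces a per-sample "portrait", i.e. the distance landscape of that sample over the map grid. This is a generic, minimal SOM sketch, not the authors' pipeline; all names and parameters are illustrative:

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: each grid node holds a prototype vector;
    for every sample, the best-matching unit (BMU) and its neighbours are
    pulled towards the sample. Returns the trained prototype array."""
    rng = np.random.default_rng(seed)
    h, w = grid
    proto = rng.standard_normal((h, w, data.shape[1]))
    # 2-D grid coordinates of every node, used for the neighbourhood kernel
    yy, xx = np.mgrid[0:h, 0:w]
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighbourhood
        for x in data:
            d = ((proto - x) ** 2).sum(axis=2)
            by, bx = np.unravel_index(d.argmin(), d.shape)  # BMU
            theta = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
            proto += lr * theta[..., None] * (x - proto)
    return proto

def portrait(sample, proto):
    """Per-sample 'portrait': squared distance of the sample to every node."""
    return ((proto - sample) ** 2).sum(axis=2)
```

Each patient's portrait can then be rendered as a small heat map, which is the kind of individualized image the abstract describes.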
Procedia PDF Downloads 148
131 Measuring Green Growth Indicators: Implication for Policy
Authors: Hanee Ryu
Abstract:
The administration of former Korean president Lee Myung-bak presented “green growth” as a catchphrase from 2008. He declared “low-carbon, green growth” the nation's vision for the next decade, in line with the United Nations Framework Convention on Climate Change. The government designed an omnidirectional policy for low-carbon and green growth, concentrating the efforts of all departments. Structural change was expected because this slogan was the identity of the government and was strongly driven across every department. Now that his administration has ended, the purpose of this paper is to quantify the policy effect and to compare it with the values of the other OECD countries. Major target values under direct policy objectives were suggested, but they could not capture the entire landscape on which the policy makes changes. This paper figures out the policy impacts by comparing ex-ante values with ex-post values. Furthermore, each index of Korea’s low-carbon and green growth is compared with the values of the other OECD countries. To measure the policy effect, indicators that international organizations have developed are considered. The Environmental Sustainability Index (ESI) and Environmental Performance Index (EPI) have been developed by Yale University’s Center for Environmental Law and Policy and Columbia University’s Center for International Earth Science Information Network, in collaboration with the World Economic Forum and the Joint Research Centre of the European Commission. They have been widely used to assess the level of natural resource endowments, pollution levels, environmental management efforts and society’s capacity to improve its environmental performance over time. Recently, the OECD published Green Growth Indicators for monitoring progress towards green growth based on internationally comparable data.
They build up a conceptual framework and select indicators according to well-specified criteria: economic activities, the natural asset base, the environmental dimension of quality of life, and economic opportunities and policy responses. The framework considers the socio-economic context and reflects the characteristics of growth. Some selected indicators are used in this paper for measuring the level of change the green growth policies have induced. As results, CO2 productivity and energy productivity show declining trends. This means that the policy-intended industrial structure shift for achieving the carbon emission target has only a weak effect in the short term. The increase in green technology patents might result from the investment of the previous period. The increase in official development aid, which can be launched immediately by political decision with no time lag, is present only in 2008-2009. This means that international collaboration and investment in developing countries via ODA have not been sustained beyond the initial stage of the administration. The green growth framework led the public to expect structural change, but it shows only sporadic effects. An organization is needed to manage it from a long-range perspective. Energy, climate change and green growth are not issues to be handled within a single administration's term. The policy mechanism that turns the cost problem into value creation should be developed consistently.
Keywords: comparing ex-ante with ex-post indicators, green growth indicator, implication for green growth policy, measuring policy effect
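The two headline indicators discussed above are simple ratios in the OECD framework: CO2 productivity is GDP generated per unit of CO2 emitted, and energy productivity is GDP per unit of primary energy supply. A hedged sketch of how an ex-ante/ex-post comparison could be computed (the figures in the usage example are illustrative, not official Korean statistics):

```python
def co2_productivity(gdp, co2_emissions):
    """OECD-style CO2 productivity: GDP generated per unit of CO2 emitted."""
    return gdp / co2_emissions

def energy_productivity(gdp, energy_supply):
    """GDP generated per unit of total primary energy supply."""
    return gdp / energy_supply

def policy_effect(ex_ante, ex_post):
    """Relative change of an indicator between the pre- and post-policy period."""
    return (ex_post - ex_ante) / ex_ante
```

For instance, if CO2 productivity moved from 4.0 to 3.8 (in some GDP-per-tonne unit) across the policy period, `policy_effect(4.0, 3.8)` returns -0.05, i.e. the 5% decline the abstract describes as a weak short-term effect.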
Procedia PDF Downloads 448
130 Language and Power Relations in Selected Political Crisis Speeches in Nigeria: A Critical Discourse Analysis
Authors: Isaiah Ifeanyichukwu Agbo
Abstract:
Human speech is capable of serving many purposes. Power and control are not always exercised overtly by linguistic acts but may be enacted and exercised in the myriad taken-for-granted actions of everyday life. Domination, power control, discrimination and mind control exist in human speech and may lead to asymmetrical power relations. In discourse, there are persuasive and manipulative linguistic acts that serve to establish solidarity and identification with the 'we group' and polarize against the 'they group'. Political discourse is crafted to defend and promote problematic narratives of outright controversial events in a nation’s history, thereby sustaining domination, marginalization, manipulation, inequalities and injustices, often without the dominated and marginalized groups being aware of them. Such speeches are designed and positioned to serve the political and social needs of their producers. Political crisis speeches in Nigeria, just like in other countries, concentrate on positive self-image, de-legitimization of political opponents, reframing accusations to one’s advantage, redefining problematic terms and adopting reversal strategies. In most cases, the people are ignorant of the hidden ideological positions encoded in the text. Little research has been conducted adopting the frameworks of critical discourse analysis and systemic functional linguistics to investigate this situation in political crisis speeches in Nigeria. In this paper, we focus attention on the analyses of the linguistic, semantic, and ideological elements in selected political crisis speeches in Nigeria to investigate whether they create and sustain unequal power relations and manipulative tendencies, from the perspectives of Critical Discourse Analysis (CDA) and Systemic Functional Linguistics (SFL). Critical Discourse Analysis unpacks both opaque and transparent structural relationships of power dominance, power relations and control as manifested in language.
Critical discourse analysis emerged from a critical theory of language study which sees the use of language as a form of social practice where social relations are reproduced or contested and different interests are served. Systemic functional linguistics relates the structure of texts to their function. Fairclough’s model of CDA and Halliday’s systemic functional approach to language study are adopted in this paper. This paper probes into language use that perpetuates inequalities. This study demystifies the hidden implicature of the selected political crisis speeches and reveals the existence of information that is not made explicit in what the political actors actually say. The analysis further reveals the ideological configurations present in the texts. These ideological standpoints are the basis for naturalizing implicit ideologies and hegemonic influence in the texts. The analyses of the texts further uncovered the linguistic and discursive strategies deployed by text producers to manipulate unsuspecting members of the public both mentally and conceptually in order to enact, sustain and maintain unhealthy power relations at crisis times in Nigerian political history.
Keywords: critical discourse analysis, language, political crisis, power relations, systemic functional linguistics
Procedia PDF Downloads 342
129 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System
Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim
Abstract:
The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ in this equation, which represents a flow property, is used to represent a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, numerical solutions are sought more frequently than analytic ones. The finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns. Also, by adding the continuity equation to the system, the number of equations becomes equal to the number of unknowns. In this situation, velocity and pressure components emerge as two important parameters. In the solution of the differential equation system, velocities and pressures must be solved together. However, in the considered grid system, when pressure and velocity values are jointly solved at the same nodal points, some problems arise. To overcome this problem, a preferred solution is the staggered grid system. For computerized solutions of the staggered grid system, various algorithms have been developed. Of these, the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for a Newtonian flow, with mass and gravitational forces neglected, for an incompressible and laminar fluid, in a hydrodynamically fully developed region and in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure and Reynolds numbers were used. The differential equations were discretized using central difference and hybrid schemes. The discretized equation system was solved by the Gauss-Seidel iteration method.
SIMPLE and SIMPLER were used as solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods. Also, the SIMPLE and SIMPLER solution algorithms were compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, despite some disadvantages, the SIMPLER algorithm proved more practical and gave results in a short time. For this study, a code was developed in the Delphi programming language. The values obtained in the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. The required numbers of grid points and nodes were estimated for the solution of the system. At the same time, to show that the obtained results are sufficiently accurate, a grid-independence analysis (GCI analysis) was performed for coarse, medium and fine grid systems over the solution domain. It was observed that, when the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms
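The Gauss-Seidel iteration used to solve the discretized equation system can be sketched compactly: each unknown is updated in turn using the most recent values of the others, which converges for the diagonally dominant systems that central-difference and hybrid discretizations typically produce. A minimal Python version (an illustrative sketch, not the authors' Delphi code):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    """Gauss-Seidel iteration for A x = b: sweep through the unknowns and
    update each one in place using the latest values of all the others.
    Converges for diagonally dominant systems."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # sum over all off-diagonal terms, using already-updated x values
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x
```

In a SIMPLE/SIMPLER outer loop, a solver of this kind would be called once per sweep on the momentum and pressure-correction systems.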
Procedia PDF Downloads 391
128 Numerical Simulation of the Production of Ceramic Pigments Using Microwave Radiation: An Energy Efficiency Study Towards the Decarbonization of the Pigment Sector
Authors: Pedro A. V. Ramos, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
Global warming mitigation is one of the main challenges of this century, requiring the net balance of greenhouse gas (GHG) emissions to be null or negative by 2050. Industry electrification is one of the main paths to achieving carbon neutrality within the goals of the Paris Agreement. Microwave heating is becoming a popular industrial heating mechanism due to the absence of direct GHG emissions, but also due to its rapid, volumetric, and efficient heating. In the present study, a mathematical model is used to simulate the production of two ceramic pigments by microwave heating at high temperatures (above 1200 degrees Celsius). The two pigments studied were the yellow (Pr, Zr)SiO₂ and the brown (Ti, Sb, Cr)O₂. The chemical conversion of reactants into products was included in the model by using the kinetic triplet obtained with the model-fitting method and experimental data present in the literature. The coupling between the electromagnetic, thermal, and chemical interfaces was also included. The simulations were computed in COMSOL Multiphysics. The geometry includes a moving plunger to allow for cavity impedance matching and thus maximize the electromagnetic efficiency. To accomplish this goal, a MATLAB controller was developed to automatically search for the position of the moving plunger that guarantees maximum efficiency. The power is automatically and permanently adjusted during the transient simulation to impose a stationary regime and total conversion, the two requisites of every converged solution. Both 2D and 3D geometries were used, and a parametric study regarding the axial bed velocity and the heat transfer coefficient at the boundaries was performed. Moreover, a verification and validation study was carried out by comparing the conversion profiles obtained numerically with the experimental data available in the literature; the numerical uncertainty was also estimated to attest to the results' reliability.
The results show that the model-fitting method employed in this work is a suitable tool to predict the chemical conversion of reactants into the pigment, showing excellent agreement between the numerical results and the experimental data. Moreover, it was demonstrated that higher velocities lead to higher thermal efficiencies and thus lower energy consumption during the process. This work concludes that the electromagnetic heating of materials having a high loss tangent and low thermal conductivity, like ceramic materials, may be a challenge due to the presence of hot spots, which may jeopardize the product quality or even the experimental apparatus. The MATLAB controller increased the electromagnetic efficiency by 25%, and a global efficiency of 54% was obtained for the titanate brown pigment. This work shows that electromagnetic heating will be a key technology in the decarbonization of the ceramic sector, as reductions of up to 98% in specific GHG emissions were obtained when compared to the conventional process. Furthermore, numerical simulation appears to be a suitable technique for the design and optimization of microwave applicators, showing high agreement with experimental data.
Keywords: automatic impedance matching, ceramic pigments, efficiency maximization, high-temperature microwave heating, input power control, numerical simulation
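The kinetic-triplet description of conversion used above typically takes the form dα/dt = A·exp(-Ea/RT)·f(α), with f(α) = (1 - α)ⁿ for an n-th order reaction model. A hedged Python sketch of this kinetic step (the parameter values and the simple explicit-Euler integrator are chosen for illustration only, not taken from the paper):

```python
import math

def conversion_rate(alpha, T, A, Ea, n=1.0, R=8.314):
    """Rate of conversion d(alpha)/dt for an n-th order reaction model:
    d(alpha)/dt = A * exp(-Ea / (R*T)) * (1 - alpha)**n,
    where (A, Ea, model) is the kinetic triplet from model fitting."""
    return A * math.exp(-Ea / (R * T)) * (1.0 - alpha) ** n

def integrate_conversion(T, A, Ea, t_end, dt=0.01, n=1.0):
    """Explicit-Euler integration of conversion at constant temperature."""
    alpha, t = 0.0, 0.0
    while t < t_end and alpha < 0.999:
        alpha += conversion_rate(alpha, T, A, Ea, n) * dt
        t += dt
    return min(alpha, 1.0)
```

In the full model this rate law is solved together with the electromagnetic and thermal fields rather than at a fixed temperature; the sketch only isolates the chemistry term.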
Procedia PDF Downloads 138
127 Environmental Effect of Empty Nest Households in Germany: An Empirical Approach
Authors: Dominik Kowitzke
Abstract:
Housing construction has direct and indirect environmental impacts, especially caused by soil sealing and the gray energy consumption related to the use of construction materials. Accordingly, the German government introduced regulations limiting additional annual soil sealing. At the same time, in many regions like metropolitan areas, the demand for further housing is high and of current concern in the media and politics. It is argued that meeting this demand by making better use of the existing housing supply is more sustainable than the construction of new housing units. In this context, the phenomenon of so-called over-housing of empty-nest households seems worthwhile to investigate for its potential to free living space and thus reduce the need for new housing construction and the related environmental harm. Over-housing occurs if no space adjustment takes place in the household lifecycle stage when children move out from home, and the space formerly created for the offspring is from then on under-utilized. Although in some cases the housing space consumption might actually meet households’ equilibrium preferences, space-wise adjustments to the living situation frequently do not take place due to transaction or information costs, habit formation, or government interventions that increase the costs of relocation, like real estate transfer taxes or tenant protection laws keeping tenure rents below the market price. Moreover, many detached houses are not designed for the long term in a way that freed-up space could be rented out. Findings of this research, based on socio-economic survey data, indeed show a significant difference between the living space of empty-nest households and a comparison group of households which never had children.
The approach used to estimate the average difference in living space is a linear regression model regressing the response variable living space on a two-category variable distinguishing the two groups of household types, plus further controls. This difference is assumed to be the under-utilized space and is extrapolated to the total number of empty nests in the population. Supporting this result, it is found that households that do move after children have left home, despite the market frictions impairing relocation, tend to decrease their living space. In the next step, the total under-utilized space in empty nests is estimated only for areas in Germany with tight housing markets and high construction activity. Under the assumption of full substitutability between housing space in empty nests and space in new dwellings in these locations, it is argued that in a perfect market, with empty-nest households consuming their equilibrium demand quantity of housing space, dwelling construction in the amount of the excess consumption of living space could be saved. This, in turn, would prevent environmental harm, quantified in carbon dioxide equivalence units related to average constructions of detached or multi-family houses. This study thus provides information on the amount of under-utilized space inside dwellings, which is missing in public data, and further estimates the external effect of over-housing in environmental terms.
Keywords: empty nests, environment, Germany, households, over-housing
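The estimation strategy described above amounts to an OLS regression of living space on a group dummy (plus controls), where the coefficient on the dummy is the average living-space gap. A minimal sketch (hypothetical function name and synthetic data; the actual study uses survey data and richer controls):

```python
import numpy as np

def living_space_gap(space, is_empty_nest, controls=None):
    """OLS estimate of the average living-space difference between empty-nest
    households and a childless comparison group. `is_empty_nest` is a 0/1
    indicator; `controls` is an optional (n, k) matrix of further covariates.
    Returns the coefficient on the empty-nest dummy (extra space in m^2)."""
    n = len(space)
    X = np.column_stack([np.ones(n), np.asarray(is_empty_nest, float)])
    if controls is not None:
        X = np.column_stack([X, controls])
    beta, *_ = np.linalg.lstsq(X, np.asarray(space, float), rcond=None)
    return beta[1]  # coefficient on the group dummy
```

Multiplying this coefficient by the number of empty-nest households in the population then gives the extrapolated total of under-utilized space.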
Procedia PDF Downloads 171
126 Alternate Optical Coherence Tomography Technologies in Use for Corneal Diseases Diagnosis in Dogs and Cats
Authors: U. E. Mochalova, A. V. Demeneva, Shilkin A. G., J. Yu. Artiushina
Abstract:
Objective. In medical ophthalmology, optical coherence tomography (OCT) has been actively used in the last decade. It is a modern non-invasive method of high-precision hardware examination, which gives a detailed cross-sectional image of eye tissue structure with a high level of resolution and provides in vivo morphological information at the microscopic level about corneal tissue, structures of the anterior segment, the retina and the optic nerve. The purpose of this study was to explore the possibility of using OCT technology in complex ophthalmological examination of dogs and cats, and to characterize the revealed pathological structural changes in corneal tissue in cats and dogs with some of the most common corneal diseases. Procedures. Optical coherence tomography of the cornea was performed in 112 animals: 68 dogs and 44 cats. In total, 224 eyes were examined. Pathologies of the organ of vision included: dystrophy and degeneration of the cornea, endothelial corneal dystrophy, dry eye syndrome, chronic superficial vascular keratitis, pigmented keratitis, corneal erosion, ulcerative stromal keratitis, corneal sequestration, chronic glaucoma, and the postoperative period after keratoplasty. When performing OCT, we used certified medical devices: "Huvitz HOCT-1/1F", "Optovue iVue 80" and "SOCT Copernicus Revo (60)". Results. The article presents the results of a clinical study on the use of OCT of the cornea in cats and dogs, performed by the authors in the complex diagnosis of keratopathies of various origins: endothelial corneal dystrophy, pigmented keratitis, chronic keratoconjunctivitis, chronic herpetic keratitis, ulcerative keratitis, traumatic corneal damage, feline corneal sequestration, and chronic keratitis complicating the course of glaucoma. The characteristics of OCT scans of the corneas of cats and dogs without corneal pathologies are given.
OCT scans of various corneal pathologies in dogs and cats, with a description of the revealed pathological changes, are presented. Of great clinical interest are the data obtained during OCT of the cornea of animals that underwent keratoplasty operations using various forms of grafts. Conclusions. OCT makes it possible to assess the thickness and pathological structural changes of the corneal surface epithelium, the corneal stroma and Descemet's membrane. We can measure them, determine their exact localization, and record pathological changes. Clinical observation of the dynamics of the pathological process in the cornea using OCT makes it possible to evaluate the effectiveness of drug treatment. In case of negative dynamics of corneal disease, it is necessary to determine the indications for surgical treatment (to assess the thickness of the cornea, localize its thinning zones, and characterize the depth and area of pathological changes). Based on OCT of the cornea, it is possible to choose the optimal surgical treatment for the patient, along with the technique and depth of optically reconstructive surgery (penetrating or anterior lamellar keratoplasty), and to determine the depth and diameter of the planned microsurgical trepanation of corneal tissue, which will ensure good adaptation of the edges of the donor material.
Keywords: optical coherence tomography, corneal sequestration, optical coherence tomography of the cornea, corneal transplantation, cat, dog
Procedia PDF Downloads 68
125 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating
Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food, after it was found that food heats almost instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material’s ability to absorb electromagnetic energy and dissipate this energy in the form of heat. Many energy-intense industries could benefit from electromagnetic heating, since many of their raw materials are dielectric at high temperatures. Limestone, a sedimentary rock, is a dielectric material intensively used in the cement industry to produce unslaked lime. A numerical 3D model was implemented in COMSOL Multiphysics to study continuous limestone processing under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell’s equations, as well as the coupling between the heat transfer and chemical interfaces. In addition, a controller was implemented to optimize the overall heating efficiency and to control the numerical model's stability. This was done by continuously matching the cavity impedance and predicting the required energy for the system, avoiding energy inefficiencies. This controller was developed in MATLAB and successfully fulfilled all these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model. The chemical model was found to correctly describe the chosen kinetic equation, and the coupled model successfully solved the equations describing the numerical model. The interaction between the flow of material and the Poynting vector of the electric field was revealed to influence limestone decomposition, as a result of the low dielectric properties of limestone.
The numerical model considered this effect and took advantage of this interaction. The model was demonstrated to be highly unstable when solving non-linear temperature distributions. Limestone has a dielectric loss response that increases with temperature and has low thermal conductivity. For this reason, limestone is prone to thermal runaway under electromagnetic heating, as well as to numerical model instabilities. Five different scenarios were tested, considering material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating the tube rotation for mixing enhancement proved to be beneficial and crucial for all loads considered. When a uniform temperature distribution is accomplished, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed for the 30% fill ratio. The thermal efficiency showed a propensity to stabilize around 90% for loads higher than 50%. The process accomplished a maximum microwave efficiency of 75% for the 80% fill ratio, suggesting that this is an optimal material fill of the tube. Electric field peak detachment was observed for the case with a 100% fill ratio, justifying the lower efficiencies compared to 80%. Microwave technology has been demonstrated to be an important ally for the decarbonization of the cement industry.
Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing
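The thermal-runaway mechanism noted above can be illustrated with a lumped (single-temperature) energy balance in which the absorbed microwave power grows exponentially with temperature through the loss factor, while heat losses to the surroundings grow only linearly; above a critical input power the balance loses its stable fixed point and the temperature diverges. All coefficients below are illustrative assumptions, not fitted limestone properties:

```python
import math

def simulate_lumped_heating(p_in, t_end=200.0, dt=0.01,
                            T_amb=300.0, mc=50.0, hA=0.5,
                            loss0=0.05, k_loss=0.004):
    """Lumped energy balance for a dielectric load whose loss factor grows
    with temperature:
        absorbed power  P(T) = p_in * loss0 * exp(k_loss * (T - T_amb))
        mc * dT/dt      = P(T) - hA * (T - T_amb)
    When P(T) grows faster with T than the linear losses, no stable steady
    state exists and the temperature runs away."""
    T = T_amb
    for _ in range(int(t_end / dt)):
        P = p_in * loss0 * math.exp(k_loss * (T - T_amb))
        T += dt * (P - hA * (T - T_amb)) / mc
        if T > 3000.0:          # runaway detected: clip and stop
            return 3000.0
    return T
```

With these coefficients, a modest input power settles a few kelvin above ambient, whereas a large one overwhelms the cooling term and runs away, which is the qualitative behavior that makes the coupled numerical model unstable.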
Procedia PDF Downloads 175
124 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated by field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare the predictive maintenance problem from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving into the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related tasks. It covers a variety of topics, including but not limited to: failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices.
For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly. The latter usually uses radio communication inherent to the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction and evaluation, we share the following experiences: (1) which specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build such a data pipeline that digests the data and produces insights. We show the tools we use for data ingestion, streaming data processing, and machine learning model training, as well as the tool that coordinates/schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
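The data-labeling step for a run-to-failure trajectory, as used with aircraft-engine datasets of this kind, can be sketched as follows: assign each cycle its remaining useful life (RUL), derive a binary label marking whether failure occurs within a chosen horizon, and compute simple rolling-window features per sensor channel. Function names and the window sizes are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def label_failure_window(cycles, window=30):
    """Label each cycle of one run-to-failure trajectory for binary
    classification: 1 if the unit fails within the next `window` cycles,
    else 0. `cycles` is the total number of recorded cycles for the unit,
    with the last cycle being the failure point."""
    rul = np.arange(cycles - 1, -1, -1)          # remaining useful life per cycle
    return rul, (rul < window).astype(int)

def rolling_features(signal, span=5):
    """Simple feature engineering: rolling mean and std of one sensor
    channel, with a shrinking window at the start so the output aligns
    with the input."""
    s = np.asarray(signal, float)
    feats = []
    for i in range(len(s)):
        w = s[max(0, i - span + 1): i + 1]
        feats.append((w.mean(), w.std()))
    return np.array(feats)
```

Because all cycles of one engine are inter-dependent, train/test splits in this setting should be made per unit rather than per row, which is one of the evaluation pitfalls the abstract alludes to.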
Procedia PDF Downloads 386
123 Recycling Biomass of Constructed Wetlands as Precursors of Electrodes for Removing Heavy Metals and Persistent Pollutants
Authors: Álvaro Ramírez Vidal, Martín Muñoz Morales, Francisco Jesús Fernández Morales, Luis Rodríguez Romero, José Villaseñor Camacho, Javier Llanos López
Abstract:
In recent times, environmental problems have led to the extensive use of biological systems to solve them. Among these, the use of plants such as aquatic macrophytes in constructed wetlands (CW) and terrestrial plant species for treating polluted soils and sludge has gained importance. Plants used in CW participate in different mechanisms for the capture and degradation of pollutants and can also retain some pharmaceutical and personal care products (PPCPs) that are very persistent in the environment. These systems are aligned with the guidelines published for the transition towards ecological procedures: they are environmentally friendly, consume little energy, and capture atmospheric CO₂. However, although the use of constructed wetlands for wastewater treatment is a well-researched domain, it presents some drawbacks, such as the slowness of pollutant degradation and the production of large amounts of plant biomass, which must be harvested and managed periodically. With this opportunity in mind, it is important to highlight that this residual biomass (of lignocellulosic nature) could be used as the feedstock for the generation of carbonaceous materials via thermochemical transformations such as slow pyrolysis or hydrothermal carbonization, producing high-value biomass-derived carbons (as adsorbents, catalysts, etc.) through sustainable processes and thereby improving the circular carbon economy. Thus, this work analyzed some PPCPs commonly found in urban wastewater, such as salicylic acid and ibuprofen, to evaluate the remediation carried out by Phragmites australis.
Then, after harvesting, this biomass can be used to synthesize electrodes through hydrothermal carbonization (HTC), producing high-value biomass-derived carbons with electrocatalytic activity for removing heavy metals and persistent pollutants and promoting circular economy concepts. To do this, biomass was taken from a natural environment at high environmental risk, the Daimiel Wetlands National Park in the center of Spain, together with biomass grown in a CW specifically designed to remove pollutants. The research emphasizes the impact of the composition of the biomass waste and of the synthetic parameters applied during HTC on the electrocatalytic activity. This activity can, in turn, be related to physicochemical properties of the electrodes and their catalytic inks, such as porosity, surface functionalization, conductivity, and mass transfer. Data revealed that the carbon materials synthesized have good surface properties (good conductivity and high specific surface area) that enhance the electro-oxidants generated and promote the removal of PPCPs and of the chemical oxygen demand of polluted waters.
Keywords: constructed wetlands, carbon materials, heavy metals, pharmaceutical and personal care products, hydrothermal carbonization
Procedia PDF Downloads 94
122 Socio-Psychological Significance of Vandalism in the Urban Environment: Destruction, Modernization, Communication
Authors: Olga Kruzhkova, Irina Vorobyeva, Roman Porozov
Abstract:
Vandalism is a common phenomenon, but it is still not clearly defined. In the public sense, vandalism covers blatant cases such as pogroms in cemeteries, destruction of public places (regardless of whether these actions are authorized), and damage to significant objects of culture and history (monuments, religious buildings). From a legal point of view, only an act aimed at 'desecrating buildings or other structures, damaging property on public transport or in other public places' can be called vandalism. The key here is the notion of public property that is being damaged. Also central is the semantics of the messages, expressed in a kind of sign system (drawing, inscription, symbol), that threaten public order, the calm of citizens, or public morality. Because of this, the legal qualification of vandalism excludes a sufficiently wide layer of environmental destruction common in modern urban space (graffiti and other damage to private property, broken shop windows, damage to entrances and elevator cabins), which in ordinary consciousness is seen as obvious vandalism. At the same time, understanding vandalism from the position of psychology raises the question of the limits of the vandal's activity and its motivational basis. Recently, discourse on the positive meaning of some forms of vandalism (graffiti, street art, etc.) has also been activated. Yet there is no discussion of the role and significance of vandalism in public and individual life, although, like any socio-cultural and socio-psychological phenomenon, vandalism is not groundless or meaningless. The aim of our study was to identify and describe the functions of vandalism as a socio-cultural and socio-psychological phenomenon in the life of the urban community, as well as the personal determinants of its manifestations.
The study was conducted in the spatial environment of a Russian megalopolis (Ekaterinburg) by photographing the visual results of vandal acts (6,217 photos), with subsequent trace assessment and image content analysis, as well as diagnostics of the personal characteristics and motivational basis of vandal activity among young people as possible subjects of vandalism. The results allowed us to identify the functions of vandalism at the socio-environmental and individual-subjective levels. The socio-environmental functions of vandalism include the signaling function, the function of preparing social change, the constructing function, and the function of managing public moods. The demonstrative-protest function, the response function, the refund function, and the self-expression function belong to the individual-subjective functions of vandalism. A two-dimensional model of vandal functions has been formed, in which functions are distributed in the spaces 'construction/reconstruction' and 'emotional regulation/moral regulation'. It is noted that any function of vandal activity at the individual level becomes a kind of marker of 'points of tension' at the social and environmental level. Acknowledgment: The research was supported financially by the Russian Science Foundation (Project No. 17-18-01278).
Keywords: destruction, urban environment, vandal behavior, vandalism, vandalism functions
Procedia PDF Downloads 200