Search results for: lower back
340 Assessment of Heavy Metals Contamination Levels in Groundwater: A Case Study of the Bafia Agricultural Area, Centre Region Cameroon
Authors: Carine Enow-Ayor Tarkang, Victorine Neh Akenji, Dmitri Rouwet, Jodephine Njdma, Andrew Ako Ako, Franco Tassi, Jules Remy Ngoupayou Ndam
Abstract:
Groundwater is the major water resource in the whole of Bafia, used for drinking, domestic, poultry and agricultural purposes; as an area of intense agriculture, there is a great need for a quality assessment. Bafia is one of the main food suppliers in the Centre region of Cameroon, and to meet demand, farmers use fertilizers and other agrochemicals to increase their yields. Less than 20% of the population in Bafia has access to piped water due to the national shortage, and to the authors' best knowledge, very few studies have been carried out in the area to increase awareness of the groundwater resources. The aim of this study was to assess heavy metal contamination levels in ground and surface waters and to evaluate the effects of agricultural inputs on water quality in the Bafia area. 57 water samples (31 wells, 20 boreholes, 4 rivers and 2 springs) were analyzed for their physicochemical parameters; collected samples were filtered, acidified with HNO3 and analyzed by ICP-MS for their heavy metal content (Fe, Ti, Sr, Al, Mn). Results showed that most of the water samples are acidic to slightly neutral and moderately mineralized. The Ti concentration was significantly high in the area (mean value 130 µg/L), suggesting a Ti source besides the natural input from titanium oxides. The high amounts of Mn and Al in some samples also pointed to additional input, probably from fertilizers used in the farmlands. Most of the water samples were significantly contaminated with heavy metals exceeding the WHO allowable limits (Ti 94.7%, Al 19.3%, Mn 14%, Fe 5.2% and Sr 3.5% of samples above the limits), especially around farmlands and topographic lows. Heavy metal contamination was evaluated using the heavy metal pollution index (HPI), the heavy metal evaluation index (HEI) and the degree of contamination (Cd), while the Ficklin diagram was used to classify the waters based on metal content and pH.
The high mean values of HPI and Cd (741 and 5, respectively), which exceeded the critical limit, indicate that the water samples are highly contaminated, with intense pollution from Ti, Al and Mn. Based on the HPI and Cd, 93% and 35% of the samples, respectively, are unacceptable for drinking purposes. The point with the lowest HPI value also had the lowest EC (50 µS/cm), indicating lower mineralization and less anthropogenic influence. According to the Ficklin diagram, 89% of the samples fell within the near-neutral low-metal domain, while 9% fell in the near-neutral extreme-metal domain. Two significant factors were extracted from the PCA, explaining 70.6% of the total variance. The first factor revealed intense anthropogenic activity (especially from fertilizers), while the second revealed water-rock interactions. Agricultural activities thus have an impact on the heavy metal content of groundwater in the area; hence, much attention should be given to the affected areas in order to protect human health and sustainably manage this precious resource.
Keywords: Bafia, contamination, degree of contamination, groundwater, heavy metal pollution index
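The HPI referenced in this abstract is conventionally computed as a weighted arithmetic mean of per-metal sub-indices, with unit weights taken inversely proportional to the permissible limits; a minimal sketch of that formula (the concentrations and limits below are illustrative placeholders, not the study's data):

```python
def hpi(measured, standards):
    """Heavy metal pollution index: weighted mean of sub-indices Qi,
    with unit weight Wi inversely proportional to the standard Si."""
    num = 0.0
    den = 0.0
    for metal, mi in measured.items():
        si = standards[metal]
        wi = 1.0 / si          # unit weight
        qi = 100.0 * mi / si   # sub-index as % of the permissible limit
        num += wi * qi
        den += wi
    return num / den

# Illustrative concentrations and limits (µg/L) -- not the study's values.
sample = {"Fe": 150.0, "Mn": 120.0, "Al": 250.0}
limits = {"Fe": 300.0, "Mn": 100.0, "Al": 200.0}
index = hpi(sample, limits)
```

With the critical HPI limit commonly set at 100, a value above it flags the sample as unacceptable for drinking, mirroring the classification used in the abstract.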
Procedia PDF Downloads 85
339 Midterm Clinical and Functional Outcomes After Treatment with the Ponseti Method for Idiopathic Clubfeet: A Prospective Cohort Study
Authors: Neeraj Vij, Amber Brennan, Jenni Winters, Hadi Salehi, Hamy Temkit, Emily Andrisevic, Mohan V. Belthur
Abstract:
Idiopathic clubfoot is a common lower extremity deformity with an incidence of 1:500. The Ponseti method is well known as the gold standard of treatment. However, there is limited functional data demonstrating correction of the clubfoot after treatment with the Ponseti method. The purpose of this study was to assess clinical and functional outcomes after the Ponseti method using the Clubfoot Disease-Specific Instrument (CDS) and pedobarography. This IRB-approved prospective study included patients aged 3-18 who were treated for idiopathic clubfoot with the Ponseti method between January 2008 and December 2018. Age-matched controls were identified through siblings of clubfoot patients and other community members. Treatment details were collected through a chart review of the included patients. Laboratory assessment included a physical exam, gait analysis, and pedobarography. The Pediatric Outcomes Data Collection Instrument (PODCI) and the Clubfoot Disease-Specific Instrument were also obtained for the clubfoot (CF) patients. The Wilcoxon rank-sum test was used to study differences between the CF patients and the typically developing (TD) patients. Statistical significance was set at p < 0.05. A total of 37 patients were enrolled in our study: 21 had previously been treated for CF and 16 were TD. 94% of the CF patients had bilateral involvement. The average age at the start of treatment was 29 days, the average total number of casts was seven to eight, and the average number of casts after Achilles tenotomy was one. The recurrence rate was 25%, tenotomy was required in 94% of patients, and more than one tenotomy was required in 25% of patients. There were no significant differences in step length, step width, stride length, force-time integral, maximum peak pressure, foot progression angles, stance phase time, single-limb support time, double-limb support time, or gait cycle time between children treated with the Ponseti method and typically developing children.
The average post-treatment Pirani and Dimeglio scores were 5.50 ± 0.58 and 15.29 ± 1.58, respectively. The average post-treatment PODCI subscores were: Upper Extremity 90.28, Transfers 94.6, Sports 86.81, Pain 86.20, Happiness 89.52, Global 88.6. The average post-treatment Clubfoot Disease-Specific Instrument subscores were: Satisfaction 73.93, Function 80.32, Overall 78.41. The Ponseti method has a very high success rate and remains the gold standard in the treatment of idiopathic clubfoot. Timely management leads to good outcomes and a low need for repeated Achilles tenotomy. Children treated with the Ponseti method demonstrate good functional outcomes as measured by pedobarography. Pedobarography may have clinical utility in studying congenital foot deformities. Objective measures of hours of brace wear could represent an improvement in clubfoot care.
Keywords: functional outcomes, pediatric deformity, patient-reported outcomes, talipes equinovarus
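The Wilcoxon rank-sum comparison described above pools the two groups, ranks the observations, and compares the rank sum of one group against its expectation under the null hypothesis; a minimal pure-Python sketch using the large-sample normal approximation (the gait values in the usage line are invented for illustration, not the study's data):

```python
def rank_sum_z(x, y):
    """Wilcoxon rank-sum test: z statistic for the rank sum of sample x
    under the large-sample normal approximation (mid-ranks for ties)."""
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                          # extend over tied values
        mid = (i + j) / 2 + 1               # mid-rank shared by the tie group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = mid
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                     # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2             # expected rank sum under H0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    return (w - mu) / sigma

# Illustrative step lengths (m) for CF vs. TD children -- not the study's data.
z = rank_sum_z([0.42, 0.45, 0.40, 0.44], [0.43, 0.46, 0.41, 0.45])
```

A |z| above 1.96 corresponds to p < 0.05 two-sided, matching the significance threshold stated in the abstract.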
Procedia PDF Downloads 75
338 Leptin Levels in Cord Blood and Their Associations with the Birth of Small, Large and Appropriate for Gestational Age Infants in Southern Sri Lanka
Authors: R. P. Hewawasam, M. H. A. D. de Silva, M. A. G. Iresha
Abstract:
In recent years, childhood obesity has increased to pan-epidemic proportions, along with a concomitant increase in obesity-associated morbidity. Birth weight is an important determinant of later adult health, with neonates at both ends of the birth weight spectrum at risk of future health complications. Consequently, infants who are born large for gestational age (LGA) are more likely to be obese in childhood and adolescence and are at risk of cardiovascular and metabolic complications later in life. Adipose tissue plays a role in linking events in fetal growth to the subsequent development of adult diseases. In addition to its role as a storage depot for fat, adipose tissue produces and secretes a number of hormones important in modulating metabolism and energy homeostasis. Cord blood leptin level has been positively correlated with fetal adiposity at birth. It is established that Asians have lower skeletal muscle mass, lower bone mineral content and excess body fat for a given body mass index, indicating a genetic predisposition to obesity. To our knowledge, no study has been conducted in Sri Lanka to determine the relationship between the adipocytokine profile in cord blood and anthropometric parameters in newborns. Thus, the objective of this study was to establish this relationship for the Sri Lankan population in order to implement awareness programs to minimize childhood obesity in the future. Umbilical cord blood was collected from 90 newborns (40 male, 50 female; gestational age 35-42 weeks) after double clamping the umbilical cord before separation of the placenta, and the concentration of leptin was measured by ELISA. Anthropometric parameters of the newborn, such as birth weight, length, ponderal index, and occipital-frontal, chest, hip and calf circumferences, were measured.
Pearson’s correlation was used to assess the relationship between leptin and anthropometric parameters, while the Mann-Whitney U test was used to assess differences in cord blood leptin levels between small for gestational age (SGA), appropriate for gestational age (AGA) and LGA infants. There was a significant difference (P < 0.05) between the cord blood leptin concentrations of LGA infants (12.67 ± 2.34 ng/mL) and AGA infants (7.10 ± 0.90 ng/mL). However, no significant difference was observed between the leptin levels of SGA infants (8.86 ± 0.70 ng/mL) and AGA infants. In both male and female neonates, umbilical leptin levels showed significant positive correlations (P < 0.05) with birth weight, pre-pregnancy maternal weight and pre-pregnancy BMI in the infants of large and appropriate for gestational ages. The increased leptin concentrations in the cord blood of large for gestational age infants suggest that leptin may be involved in regulating fetal growth. The leptin concentration of the Sri Lankan population did not deviate significantly from published data for Asian populations. Fetal leptin may be an important predictor of neonatal adiposity; however, interventional studies are required to assess its impact on the possible risk of childhood obesity.
Keywords: appropriate for gestational age, childhood obesity, leptin, anthropometry
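The Pearson correlation used above is simply the covariance of two variables normalized by the product of their standard deviations; a minimal sketch (the leptin/birth-weight pairs below are invented for illustration, not the study's cord-blood data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: covariance normalized by both standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (leptin ng/mL, birth weight kg) pairs -- not the study's data.
leptin = [5.2, 7.1, 8.9, 12.7, 14.0]
weight = [2.8, 3.1, 3.3, 3.9, 4.1]
r = pearson_r(leptin, weight)
```

An r close to +1 would correspond to the positive leptin-birth-weight correlation reported in the abstract.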
Procedia PDF Downloads 187
337 Stigma Impacts the Quality of Life of People Living with Diabetes Mellitus in Switzerland: Challenges for Social Work
Authors: Daniel Gredig, Annabelle Bartelsen-Raemy
Abstract:
Social work services offered to people living with diabetes tend to be moulded by the prevailing understanding that social work should support people living with diabetes in their adherence to medical prescriptions and/or lifestyle changes. As diabetes has been conceived as a condition facing no stigma, discrimination against people living with diabetes has not been considered. However, there is growing evidence of stigma. To our knowledge, nevertheless, there have been no comprehensive, in-depth studies of this stigma and its impact. Against this background, and challenging the present layout of services for people living with diabetes, the present study aimed to establish whether: - people living with diabetes in Switzerland experience stigma, and if so, in what contexts and to what extent; - experiencing stigma impacts the quality of life of those affected. It was hypothesized that stigma would impact their quality of life. It was further hypothesized that low self-esteem, psychological distress, depression, and a lack of social support would be mediating factors. For data collection, an anonymous paper-and-pencil self-administered questionnaire was used, which drew on a qualitative elicitation study. Data were analysed using descriptive statistics and structural equation modelling. To generate a large and diverse convenience sample, the questionnaire was distributed to the readers of a journal for people with diabetes living in Switzerland, issued in German and French. The sample included 3347 people with type 1 and 2 diabetes, aged 16-96, living in diverse conditions in the German- and French-speaking areas of Switzerland. Respondents reported experiences of discrimination in various contexts and stereotyping based on the beliefs that people with diabetes have low work performance; are inefficient in the workplace; are inferior; are weak-willed in their ability to manage health-related issues; take advantage of their condition; and are to be viewed as pitiful or sick people.
Respondents who reported higher levels of perceived stigma reported higher levels of psychological distress (β = .37), more pronounced depressive symptoms (β = .33), and less social support (β = -.22). Higher psychological distress (β = -.29) and more pronounced depressive symptoms (β = -.28), in turn, predicted lower quality of life. These findings challenge the prevailing understanding of social work services for people living with diabetes in Switzerland and beyond. They call for a less individualistic approach, consideration of the social context in which service users are placed in their everyday lives, and direct attention to stigma. Social work could thus partner with people living with diabetes to fight discrimination and stereotypes, which could include identifying and designing educational and public awareness strategies. In direct social work with people living with diabetes, this could include broaching experiences of stigma and modes of coping with them. This study was carried out in collaboration with the Swiss Diabetes Association. The association accepted the challenging conclusions of this study, engaged with the results, and is currently discussing priorities and courses of action to be taken.
Keywords: diabetes, discrimination, quality of life, services, stigma
Procedia PDF Downloads 227
336 Antioxidant Potential of Sunflower Seed Cake Extract in Stabilization of Soybean Oil
Authors: Ivanor Zardo, Fernanda Walper Da Cunha, Júlia Sarkis, Ligia Damasceno Ferreira Marczak
Abstract:
Lipid oxidation is one of the most important deteriorative processes in the oil industry, resulting in losses of the nutritional value of oils as well as changes in color, flavor and other physiological properties. Autoxidation of lipids occurs naturally between molecular oxygen and the unsaturations of fatty acids, forming lipid free radicals, peroxide radicals and hydroperoxides. In order to avoid lipid oxidation in vegetable oils, synthetic antioxidants such as butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tertiary butylhydroquinone (TBHQ) are commonly used. However, the use of synthetic antioxidants has been associated with several health side effects and toxicity. The use of natural antioxidants as stabilizers of vegetable oils is being suggested as a sustainable alternative to synthetic antioxidants. One alternative that has been studied is the use of natural extracts obtained mainly from fruits, vegetables and seeds, whose well-known antioxidant activity is related mainly to the presence of phenolic compounds. Sunflower seed cake is rich in phenolic compounds (1-4% of the total mass), with chlorogenic acid as the major constituent. The aim of this study was to evaluate the in vitro application of the phenolic extract obtained from sunflower seed cake as a retarder of the lipid oxidation reaction in soybean oil and to compare the results with a synthetic antioxidant. For this, soybean oil, provided by industry without any added antioxidants, was subjected to an accelerated storage test for 17 days at 65 °C. Six samples with different treatments were submitted to the test: a control sample without any added antioxidants; 100 ppm of the synthetic antioxidant BHT; a mixture of 50 ppm of BHT and 50 ppm of phenolic compounds; and 100, 500 and 1200 ppm of phenolic compounds. The phenolic compound concentration of the extract was expressed in gallic acid equivalents.
To evaluate the oxidative changes of the samples, aliquots were collected after 0, 3, 6, 10 and 17 days and analyzed for peroxide values and conjugated diene and triene values. The soybean oil sample initially had a peroxide content of 2.01 ± 0.27 meq of oxygen/kg of oil. On the third day of treatment, only the samples treated with 100, 500 and 1200 ppm of phenolic compounds showed considerable oxidation retardation compared to the control sample. On the sixth day, the samples presented a considerable increase in peroxide value (higher than 13.57 meq/kg), and the higher the concentration of phenolic compounds, the lower the peroxide value observed. From the tenth day on, the samples had very high peroxide values (higher than 55.39 meq/kg), and only the sample containing 1200 ppm of phenolic compounds presented significant oxidation retardation. The samples containing the phenolic extract were more efficient at avoiding the formation of primary oxidation products, indicating effectiveness in retarding the reaction. Similar results were observed for conjugated dienes and trienes. Based on these results, phenolic compounds, especially chlorogenic acid (the major phenolic compound of sunflower seed cake), can be considered a potential partial or even total substitute for synthetic antioxidants.
Keywords: chlorogenic acid, natural antioxidant, vegetable oil deterioration, waste valorization
Procedia PDF Downloads 261
335 Preparing Young Adults with Disabilities for Lifelong Inclusivity through a College Level Mentor Program Using Technology: An Exploratory Study
Authors: Jenn Gallup, Onur Kocaoz, Onder Islek
Abstract:
In their pursuit of postsecondary transitions, individuals with disabilities tend to experience academic, behavioral, and emotional challenges to a greater extent than their typically developing peers. These challenges result in lower rates of graduation, employment, independent living, and college participation than among their peers without disabilities. The lack of friendships and support systems has had a negative impact on those with a disability transitioning to postsecondary settings, including employment, independent living, and university settings. Establishing friendships and support systems early on is an indicator of potential success and persistence in postsecondary education, employment, and independent living for typically developing college students; a deficit in friendships and supports is evidently a key deficit for individuals with disabilities as well. To address the specific needs of this group, a mentor program was developed for a transition program held at the university for youth aged 18-21. Pre-service teachers enrolled in the special education program engaged with youth in the transition program in a variety of activities on campus. The mentorship program had two purposes: to assist young adults with disabilities who were transitioning to a workforce setting by helping to increase social skills, self-advocacy, supports and friendships, and confidence; and to give their peers without disabilities, enrolled in a secondary special education course as pre-service teachers, the experience of interacting with and forming friendships with peers who have a disability, for the purposes of career development. Additionally, according to researchers, mobile technology has created a virtual world of equality and opportunity for a large segment of the population that was once marginalized due to physical and cognitive impairments.
All of the participants had access to smartphones; therefore, technology was explored during this study to determine whether it could be used as a compensatory tool to allow the young adults with disabilities to do things that would otherwise have been difficult because of their disabilities. All participants were also asked to use technology such as smartphones to communicate beyond the activities and to collaborate using virtual platform games, which would support and promote social skills, soft skills, socialization, and relationships. The findings of this study confirmed that a peer mentorship program that harnessed the power of technology supported outcomes specific to young adults with and without disabilities. Mobile technology and virtual game-based platforms were identified as significant contributors to personal, academic, and career growth for both groups. The technology encouraged friendships, provided an avenue for rich social interactions, and increased soft skills. Results will be shared along with the development of the program and potential implications for the field.
Keywords: career outcomes, mentorship, soft-skills, technology, transition
Procedia PDF Downloads 167
334 The Roles of Mandarin and Local Dialect in the Acquisition of L2 English Consonants Among Chinese Learners of English: Evidence From Suzhou Dialect Areas
Authors: Weijing Zhou, Yuting Lei, Francis Nolan
Abstract:
In the domain of second language acquisition, whenever pronunciation errors or acquisition difficulties are found, researchers habitually attribute them to negative transfer from the native language or local dialect. But to what extent do Mandarin and local dialects actually affect English phonological acquisition for Chinese learners of English as a foreign language (EFL)? Little empirical evidence has been gathered in China. To address this core issue, the present study conducted phonetic experiments to explore the roles of local dialects and Mandarin in Chinese EFL learners’ acquisition of L2 English consonants. Besides Mandarin, the sole national language in China, Suzhou Dialect was selected as the target local dialect because its phonology is distinct from Mandarin's. The experimental group consisted of 30 junior English majors at Yangzhou University, who were born and raised in Suzhou, had acquired Suzhou Dialect since early childhood, and were able to communicate freely and fluently in Suzhou Dialect, Mandarin and English. The consonantal target segments were all the consonants of English, Mandarin and Suzhou Dialect, in typical carrier words embedded in the carrier sentence Say again. The control group consisted of two Suzhou Dialect experts, two Mandarin radio broadcasters, and two British RP phoneticians, who served as the standard speakers of the three languages. The reading corpus was recorded and sampled in the phonetic laboratories at Yangzhou University, Soochow University and Cambridge University, respectively, then transcribed, segmented and analyzed acoustically with Praat, and finally analyzed statistically with Excel and SPSS.
The main findings are as follows. First, in terms of correct acquisition rates (CARs) for all the consonants, Mandarin ranked top (92.83%), English second (74.81%) and Suzhou Dialect last (70.35%); significant differences were found only between the CARs of Mandarin and English and between the CARs of Mandarin and Suzhou Dialect, demonstrating that Mandarin was overwhelmingly more robust than English or Suzhou Dialect in the subjects’ multilingual phonological ecology. Second, in terms of typical acoustic features, the average durations of all the consonants, as well as the voice onset times (VOTs) of plosives, fricatives, and affricates, in the three languages were much longer than those of the standard speakers; the intensities of English fricatives and affricates were higher than those of the RP speakers but lower than those of the Mandarin and Suzhou Dialect standard speakers; and the formants of English nasals and approximants were significantly different from those of Mandarin and Suzhou Dialect, illustrating inconsistent acoustic variation across the three languages. Third, in terms of typical pronunciation variations and errors, there were significant interlingual interactions between the three consonant systems, in which Mandarin consonants were absolutely dominant, pointing to strong transfer from L1 Mandarin to L2 English rather than from the earlier-acquired L1 local dialect to L2 English. This is largely because the subjects had been knowingly exposed to Mandarin since nursery school and were strictly required to speak Mandarin throughout their formal education from primary school to university.
Keywords: acquisition of L2 English consonants, role of Mandarin, role of local dialect, Chinese EFL learners from Suzhou Dialect areas
Procedia PDF Downloads 94
333 Performance of the Abbott RealTime High Risk HPV Assay with SurePath Liquid Based Cytology Specimens from Women with Low Grade Cytological Abnormalities
Authors: Alexandra Sargent, Sarah Ferris, Ioannis Theofanous
Abstract:
The Abbott RealTime High Risk HPV test (RealTime HPV) is one of five assays clinically validated and approved by the English NHS Cervical Screening Programme (NHSCSP) for HPV triage of low grade dyskaryosis and test-of-cure of treated cervical intraepithelial neoplasia. The assay is a highly automated multiplex real-time PCR test for detecting 14 high-risk (hr) HPV types, with simultaneous differentiation of HPV 16 and HPV 18 from non-16/18 hrHPV. An endogenous internal control ensures sample cellularity and controls for extraction efficiency and PCR inhibition. Both the original cervical specimen collected in SurePath (SP) liquid-based cytology (LBC) medium (BD Diagnostics) and the SP post-gradient cell pellet (SPG) obtained after cytological processing are CE marked for testing with the RealTime HPV test. During the 2011 NHSCSP validation of new tests, only the original aliquot of SP LBC medium was investigated. The residual sample volume left after cytology slide preparation is low and may not always be sufficient for repeat HPV testing or for testing other biomarkers that may be added to testing algorithms in the future. The SPG samples, however, have sufficient volume for additional testing and the necessary laboratory validation procedures. This study investigates the correlation of RealTime HPV results for cervical specimens from women with low grade cytological abnormalities between matched pairs of original SP LBC medium and SP post-gradient cell pellets (SPG) after cytology processing. Matched pairs of SP and SPG samples from 750 women with borderline (N = 392) and mild (N = 351) cytology were available for this study. Both specimen types were processed and tested in parallel for the presence of hrHPV with RealTime HPV according to the manufacturer's instructions. HrHPV detection rates and concordance between test results from matched SP and SPG pairs were calculated.
A total of 743 matched pairs with valid test results on both sample types were available for analysis. An overall agreement of hrHPV test results of 97.5% (k: 0.95) was found for matched SP/SPG pairs; slightly lower concordance (96.9%; k: 0.94) was observed for the 392 pairs from women with borderline cytology compared to the 351 pairs from women with mild cytology (98.0%; k: 0.95). Partial typing results were highly concordant in matched SP/SPG pairs for HPV 16 (99.1%), HPV 18 (99.7%) and non-16/18 hrHPV (97.0%), respectively. 19 matched pairs had discrepant results: 9 from women with borderline cytology and 4 from women with mild cytology were negative on SPG and positive on SP; 3 from women with borderline cytology and 3 from women with mild cytology were negative on SP and positive on SPG. Excellent correlation of hrHPV DNA test results was found between matched pairs of original SP fluid and post-gradient cell pellets from women with low grade cytological abnormalities tested with the Abbott RealTime High Risk HPV assay, demonstrating robust performance of the test with both specimen types and supporting the utility of the assay for cytology triage with either specimen type.
Keywords: Abbott realtime test, HPV, SurePath liquid based cytology, surepath post-gradient cell pellet
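The agreement statistics reported above (overall agreement and Cohen's kappa) can be reproduced from a 2x2 concordance table; a minimal sketch in which the 13/6 discordant counts come from the abstract, but the split of the 724 concordant pairs into positives and negatives (380/344 below) is assumed purely for illustration:

```python
def agreement_and_kappa(a, b, c, d):
    """Overall agreement and Cohen's kappa for a 2x2 concordance table:
    a = positive on both, b = SP+/SPG-, c = SP-/SPG+, d = negative on both."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

# 13 SP+/SPG- and 6 SP-/SPG+ pairs are taken from the abstract;
# the 380/344 split of the concordant pairs is an assumption.
po, kappa = agreement_and_kappa(380, 13, 6, 344)
```

With these assumed marginals the sketch returns an agreement of about 97.4% and a kappa of about 0.95, in line with the values reported for the full SP/SPG comparison.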
Procedia PDF Downloads 256
332 Efficacy of DAPG Producing Fluorescent Pseudomonas for Enhancing Nutrient Use Efficacy, Bio-Control of Soil-Borne Diseases and Yield of Groundnut
Authors: Basavaraj Yenagi, P. Nagaraju, C. R. Patil
Abstract:
Groundnut (Arachis hypogaea L.) is called the “King of oilseeds” and is one of the most important food and cash crops of the Indian subcontinent. Yield and oil quality are negatively correlated with poor or imbalanced nutrition and constant exposure to biotic and abiotic stress factors. A variety of diseases affect the groundnut plant; most are caused by fungi and lead to severe yield loss. Imbalanced nutrition also raises concerns of environmental deterioration, including of soil fertility. Among microbial antagonists, Pseudomonas is a common member of the plant growth promoting rhizobacterial microflora present in the groundnut rhizosphere. These bacteria are known to produce a beneficial effect on groundnut due to their high metabolic activity, leading to the production of enzymes, exopolysaccharides, secondary metabolites, and antibiotics. Their antagonistic ability lies in producing antibiotic metabolites such as 2,4-diacetylphloroglucinol (DAPG). DAPG can inhibit the growth of the fungal pathogens causing collar rot and stem rot and can also increase the availability of plant nutrients through increased solubilization and uptake. Hence, the present study was conducted for three consecutive years (2014 to 2016) on a Vertisol during the rainy season to assess the efficacy of DAPG-producing fluorescent pseudomonads for enhancing nutrient use efficiency, bio-control of soil-borne diseases and yield of groundnut at the University of Agricultural Sciences, Dharwad farm. The experiment was laid out in an RCBD with three replications and seven treatments. The mean of three years' data revealed that DAPG-producing fluorescent pseudomonads enhanced groundnut yield, nitrogen and phosphorus uptake and nutrient use efficiency, and were also effective in the bio-control of collar rot and stem rot incidence, leading to increased pod yield of groundnut.
The highest dry pod yield of groundnut was obtained with DAPG 2 (3535 kg ha-1), closely followed by DAPG 4 (3492 kg ha-1), FP 98 (3443 kg ha-1), DAPG 1 (3414 kg ha-1), Trichoderma spp. (3380 kg ha-1) and FP 86 (3361 kg ha-1), compared to the control (3173 kg ha-1). A similar trend was obtained for other growth and yield attributing parameters. The increase in N uptake ranged from 8.21 percent with FP 86 to 17.91 percent with DAPG 2, and the increase in P uptake ranged from 5.56 percent with FP 86 to 16.67 percent with DAPG 2, over the control. In the first year, there was no incidence of collar rot. During the second year, the control plot recorded 2.51 percent incidence, while incidence ranged from 0.82 percent to 1.43 percent in the DAPG-producing fluorescent pseudomonad treatments. A similar trend, with lower incidence, was noticed in the third year. Stem rot incidence was recorded in all three years. Mean data indicated that the control plot recorded 2.65 percent incidence, while incidence ranged from 0.71 percent to 1.23 percent in the DAPG-producing fluorescent pseudomonad treatments. The increase in net monetary benefits ranged from Rs.5975 ha-1 to Rs.11407 ha-1 across treatments. Hence, as a low-cost technology, seed treatment with available DAPG-producing fluorescent pseudomonads has a beneficial effect on groundnut, enhancing yield, nutrient use efficiency and bio-control of soil-borne diseases.
Keywords: groundnut, DAPG, fluorescent pseudomonas, nutrient use efficiency, collar rot, stem rot
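The yield responses quoted above are easy to restate as percentage gains over the control plot; a quick sketch using the pod-yield figures from the abstract:

```python
def pct_increase(treatment, control):
    """Percentage gain of a treatment mean over the control mean."""
    return 100.0 * (treatment - control) / control

control_yield = 3173  # kg/ha, control plot (from the abstract)
treatments = {"DAPG 2": 3535, "DAPG 4": 3492, "FP 98": 3443, "DAPG 1": 3414}
gains = {name: round(pct_increase(y, control_yield), 1)
         for name, y in treatments.items()}
# DAPG 2 comes out at roughly an 11.4% yield gain over the control.
```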
Procedia PDF Downloads 179
331 Assessment of Psychological Needs and Characteristics of Elderly Population for Developing Information and Communication Technology Services
Authors: Seung Ah Lee, Sunghyun Cho, Kyong Mee Chung
Abstract:
Rapid population aging has become a worldwide demographic phenomenon due to rising life expectancy and declining fertility rates. Given the current rate of population aging, Korean society is expected to become a ‘super-aged’ society within 10 years, in which people aged 65 years or older account for more than 20% of the entire population. In line with this trend, ICT services aimed at helping elderly people improve their quality of life have been suggested. However, existing ICT services mainly focus on supporting health or nursing care and are somewhat limited in meeting the variety of specialized needs and challenges of this population; the majority of services have been driven by technology-push policies. Given that the usage of ICT services varies greatly with individuals’ socio-economic status (SES) and physical and psychosocial needs, this study systematically categorized the elderly population into sub-groups and identified their needs and characteristics related to ICT usage in detail. First, three assessment criteria (demographic variables including SES, cognitive functioning level, and emotional functioning level) were identified based on previous literature, experts’ opinions, and a focus group interview. Second, survey questions for the needs assessment were developed based on these criteria and administered to 600 respondents from a national probability sample. The questionnaire consisted of 67 items concerning demographic information, experience with ICT services and information technology (IT) devices, quality of life, cognitive functioning, etc. Based on the survey results, age (60s, 70s, 80s), education level (college graduate or more, middle and high school, primary school or less) and cognitive functioning level (above the cut-off, below the cut-off) were considered the most relevant factors for categorization, and 18 sub-groups were identified.
Finally, the 18 sub-groups were clustered into three groups according to the following similarities: computer usage rate, difficulties in using ICT, and familiarity with current or previous job. Group 1 (‘active users’) included those with high cognitive function and educational level in their 60s and 70s. They showed favorable and familiar attitudes toward ICT services and used the services for ‘joyful life’, ‘intelligent living’ and ‘relationship management’. Group 2 (‘potential users’), ranging in age from their 60s to 80s with a high level of cognitive function and mostly middle to high school graduates, reported some difficulties in using ICT, and their expectations were lower than those of group 1 although the two groups were similar in their areas of need. Group 3 (‘limited users’) consisted of people with the lowest education level or cognitive function, and 90% of the group reported difficulties in using ICT. However, group 3 did not differ from group 2 regarding the level of expectation for ICT services, and their main purpose of using ICT was ‘safe living’. This study developed a systematic needs assessment tool and identified three sub-groups of elderly ICT users based on multiple criteria. It is implied that current cognitive function plays an important role in using ICT and determining needs among the elderly population. Implications and limitations are further discussed.
Keywords: elderly population, ICT, needs assessment, population aging
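The final clustering step described above can be sketched with a plain k-means pass. The 18 sub-group profiles (computer usage rate, difficulty score, job familiarity) below are invented for illustration and are not the survey data:

```python
from statistics import mean

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    """Plain k-means with a deterministic 'first k points' initialisation."""
    centroids = [points[i] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest centroid
        for n, p in enumerate(points):
            labels[n] = min(range(k), key=lambda c: dist2(p, centroids[c]))
        # recompute each centroid as the mean of its members
        for c in range(k):
            members = [p for n, p in enumerate(points) if labels[n] == c]
            if members:
                centroids[c] = tuple(mean(dim) for dim in zip(*members))
    return labels

# Invented (usage %, difficulty 0-10, familiarity 0-10) profiles for the 18
# sub-groups, interleaved so the first three points seed distinct clusters.
profiles = [
    (80, 2.0, 8.0), (50, 5.0, 5.0), (15, 8.0, 2.0),
    (78, 2.5, 7.5), (48, 5.5, 4.8), (12, 8.5, 1.8),
    (82, 1.8, 8.2), (52, 4.8, 5.2), (18, 7.8, 2.2),
    (79, 2.2, 7.8), (49, 5.2, 4.9), (14, 8.2, 2.1),
    (81, 2.4, 8.1), (51, 4.9, 5.1), (16, 7.9, 1.9),
    (77, 2.1, 7.7), (47, 5.1, 5.3), (13, 8.1, 2.0),
]
labels = kmeans(profiles, k=3)
```

With three well-separated profile clusters, this recovers the reported three-group structure.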
330 Ruta graveolens Fingerprints Obtained with Reversed-Phase Gradient Thin-Layer Chromatography with Controlled Solvent Velocity
Authors: Adrian Szczyrba, Aneta Halka-Grysinska, Tomasz Baj, Tadeusz H. Dzido
Abstract:
Since prehistory, plants have constituted an essential source of biologically active substances in folk medicine. One example of a medicinal plant is Ruta graveolens L. For a long time, the Ruta g. herb has been famous for its spasmolytic, diuretic, and anti-inflammatory therapeutic effects. The wide spectrum of secondary metabolites produced by Ruta g. includes flavonoids (e.g., rutin, quercetin), coumarins (e.g., bergapten, umbelliferone), phenolic acids (e.g., rosmarinic acid, chlorogenic acid), and limonoids. Unfortunately, the presence of the produced substances is highly dependent on environmental factors like temperature, humidity, or soil acidity; therefore, standardization is necessary. There have been many attempts to characterize various phytochemical groups (e.g., coumarins) of Ruta graveolens using normal-phase thin-layer chromatography (TLC). However, due to the so-called general elution problem, some components usually remained unseparated near the start or finish line. Ruta graveolens is therefore a very good model plant. Methanol and petroleum ether extracts from its aerial parts were used to demonstrate the capabilities of the new device for gradient thin-layer chromatogram development. The development of gradient thin-layer chromatograms in the reversed-phase system in conventional horizontal chambers can be disrupted by problems associated with an excessive flux of the mobile phase to the surface of the adsorbent layer. This phenomenon is most likely caused by significant differences between the surface tensions of the subsequent fractions of the mobile phase. An excessive flux of the mobile phase onto the surface of the adsorbent layer distorts its flow. The described effect produces unreliable and unrepeatable results, causing blurring and deformation of the substance zones.
In the prototype device, the mobile phase solution is delivered onto the surface of the adsorbent layer with controlled velocity (by a moving pipette driven by a 3D machine). The solvent delivery rate to the adsorbent layer is equal to or lower than that of conventional development. Therefore, chromatograms can be developed at the optimal linear mobile phase velocity. Furthermore, under such conditions there is no excess of eluent solution on the surface of the adsorbent layer, so higher performance of the chromatographic system can be obtained. Directly feeding the adsorbent layer with eluent also makes it possible to perform convenient continuous gradient elution practically without the so-called gradient delay. In the study, unique fingerprints of methanol and petroleum ether extracts of Ruta graveolens aerial parts were obtained with stepwise-gradient reversed-phase thin-layer chromatography. Fingerprints obtained under different chromatographic conditions will be compared, and the advantages and disadvantages of the proposed approach to chromatogram development with controlled solvent velocity will be discussed.
Keywords: fingerprints, gradient thin-layer chromatography, reversed-phase TLC, Ruta graveolens
329 Molecular Dynamics Simulation Study of Sulfonated Polybenzimidazole Polymers as Promising Forward Osmosis Membranes
Authors: Seyedeh Pardis Hosseini
Abstract:
With the growing scarcity of clean and affordable water in many countries, wastewater treatment has been chosen as a viable method to produce freshwater for various uses. Even though reverse osmosis dominates the wastewater treatment market, forward osmosis (FO) processes have significant advantages, such as potentially using renewable and low-grade energy sources and improving water quality. FO is an osmotically driven membrane process that uses a highly concentrated draw solution and a relatively dilute feed solution across a semi-permeable membrane. Among the many novel FO membranes that have been introduced over the past decades, polybenzimidazole (PBI) membranes, a class of aromatic heterocyclic-based polymers, have shown high thermal and chemical stability because of their unique chemical structure. However, the studies reviewed indicate that the hydrophilicity of PBI membranes is comparatively low. Hence, there is an urgent need to develop novel FO membranes with modified PBI polymers that promote hydrophilicity. A few studies have been undertaken to improve PBI hydrophilicity by fabricating mixed-matrix polymeric membranes and by surface modification. Therefore, in this study, two different sulfonated polybenzimidazole (SPBI) polymers with the same backbone but different functional groups, namely arylsulfonate PBI (PBI-AS) and propylsulfonate PBI (PBI-PS), are introduced as FO membranes and studied via the molecular dynamics (MD) simulation method. The FO simulation box consists of three distinct regions: a saltwater region, a membrane region, and a pure-water region. The pure-water region is situated at the upper part of the simulation box, while the saltwater region, which contains an aqueous salt solution of Na+ and Cl− ions along with water molecules, occupies the lower part of the simulation box.
Specifically, the saltwater region includes 710 water molecules and 24 Na+ and 24 Cl− ions, resulting in a combined concentration of 10 weight percent (wt%). The pure-water region comprises 788 water molecules. Both the saltwater and pure-water regions have a density of 1.0 g/cm³. The membrane region, positioned between the saltwater and pure-water regions, is constructed from three types of polymers: PBI, PBI-AS, and PBI-PS, each consisting of three polymer chains with 30 monomers per chain. The structural and thermophysical properties of the polymers, water molecules, and Na+ and Cl− ions were analyzed using the COMPASS force field. All simulations were conducted using the BIOVIA Materials Studio 2020 software. By monitoring the variation in the number of water molecules within the saltwater region over the simulation time, the water permeability of the polymer membranes was calculated and subsequently compared. The results indicated that the SPBI polymers exhibited higher water permeability than the PBI polymer. This enhanced permeability can be attributed to the structural and compositional differences between the SPBI and PBI polymers, which likely facilitate more efficient water transport through the membrane. Consequently, the adoption of SPBI polymers in the FO process is anticipated to result in significantly improved performance. This improvement could lead to higher water flux rates, better salt rejection, and overall more efficient use of resources in desalination and water purification applications.
Keywords: forward osmosis, molecular dynamics simulation, sulfonated polybenzimidazole, water permeability
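The permeability estimate described above (monitoring the water-molecule count in the saltwater region over simulation time) can be sketched as a least-squares slope normalised by membrane area. The times, counts, and cross-section below are invented illustrative numbers, not the simulation output:

```python
# Fit the change in the feed-side water count over time, then normalise by a
# hypothetical membrane cross-section. All numbers here are illustrative.

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

times_ns = [0.0, 0.5, 1.0, 1.5, 2.0]   # simulation time (ns)
n_water = [710, 716, 723, 729, 735]    # waters counted in the saltwater region

d_n_dt = slope(times_ns, n_water)      # molecules per ns crossing the membrane
area_nm2 = 4.0                         # hypothetical membrane cross-section
flux = d_n_dt / area_nm2               # molecules / (ns * nm^2)
```

Comparing this flux across the PBI, PBI-AS, and PBI-PS boxes is what allows the permeability ranking reported above.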
328 Monitoring of Educational Achievements of Kazakhstani 4th and 9th Graders
Authors: Madina Tynybayeva, Sanya Zhumazhanova, Saltanat Kozhakhmetova, Merey Mussabayeva
Abstract:
One of the leading indicators of education quality is the level of students’ educational achievements. The modernization of the Kazakhstani education system has created the need to improve the national system for assessing the quality of education. The results of such assessment greatly contribute to addressing questions about the current state of the educational system in the country. The monitoring of students’ educational achievements (MEAS) is the systematic measurement of the quality of education for compliance with the state obligatory standard of Kazakhstan. This systematic measurement is independent of educational organizations and approved by order of the Minister of Education and Science of Kazakhstan. The MEAS was conducted in the regions of Kazakhstan for the first time in 2022 by the National Testing Centre. The measurement does not have legal consequences either for students or for educational organizations. Students’ achievements were measured in three subject areas: reading, mathematics, and science literacy. MEAS was held for the first time in April of that year; 105 thousand students from 1,436 schools of Kazakhstan took part in the testing. The monitoring was accompanied by a survey of students, teachers, and school leaders, with the goal of identifying which contextual factors affect learning outcomes. The testing was carried out in a computer format. The test tasks of MEAS are ranked according to three levels of difficulty: basic, medium, and high. Fourth graders were asked to complete 30 closed-type tasks. The average score is 21 points out of 30, which means 70% of tasks were successfully completed. The test for 9th-grade students comprised 75 questions. The results of the ninth graders are comparatively lower, with a task-completion success rate of 63%. The MEAS results did not reveal a statistically significant gap in terms of the language of instruction, territorial status, or type of school.
The trend of a narrowing gap in these indicators is also noted in recent international studies conducted across the country, in particular PISA for Schools in Kazakhstan. However, there is a regional gap in MEAS performance. The difference between the highest- and lowest-scoring regions was 11% in task-completion success in the 4th grade and 14% in the 9th grade. The results of the 4th grade students in reading, mathematics, and science literacy are 71.5%, 70%, and 66.9%, respectively. The results of the ninth graders in reading, mathematics, and science literacy are 69.6%, 54%, and 60.8%, respectively. The surveys revealed that the educational achievements of students are considerably influenced by such factors as the subject competences of teachers, as well as the school climate and the motivation of students. Thus, the results of MEAS indicate the need for an integrated approach to improving the quality of education. In particular, a combination of improvements to the content of curricula and textbooks, internal and external assessment of students’ educational achievements, educational programs of pedagogical specialties, and advanced training courses is required.
Keywords: assessment, secondary school, monitoring, functional literacy, Kazakhstan
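The reported success rates follow from simple score arithmetic; a minimal sketch (the 9th-grade mean raw score used below is back-calculated from the reported 63%, not a figure given in the abstract):

```python
# Share of test tasks completed, from the mean raw score and the task count.
def success_rate(mean_score, n_tasks):
    return 100.0 * mean_score / n_tasks

grade4 = success_rate(21, 30)     # 21 of 30 tasks -> the reported 70%
# A mean raw score of 47.25 (back-calculated, not reported) would give the
# reported 63% for the 75-question 9th-grade test.
grade9 = success_rate(47.25, 75)
```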
327 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor
Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar
Abstract:
Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for it. Custom-made orthoses provide more comfort and can correct issues better than those available over the counter. However, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster-of-Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using the Microsoft Kinect2 sensor. The Kinect can capture RGB and depth frames simultaneously at up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920px x 1080px, while the resolution of the depth frame is 512px x 424px. As the resolutions of the frames are not equal, RGB pixels are mapped onto the depth pixels to make sure data is not lost even though the resolution is lower. The resulting RGB-D frames are collected, and using the depth coordinates, a three-dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consisted of 8 coloured cubes, connected by rods to form a skeleton cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of a reference system.
Depending on the relative distance from each cube, the three-dimensional coordinate points from each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data is used to correct for any errors in the depth data of the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which produces a rough approximation of the surface of the limb. The mesh is then smoothed to obtain a smooth outer layer and an accurate model of the limb. The model of the limb is used as a base for designing the custom orthotic brace or cast. It is transferred to a CAD/CAM design file to design the brace above the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans for generating 3D models and would be quicker than traditional plaster-of-Paris cast modelling, with a low overall setup time. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory for performing modelling.
Keywords: 3D scanning, mesh generation, Microsoft Kinect, orthotics, registration
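The merge step described above (views 90 degrees apart, one reference cube taken as the origin) can be sketched as a yaw rotation plus a translation per view. The point coordinates, yaw signs, and cube position below are invented for illustration, not the paper's calibration data:

```python
import math

# Each Kinect's cloud is expressed in a shared frame by rotating it about the
# vertical axis and translating the chosen reference cube to the origin.

def to_reference(points, yaw_deg, origin_cube):
    """Rotate about the vertical (y) axis, then shift origin_cube to (0, 0, 0)."""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    ox, oy, oz = origin_cube
    rox, roz = c * ox + s * oz, -s * ox + c * oz   # reference cube, rotated
    out = []
    for x, y, z in points:
        rx, rz = c * x + s * z, -s * x + c * z     # yaw rotation of the point
        out.append((rx - rox, y - oy, rz - roz))
    return out

# The same physical point seen by the front Kinect and by a Kinect 90 degrees
# to the side; after transformation both land on the same coordinates.
front_view = [(0.1, 0.2, 1.0)]
side_view = [(1.0, 0.2, -0.1)]
merged = to_reference(front_view, 0, (0.0, 0.0, 0.0)) + \
         to_reference(side_view, -90, (0.0, 0.0, 0.0))
```

A Delaunay triangulation of the merged cloud would then give the rough surface mesh described above.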
326 Flexural Performance of the Sandwich Structures Having Aluminum Foam Core with Different Thicknesses
Authors: Emre Kara, Ahmet Fatih Geylan, Kadir Koç, Şura Karakuzu, Metehan Demir, Halil Aykul
Abstract:
The structures obtained with the use of sandwich technologies combine low weight with high energy absorbing capacity and load carrying capacity. Hence, there is a growing and marked interest in the use of sandwiches with aluminium foam cores because of their very good properties, such as flexural rigidity and energy absorption capability. Static (bending and penetration) and dynamic (dynamic bending and low velocity impact) tests were already performed on aluminum foam cored sandwiches with different types of outer skins by some of the authors. In the current investigation, static three-point bending tests were carried out on sandwiches with an aluminum foam core and glass fiber reinforced polymer (GFRP) skins at different support span distances (L = 55, 70, 80, and 125 mm) in order to analyze their flexural performance. The influence of the core thickness and the GFRP skin type is reported in terms of peak load, energy absorption capacity, and energy efficiency. For this purpose, skins with two different types of fabrics ([0°/90°] cross-ply E-Glass Woven and [0°/90°] cross-ply S-Glass Woven, both with the same thickness of 1.5 mm) and aluminum foam cores of two different thicknesses (h = 10 and 15 mm) were bonded with a commercial polyurethane-based flexible adhesive in order to assemble the composite sandwich panels. The GFRP skins used in the study, fabricated via the Vacuum Assisted Resin Transfer Molding (VARTM) technique, can be easily bonded to the aluminum foam core, and it is possible to configure the base materials (skin, adhesive and core), fiber angle orientation, and number of layers for a specific application. The main results of the bending tests are force-displacement curves, peak force values, absorbed energy, energy efficiency, collapse mechanisms, and the effect of the support span length and core thickness.
The results of the experimental study showed that the sandwiches with skins made of S-Glass Woven fabrics and with the thicker foam core presented higher mechanical values, such as load-carrying and energy absorption capacities. Increasing the support span distance decreased the mechanical values for each type of panel, as expected, because of the inverse proportionality between force and span length. The most common failure types of the sandwiches are debonding of the upper or lower skin and core shear. The obtained results have particular importance for applications that require lightweight structures with a high capacity of energy dissipation, such as the transport industry (automotive, aerospace, shipbuilding and marine industries), where the problems of collision and crash have increased in recent years.
Keywords: aluminum foam, composite panel, flexure, transport application
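The absorbed energy reported from the bending tests is the area under the force-displacement curve; a trapezoidal-rule sketch with an invented bending trace (not measured data from the study):

```python
# Area under a force-displacement curve from a three-point bending test.

def absorbed_energy(displacement_mm, force_n):
    """Trapezoidal integral of force over displacement (N*mm, i.e. mJ)."""
    energy = 0.0
    for i in range(1, len(displacement_mm)):
        dx = displacement_mm[i] - displacement_mm[i - 1]
        energy += 0.5 * (force_n[i] + force_n[i - 1]) * dx
    return energy

# Hypothetical trace: roughly linear rise to a peak load, then collapse.
disp = [0.0, 1.0, 2.0, 3.0, 4.0]           # mm
force = [0.0, 400.0, 800.0, 500.0, 300.0]  # N

peak_load_n = max(force)                   # 800.0 N
energy_mj = absorbed_energy(disp, force)   # 1850.0 N*mm
```

Peak load and absorbed energy computed this way are the quantities compared across skin types and core thicknesses above.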
325 Mobile Phones, (Dis)Empowerment and Female Headed Households: Trincomalee, Sri Lanka
Authors: S. A. Abeykoon
Abstract:
This study explores the empowerment potential of the mobile phone, a widely penetrated and highly affordable communication technology in Sri Lanka, for female heads of households in Trincomalee District, Sri Lanka, an area recovering from the effects of a 30-year civil war and the 2004 Boxing Day Tsunami. It also investigates how the use of mobile phones by these women is shaped and appropriated by the gendered power relations and inequalities in their respective communities and by their socio-economic factors and demographic characteristics. This qualitative study is based on the epistemology of constructionism; interpretivist, functionalist, and critical theory approaches; and the process of action research. The data collection was conducted from September 2014 to November 2014 in two Divisional Secretariats of the Trincomalee District, Sri Lanka. A total of 30 semi-structured in-depth interviews and six focus groups with female heads of households of Sinhalese, Tamil, and Muslim ethnicities were conducted using purposive, representative, and snowball sampling methods. The grounded theory method was used to analyze the transcribed interviews, focus group discussions, and field notes, which were coded and categorized in accordance with the research questions and the theoretical framework of the study. The findings of the study indicated that the mobile phone has mainly enabled the participants to balance their income earning activities and family responsibilities and has been useful in maintaining their family and social relationships and occupational duties and in making decisions. Thus, it provided them with a higher level of security, safety, reassurance, and self-confidence in carrying out their daily activities. They also practiced innovative strategies for the effective and efficient management of their mobile expenses.
Although participants whose husbands or relatives had migrated were more inclined to use smartphones, the mobile literacy of the majority of the participants was low, limited to making and receiving calls and using SMS (Short Message Service) services. However, their interaction with the mobile phone was significantly shaped by gendered power relations and their multiple identities based on ethnicity, religion, class, education, profession, and age. Almost all the participants were cautious about giving out their mobile numbers and had been harassed with ‘nuisance calls’ from men. For many, ownership and use of their mobile phone were shaped and influenced by their children and migrated husbands. Although these practices limit their use of the technology, there were many instances in which they challenged these gendered harassments. While man-made and natural disasters have disempowered and victimized women in Sri Lankan society, they have also liberated women, making them stronger and transforming their agency and traditional gender roles. Their present position in society is reflected in their mobile phone use, as phones help such women become more self-reliant and liberated, yet disempower them at times.
Keywords: mobile phone, gender power relations, empowerment, female heads of households
324 An Agent-Based Approach to Examine Interactions of Firms for Investment Revival
Authors: Ichiro Takahashi
Abstract:
One conundrum that macroeconomic theory faces is explaining how an economy can revive from a depression in which aggregate demand has fallen substantially below productive capacity. This paper examines an autonomous stabilizing mechanism using an agent-based Wicksell-Keynes macroeconomic model. It focuses on the effects of the number of firms and the length of the gestation period for investment, both of which are often assumed to be one in mainstream macroeconomic models. The simulations found that the virtual economy was highly unstable, or more precisely collapsed, when these parameters were fixed at one. This finding may even lead us to question the legitimacy of these common assumptions. A perpetual decline in capital stock will eventually encourage investment if the capital stock is short-lived, because inactive investment will result in insufficient productive capacity. However, for an economy characterized by a roundabout production method, a gradually declining productive capacity may not be able to fall below the aggregate demand, which is also shrinking. Naturally, one would then ask: if an economy cannot rely on external stimuli such as population growth and technological progress to revive investment, what factors would provide such buoyancy for stimulating investment? The current paper attempts to answer this question by employing the artificial macroeconomic model mentioned above. The baseline model has the following three features: (1) multi-period gestation for investment, (2) a large number of heterogeneous firms, and (3) demand-constrained firms. The instability is a consequence of the following dynamic interactions. (a) A multi-period gestation means that once a firm starts a new investment, it continues to invest over some subsequent periods.
During these gestation periods, the excess demand created by the investing firm spills over to ignite new investment by other firms that supply investment goods: the presence of multi-period gestation for investment provides a field for investment interactions. Conversely, the excess demand for investment goods tends to fade away before it develops into a full-fledged boom if the gestation period of investment is short. (b) Strong demand in the goods market tends to raise the price level, thereby lowering real wages. This reduction of real wages creates two opposing effects on aggregate demand through the following two channels: (1) a reduction in real labor income, and (2) an increase in labor demand due to the principle of equality between the marginal productivity of labor and the real wage (referred to as the Walrasian labor demand). If there is only a single firm, a lower real wage will increase its Walrasian labor demand, but actual labor demand tends to be determined by the derived labor demand, so the second, positive effect would not work effectively. In contrast, in an economy with a large number of firms, Walrasian firms will increase employment. This interaction among heterogeneous firms is the key to stability: a single firm cannot expect the benefit of such an increased aggregate demand from other firms.
Keywords: agent-based macroeconomic model, business cycle, demand constraint, gestation period, representative agent model, stability
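The gestation-spillover interaction in (a) can be illustrated with a deliberately minimal toy simulation, not the paper's Wicksell-Keynes model: a firm that observes any other firm investing in the previous period starts its own project lasting a fixed number of gestation periods, at most once. All parameters below are illustrative:

```python
def simulate(n_firms, gestation, horizon=10):
    remaining = [0] * n_firms          # periods left on each firm's project
    started = [False] * n_firms        # each firm invests at most once here
    remaining[0], started[0] = gestation, True   # seed investor
    history = []                       # number of actively investing firms
    for _ in range(horizon):
        active = sum(1 for r in remaining if r > 0)
        history.append(active)
        spillover = active > 0         # excess demand from investing firms
        for i in range(n_firms):
            if remaining[i] > 0:
                remaining[i] -= 1      # project proceeds one period
            elif spillover and not started[i]:
                remaining[i], started[i] = gestation, True
    return history

short_gestation = simulate(5, gestation=1, horizon=6)  # activity dies quickly
long_gestation = simulate(5, gestation=3, horizon=6)   # projects overlap longer
```

With a one-period gestation the seeded demand fades after a single spillover wave, while a three-period gestation keeps several firms investing simultaneously, echoing the mechanism described above.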
323 Gathering Space after Disaster: Understanding the Communicative and Collective Dimensions of Resilience through Field Research across Time in Hurricane Impacted Regions of the United States
Authors: Jack L. Harris, Marya L. Doerfel, Hyunsook Youn, Minkyung Kim, Kautuki Sunil Jariwala
Abstract:
Organizational resilience refers to the ability to sustain business or general work functioning despite wide-scale interruptions. We focus on organizations and businesses as pillars of their communities and on how they attempt to sustain work when a natural disaster impacts their surrounding regions and economies. While it may be more common to think of resilience as a trait possessed by an organization, an emerging area of research recognizes that, for organizations and businesses, resilience is a set of processes constituted through communication, social networks, and organizing. Indeed, five processes (robustness, rapidity, resourcefulness, redundancy, and external availability through social media) have been identified as critical to organizational resilience. These organizing mechanisms involve multi-level coordination, where individuals intersect with groups, organizations, and communities. Because such interactions are often networks of people and organizations coordinating material resources, information, and support, they necessarily require some way to coordinate despite being displaced. Little is known, however, about whether physical and digital spaces can substitute for one another. We are thus guided by the question: is digital space sufficient when disaster creates a scarcity of physical space? This study presents a cross-case comparison based on field research from four different regions of the United States that were impacted by Hurricanes Katrina (2005), Sandy (2012), Maria (2017), and Harvey (2017). These four cases are used to extend the science of resilience by examining multi-level processes enacted by individuals, communities, and organizations that, together, contribute to the resilience of disaster-struck organizations, businesses, and their communities.
Using field research about organizations and businesses impacted by the four hurricanes, we code data from interviews, participant observations, field notes, and document analysis drawn from New Orleans (post-Katrina), coastal New Jersey (post-Sandy), Houston, Texas (post-Harvey), and the lower Florida Keys (post-Maria). This paper identifies an additional organizing mechanism, networked gathering spaces, where citizens and organizations alike coordinate and facilitate information sharing, material resource distribution, and social support. Findings show that digital space alone is not a sufficient substitute for effectively sustaining organizational resilience during a disaster. Because the data are qualitative, we expand on this finding with specific ways in which organizations and the people who lead them worked around the problem of scarce space. We propose that gatherings after disaster are a sixth mechanism that contributes to organizational resilience.
Keywords: communication, coordination, disaster management, information and communication technologies, interorganizational relationships, resilience, work
322 Intriguing Modulations in the Excited State Intramolecular Proton Transfer Process of Chrysazine Governed by Host-Guest Interactions with Macrocyclic Molecules
Authors: Poojan Gharat, Haridas Pal, Sharmistha Dutta Choudhury
Abstract:
Tuning the photophysical properties of guest dyes through host-guest interactions involving macrocyclic hosts has been an attractive research area over the past few decades, as the resulting changes can be directly implemented in chemical sensing, molecular recognition, fluorescence imaging, and dye laser applications. Excited state intramolecular proton transfer (ESIPT) is an intramolecular prototautomerization process displayed by some specific dyes. The process is readily tunable by the presence of different macrocyclic hosts. The present study explores the interesting effect of p-sulfonatocalix[n]arene (SCXn) and cyclodextrin (CD) hosts on the excited-state prototautomeric equilibrium of Chrysazine (CZ), a model antitumour drug. CZ exists exclusively in its normal form (N) in the ground state. However, in the excited state, the excited N* form undergoes ESIPT along its pre-existing intramolecular hydrogen bonds, giving the excited-state prototautomer (T*). Accordingly, CZ shows a single absorption band due to the N form but two emission bands due to the N* and T* forms. Facile prototautomerization of CZ is considerably inhibited when the dye binds to SCXn hosts. However, in spite of the lower binding affinity, the inhibition is more profound with the SCX6 host than with the SCX4 host. For the CD-CZ system, while the prototautomerization process is hindered by the presence of βCD, it remains unaffected in the presence of γCD. The reduction of the prototautomerization process of CZ by the SCXn and βCD hosts is unusual, because the T* form is less dipolar than the N* form; hence, binding of CZ within relatively hydrophobic host cavities should have enhanced the prototautomerization process. At the same time, considering the similar chemical nature of the two CD hosts, their effects on the prototautomerization process of CZ should also have been similar.
The atypical effects of the studied hosts on the prototautomerization process of CZ are suggested to arise from the partial inclusion or external binding of CZ with the hosts. As a result, there is a strong possibility of intermolecular H-bonding interactions between the CZ dye and the functional groups present at the portals of the SCXn and βCD hosts. The formation of these intermolecular H-bonds effectively weakens the pre-existing intramolecular H-bonding network within the CZ molecule, and this consequently reduces the prototautomerization process for the dye. Our results suggest that rather than the binding affinity between the dye and the host, it is the orientation of CZ in the case of the SCXn-CZ complexes, and the binding stoichiometry in the case of the CD-CZ complexes, that plays the predominant role in influencing the prototautomeric equilibrium of the dye. In the case of the SCXn-CZ complexes, the experimental findings are well supported by quantum chemical calculations. Similarly, for the CD-CZ systems, the binding stoichiometries obtained through geometry optimization of the complexes between CZ and the CD hosts correlate well with the experimental results. Geometry optimization reveals the formation of βCD-CZ complexes with 1:1 stoichiometry and of γCD-CZ complexes with 1:1, 1:2, and 2:2 stoichiometries; these results are in good accordance with the observed effects of the βCD and γCD hosts on the ESIPT process of the CZ dye.
Keywords: intramolecular proton transfer, macrocyclic hosts, quantum chemical studies, photophysical studies
321 Swedish–Nigerian Extrusion Research: Channel for Traditional Grain Value Addition
Authors: Kalep Filli, Sophia Wassén, Annika Krona, Mats Stading
Abstract:
The food security challenge posed by the growing population of Sub-Saharan Africa centers on agricultural transformation, as about 70% of the population is directly involved in farming. Research input can create economic opportunities, reduce malnutrition and poverty, and generate faster, fairer growth. Africa discards $4 billion worth of grain annually due to pre- and post-harvest losses. Grains and tubers play a central role in food supply in the region, but their production has generally lagged because there has been no robust scientific input to meet the challenge. African grains remain chronically underutilized, to the detriment of the well-being of the people of Africa and elsewhere, largely because they are under-researched. Any commitment by the scientific community to intervene needs creative solutions focused on innovative approaches that will drive economic growth, and co-creation activities and initiatives are necessary to overcome this hurdle. One such initiative has been started by Modibbo Adama University of Technology, Yola, Nigeria and RISE (the Research Institutes of Sweden), Gothenburg, Sweden: an exchange of research expertise under the 'Traditional Grain Network programme' as a channel for adding value to agricultural commodities in the region. Process technologies such as extrusion offer the possibility of creating products in the food and feed sectors with better storage stability, added value, lower transportation cost and new markets. The Swedish–Nigerian initiative has focused on the development of high-protein pasta. Dry microscopy of the pasta samples shows a continuous structural framework of protein and starch matrix. The water absorption index (WAI) results showed that water was absorbed steadily and followed the master curve pattern, with WAI values ranging between 250 and 300%. 
In all aspects, the water absorption history fell within a narrow range for all eight samples. The total cooking time for the eight samples in our study ranged between 5 and 6 minutes, with dry sample diameters of 1.26-1.35 mm. The water solubility index (WSI) ranged from 6.03 to 6.50%, a narrow range; the cooking loss, of which WSI is a measure, is considered one of the main parameters in the assessment of pasta quality. The protein contents of the samples ranged between 17.33 and 18.60%. The firmness of the cooked pasta ranged from 0.28 to 0.86 N; the results show that increasing the ratio of cowpea flour and the level of pregelatinized cowpea tends to increase the firmness of the pasta. The breaking strength, an index of the toughness of the dry pasta, ranged from 12.9 to 16.5 MPa.
Keywords: cowpea, extrusion, gluten free, high protein, pasta, sorghum
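The WAI and WSI percentages above follow directly from standard gravimetric definitions; a minimal sketch of the calculation is below. The weights used are illustrative placeholders, not the study's raw data.

```python
# Hypothetical sketch of the WAI/WSI calculation from centrifugation weights;
# the sample weights below are illustrative, not measured values.

def water_absorption_index(dry_sample_g, gel_weight_g):
    """WAI as a percentage: weight of the hydrated gel per unit dry sample."""
    return 100.0 * gel_weight_g / dry_sample_g

def water_solubility_index(dry_sample_g, dissolved_solids_g):
    """WSI as a percentage: solids lost to the cooking water per unit dry sample."""
    return 100.0 * dissolved_solids_g / dry_sample_g

wai = water_absorption_index(dry_sample_g=2.5, gel_weight_g=7.0)          # ~280%, inside 250-300%
wsi = water_solubility_index(dry_sample_g=2.5, dissolved_solids_g=0.16)  # ~6.4%, inside 6.03-6.50%
print(wai, wsi)
```

A lower WSI corresponds to lower cooking loss, which is why the narrow 6.03-6.50% band reads as a quality indicator.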
Procedia PDF Downloads 193
320 Multi-Agent System Based Distributed Voltage Control in Distribution Systems
Authors: A. Arshad, M. Lehtonen, M. Humayun
Abstract:
With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology to tackle the voltage control problem in a distributed manner with minimal latency. This paper proposes a multi-agent based distributed voltage control. The method uses a flat architecture of agents; the agents involved in the controlling procedure are the On Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage within statutory limits, as close as possible to the nominal value. The total loss cost is the sum of the network losses cost, the DG curtailment costs, and a voltage damage cost (based on a penalty function). The total cost is iteratively calculated for progressively stricter limits by plotting the voltage damage cost and losses cost against the varying voltage limit band; the method provides the optimal limits closest to the nominal value with minimum total loss cost. To achieve the voltage control objective, the whole network is divided into multiple control regions, each downstream from its controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. First, a token is generated by the OLTCA at each time step and transfers from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or the controlling capabilities of all the downstream control devices are at their limits, the OLTC is used as a last resort. 
For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed with respect to DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration the loss reduction is less significant. Another observation is that the new stricter limits calculated by the cost optimization move towards the statutory limits of ±10% of nominal as DG penetration increases: for 25, 45 and 65% penetration, the calculated limits are ±5%, ±6.25% and ±8.75%, respectively. The results show that the voltage control algorithm of case 1 deals with the voltage control problem instantly but with higher losses, whereas case 2 reduces network losses gradually over time through the proposed iterative loss cost optimization by the OLTCA.
Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids
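The token-passing scheme described above (token circulates node to node; the load agent at the first violated node contacts nearby controllers, falling back to the OLTC) can be sketched as follows. All names, the per-unit band, and the controller records are hypothetical illustrations, not the paper's implementation.

```python
# Minimal sketch of the token-passing voltage control loop (names assumed):
# the OLTC agent circulates a token over feeder nodes; on the first voltage
# violation, the local load agent picks the nearest controller with spare
# capacity, and the OLTC is the last resort.

V_MIN, V_MAX = 0.95, 1.05  # assumed statutory band in per-unit

def find_violation(voltages):
    """Token travels node to node until a violated node is detected."""
    for node, v in voltages.items():
        if not (V_MIN <= v <= V_MAX):
            return node
    return None

def remedial_action(node, controllers):
    """Load agent contacts controllers in whose region the node lies; OLTC last."""
    for c in controllers:
        if node in c["region"] and c["headroom"] > 0:
            return c["name"]
    return "OLTC"

voltages = {"n1": 1.02, "n2": 1.07, "n3": 0.99}
controllers = [{"name": "SVC-A", "region": {"n2", "n3"}, "headroom": 0.5}]
violated = find_violation(voltages)            # "n2" violates the upper limit
print(remedial_action(violated, controllers))  # the SVC agent handles it
```

The fallback branch mirrors the text: only when no downstream device in the violated node's vicinity has capability left does the supervisory OLTC act.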
Procedia PDF Downloads 310
319 Numerical Simulation of Hydraulic Fracture Propagation in Marine-continental Transitional Tight Sandstone Reservoirs by Boundary Element Method: A Case Study of Shanxi Formation in China
Authors: Jiujie Cai, Fengxia LI, Haibo Wang
Abstract:
After years of research, offshore oil and gas development has now shifted to unconventional reservoirs, where multi-stage hydraulic fracturing technology is widely used. However, the simulation of complex hydraulic fractures in tight reservoirs faces geological and engineering difficulties such as large burial depths, sand-shale interbeds, and complex stress barriers. The objective of this work is to simulate hydraulic fracture propagation in the tight sandstone matrix of marine-continental transitional reservoirs, with the Shanxi Formation in the Tianhuan syncline of the Dongsheng gas field as the research target. The characteristic parameters of vertical rock samples with rich bedding were clarified through rock mechanics experiments. The influence of rock mechanical parameters, the vertical stress difference between pay zone and bedding layer, and fracturing parameters (such as injection rate, fracturing fluid viscosity, and number of perforation clusters within a single stage) on fracture initiation and propagation was investigated. In this paper, a 3-D fracture propagation model was built to investigate complex fracture propagation morphology by the boundary element method, considering the strength of the bonding surfaces between layers, the vertical stress difference, and the fracturing parameters (such as injection rate, fluid volume and viscosity). The results indicate that, for a vertical stress difference of 3 MPa and accounting for the weak bonding surfaces between layers, the fracture height can break through into the upper interlayer when the overlying bedding layer is 6-9 m thick, while the fracture stays within the pay zone when the overlying interlayer is thicker than 13 m. The difference in fluid volume distribution between clusters can exceed 20% when the stress difference between clusters in a stage exceeds 2 MPa. 
Fracture clusters in high-stress zones cannot initiate when the stress difference within a stage exceeds 5 MPa. The simulated fracture heights are much larger when the weak bonding surfaces between layers are not taken into account. Increasing the injection rate, increasing the fracturing fluid viscosity, and reducing the number of clusters within a single stage promote fracture height propagation through the layers; optimizing the perforation positions and reducing the number of perforations promote uniform fracture growth. Typical curves for fracture height estimation were established for the tight sandstone of the Lower Permian Shanxi Formation. The model results are in good agreement with micro-seismic monitoring of hydraulic fracturing in Well 1HF.
Keywords: fracture propagation, boundary element method, fracture height, offshore oil and gas, marine-continental transitional reservoirs, rock mechanics experiment
Procedia PDF Downloads 125
318 Distributed Energy Resources in Low-Income Communities: A Public Policy Proposal
Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi
Abstract:
The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and have not yet reached the low-income population, contrary to theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs and bills. Merely granting this benefit may not be effective, however, and it is possible to combine it with DER technologies such as PVDG. This work therefore evaluates the economic viability of replacing the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that included mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities would low-income consumers have lower bills with PVDG than with the SET, and which consumers in a given city would see increased subsidies, which are currently provided both for solar energy in Brazil and for the social tariff. An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic factors (the tariff of the particular distribution utility, the solar radiation of the specific location, etc.). To validate these results, four sensitivity analyses were performed: varying the simultaneity factor between generation and consumption, varying the tariff readjustment rate, zeroing CAPEX, and exempting the state tax. 
The behind-the-meter modality of generation proved more promising than the construction of a shared plant. However, although behind-the-meter generation presents better results, it is more complex to adopt due to issues related to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks and the need to reinforce roofs). The shared power plant modality still presents many opportunities, since the risk of investing in such a policy can be mitigated; furthermore, it can be an alternative that mitigates the risk of default, as it allows greater control over users and facilitates operation and maintenance. Finally, it was also found that in some regions of Brazil the continuity of the SET brings more economic benefits than its replacement by PVDG. Nevertheless, the proposed policy offers many opportunities. In future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk, and other renewable sources of distributed generation can be studied for this purpose.
Keywords: low income, subsidy policy, distributed energy resources, energy justice
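The municipality-level comparison the simulation performs (SET bill versus PVDG bill, with the simultaneity factor deciding how much generation actually offsets consumption) can be sketched in a few lines. All parameter names and numbers here are hypothetical, chosen only to illustrate the shape of the test, not the study's calibrated model.

```python
# Illustrative sketch of the per-municipality feasibility test: compare a
# low-income household's bill under the social tariff discount with its bill
# under behind-the-meter PV, where only simultaneous generation offsets
# consumption. All values are assumptions, not the study's data.

def bill_with_social_tariff(kwh, tariff, discount):
    """Monthly bill with the SET percentage discount applied."""
    return kwh * tariff * (1.0 - discount)

def bill_with_pvdg(kwh, tariff, pv_kwh, simultaneity):
    """Monthly bill net of the PV generation consumed simultaneously."""
    offset = min(kwh, pv_kwh * simultaneity)
    return (kwh - offset) * tariff

set_bill = bill_with_social_tariff(kwh=120, tariff=0.8, discount=0.35)
pv_bill = bill_with_pvdg(kwh=120, tariff=0.8, pv_kwh=100, simultaneity=0.6)
print(pv_bill < set_bill)  # in this toy case, PVDG beats the social tariff
```

Sweeping the simultaneity factor, as the paper's first sensitivity analysis does, flips the comparison in low-radiation or low-simultaneity municipalities.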
Procedia PDF Downloads 111
317 Accelerating Personalization Using Digital Tools to Drive Circular Fashion
Authors: Shamini Dhana, G. Subrahmanya VRK Rao
Abstract:
The fashion industry is advancing towards a mindset of zero waste, personalization, creativity, and circularity. The trend of upcycling clothing and materials into personalized fashion is being demanded by the next generation, and a digital tool is needed to accelerate the process towards mass customization. Dhana's D/Sphere fashion technology platform uses digital tools to accelerate upcycling: advanced fashion garments can be designed and developed through reuse, repurposing and recreation, using existing fabric and circulating materials. The D/Sphere platform has the following objectives: to provide (1) an opportunity to develop modern fashion from existing, finished materials and clothing without chemicals or water consumption; (2) the potential for everyday customers and designers to use the medium of fashion for creative expression; (3) a solution to the global textile waste generated by pre- and post-consumer fashion; (4) a means of reducing carbon emissions, water, and energy consumption with the participation of all stakeholders; (5) an opportunity for brands, manufacturers, and retailers to work towards zero-waste designs and an alternative revenue stream. Other benefits of this alternative approach include sustainability metrics, trend prediction, facilitation of disassembly and remanufacture, and deep learning and hyper-heuristics for high accuracy. A design tool for mass personalization and customization utilizing existing circulating materials and deadstock, targeted at fashion stakeholders, will lower environmental costs, increase revenues through up-to-date upcycled apparel, produce less textile waste during the cut-sew-stitch process, and provide a real design solution for end customers to be part of circular fashion. 
The broader impact of this technology will be a different mindset towards circular fashion: increasing the value of products through multiple life cycles, finding alternatives towards zero waste, and reducing the textile waste that ends up in landfills. This technology platform will be of interest to brands and companies with a responsibility to reduce their environmental impact and contribution to climate change as it pertains to the fashion and apparel industry. Today, over 70% of the output of the $3 trillion fashion and apparel industry ends up in landfills, so the industry needs such alternative techniques both to address global textile waste and to provide an opportunity to include all stakeholders and drive circular fashion with new personalized products. This type of modern systems thinking is currently being explored around the world by the private sector, organizations, research institutions, and governments. This technological innovation has the potential to revolutionize communication, capabilities, and collaborative opportunities amongst stakeholders in the development of new personalized and customized products, with positive impacts on society, the environment, and global climate change.
Keywords: circular fashion, deep learning, digital technology platform, personalization
Procedia PDF Downloads 62
316 Loss Quantification of Archaeological Sites in a Watershed Due to the Use and Occupation of Land
Authors: Elissandro Voigt Beier, Cristiano Poleto
Abstract:
The main objective of this research is to assess loss through the quantification of material culture (archaeological fragments) in rural sites exploited economically through mechanized cultivation of seasonal and permanent crops, in a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of micro-basins of different sizes, ranging from 1,000 m² for the smallest to 10,000 m² for the largest, all with a large number of occurrences and outcrops of archaeological material and a high density of material in an intensely farmed environment. The first stage of the research aimed to identify the dispersion of archaeological material through field surveys, plotting points with the Global Positioning System (GPS) within each basin; use was also made of the concise bibliography on the topic in the region, providing a theoretical basis for understanding the former landscape and the occupation preferences of ancient peoples, through the settlements relating to the practices observed in the field. The mapping was followed by cartographic work: products depicting the land elevation of the region were developed to contribute to understanding the distribution of the materials, the definition and extent of the dispersed material, and the turnover of in situ material caused by mechanized human activity. Density maps of the finds were also prepared, linking natural environments conducive to ancient occupation with current human occupation. 
The third stage of the project consisted of the systematic collection of archaeological material without alteration of or interference with the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, followed by cleaning according to the previously described methodology, measurement and quantification. Approximately 15,000 archaeological fragments belonging to different periods of the region's ancient history were identified, all collected outside their environmental and historical context, which has been heavily altered and modified. The material was identified and catalogued by features such as object weight, size, and type of material (lithic, ceramic, bone, historical porcelain and their association with ancient history); attributes such as the individual lithology and functionality of each object were disregarded. As preliminary results, we can point to the displacement of materials by heavy mechanization and the consequent soil disturbance processes, which transport archaeological materials. The next step will therefore be an estimate of potential losses through a mathematical model. This process is expected to yield a reliable, high-accuracy model that can be applied to archaeological sites of lower density without significant error.
Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land
Procedia PDF Downloads 276
315 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. 
For testing and validation, a D-Wave 2X device was used, as well as QxBranch's QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing can currently solve larger problem sizes but does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
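The 1-hot encoding mentioned above can be illustrated with a small QUBO: an integer variable x ∈ {0,…,3} becomes four binary variables with a quadratic penalty P·(Σb − 1)² forcing exactly one bit on. This is a generic sketch of the technique, not the paper's actual embedding; the penalty weight and value list are assumptions.

```python
# Hedged sketch of 1-hot encoding for AQO: expanding P*(sum(b) - 1)^2 (up to a
# constant offset of +P) gives -P on each diagonal term and +2P on each
# off-diagonal pair, on top of the linear cost of choosing each value.

def one_hot_qubo(values, penalty):
    """QUBO dict {(i, j): coeff} for 'pick exactly one value', cost = the value."""
    n = len(values)
    Q = {}
    for i in range(n):
        Q[(i, i)] = values[i] - penalty       # linear cost + diagonal penalty
        for j in range(i + 1, n):
            Q[(i, j)] = 2.0 * penalty         # pairwise penalty
    return Q

def energy(Q, bits):
    """QUBO energy of a binary assignment (offset +P omitted throughout)."""
    return sum(c * bits[i] * bits[j] for (i, j), c in Q.items())

Q = one_hot_qubo(values=[0, 1, 2, 3], penalty=10.0)
# feasible assignment (x = 2) scores far below an infeasible two-bits-on one:
print(energy(Q, [0, 0, 1, 0]), energy(Q, [0, 1, 1, 0]))  # -8.0 3.0
```

Dicts in this {(i, j): coefficient} form are the shape AQO samplers typically accept, which is why constraint penalties must be folded into the coefficients rather than imposed separately.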
Procedia PDF Downloads 524
314 Cross-Country Mitigation Policies and Cross Border Emission Taxes
Authors: Massimo Ferrari, Maria Sole Pagliari
Abstract:
Pollution is a classic example of an economic externality: the agents who produce it do not face direct costs from emissions, so there are no direct economic incentives to reduce pollution. One way to address this market failure is to tax emissions directly. However, because emissions are global, governments may find it optimal to wait and let foreign countries tax emissions, so that they can enjoy the benefits of lower pollution without facing its direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions); the relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we analyze how the public sector should respond to higher emissions and what direct costs such policies might entail. In the model there are two types of firms: brown firms, which use a polluting technology, and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output; as they do not face direct costs from polluting, they have no incentive to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4 and 0.75%; notably, these estimates lie in the upper bound of the distribution of those delivered by studies in the early 2000s. 
To address the market failure, governments should step in by introducing taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' mix of brown and green technology. Because emissions are global, a government could simply wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperative equilibrium) are ineffective because the exchange rate moves to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.
Keywords: climate change, general equilibrium, optimal taxation, monetary policy
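The non-linear output-pollution relation described above (output falling with the CO₂ stock at an increasing rate) can be illustrated with a simple exponential damage function. This functional form and the calibration parameter are assumptions chosen purely so the toy losses land in the paper's 0.4-0.75% range; they are not the authors' estimated model.

```python
import math

# Illustrative damage function consistent with the described non-linearity:
# the fraction of output lost grows with the emission stock S at an
# increasing rate. gamma is a hypothetical calibration, not an estimate.

def output_loss(stock, gamma=1.5e-4):
    """Fraction of aggregate output lost as a function of the CO2 stock."""
    return 1.0 - math.exp(-gamma * stock)

# a loss between 0.4% and 0.75% corresponds to this (arbitrary) stock range:
for s in (27, 50):
    print(round(100 * output_loss(s), 2))
```

Because the loss is convex in the stock, waiting for the other country to abate becomes progressively more expensive, which is the tension the strategic game above formalizes.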
Procedia PDF Downloads 157
313 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation
Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony
Abstract:
Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process of nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary, stochastic search are driven by both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized, an essential approach for design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges in applying an evolutionary process as a design tool is maintaining variation among the design solutions in the population while the population simultaneously increases in fitness. This is most commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance lies in the tendency for either variation or optimization to be favored as the simulation progresses. 
In such cases, the generated population of candidate solutions has either optimized very early in the simulation or has maintained such high levels of variation that no optimal set can be discerned, providing the user with a solution set that has not evolved efficiently towards the objectives of the problem at hand. The experiments presented in this paper therefore seek to achieve the 'golden rule' by incorporating a mathematical fitness criterion into the development of an urban tissue comprised of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation indicates that the majority of the population is clustered around the mean, and thus limited variation within the population, while a higher standard deviation reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which using the standard deviation as a fitness criterion can generate fitter individuals in a more efficient timeframe than conventional simulations that incorporate only architectural and environmental parameters.
Keywords: architecture, computation, evolution, standard deviation, urban
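The shift from using the standard deviation analytically to using it generatively amounts to scoring a population on how spread out its objective values are. A minimal sketch, with illustrative score lists (not the experiments' actual objective values):

```python
import math

# Sketch of the standard deviation as a variation measure over a population's
# objective scores: a converged population scores low, a diverse one high.
# The score lists below are illustrative placeholders.

def std_dev(scores):
    """Population standard deviation of a list of objective scores."""
    mean = sum(scores) / len(scores)
    return math.sqrt(sum((s - mean) ** 2 for s in scores) / len(scores))

converged = [10.0, 10.1, 9.9, 10.0]   # clustered near the mean: low deviation
diverse = [4.0, 9.0, 14.0, 13.0]      # spread out: high deviation
print(std_dev(converged) < std_dev(diverse))  # True
```

Used as a fitness term, this value rewards spread early in the search (exploration) and can be weighted down later so convergence (exploitation) takes over, which is one plausible reading of how the criterion targets the 'golden rule'.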
Procedia PDF Downloads 131
312 Seafloor and Sea Surface Modelling in the East Coast Region of North America
Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk
Abstract:
Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed; this solution provides high accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed remains unidentified, as there are still many gaps to be explored between ship survey tracks; moreover, such measurements are expensive and time-consuming. A complementary solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans, whose products are compilations of different data sets, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models: some forms of seafloor relief (e.g. seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. From satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of this work is to create regional bathymetric models and models of the sea surface on the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms, model thickening, and the creation of grid models. The input data are raster bathymetric models in NetCDF format, multibeam sounding data in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. 
Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, together with knowledge of the topography of the ocean floor, indirectly inform us about the volume of the entire world ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. Depending on the location of a point, the greater the depth, the weaker the trend of sea level change. The studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
Keywords: seafloor, sea surface height, bathymetry, satellite altimetry
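The grid-thickening step described in the abstract can be sketched in a few lines. The following is a hypothetical illustration only: it fabricates a small synthetic depth grid with a Gaussian "seamount" and thickens it with bilinear interpolation, one simple algorithm such a workflow might evaluate; the actual GEBCO/Copernicus data and the algorithms assessed in the study may differ.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic stand-in for a GEBCO-style raster: depth (m, negative down)
# on a regular lat/lon grid, with one Gaussian seamount at (35 N, 65 W).
lat = np.linspace(30.0, 40.0, 21)    # degrees north, 0.5-degree step
lon = np.linspace(-70.0, -60.0, 21)  # degrees east, 0.5-degree step
LAT, LON = np.meshgrid(lat, lon, indexing="ij")
depth = -5000.0 + 3000.0 * np.exp(-((LAT - 35.0) ** 2 + (LON + 65.0) ** 2) / 4.0)

# Bilinear interpolator over the coarse grid.
interp = RegularGridInterpolator((lat, lon), depth, method="linear")

# Thicken the grid 4x in each direction (0.125-degree step).
lat_f = np.linspace(30.0, 40.0, 81)
lon_f = np.linspace(-70.0, -60.0, 81)
LATF, LONF = np.meshgrid(lat_f, lon_f, indexing="ij")
pts = np.column_stack([LATF.ravel(), LONF.ravel()])
depth_f = interp(pts).reshape(LATF.shape)
```

Because the fine grid contains every coarse node, the thickened model reproduces the original values there; bilinear interpolation only fills the gaps between them, which is why evaluating the interpolation algorithm against withheld survey tracks matters.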
Procedia PDF Downloads 78
311 Treatment of Wastewater by Constructed Wetland Eco-Technology: Plant Species Alters the Performance and the Enrichment of Bacteria
Authors: Kraiem Khadija, Hamadi Kallali, Naceur Jedidi
Abstract:
Constructed wetland systems are an eco-technology recognized as an environmentally friendly and emerging remediation solution, as these systems are cost-effective and sustainable for wastewater treatment. The performance of these biological systems is affected by various factors such as plant, substrate, wastewater type, hydraulic loading rate, hydraulic retention time, water depth, and operation mode. The objective of this study was to assess the effect of plant species on pollutant reduction and on the enrichment of anammox, nitrifying, and denitrifying bacteria in a modified vertical flow constructed wetland (VFCW). The tests were carried out using three modified vertical constructed wetlands, each with a surface of 0.23 m² and a depth of 80 cm, saturated at the bottom; the saturation zone is maintained by a siphon structure at the outlet. The VFCW₁ system was unplanted, VFCW₂ was planted with Typha angustifolia, and VFCW₃ with Phragmites australis. The experimental units were fed with domestic wastewater and operated in batch mode for 8 months at an average hydraulic loading rate of around 20 cm day⁻¹. The operation cycle was two days of feeding and five days of rest. Results indicated that the presence of plants improved removal efficiency; the removal rates of organic matter (85.1–90.9% COD and 81.8–88.9% BOD5) and nitrogen (54.2–73% NTK and 66–77% NH4-N) were higher by 10.7–30.1% compared to the unplanted vertical constructed wetland. On the other hand, the plant species had no significant effect on COD removal, which was similar in VFCW₂ and VFCW₃ (p > 0.05), attaining average removal efficiencies of 88.7% and 85.2%, respectively, whereas it had a significant effect on NTK removal (p < 0.05), with average removal rates of 72% versus 51% for VFCW₂ and VFCW₃, respectively. 
Among the three vertical flow constructed wetlands, VFCW₂ removed the highest percentages of total streptococci, fecal streptococci, total coliforms, fecal coliforms, and E. coli: 59, 62, 52, 63, and 58%, respectively. Both the presence of plants and the plant species alter the community composition and abundance of bacteria. The abundance of bacteria in the planted wetlands was much higher than in the unplanted one. VFCW₃ had the highest relative abundance of nitrifying bacteria such as Nitrosospira (18%), Nitrosospira (12%), and Nitrobacter (8%), whereas the vertical constructed wetland planted with Typha had a larger number of denitrifying species, with relative abundances of Aeromonas (13%), Paracoccus (11%), Thauera (7%), and Thiobacillus (6%). However, the abundance of nitrifying bacteria was much lower in this system than in VFCW₃. Interestingly, the presence of Typha angustifolia favored the enrichment of anammox bacteria compared to the unplanted system and the system planted with Phragmites australis. The results showed that the middle layer had the greatest accumulation of anammox bacteria, where anaerobic conditions are better and the root system is moderate. Vegetation has several characteristics that make it an essential component of wetlands, but its exact effects are complex and debated.
Keywords: wastewater, constructed wetland, anammox, removal
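The removal efficiencies quoted in the abstract follow the standard mass-based formula R = 100 × (C_in − C_out) / C_in. A minimal sketch, with purely illustrative concentrations (not values reported by the study):

```python
def removal_efficiency(c_in, c_out):
    """Percent removal of a pollutant between influent and effluent.

    c_in, c_out: concentrations (e.g. mg/L); influent must be positive.
    """
    if c_in <= 0:
        raise ValueError("influent concentration must be positive")
    return 100.0 * (c_in - c_out) / c_in

# Illustrative COD concentrations only, chosen to match the ~88.7%
# average removal reported for the Typha-planted unit:
cod_removal = removal_efficiency(520.0, 58.8)  # ≈ 88.7 %
```

The same function applies to BOD5, NTK, NH4-N, or microbial counts; only the influent/effluent pair changes.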
Procedia PDF Downloads 102