Search results for: order picking process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25898

1208 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City

Authors: Berhanu Keno Terfa

Abstract:

Flash floods are among the most dangerous natural disasters and pose a significant threat to human life. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study was undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques, considering various factors that contribute to flash flood resilience and developing effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed on ArcGIS software platforms, and data from Sentinel-2 satellite imagery (10-meter resolution) were used for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter-resolution ALOS PALSAR DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and regularizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study delineated flash flood hazard (FFH) zones and generated a suitability map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the previously developed central areas.
Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
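The AHP weighting step described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical 3x3 pairwise comparison matrix over three of the factors (slope, drainage density, rainfall); the study's actual judgments over its seven factors are not given in the abstract.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three
# illustrative factors: slope, drainage density, rainfall.
A = np.array([
    [1.0, 3.0, 5.0],
    [1.0 / 3.0, 1.0, 3.0],
    [1.0 / 5.0, 1.0 / 3.0, 1.0],
])

# AHP weights are the normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency check: CR < 0.1 means the judgments are acceptably coherent.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                         # random index RI = 0.58 for n = 3
```

The resulting weights would then multiply the reclassified factor rasters in a weighted overlay to produce the hazard index.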

Keywords: remote sensing, flash flood hazards, Bishoftu, GIS

Procedia PDF Downloads 37
1207 The Effect of Manure Loaded Biochar on Soil Microbial Communities

Authors: T. Weber, D. MacKenzie

Abstract:

This paper describes an advanced simulation environment for electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation captures the behaviour of non-linear dynamic systems with the required observer structure, using parallel real-time simulation based on a state-space representation. The proposed model also covers electrodynamic effects, including ionising effects and eddy-current distribution. With this method, the spatial distribution of the electromagnetic fields in such systems can be calculated in real time; the spatial temperature distribution can be used for further purposes as well. The approach allows uncertainties and disturbances to be determined, providing more precise estimates of the system states and, additionally, of the ionising disturbances that arise from radiation effects in space systems. The results also show that a system can be developed specifically for real-time calculation (estimation) of radiation effects alone. Electronic systems can be damaged by impacts of charged-particle flux in space or other radiation environments. A Total Ionising Dose (TID) of 1 Gy and Single-Event Transient (SET)-free operation up to 50 MeV·cm²/mg may assure certain functions. Single-Event Latch-up (SEL) results from the placement of several transistors in the shared substrate of an integrated circuit; ionising radiation can activate a parasitic thyristor, and the resulting short circuit between semiconductor elements can destroy the device unless protection and countermeasures are in place. Single-Event Burnout (SEB), on the other hand, increases the current between the drain and source of a MOSFET and destroys the component in a short time. A Single-Event Gate Rupture (SEGR) can likewise destroy a semiconductor dielectric.
In order to react to these processes, the presence of ionising radiation and the dose must be calculated within a short time. For this purpose, sensors may be used for a realistic evaluation of the diffusion and ionising effects in the test system. A Peltier element is used to evaluate dynamic temperature increases (dT/dt), from which a measure of the ionisation processes, and thus of the radiation, is derived. In addition, a piezo element may be used to record highly dynamic vibrations and oscillations caused by impacts of charged-particle flux. All available sensors are also used to calibrate the spatial distributions: from the measured values and the known locations of the sensors, the entire distribution in space can be reconstructed retroactively and more accurately. From this information, the type of ionisation and its direct effect on the system can be identified, so that preventive measures, up to and including shutdown, can be activated. The results show that more accurate and faster simulations can be performed independently of the space system and radiation environment. The paper additionally gives an overview of the diffusion effects and their mechanisms.

Keywords: cattle, biochar, manure, microbial activity

Procedia PDF Downloads 103
1206 Antioxidant Potential of Sunflower Seed Cake Extract in Stabilization of Soybean Oil

Authors: Ivanor Zardo, Fernanda Walper Da Cunha, Júlia Sarkis, Ligia Damasceno Ferreira Marczak

Abstract:

Lipid oxidation is one of the most important deteriorative processes in the oil industry, resulting in losses of the nutritional value of oils as well as changes in color, flavor, and other physiological properties. Autoxidation of lipids occurs naturally between molecular oxygen and the unsaturated fatty acids, forming lipid free radicals, peroxyl radicals, and hydroperoxides. To avoid lipid oxidation in vegetable oils, synthetic antioxidants such as butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), and tertiary butylhydroquinone (TBHQ) are commonly used. However, the use of synthetic antioxidants has been associated with several health side effects and toxicity. The use of natural antioxidants as stabilizers of vegetable oils has been suggested as a sustainable alternative. The alternative studied here is the use of natural extracts obtained mainly from fruits, vegetables, and seeds, whose well-known antioxidant activity is related mainly to the presence of phenolic compounds. Sunflower seed cake is rich in phenolic compounds (1-4% of the total mass), with chlorogenic acid as the major constituent. The aim of this study was to evaluate the in vitro application of the phenolic extract obtained from sunflower seed cake as a retarder of the lipid oxidation reaction in soybean oil and to compare the results with a synthetic antioxidant. For this, soybean oil, provided by industry without any added antioxidants, was subjected to an accelerated storage test for 17 days at 65 °C. Six samples with different treatments were submitted to the test: a control sample without any added antioxidant; 100 ppm of the synthetic antioxidant BHT; a mixture of 50 ppm of BHT and 50 ppm of phenolic compounds; and 100, 500, and 1200 ppm of phenolic compounds. The phenolic compound concentration in the extract was expressed in gallic acid equivalents.
To evaluate the oxidative changes of the samples, aliquots were collected after 0, 3, 6, 10, and 17 days and analyzed for peroxide value and conjugated diene and triene values. The soybean oil sample initially had a peroxide value of 2.01 ± 0.27 meq of oxygen/kg of oil. On the third day of the treatment, only the samples treated with 100, 500, and 1200 ppm of phenolic compounds showed a considerable oxidation retard compared to the control sample. On the sixth day, the samples presented a considerable increase in peroxide value (higher than 13.57 meq/kg), and the higher the concentration of phenolic compounds, the lower the peroxide value observed. From the tenth day on, the samples had very high peroxide values (higher than 55.39 meq/kg), and only the sample containing 1200 ppm of phenolic compounds presented a significant oxidation retard. The samples containing the phenolic extract were more effective at preventing the formation of primary oxidation products, indicating effectiveness in retarding the reaction. Similar results were observed for conjugated dienes and trienes. Based on these results, phenolic compounds, especially chlorogenic acid (the major phenolic compound of sunflower seed cake), can be considered a potential partial or even total substitute for synthetic antioxidants.
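The peroxide values reported above come from the standard iodometric titration; as a minimal sketch, with illustrative titration numbers (not the study's raw data):

```python
def peroxide_value(v_sample_ml, v_blank_ml, normality, sample_mass_g):
    """Peroxide value in meq active oxygen per kg of oil.

    Standard iodometric formula: PV = (Vs - Vb) * N * 1000 / m, with Vs and Vb
    the thiosulfate volumes (mL) for sample and blank, N its normality (eq/L),
    and m the oil mass (g).
    """
    return (v_sample_ml - v_blank_ml) * normality * 1000.0 / sample_mass_g

# Illustrative titration: 2.05 mL vs a 0.05 mL blank, 0.01 N thiosulfate, 10 g oil.
pv = peroxide_value(2.05, 0.05, 0.01, 10.0)  # -> 2.0 meq O2/kg
```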

Keywords: chlorogenic acid, natural antioxidant, vegetable oil deterioration, waste valorization

Procedia PDF Downloads 264
1205 Stability of a Biofilm Reactor Able to Degrade a Mixture of the Organochlorine Herbicides Atrazine, Simazine, Diuron and 2,4-Dichlorophenoxyacetic Acid to Changes in the Composition of the Supply Medium

Authors: I. Nava-Arenas, N. Ruiz-Ordaz, C. J. Galindez-Mayer, M. L. Luna-Guido, S. L. Ruiz-López, A. Cabrera-Orozco, D. Nava-Arenas

Abstract:

Among the most important herbicides, the organochlorine compounds are of considerable interest due to their recalcitrance to chemical, biological, and photolytic degradation, their persistence in the environment, their mobility, and their bioaccumulation. The most widely used herbicides in North America are primarily 2,4-dichlorophenoxyacetic acid (2,4-D), the triazines (atrazine and simazine), and, to a lesser extent, diuron. Soils and water bodies are frequently contaminated by mixtures of these xenobiotics. For this reason, this work studied the operational stability of an aerobic biofilm reactor to changes in the composition of the supplied medium. The reactor was packed with fragments of volcanic rock that retained a complex microbial film able to degrade a mixture of the organochlorine herbicides atrazine, simazine, diuron, and 2,4-D, and whose members carry the microbial genes encoding the main catabolic enzymes, atzABCD, tfdACD, and puhB. To acclimate the attached microbial community, the biofilm reactor was fed continuously with a mineral minimal medium containing the herbicides (in mg·L⁻¹) diuron, 20.4; atrazine, 14.2; simazine, 11.4; and 2,4-D, 59.7, as carbon and nitrogen sources. Throughout the bioprocess, removal efficiencies of 92-100% for herbicides, 78-90% for COD, and 92-96% for TOC were reached, with 61-83% dehalogenation. In the microbial community, the genes encoding the catabolic enzymes of the different herbicides, tfdACD and puhB and, occasionally, atzA and atzC, were detected. After the acclimatization, the triazine herbicides were eliminated from the mixture formulation, and the mixture of 2,4-D and diuron was supplied continuously to the reactor at volumetric loading rates of 1.9-21.5 mg herbicides·L⁻¹·h⁻¹. Along this stage of the bioprocess, the removal efficiencies obtained were 86-100% for the herbicide mixture, 63-94% for COD, and 90-100% for TOC, with dehalogenation values of 63-100%.
It was also observed that the genes encoding the enzymes for the catabolism of both herbicides, tfdACD and puhB, were consistently detected, and, occasionally, atzA and atzC. Subsequently, the triazine herbicides atrazine and simazine were restored to the supplied medium, and different volumetric loading rates of this mixture were continuously fed to the reactor (2.9-12.6 mg herbicides·L⁻¹·h⁻¹). During this new treatment process, removal efficiencies of 65-95% for the herbicide mixture, 63-92% for COD, and 66-89% for TOC were observed, with 73-94% dehalogenation. In this last case, the genes tfdACD, puhB, and atzABC, encoding the enzymes involved in the catabolism of the distinct herbicides, were consistently detected. The atzD gene, encoding the cyanuric acid hydrolase enzyme, could not be detected, though partial degradation of cyanuric acid was determined. In general, the community in the biofilm reactor showed catabolic stability, adapting to changes in the loading rates and composition of the herbicide mixture and preserving its ability to degrade the four herbicides tested, although there was a significant delay before herbicide degradation recovered.
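The removal efficiencies quoted throughout are straightforward influent/effluent ratios; a one-line sketch (the effluent concentration below is an invented illustrative value, not a measurement from the study):

```python
def removal_efficiency(c_in, c_out):
    """Percent removal of a constituent (herbicide, COD, TOC) across the reactor."""
    return 100.0 * (c_in - c_out) / c_in

# e.g. diuron fed at 20.4 mg/L and leaving at a hypothetical 1.6 mg/L
eff = removal_efficiency(20.4, 1.6)  # ~92.2 % removal
```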

Keywords: biodegradation, biofilm reactor, microbial community, organochlorine herbicides

Procedia PDF Downloads 435
1204 The Complementary Effect of Internal Control System and Whistleblowing Policy on Prevention and Detection of Fraud in Nigerian Deposit Money Banks

Authors: Dada Durojaye Joshua

Abstract:

The study examined the combined effect of the internal control system and the whistleblowing policy in pursuit of two specific objectives: to examine the relationship between monitoring activities and fraud detection and prevention, and to investigate the effect of control activities on fraud detection and prevention in Nigerian Deposit Money Banks (DMBs). The population of the study comprised the 89,275 members of staff in the 20 DMBs in Nigeria as at June 2019. Purposive and convenience sampling techniques were used to select 80 members of staff at the supervisory level of the Internal Audit Departments of the head offices of the sampled banks, that is, 4 respondents (Audit Executive/Head, Internal Control; Manager, Operational Risk Management; Head, Financial Crime Control; Chief Compliance Officer) from each of the 20 DMBs. A standard questionnaire was adapted from the 2017/2018 Internal Control Questionnaire and Assessment of the Bureau of Financial Monitoring and Accountability, Florida Department of Economic Opportunity, and modified for the purpose of this study. It was self-administered to gather data from the 80 respondents at the respective headquarters of the sampled banks across Nigeria. Two Likert scales were used to achieve the stated objectives, and a logit regression was used to test the stated hypotheses. Using the constructs of conduct of ongoing or separate evaluations (COSE) and evaluation and communication of deficiencies (ECD), monitoring activities were found to be significantly and positively related to fraud detection and prevention in Nigerian DMBs.
Similarly, control activities, as measured through the selection and development of control activities (SDCA), the selection and development of general controls over technology to prevent financial fraud (SDGCTF), and the development of control activities that provide transparency through procedures that put policies into action (DCATPPA), were found to influence fraud detection and prevention in Nigerian DMBs. In addition, transparency, accountability, reliability, independence, and value relevance were found to have a significant effect on fraud detection and prevention in Nigerian DMBs. The study concluded that the board of directors demonstrated independence from management and exercised oversight of the development and performance of internal control. It was also concluded that there was accountability on the part of the owners and preparers of the financial reports and that the system gives members of staff room to account for their responsibilities. Among the recommendations was that the management of Nigerian DMBs should establish a standard internal control system strong enough to deter fraud, in order to support continuity of operations by ensuring the liquidity, solvency, and going-concern status of the banks. It was also recommended that the banks create a structure that encourages whistleblowing to complement the internal control system.
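A logit regression of the kind used in the analysis can be sketched as follows; the data are synthetic, and the construct scores, coefficients, and sample layout are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: 80 respondents with two Likert-derived construct
# scores (e.g., monitoring, control activities) and a binary fraud-detection
# outcome. All numbers here are invented, not the study's data.
n = 80
X = rng.integers(1, 6, size=(n, 2)).astype(float)          # 5-point Likert scores
true_logits = -6.0 + 1.0 * X[:, 0] + 0.8 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logits))).astype(float)

# Logit model fitted by gradient ascent on the log-likelihood.
Xd = np.column_stack([np.ones(n), X])                      # add intercept
beta = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-Xd @ beta))                   # predicted probabilities
    beta += 0.01 * Xd.T @ (y - p) / n                      # score (gradient) step
```

Positive fitted slopes would correspond to the reported positive association between the constructs and fraud detection; a production analysis would use a maximum-likelihood routine with standard errors rather than this bare-bones fit.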

Keywords: internal control, whistleblowing, deposit money banks, fraud prevention, fraud detection

Procedia PDF Downloads 80
1203 Roads and Agriculture: Impacts of Connectivity in Peru

Authors: Julio Aguirre, Yohnny Campana, Elmer Guerrero, Daniel De La Torre Ugarte

Abstract:

A well-developed transportation network is a necessary condition for a country to derive full benefits from good trade and macroeconomic policies. Road infrastructure plays a key role in the economic development of rural areas of developing countries, where agriculture is the main economic activity. The ability to move agricultural production from the place of production to the market, and then to the place of consumption, greatly influences the economic value of farming activities and of the resources involved in the production process, i.e., labor and land. Consequently, investment in transportation networks helps enhance, or overcome, the natural advantages or disadvantages that topography and location impose on the agricultural sector. This is of particular importance in countries, like Peru, with great topographic diversity. The objective of this research is to estimate the impacts of road infrastructure on the performance of the agricultural sector. Specific variables of interest are changes in travel time, shifts from production for self-consumption to production for the market, changes in farmers' income, and impacts on the diversification of the agricultural sector. The central methodological instrument is a cross-section model with instrumental variables. The data are obtained from agricultural and transport geo-referenced databases, and the instrumental-variable specification is based on the Kruskal algorithm. The results show that the expansion of road connectivity reduced farmers' travel time by an average of 3.1 hours and that the proportion of output sold in the market increases by up to 40 percentage points. The increase in connectivity also produced an unexpected increase in the districts' index of diversification of agricultural production. The results are robust to the inclusion of year and region fixed effects and to controls for geography (i.e., slope and altitude), population variables, and mining activity.
Other results are also telling. For example, a clear positive impact can be seen on access to local markets, but this does not necessarily correlate with an increase in the sector's production. This can be explained by the fact that agricultural development requires not only the provision of roads but also complementary infrastructure and investments that create the conditions for producers to offer quality products (improved management practices, timely maintenance of irrigation infrastructure, transparent management of water rights, among other factors). Therefore, complementary public goods are needed to enhance the effects of roads on the welfare of the population, beyond enabling increased access to markets.
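The two-stage least squares (2SLS) logic behind the instrumental-variable specification can be sketched on synthetic data; the idea of a Kruskal-derived network instrument follows the abstract, but the variable names and all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic data: z stands in for a Kruskal (minimum-spanning-tree) network
# instrument, x for road connectivity (endogenous), y for a farm outcome.
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # connectivity, correlated with u
y = 2.0 * x - 1.0 * u + rng.normal(size=n)   # outcome; true effect of x is 2.0

def two_sls(y, x, z):
    """Two-stage least squares with one endogenous regressor and one instrument."""
    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: project x on the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: regress y on the fitted values.
    Xh = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(Xh, y, rcond=None)[0]

beta_iv = two_sls(y, x, z)   # slope recovers the true causal effect, near 2.0
```

Naive OLS of y on x would be biased here because x and the confounder u are correlated; the instrument isolates the exogenous variation in connectivity.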

Keywords: agricultural development, market access, road connectivity, regional development

Procedia PDF Downloads 206
1202 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments face challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limits on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline that predicts the morphology of cellular components for virtual-cell generation from fluorescence cell-membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained by fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels in unlabeled transmitted-light microscopy cell images, was trained on this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired in its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell-membrane fluorescence images as input. Predictions were compared to ground-truth fluorescence nuclei images. Results: After one week of training, using one cell-membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images.
Similar training sessions with improved membrane image quality (a clear outline and shape of the membrane, clearly showing the boundaries of each cell) proportionally improved the nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict additional labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
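The per-image normalization to a mean pixel intensity of 0.5 described in the Methods can be sketched as follows; the array shapes and the random stand-in data are illustrative assumptions.

```python
import numpy as np

def normalize_stack(stack):
    """Scale each slice of a confocal z-stack to a mean pixel intensity of 0.5.

    A sketch of the normalization step described above. `stack` is a float
    array of shape (z, height, width); empty (all-zero) slices are left as-is.
    """
    stack = np.asarray(stack, dtype=float)
    means = stack.mean(axis=(1, 2), keepdims=True)
    # Per-slice scale factor 0.5 / mean, guarding against all-zero slices.
    scale = np.divide(0.5, means, out=np.ones_like(means), where=means > 0)
    return stack * scale

# A synthetic 20-slice stack standing in for the membrane z-stack.
stack = np.random.default_rng(2).random((20, 64, 64))
norm = normalize_stack(stack)
```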

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 205
1201 Examining the Usefulness of an ESP Textbook for Information Technology: Learner Perspectives

Authors: Yun-Husan Huang

Abstract:

Many English for Specific Purposes (ESP) textbooks are distributed globally, and their content development often involves compromises between commercial and pedagogical demands. The regional applicability and usefulness of globally published ESP textbooks has therefore been much debated. For ESP instructors, textbook selection is a priority consideration in curriculum design. An appropriate ESP textbook can facilitate teaching and learning, while an inappropriate one can be a disaster for both teachers and students. This study aims to investigate the regional applicability and usefulness of an ESP textbook for information technology (IT). Participants were 51 sophomores majoring in Applied Informatics and Multimedia at a university in Taiwan. As they were non-English majors, their English proficiency was mostly at elementary and elementary-to-intermediate levels. The course was offered for two semesters, and the textbook selected was Oxford English for Information Technology. At the end of the course, the students were required to complete a survey comprising five choices, Very Easy, Easy, Neutral, Difficult, and Very Difficult, for each item. Based on the content design of the textbook, the survey investigated how the students viewed the difficulty of its grammar, listening, speaking, reading, and writing materials. Results reveal that only 22% of them found the grammar section difficult or very difficult. For listening, 71% responded difficult or very difficult; for general reading, 55%; for speaking, 56%; for writing, 78%; and for advanced reading, 90%. These results indicate that, except for the grammar section, more than half of the students found the textbook contents difficult in terms of listening, speaking, reading, and writing materials.
Such contradictory results between the easy grammar section and the difficult four language-skills sections imply that the textbook designers do not fully understand the English learning background of regional ESP learners. For the participants, the grammar section was at the general level of junior high school, while the four language-skills sections were closer to the level of college English majors. The findings have implications for instructors and textbook designers. First, existing ESP textbooks for IT are few, so instructors' textbook choices are limited. Second, existing globally published textbooks for IT cannot be applied to learners of all English proficiency levels, especially low levels. Third, given the limited selection, instructors should modify the selected textbook contents or supplement extra ESP materials to match the proficiency level of their target learners. Fourth, local ESP publishers should collaborate with local ESP instructors, who best understand the learning background of their students, to develop appropriate ESP textbooks for local learners. In conclusion, even though the instructor reduced the learning content and simplified tests in the curriculum design, the students still found the textbook difficult. This implies that, in addition to the instructor's professional experience, there is a need to understand the usefulness of the textbook from learner perspectives.

Keywords: ESP textbooks, ESP materials, ESP textbook design, learner perspectives on ESP textbooks

Procedia PDF Downloads 340
1200 The Effects of Goal Setting and Feedback on Inhibitory Performance

Authors: Mami Miyasaka, Kaichi Yanaoka

Abstract:

Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity; symptoms often manifest during childhood. In children with ADHD, the development of inhibitory processes is impaired. Inhibitory control allows people to avoid processing unnecessary stimuli and to behave appropriately in various situations; thus, people with ADHD require interventions to improve inhibitory control. Positive or negative reinforcements (i.e., reward or punishment) help improve the performance of children with such difficulties. However, in order to optimize impact, reward and punishment must be presented immediately following the relevant behavior. In regular elementary school classrooms, such supports are uncommon; hence, an alternative practical intervention method is required. One potential intervention involves setting goals to keep children motivated to perform tasks. This study examined whether goal setting improved inhibitory performances, especially for children with severe ADHD-related symptoms. We also focused on giving feedback on children's task performances. We expected that giving children feedback would help them set reasonable goals and monitor their performance. Feedback can be especially effective for children with severe ADHD-related symptoms because they have difficulty monitoring their own performance, perceiving their errors, and correcting their behavior. Our prediction was that goal setting by itself would be effective for children with mild ADHD-related symptoms, and goal setting based on feedback would be effective for children with severe ADHD-related symptoms. Japanese elementary school children and their parents were the sample for this study. Children performed two kinds of go/no-go tasks, and parents completed a checklist about their children's ADHD symptoms, the ADHD Rating Scale-IV, and the Conners 3rd edition. 
The go/no-go task is a cognitive task that measures inhibitory performance. Children were asked to press a key on the keyboard when a particular symbol appeared on the screen (go stimulus) and to refrain from doing so when another symbol was displayed (no-go stimulus). Errors in response to a no-go stimulus indicate inhibitory impairment. To examine the effect of goal setting on inhibitory control, 37 children (Mage = 9.49 ± 0.51) were required to set a performance goal and 34 children (Mage = 9.44 ± 0.50) were not. Further, to manipulate the presence of feedback, no information about the children's scores was provided in one go/no-go task, while scores were revealed in the other. The results revealed a significant interaction between goal setting and feedback; the three-way interaction between ADHD-related inattention, feedback, and goal setting was not significant. These results indicate that goal setting improved go/no-go performance only with feedback, regardless of ADHD severity. Furthermore, we found an interaction between ADHD-related inattention and feedback, indicating that informing inattentive children of their scores made them unexpectedly more impulsive. Taken together, feedback alone was, unexpectedly, too demanding for children with severe ADHD-related symptoms, but the combination of goal setting with feedback was effective in improving their inhibitory control. We discuss effective interventions for children with ADHD from the perspective of goal setting and feedback. This work was supported by the 14th Hakuho Research Grant for Child Education of the Hakuho Foundation.
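Scoring a go/no-go session like the one described reduces to counting hits on go trials and commission errors on no-go trials; a minimal sketch with made-up trial data (the trial counts and response pattern are illustrative, not the study's):

```python
def score_go_nogo(trials):
    """Score a go/no-go session.

    trials: list of (stimulus, responded) pairs, stimulus in {"go", "nogo"}.
    Returns (hit_rate, commission_error_rate); commission errors, i.e.,
    responses to no-go stimuli, index failures of inhibitory control.
    """
    go = [responded for stim, responded in trials if stim == "go"]
    nogo = [responded for stim, responded in trials if stim == "nogo"]
    hit_rate = sum(go) / len(go)
    commission_rate = sum(nogo) / len(nogo)
    return hit_rate, commission_rate

# Illustrative session: 20 go trials (18 hits) and 10 no-go trials (3 errors).
session = ([("go", True)] * 18 + [("go", False)] * 2
           + [("nogo", True)] * 3 + [("nogo", False)] * 7)
hits, commissions = score_go_nogo(session)  # -> 0.9, 0.3
```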

Keywords: attention deficit disorder with hyperactivity, feedback, goal-setting, go/no-go task, inhibitory control

Procedia PDF Downloads 104
1199 Nano-MFC (Nano Microbial Fuel Cell): Utilization of Carbon Nano Tube to Increase Efficiency of Microbial Fuel Cell Power as an Effective, Efficient and Environmentally Friendly Alternative Energy Sources

Authors: Annisa Ulfah Pristya, Andi Setiawan

Abstract:

Electricity is a primary requirement in today's world, including in Indonesia, because electrical energy is flexible to use. Fossil fuels are the major energy source used in power plants. Unfortunately, this conversion process depletes fossil fuel reserves and increases the amount of CO2 in the atmosphere, harming health, depleting the ozone layer, and contributing to the greenhouse effect. Solutions that have been applied include solar cells, ocean wave power, wind, water, and so forth; however, low efficiency and complicated maintenance mean that most people and industries in Indonesia still use fossil fuels. The fuel cell was developed in response. Fuel cells are an electrochemical technology that continuously converts the chemical energy of a fuel and an oxidizer into electrical energy, with an efficiency of 40-60%, considerably higher than earlier sources of electrical energy. However, fuel cells still have weaknesses, notably their use of an expensive platinum catalyst that is in limited supply and not environmentally friendly. A source of electrical energy that is both sustainable and environmentally friendly is therefore required. On the other hand, Indonesia is rich in marine sediments whose organic content is never exhausted. This accumulating organic matter can feed a further development of the fuel cell: the Microbial Fuel Cell (MFC). The MFC is a device that uses bacteria to generate electricity from organic and non-organic compounds. Like a conventional fuel cell, an MFC is composed of an anode, a cathode, and an electrolyte. Its main advantages are that the catalyst is a microorganism and that it operates in neutral solution at low temperature, making it more environmentally friendly than earlier chemical fuel cells. However, compared to a chemical fuel cell, the MFC has an efficiency of only 40%.
Therefore, the authors provide a solution in the form of the Nano-MFC (Nano Microbial Fuel Cell): utilization of carbon nanotubes to increase the power efficiency of the Microbial Fuel Cell as an effective, efficient, and environmentally friendly alternative energy source. The Nano-MFC has the advantages of being effective, highly efficient, cheap, and environmentally friendly. Related stakeholders include government ministries, especially the Energy Ministry, research institutes, and industry as production facilitators. The strategic steps to achieve this begin with preliminary research, followed by lab-scale testing, dissemination and building cooperation with related parties (MOU), final research and field application, and then licensing, production of the Nano-MFC on an industrial scale, and publication to the public.

Keywords: CNT, efficiency, electric, microorganisms, sediment

Procedia PDF Downloads 409
1198 The Valuable Triad of Adipokine Indices to Differentiate Pediatric Obesity from Metabolic Syndrome: Chemerin, Progranulin, Vaspin

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity is associated with cardiovascular disease risk factors and metabolic syndrome (MetS). In this study, associations between adipokines, adipokine indices, and obesity indices were evaluated. Plasma adipokine levels may exhibit variations according to body adipose tissue mass. Besides, upon consideration of obesity as an inflammatory disease, adipokines may play some roles in this process. The ratios of proinflammatory adipokines to adiponectin may act as highly sensitive indicators of body adipokine status. The aim of the study is to present some adipokine indices thought to be helpful for the evaluation of childhood obesity, and to determine the best discriminators in the diagnosis of MetS. 80 prepubertal children (aged 6-9.5 years) included in the study were divided into three groups: 30 children with normal weight (NW), 25 morbid obese (MO) children, and 25 MO children with MetS. Physical examinations were performed. Written informed consent forms were obtained from the parents. The study protocol was approved by the Ethics Committee of Namik Kemal University Medical Faculty. Anthropometric measurements, such as weight, height, waist circumference (C), hip C, head C, and neck C, were recorded. Values for body mass index (BMI), diagnostic obesity notation model assessment Index-II (D2 index), as well as waist-to-hip and head-to-neck ratios were calculated. Adiponectin, resistin, leptin, chemerin, vaspin, and progranulin assays were performed by ELISA. Adipokine-to-adiponectin ratios were obtained. SPSS Version 20 was used for the evaluation of data; p values ≤ 0.05 were accepted as statistically significant. Values of BMI, D2 index, and waist-to-hip and head-to-neck ratios did not differ between the MO and MetS groups (p ≥ 0.05). Except for progranulin (p ≤ 0.01), similar patterns were observed for the plasma levels of each adipokine. There was no difference in vaspin or resistin levels between the NW and MO groups. 
Significantly increased leptin-to-adiponectin, chemerin-to-adiponectin, and vaspin-to-adiponectin values were noted in MO children in comparison with those of NW children. The most valuable adipokine index was progranulin-to-adiponectin (p ≤ 0.01). This index was strongly correlated with the vaspin-to-adiponectin ratio in all groups (p ≤ 0.05). There was no correlation between vaspin-to-adiponectin and chemerin-to-adiponectin in the NW group. However, a correlation existed in the MO group (r = 0.486; p ≤ 0.05), and a much stronger correlation (r = 0.609; p ≤ 0.01) was observed between these two adipokine indices in the MetS group. No correlations were detected between vaspin and progranulin or between vaspin and chemerin levels. Correlation analyses showed a unique profile confined to MetS children: adiponectin was correlated with the waist-to-hip (r = -0.435; p ≤ 0.05) and head-to-neck (r = 0.541; p ≤ 0.05) ratios only in MetS children. In this study, it was investigated whether adipokine indices have priority over adipokine levels. In conclusion, vaspin-to-adiponectin, progranulin-to-adiponectin, and chemerin-to-adiponectin, along with waist-to-hip and head-to-neck ratios, were the optimal combinations. Adiponectin and the waist-to-hip, head-to-neck, vaspin-to-adiponectin, and chemerin-to-adiponectin ratios had appropriate discriminatory capability for MetS children.
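For readers unfamiliar with these indices, the arithmetic is simple: each index is an adipokine level divided by the same subject's adiponectin level, and indices are then correlated across subjects. A minimal sketch, using invented plasma values rather than the study's data:

```python
# Hypothetical illustration of adipokine-to-adiponectin indices and their
# Pearson correlation. All values below are invented, not study data.

def ratios(adipokine, adiponectin):
    """Element-wise adipokine-to-adiponectin ratio for a group of subjects."""
    return [a / b for a, b in zip(adipokine, adiponectin)]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented plasma values (arbitrary units) for five subjects
vaspin      = [0.9, 1.1, 1.4, 1.2, 1.0]
progranulin = [45.0, 52.0, 61.0, 55.0, 48.0]
adiponectin = [12.0, 10.5, 8.0, 9.0, 11.0]

v_adn = ratios(vaspin, adiponectin)        # vaspin-to-adiponectin index
p_adn = ratios(progranulin, adiponectin)   # progranulin-to-adiponectin index
r = pearson(v_adn, p_adn)
print(round(r, 3))
```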

Keywords: adipokine indices, metabolic syndrome, obesity indices, pediatric obesity

Procedia PDF Downloads 205
1197 Using True Life Situations in a Systems Theory Perspective as Sources of Creativity: A Case Study of How to Use Everyday Happenings to Produce Creative Outcomes in Novel and Screenplay Writing

Authors: Rune Bjerke

Abstract:

Psychologists tend to see creativity as a mental and psychological process. However, creativity is also the result of cultural and social interactions; it is not a product of individuals in isolation, but of social systems. Creative people get ideas from the influence of others and the immediate cultural environment – a space of knowledge, situations, and practices. Therefore, in this study we apply systems theory in practice to activate creative processes in the production of our novel and screenplay writing. We, as storytellers, actively seek out situations in our everyday lives, our systems, to generate ideas. Within our personal systems, we have the potential to induce situations that yield ideas for our texts, which may be accepted by our gatekeepers and become socially validated. This is our method of writing: get into situations, get ideas into texts, and test them with family and friends in our social systems. An example of novel text as an outcome of our method is as follows: “Is it a matter of obviousness or had I read it somewhere, that the one who increases his knowledge increases his pain? And also, the other way around, with increased pain, knowledge increases, I thought. Perhaps such a chain of effects explains why the rebel August Strindberg wrote seven plays in ten months after the divorce with Siri von Essen. Shortly after, he tried painting. Neither the seven theatre plays were shown, nor the paintings were exhibited. I was standing in front of Munch's painting Women in Three Stages with chaotic mental images of myself crumpled in a church and a laughing ex-girlfriend watching my suffering. My stomach was turning at unpredictable intervals and the subsequent vomiting almost suffocated me. Love grief at the worst. Was it this pain Strindberg felt? Despite the failure of his first plays, the pain must have triggered a form of creative energy that turned pain into ideas. 
Suffering, thoughts, feelings, words, text, and then, the reader experience. Maybe this negative force can be transformed into something positive, I asked myself. The question eased my pain. At that moment, I forgot the damp, humid air in the Munch Museum. Is it the similar type of Strindberg-pain that could explain the recurring, depressive themes in Munch's paintings? Illness, death, love and jealousy. As a beginning art student at the master's level, I had decided to find the answer. Was it the same with Munch's pain, as with Strindberg - a woman behind? There had to be women in the case of Munch - therefore, the painting “Women in Three Stages”? Who are they, what personality types are they – the women in red, black and white dresses from left to the right?” We, the writers, are using persons, situations and elements in our systems, in a systems theory perspective, to prompt creative ideas. A conceptual model is provided to advance creativity theory.

Keywords: creativity theory, systems theory, novel writing, screenplay writing, sources of creativity in social systems

Procedia PDF Downloads 120
1196 Synthesis of Methanol through Photocatalytic Conversion of CO₂: A Green Chemistry Approach

Authors: Sankha Chakrabortty, Biswajit Ruj, Parimal Pal

Abstract:

Methanol is one of the most important chemical products and intermediates. It can be used as a solvent, intermediate, or raw material for a number of higher-value products, fuels, or additives. Over the last decade, the total global demand for methanol has increased drastically, which forces scientists to produce large amounts of methanol from renewable sources to meet global demand in a sustainable way. Different types of non-renewable raw materials have been used for the synthesis of methanol on a large scale, which makes the process unsustainable. In these circumstances, the photocatalytic conversion of CO₂ into methanol under solar/UV excitation becomes a viable, sustainable production approach that not only addresses the environmental crisis by recycling CO₂ to fuels but also reduces the amount of CO₂ in the atmosphere. Developing such a sustainable production route for CO₂ conversion into methanol still remains a major challenge in current research compared with conventional, energy-expensive processes. Against this backdrop, the development of environmentally friendly materials, such as photocatalysts, has taken on great importance for methanol synthesis. Scientists in this field are continually searching for improved photocatalysts to enhance photocatalytic performance. Graphene-based hybrid and composite materials with improved properties could be better nanomaterials for the selective conversion of CO₂ to methanol under visible light (solar energy) or UV light. The present work relates to the synthesis of an improved heterogeneous graphene-based photocatalyst with enhanced catalytic activity and surface area. Graphene with enhanced surface area is used as a coupling material for copper-loaded titanium oxide to improve electron capture and transport properties, which substantially increases the photoinduced charge transfer and extends the lifetime of the photogenerated charge carriers. 
A fast reduction method through H₂ purging was adopted to synthesize the improved graphene, whereas an ultrasonication-based sol-gel method was applied for the preparation of the graphene-coupled, copper-loaded titanium oxide with enhanced properties. The prepared photocatalysts were exhaustively characterized using different characterization techniques. The effects of catalyst dose, CO₂ flow rate, reaction temperature, and stirring time on the efficacy of the system, in terms of methanol yield and productivity, were studied. The study showed that the newly synthesized photocatalyst with an enhanced surface resulted in a sustained methanol productivity and yield of 0.14 g/Lh and 0.04 g/gcat, respectively, after 3 h of illumination under UV (250 W) at an optimum catalyst dosage of 10 g/L with a 1:2:3 (graphene:TiO₂:Cu) weight ratio.
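The reported figures are internally consistent: at a catalyst dose of 10 g/L, a productivity of 0.14 g/Lh sustained over 3 h of illumination corresponds to roughly 0.04 g of methanol per gram of catalyst. A quick check (illustrative arithmetic only, using only numbers quoted in the abstract):

```python
# Dimensional check of the reported methanol figures.
productivity = 0.14   # g methanol per litre per hour (reported)
dose = 10.0           # g catalyst per litre (reported optimum)
hours = 3.0           # illumination time (reported)

yield_per_gcat = productivity * hours / dose   # g methanol per g catalyst
print(round(yield_per_gcat, 3))  # ≈ 0.042, close to the reported 0.04 g/gcat
```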

Keywords: renewable energy, CO₂ capture, photocatalytic conversion, methanol

Procedia PDF Downloads 108
1195 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution

Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit

Abstract:

Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in their processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and old processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools which include specific thermodynamic models and is willing to develop computational methodologies such as molecular dynamics simulations to gain insights into the complex interactions in such complex media, and especially hydrogen bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol molecules are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water at 25°C. Therefore, predicting liquid-solid equilibrium properties in this case requires sophisticated solution models that cannot be based solely on chemical group contributions, knowing that for mannitol and sorbitol, the chemical constitutive groups are the same. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. 
Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
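As an illustration of the second metric, a hydrogen-bond autocorrelation function can be estimated from a 0/1 time series marking whether a given donor-acceptor pair is bonded in each trajectory frame. The sketch below is not the authors' code and uses an invented toy trajectory; it computes the intermittent form C(t) = ⟨h(0)h(t)⟩/⟨h⟩:

```python
# Minimal sketch of an intermittent hydrogen-bond autocorrelation function,
# computed from a 0/1 series h[i] that marks whether a donor-acceptor pair
# is bonded at frame i. Toy data, not a real MD trajectory.

def hbond_autocorrelation(h, max_lag):
    """Intermittent H-bond autocorrelation C(t) for lags 0..max_lag-1."""
    n = len(h)
    mean_h = sum(h) / n
    c = []
    for lag in range(max_lag):
        pairs = n - lag
        corr = sum(h[i] * h[i + lag] for i in range(pairs)) / pairs
        c.append(corr / mean_h)  # since h is 0/1, C(0) = <h>/<h> = 1
    return c

# Toy trajectory: a bond that breaks and reforms
h = [1, 1, 1, 0, 0, 1, 1, 0, 1, 1]
print([round(x, 2) for x in hbond_autocorrelation(h, 4)])
```

The decay of C(t) toward ⟨h⟩ gives an estimate of the bond lifetime, which is the quantity the abstract contrasts between sorbitol and mannitol.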

Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics

Procedia PDF Downloads 44
1194 Complementary Effect of Whistleblowing Policy and Internal Control System on Prevention and Detection of Fraud in Nigerian Deposit Money Banks

Authors: Dada Durojaye Joshua

Abstract:

The study examined the combined effect of the internal control system and the whistleblowing policy, pursuing the following specific objectives: to examine the relationship between monitoring activities and fraud detection and prevention, and to investigate the effect of control activities on fraud detection and prevention in Nigerian Deposit Money Banks (DMBs). The population of the study comprises the 89,275 members of staff in the 20 DMBs in Nigeria as at June 2019. Purposive and convenience sampling techniques were used in the selection of the 80 members of staff at the supervisory level of the Internal Audit Departments of the head offices of the sampled banks, that is, selecting 4 respondents (Audit Executive/Head, Internal Control; Manager, Operational Risk Management; Head, Financial Crime Control; Chief Compliance Officer) from each of the 20 DMBs in Nigeria. A standard questionnaire was adapted from the 2017/2018 Internal Control Questionnaire and Assessment of the Bureau of Financial Monitoring and Accountability, Florida Department of Economic Opportunity, and modified to serve the purpose of this study. It was self-administered to gather data from the 80 respondents at the respective headquarters of the sampled banks across Nigeria. Two Likert scales were used in achieving the stated objectives, and a logit regression was used in testing the stated hypotheses. It was found that monitoring activities, measured using the constructs of conduct of ongoing or separate evaluations (COSE) and evaluation and communication of deficiencies (ECD), are significantly and positively related to fraud detection and prevention in Nigerian DMBs. 
So also, it was found that control activities, measured by the selection and development of control activities (SDCA), the selection and development of general controls over technology to prevent financial fraud (SDGCTF), and the development of control activities that give room for transparency through procedures that put policies into action (DCATPPA), influenced fraud detection and prevention in the Nigerian DMBs. In addition, it was found that transparency, accountability, reliability, independence, and value relevance have a significant effect on fraud detection and prevention in Nigerian DMBs. The study concluded that the board of directors demonstrated independence from management and exercised oversight of the development and performance of internal control. It was further concluded that there was accountability on the part of the owners and preparers of the financial reports and that the system gives room for members of staff to account for their responsibilities. Among the recommendations was that the management of Nigerian DMBs should create and establish a standard internal control system strong enough to deter fraud, in order to encourage continuity of operations by ensuring the liquidity, solvency, and going concern of the banks. It was also recommended that the banks create a structure that encourages whistleblowing to complement the internal control system.
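As a sketch of the analysis step, a logit model relates Likert-scored construct averages to a binary fraud-detection outcome. The example below fits such a model by gradient ascent on the log-likelihood; the predictor names, data, and fitting method are illustrative assumptions, not the study's actual data or software setup:

```python
import math

# Hypothetical sketch of a logit regression. Each row of X holds Likert-scored
# construct averages (e.g. SDCA, SDGCTF); y is a binary fraud-detection
# indicator. All data are invented for illustration.

def fit_logit(X, y, lr=0.1, steps=4000):
    """Fit intercept + coefficients by gradient ascent on the log-likelihood."""
    n_feat = len(X[0])
    w = [0.0] * (n_feat + 1)
    for _ in range(steps):
        grad = [0.0] * (n_feat + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(y) for wj, g in zip(w, grad)]
    return w

X = [[4.2, 3.8], [2.1, 2.5], [4.8, 4.1], [1.9, 2.0], [3.9, 4.4], [2.4, 1.8]]
y = [1, 0, 1, 0, 1, 0]
w = fit_logit(X, y)
print([round(v, 2) for v in w])
```

Positive fitted coefficients would correspond to the reported finding that stronger control constructs are associated with higher odds of fraud detection and prevention.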

Keywords: internal control, whistleblowing, deposit money banks, fraud prevention, fraud detection

Procedia PDF Downloads 72
1193 The Forms of Representation in Architectural Design Teaching: The Cases of Politecnico Di Milano and Faculty of Architecture of the University of Porto

Authors: Rafael Sousa Santos, Clara Pimena Do Vale, Barbara Bogoni, Poul Henning Kirkegaard

Abstract:

The representative component, a determining aspect of the architect's training, has been marked by an exponential and unprecedented development. However, the multiplication of possibilities has also multiplied uncertainties about architectural design teaching and, by extension, about the very principles of architectural education. This paper presents the results of research on the following problem: the relation between the forms of representation and the architectural design teaching-learning processes. The research had as its object the educational model of two schools – the Politecnico di Milano (POLIMI) and the Faculty of Architecture of the University of Porto (FAUP) – and was guided by three main objectives: to characterize the educational model followed in both schools, focusing on the representative component and its role; to interpret the relation between forms of representation and the architectural design teaching-learning processes; and to consider the possibilities for their valorisation. Methodologically, the research was conducted according to a qualitative embedded multiple-case study design. The object – i.e., the educational model – was approached in both the POLIMI and FAUP cases considering its Context and three embedded unities of analysis: the educational Purposes, Principles, and Practices. In order to guide the procedures of data collection and analysis, a Matrix for Characterization (MCC) was developed. As a methodological tool, the MCC made it possible to relate the three embedded unities of analysis with the three main sources of evidence in which the object manifests itself: the professors, expressing how the model is assumed; the architectural design classes, expressing how the model is achieved; and the students, expressing how the model is acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review. 
The results reveal the importance of the representative component in the educational model of both cases, despite the differences in its role. In POLIMI's model, representation is particularly relevant in the teaching of architectural design, while in FAUP’s model, it plays a transversal role – according to an idea of 'general training through hand drawing'. In fact, the difference between models relative to representation can be partially understood by the level of importance that each gives to hand drawing. Regarding the teaching of architectural design, the two cases are distinguished in the relation with the representative component: while in POLIMI the forms of representation serve essentially an instrumental purpose, in FAUP they tend to be considered also for their methodological dimension. It seems that the possibilities for valuing these models reside precisely in the relation between forms of representation and architectural design teaching. It is expected that the knowledge base developed in this research may have three main contributions: to contribute to the maintenance of the educational model of POLIMI and FAUP; through the precise description of the methodological procedures, to contribute by transferability to similar studies; through the critical and objective framework of the problem underlying the forms of representation and its relation with architectural design teaching, to contribute to the broader discussion concerning the contemporary challenges on architectural education.

Keywords: architectural design teaching, architectural education, educational models, forms of representation

Procedia PDF Downloads 122
1192 Empowering Indigenous Epistemologies in Geothermal Development

Authors: Te Kīpa Kēpa B. Morgan, Oliver W. Mcmillan, Dylan N. Taute, Tumanako N. Fa'aui

Abstract:

Epistemologies are ways of knowing. Indigenous Peoples are aware that they do not perceive and experience the world in the same way as others. So it is important, when empowering Indigenous epistemologies such as that of the New Zealand Māori, to also be able to represent a scientific understanding within the same analysis. A geothermal development assessment tool has been developed by adapting the Mauri Model Decision Making Framework. Mauri is a metric capable of representing the change in the life-supporting capacity of things and collections of things. The Mauri Model is a method of grouping mauri indicators as dimension averages in order to allow holistic assessment and to conduct sensitivity analyses for the effect of worldview bias. R Shiny is the coding platform used for this Vision Mātauranga research, which has created an expert decision support tool (DST) that combines a stakeholder assessment of worldview bias with an impact assessment of mauri-based indicators to determine the sustainability of proposed geothermal development. The initial intention was to develop guidelines for quantifying mātauranga Māori impacts related to geothermal resources. To do this, three typical scenarios were considered: a resource owner wishing to assess the potential for new geothermal development; another party wishing to assess the environmental and cultural impacts of the proposed development; and an assessment that focuses on the holistic sustainability of the resource, including its surface features. Indicator sets and measurement thresholds were developed that capture the necessary considerations for each assessment context, and these have been grouped to represent four mauri dimensions that mirror the four well-being criteria used for resource management in Aotearoa, New Zealand. Two case studies have been conducted to test the DST's suitability for quantifying mātauranga Māori and other biophysical factors related to a geothermal system. 
This involved estimating mauri0meter values for physical features such as temperature, flow rate, frequency, and colour, and developing indicators to quantify qualitative observations about the geothermal system made by Māori. A retrospective analysis was then conducted to verify different understandings of the geothermal system. The case studies found that the expert DST is useful for geothermal development assessment, especially where hapū (indigenous sub-tribal groupings) are conflicted regarding the benefits and disadvantages of their own and others' geothermal developments. These results have been supplemented with evaluations of the cumulative impacts of geothermal developments experienced by different parties, using integration techniques applied to the time history of the expert DST worldview-bias weighting plotted against the mauri0meter score. Cumulative impacts represent the change in resilience or potential of geothermal systems, which directly assists with the holistic interpretation of change from an Indigenous Peoples' perspective.
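The aggregation described above, indicator scores averaged per dimension, combined with worldview-bias weightings, and integrated over time for cumulative impact, can be sketched as follows. This is an illustration of the arithmetic only, not the actual Mauri Model DST; the scores, dimension names, and weights are invented:

```python
# Illustrative sketch of dimension-average aggregation with worldview-bias
# weighting and a trapezoidal cumulative-impact integral. Not the real DST.

def dimension_averages(scores_by_dimension):
    """Average the indicator scores within each mauri dimension."""
    return {d: sum(s) / len(s) for d, s in scores_by_dimension.items()}

def weighted_mauri(dim_avgs, weights):
    """Combine dimension averages with worldview-bias weights (summing to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(dim_avgs[d] * weights[d] for d in dim_avgs)

def cumulative_impact(times, scores):
    """Trapezoidal integral of the weighted score over time."""
    total = 0.0
    for i in range(1, len(times)):
        total += 0.5 * (scores[i] + scores[i - 1]) * (times[i] - times[i - 1])
    return total

scores = {                      # invented indicator scores per dimension
    "environmental": [1, 0, -1],
    "cultural": [2, 1, 1],
    "social": [0, 0, 1],
    "economic": [1, 2, 1],
}
weights = {"environmental": 0.3, "cultural": 0.3, "social": 0.2, "economic": 0.2}

avgs = dimension_averages(scores)
print(round(weighted_mauri(avgs, weights), 3))
```

Re-running the aggregation with different weight sets is what the abstract calls a sensitivity analysis for worldview bias.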

Keywords: decision support tool, holistic geothermal assessment, indigenous knowledge, mauri model decision-making framework

Procedia PDF Downloads 187
1191 Preparation, Characterization and Photocatalytic Activity of New Noble-Metal-Modified TiO2@SrTiO3 and SrTiO3 Photocatalysts

Authors: Ewelina Grabowska, Martyna Marchelek

Abstract:

Among the various semiconductors, nanosized TiO2 has been widely studied due to its high photosensitivity, low cost, low toxicity, and good chemical and thermal stability. However, there are two main drawbacks to the practical application of pure TiO2 films. One is that TiO2 can be excited only by ultraviolet (UV) light due to its intrinsically wide bandgap (3.2 eV for anatase and 3.0 eV for rutile), which limits its practical efficiency for solar energy utilization, since UV light makes up only 4-5% of the solar spectrum. The other is that a high electron-hole recombination rate reduces the photoelectric conversion efficiency of TiO2. In order to overcome these drawbacks and modify the electronic structure of TiO2, various semiconductors (e.g., CdS, ZnO, PbS, Cu2O, Bi2S3, and CdSe) have been used to prepare coupled TiO2 composites, improving their charge separation efficiency and extending the photoresponse into the visible region. It has been shown that the fabrication of p-n heterostructures by combining n-type TiO2 with p-type semiconductors is an effective way to improve the photoelectric conversion efficiency of TiO2. SrTiO3 is a good candidate for coupling with TiO2 and improving the photocatalytic performance of the photocatalyst because its conduction band edge is more negative than that of TiO2. Due to the potential differences between the band edges of these two semiconductors, the photogenerated electrons transfer from the conduction band of SrTiO3 to that of TiO2; conversely, the photogenerated holes transfer from the valence band of TiO2 to that of SrTiO3. The photogenerated charge carriers can thus be efficiently separated by these processes, resulting in the enhancement of the photocatalytic properties of the photocatalyst. Additionally, one method for improving photocatalyst performance is the addition of nanoparticles containing one or two noble metals (Pt, Au, Ag, and Pd) deposited on the semiconductor surface. 
The proposed mechanisms are: (1) the surface plasmon resonance of noble metal particles is excited by visible light, facilitating the excitation of surface electrons and interfacial electron transfer; (2) energy levels can be produced in the band gap of TiO2 by the dispersion of noble metal nanoparticles in the TiO2 matrix; (3) noble metal nanoparticles deposited on TiO2 act as electron traps, enhancing electron-hole separation. In view of this, we recently obtained a series of TiO2@SrTiO3 and SrTiO3 photocatalysts loaded with noble metal NPs using the photodeposition method. The M-TiO2@SrTiO3 and M-SrTiO3 photocatalysts (M = Rh, Rt, Pt) were studied for the photodegradation of phenol in the aqueous phase under UV-Vis and visible irradiation. Moreover, in the second part of our research, hydroxyl radical formation was investigated. The fluorescence of an irradiated coumarin solution was used as a method of ˙OH radical detection: coumarin readily reacts with the generated hydroxyl radicals, forming hydroxycoumarins. Although the major hydroxylation product is 5-hydroxycoumarin, only the 7-hydroxy product of coumarin hydroxylation emits fluorescent light. Thus, this method was used only for hydroxyl radical detection, not for determining the concentration of hydroxyl radicals.

Keywords: composites TiO2, SrTiO3, photocatalysis, phenol degradation

Procedia PDF Downloads 222
1190 Accumulated Gender-Diverse Co-signing Experience, Knowledge Sharing, and Audit Quality

Authors: Anxuan Xie, Chun-Chan Yu

Abstract:

Survey evidence supports the view that auditors can gain professional knowledge not only from client firms but also from the teammates they work with. Furthermore, given that knowledge is accumulated in nature, and that auditors today must work in an environment of increased diversity, whether the attributes of teammates influence the effects of knowledge sharing and accumulation, and ultimately an audit partner's audit quality, are interesting research questions. We test whether the gender of co-signers moderates the effect of a lead partner's cooperative experiences on financial restatements. If the answer is "yes", we further investigate the underlying reasons. We use data from Taiwan because, under Taiwan's law, engagement partners, who are basically two certified public accountants from the same audit firm, have been required to disclose (i.e., sign) their names in the audit reports of public companies since 1983. Therefore, we can trace each engagement partner's historical direct cooperative (co-signing) records and obtain large-sample data. We find that the benefits of knowledge sharing manifest primarily via co-signing audit reports with audit partners of a different gender from the lead engagement partner, supporting the argument that, in an audit setting, accumulated gender-diverse working relationships are positively associated with knowledge sharing and therefore improve the lead partner's audit quality. This study contributes to the extant literature in the following ways. First, we provide evidence that, in the auditing setting, the experience accumulated from cooperating with teammates of a different gender from the lead partner can improve audit quality. 
Given that most studies find evidence of negative effects of surface-level diversity on team performance, the results of this study support the prior literature that the association between diversity and knowledge sharing actually hinges on the context (e.g., organizational culture, task complexity) and “bridge” (a pre-existing commonality among team members that can smooth the process of diversity toward favorable results) among diversity team members. Second, this study also provides practical insights with respect to the audit firms’ policy of knowledge sharing and deployment of engagement partners. For example, for audit firms that appreciate the merits of knowledge sharing, the deployment of auditors of different gender within an audit team can help auditors accumulate audit-related knowledge, which will further benefit the future performance of those audit firms. Moreover, nowadays, client firms also attach importance to the diversity of their engagement partners. As their policy goals, lawmakers and regulators also continue to promote a gender-diverse working environment. The findings of this study indicate that for audit firms, gender diversity will not be just a means to cater to those groups. Third, for audit committees or other stakeholders, they can evaluate the quality of existing (or potential) lead partners by tracking their co-signing experiences, especially whether they have gender-diverse co-signing experiences.
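A minimal sketch of how such an accumulated-experience measure could be built from signing records; the field names and data below are hypothetical, not the paper's actual variable construction:

```python
from collections import defaultdict

# Hypothetical illustration: count, for each lead partner, the accumulated
# number of co-signings with a partner of a different gender.
# records: list of (year, lead_partner, co_partner); gender: partner -> 'F'/'M'.

def diverse_cosign_counts(records, gender):
    """Lead partner -> number of co-signings with a different-gender partner."""
    counts = defaultdict(int)
    for year, lead, co in sorted(records):   # chronological accumulation
        if gender[lead] != gender[co]:
            counts[lead] += 1
    return dict(counts)

records = [(2015, "A", "B"), (2016, "A", "C"), (2017, "B", "A")]
gender = {"A": "F", "B": "M", "C": "F"}
print(diverse_cosign_counts(records, gender))
```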

Keywords: co-signing experiences, audit quality, knowledge sharing, gender diversity

Procedia PDF Downloads 85
1189 In-Situ Formation of Particle Reinforced Aluminium Matrix Composites by Laser Powder Bed Fusion of Fe₂O₃/AlSi12 Powder Mixture Using Consecutive Laser Melting+Remelting Strategy

Authors: Qimin Shi, Yi Sun, Constantinus Politis, Shoufeng Yang

Abstract:

In-situ preparation of particle-reinforced aluminium matrix composites (PRAMCs) by laser powder bed fusion (LPBF) additive manufacturing is a promising strategy to strengthen traditional Al-based alloys. The laser-driven thermite reaction can be a practical mechanism to in-situ synthesize PRAMCs. However, introducing oxygen through the addition of Fe₂O₃ makes the powder mixture highly prone to forming porosity and Al₂O₃ films during LPBF, bringing challenges to producing dense Al-based materials. Therefore, this work develops a processing strategy that combines consecutive high-energy laser melting scanning with low-energy laser remelting scanning to prepare PRAMCs from a Fe₂O₃/AlSi12 powder mixture. The powder mixture consists of 5 wt% Fe₂O₃ and the remainder AlSi12 powder; the addition of 5 wt% Fe₂O₃ aims to achieve balanced strength and ductility. A high relative density (98.2 ± 0.55 %) was successfully obtained by optimizing the laser melting (Emelting) and laser remelting surface energy densities to Emelting = 35 J/mm² and Eremelting = 5 J/mm². Results further reveal the necessity of increasing Emelting to improve the metal liquid's spreading/wetting by breaking up the Al₂O₃ films surrounding the molten pools; however, the high-energy laser melting produced much porosity, including H₂-, O₂-, and keyhole-induced pores. The subsequent low-energy laser remelting could close the resulting internal pores, backfill open gaps, and smoothen the solidified surfaces. As a result, the material was densified by repeating laser melting and laser remelting layer by layer. Even with two laser scans per layer, the microstructure still shows fine cellular Si networks with Al grains inside (grain size of about 370 nm) and in-situ nano-precipitates (Al₂O₃, Si, and Al-Fe(-Si) intermetallics). 
Finally, the fine microstructure, nano-structured dispersion strengthening, and high-level densification strengthened the in-situ PRAMCs, which reached a yield strength of 426 ± 4 MPa and a tensile strength of 473 ± 6 MPa. Furthermore, the results can be expected to provide valuable information for processing other powder mixtures with severe porosity/oxide-film formation potential, given the demonstrated contribution of the laser melting/remelting strategy to densifying the material and obtaining good mechanical properties during LPBF.
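For orientation, the surface energy density the authors optimize is commonly defined in LPBF work as E = P/(v·h), laser power divided by scan speed times hatch spacing. The sketch below uses this common definition with hypothetical machine parameters; the power, speed and hatch values are illustrative choices that reproduce the paper's 35 and 5 J/mm² targets, not the authors' actual settings.

```python
def surface_energy_density(power_w, scan_speed_mm_s, hatch_mm):
    """Surface energy density E = P / (v * h) in J/mm^2 -- a common
    definition of the LPBF process parameter (laser power divided by
    scan speed times hatch spacing)."""
    return power_w / (scan_speed_mm_s * hatch_mm)

# Hypothetical parameter sets (NOT the authors' machine settings),
# chosen so the melting scan is high-energy and the remelting scan
# is low-energy, as in the paper's strategy.
e_melting = surface_energy_density(power_w=350, scan_speed_mm_s=100, hatch_mm=0.10)
e_remelting = surface_energy_density(power_w=100, scan_speed_mm_s=1000, hatch_mm=0.02)
print(e_melting, e_remelting)  # 35.0 and 5.0 J/mm^2
```

Under this definition, the remelting scan reaches its low energy density mainly through a much higher scan speed, which is one plausible way to realize the melting+remelting schedule described above.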

Keywords: densification, laser powder bed fusion, metal matrix composites, microstructures, mechanical properties

Procedia PDF Downloads 155
1188 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments

Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora

Abstract:

Advanced material models involving several sets of model parameters require a large experimental effort. As models become more and more complex, such as the so-called “Homogeneous Anisotropic Hardening” (HAH) model for describing the yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g. the plane stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests and tension-compression or shear-reverse shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determining the macroscopic model parameters from virtual experiments is a very promising strategy to overcome these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation density based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. However, in general, the computation time is very high, and therefore the computations are restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation density based crystal plasticity model – including an implementation of the backstress – is used in a spectral solver framework to generate virtual experiments for three deep drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated against real experiments.
These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can capture anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction in the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction in the computational effort needed for the optimization of metal forming processes. Further, due to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible.
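One standard quantity that such virtual tensile tests yield for anisotropy calibration is the Lankford coefficient r = ε_width/ε_thickness. A minimal sketch, assuming plastic incompressibility and using purely illustrative strain values (not the paper's RVE results):

```python
def lankford_r(eps_length, eps_width):
    """Lankford coefficient r = eps_width / eps_thickness from one
    (virtual) uniaxial tensile test, with the thickness strain obtained
    from plastic incompressibility: eps_t = -(eps_l + eps_w)."""
    eps_thickness = -(eps_length + eps_width)
    return eps_width / eps_thickness

# Hypothetical logarithmic strains from RVE tensile tests at 0, 45 and
# 90 degrees to the rolling direction (illustrative values only).
virtual_strains = {0: (0.10, -0.045), 45: (0.10, -0.052), 90: (0.10, -0.048)}
r = {angle: lankford_r(*eps) for angle, eps in virtual_strains.items()}

# Normal anisotropy, a standard input for macroscopic yield models:
r_bar = (r[0] + 2 * r[45] + r[90]) / 4
```

The same three virtual tests that replace physical tensile tests in three directions thus directly feed the anisotropy parameters of a macroscopic model.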

Keywords: anisotropic hardening, crystal plasticity, microstructure, spectral solver

Procedia PDF Downloads 315
1187 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries

Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni

Abstract:

In a context of increasing stress on the electricity network due to the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, the potential of storage is highest when it is connected close to the loads. Yet low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution and the competitive advantage of fossil fuel-based technologies under current regulations. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The scope of the present study excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information – as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as PS is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter and arbitraged profiles at higher voltage layers. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must, therefore, include a smart charge-recovery algorithm that ensures enough energy is present in the battery in case it is needed, without generating new peaks while charging the unit. Three categories of PS algorithms are introduced in detail.
The first uses a constant threshold or power rate for charge recovery; the second uses the State of Charge (SOC) as a decision variable; the third uses a load forecast, the impact of whose accuracy is discussed. A set of performance metrics was defined to quantitatively evaluate their operation with regard to peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that constant charging thresholds or powers are far from optimal: a single fixed value is unlikely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance. However, they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
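A minimal sketch of the first strategy – a constant discharge threshold combined with a simple SOC-based charge-recovery rule – might look as follows. All parameters (threshold, capacity, power rating, SOC targets) and the toy load profile are hypothetical, not the paper's values.

```python
def peak_shave(load_kw, threshold_kw, capacity_kwh, max_rate_kw,
               soc_init=0.5, soc_target=0.5, dt_h=1 / 60):
    """Constant-threshold peak shaving with a simple SOC-based
    charge-recovery rule, at 1-minute granularity (illustrative only).
    Returns the grid import profile and the final stored energy."""
    stored = soc_init * capacity_kwh  # energy in the battery, kWh
    grid = []
    for p in load_kw:
        if p > threshold_kw:
            # Discharge to shave the peak, limited by power rating and SOC.
            discharge = min(p - threshold_kw, max_rate_kw, stored / dt_h)
            stored -= discharge * dt_h
            grid.append(p - discharge)
        elif stored < soc_target * capacity_kwh:
            # Recover charge off-peak without exceeding the threshold,
            # so that charging never creates a new peak.
            charge = min(threshold_kw - p, max_rate_kw,
                         (capacity_kwh - stored) / dt_h)
            stored += charge * dt_h
            grid.append(p + charge)
        else:
            grid.append(p)
    return grid, stored

# A toy 6-minute profile with a 2-minute peak above the 5 kW threshold.
profile = [2, 2, 6, 7, 2, 2]
grid, stored = peak_shave(profile, threshold_kw=5, capacity_kwh=5,
                          max_rate_kw=3, soc_target=0.6)
print(grid)  # [5.0, 5.0, 5.0, 5.0, 5.0, 5.0] -- flattened at the threshold
```

The key design point the abstract makes is visible in the `elif` branch: charge recovery is capped at the threshold headroom, so restoring the SOC can never generate a new peak.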

Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm

Procedia PDF Downloads 117
1186 Bioleaching of Precious Metals from an Oil-fired Ash Using Organic Acids Produced by Aspergillus niger in Shake Flasks and a Bioreactor

Authors: Payam Rasoulnia, Seyyed Mohammad Mousavi

Abstract:

Heavy fuel oil-fired power plants produce huge amounts of ash as solid waste, which seriously needs to be managed and processed. Recycling the precious metals V and Ni from these oil-fired ashes, which are considered secondary sources for metal recovery, not only has great economic importance for industry but is also noteworthy from the environmental point of view. Vanadium is an important metal that is mainly used in the steel industry because of its physical properties of hardness, tensile strength, and fatigue resistance. It is also utilized in oxidation catalysts, titanium-aluminum alloys and vanadium redox batteries. In the present study, bioleaching of vanadium and nickel from an oil-fired ash sample was conducted using the fungus Aspergillus niger. The experiments were carried out using the spent-medium bioleaching method in both Erlenmeyer flasks and a bubble column bioreactor in order to compare the two. In spent-medium bioleaching the solid waste is not in direct contact with the fungus; consequently, fungal growth is not retarded and maximum organic acids are produced. In this method the metals are leached by the biogenically produced organic acids present in the medium. In the shake flask experiments the fungus was cultured for 15 days, when the maximum production of organic acids was observed, while in the bubble column bioreactor experiments a 7-day fermentation period was applied. The amounts of produced organic acids were measured using high performance liquid chromatography (HPLC), and the results showed that, depending on the fermentation period and the scale of the experiments, the fungus produces different major lixiviants. In the flask tests, citric acid was the main organic acid produced by the fungus, and the other organic acids, including gluconic, oxalic, and malic acid, were excreted in much lower concentrations, while in the bioreactor oxalic acid was the main lixiviant and was produced in considerable amounts.
In the Erlenmeyer flasks, 8080 ppm citric acid and 1170 ppm oxalic acid were produced during the 15-day fermentation of Aspergillus niger, while in the bubble column bioreactor 17185 ppm oxalic acid and 1040 ppm citric acid were secreted over 7 days of fungal growth. The leaching tests using the spent media obtained from both fermentation experiments were performed under the same conditions: a leaching duration of 7 days, a leaching temperature of 60 °C and a pulp density of up to 3% (w/v). The results revealed that in the Erlenmeyer flask experiments 97% of the V and 50% of the Ni were extracted, while using the spent medium produced in the bubble column bioreactor, V and Ni recoveries of 100% and 33%, respectively, were achieved. These recovery yields indicate that at both scales almost all of the vanadium can be recovered, while nickel recovery was lower. With the bioreactor spent medium the nickel recovery yield was lower than that obtained in the flask experiments, which could be due to precipitation of some of the Ni in the presence of the high levels of oxalic acid in that medium.

Keywords: Aspergillus niger, bubble column bioreactor, oil-fired ash, spent-medium bioleaching

Procedia PDF Downloads 229
1185 Fabrication of Highly Conductive Graphene/ITO Transparent Bi-Film through Chemical Vapor Deposition (CVD) and Organic Additives-Free Sol-Gel Techniques

Authors: Bastian Waduge Naveen Harindu Hemasiri, Jae-Kwan Kim, Ji-Myon Lee

Abstract:

Indium tin oxide (ITO) remains the industrial standard among transparent conducting oxides, with superior performance. Recently, graphene has emerged as a strong candidate, with unique properties, to replace ITO. However, graphene/ITO hybrid composite materials are a newly born field in the electronics world. In this study, a graphene/ITO composite bi-film was synthesized by a two-step process. 10 wt.% tin-doped ITO thin films were produced by an environmentally friendly aqueous sol-gel spin-coating technique from the economical salts In(NO₃)₃·H₂O and SnCl₄, without using organic additives. Oxygen plasma-treated glass substrates with enhanced wettability and surface free energy (97.6986 mJ/m²) were used to form a void-free, continuous ITO film. The spin-coated samples were annealed at 600 °C for 1 hour under low vacuum to obtain a crystallized ITO film. The crystal structure and crystalline phases of the ITO thin films were analyzed by the X-ray diffraction (XRD) technique, and the Scherrer equation was used to determine the crystallite size. Detailed information about the chemical and elemental composition of the ITO film was obtained by X-ray photoelectron spectroscopy (XPS) and by energy-dispersive X-ray spectroscopy (EDX) coupled with FE-SEM, respectively. Graphene was synthesized by the chemical vapor deposition (CVD) method on Cu foil at 1000 °C for 1 min. The quality of the synthesized graphene was characterized by Raman spectroscopy (532 nm excitation laser), with data collected at room temperature under a normal atmosphere. Surface and cross-sectional observations were made using FE-SEM. The optical transmission and sheet resistance were measured at room temperature by UV-Vis spectroscopy and a four-point probe head, respectively. Electrical properties were also measured via V-I characteristics.
The XRD patterns reveal that the films contain the In₂O₃ phase only and exhibit the polycrystalline nature of the cubic structure, with the main peak from the (222) plane. The peak positions of In 3d₅/₂ (444.28 eV) and Sn 3d₅/₂ (486.7 eV) in the XPS results indicate that indium and tin are present in oxide form only. The UV-visible transmittance is 91.35% at 550 nm, with a specific resistance of 5.88 × 10⁻³ Ω·cm. For the synthesized CVD graphene on SiO₂/Si, the G and 2D bands in the Raman spectrum appear at 1582.52 cm⁻¹ and 2690.54 cm⁻¹, respectively, and the intensity ratios of 2D to G (I2D/IG) and D to G (ID/IG) were determined to be 1.531 and 0.108, respectively. When the CVD graphene is on the ITO-coated glass, however, the G and 2D peaks appear at 1573.57 cm⁻¹ and 2668.14 cm⁻¹, red-shifted by 8.948 cm⁻¹ and 22.396 cm⁻¹, respectively. The graphene/ITO bi-film shows modified electrical properties compared with the sol-gel-derived ITO film: its sheet resistance is 12.03% lower than that of the ITO film. Further, the fabricated graphene/ITO bi-film shows 88.66% transmittance at the 550 nm wavelength.
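The crystallite-size estimate mentioned above follows the Scherrer equation, D = Kλ/(β·cos θ). A minimal sketch, assuming Cu Kα radiation and a hypothetical peak width and position (the FWHM and 2θ values below are illustrative, not the paper's measurements):

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), with the
    peak broadening beta (FWHM) converted to radians and theta taken as
    half the diffraction angle 2-theta."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative values only: Cu K-alpha wavelength (0.15406 nm) and a
# hypothetical width/position for an In2O3 (222) reflection.
d_nm = scherrer_size(wavelength_nm=0.15406, fwhm_deg=0.4, two_theta_deg=30.6)
```

With these illustrative inputs the estimate comes out at roughly 20 nm; in practice the measured FWHM must be corrected for instrumental broadening before applying the equation.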

Keywords: chemical vapor deposition, graphene, ITO, Raman Spectroscopy, sol-gel

Procedia PDF Downloads 260
1184 A Public Health Perspective on Deradicalisation: Re-Conceptualising Deradicalisation Approaches

Authors: Erin Lawlor

Abstract:

In 2008, Time magazine named terrorist rehabilitation one of the best ideas of the year. The term deradicalisation has since become synonymous with rehabilitation within security discourse. The allure of a “quick fix” for managing terrorist populations (particularly within prisons) has led to a focus on prescriptive programmes, with a distinct lack of exploration into the drivers for a person to disengage or deradicalise from violence. It has been argued that, in tackling a snowballing issue, interventions have moved too quickly for both theory development and methodological structure. This overly quick acceptance of a term that lacks rigorous testing, measuring, and monitoring means that there is a distinct lack of evidence that deradicalisation is a genuine process or phenomenon, leaving academics retrospectively attempting to design frameworks and interventions around a concept that is not truly understood. The UK Home Office has openly acknowledged the lack of empirical data on this subject. This lack of evidence has a direct impact on policy and intervention development. Extremism and deradicalisation are issues that affect public health outcomes on a global scale, to the point that terrorism has now been added to the list of causes of trauma – both directly, for victims of an attack, and indirectly, for witnesses, children and ordinary citizens who live in daily fear. This study critiques current deradicalisation discourses to establish whether public health approaches offer opportunities for development. The research begins by exploring the theoretical constructs of both deradicalisation and public health issues, asking: What does deradicalisation involve? Is there an evidential base on which deradicalisation theory has established itself? What theory are public health interventions devised from? What does success look like in both fields?
From this base, current deradicalisation practices are then explored through examples of work already being carried out. The critiques are organised into discussion points: language; the difficulties of conducting empirical studies; and the issues around outcome measurement that deradicalisation interventions face. This study argues that a public health approach to deradicalisation offers the opportunity to bring clarity to the definitions of radicalisation, identify what could be modified through intervention, and offer insights into the evaluation of interventions. Rather than focusing on one element of deradicalisation and analysing it in isolation, a public health approach allows for what the literature has pointed out is missing: a comprehensive analysis of current interventions and information on creating efficacy monitoring systems. Interventions, policies, guidance, and practices in both the UK and Australia will be compared and contrasted, owing to the joint nature of this research between Sheffield Hallam University and La Trobe University, Melbourne.

Keywords: radicalisation, deradicalisation, violent extremism, public health

Procedia PDF Downloads 66
1183 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor Under Scour, and Anchor Transportation and Installation (T&I)

Authors: Vinay Kumar Vanjakula, Frank Adam

Abstract:

The generation of electricity through wind power is one of the leading renewable energy generation methods. Owing to the abundant higher wind speeds far from shore, the construction of offshore wind turbines began in recent decades. However, the installation of foundation-based (monopile) offshore wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines, building on experience from the oil and gas industry, has been developed. For such a floating system, stabilization in harsh conditions is a challenging task, for which a robust heavy-weight gravity anchor is needed. Transporting such an anchor would require a heavy vessel, which increases the cost. To lower the cost, the gravity anchor is designed with ballast chambers that allow the anchor to float while being towed and to be filled with water when lowered to the planned seabed location. The presence of such a large structure may influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, breaking of wave or current flows, and pressure differentials around the seabed sediment. These changes influence the installation process. Also, after installation and under operating conditions, the flow around the anchor may carry off local seabed sediment, resulting in scour (erosion). Both are threats to the structure’s stability. In recent decades, research and knowledge of scouring on fixed structures (bridges and monopiles) in rivers and oceans have developed rapidly, but very limited research has addressed scouring around a bluff-shaped gravity anchor. The objective of this study is to apply different numerical models to simulate anchor towing under waves and in calm water. The anchor-lowering part involves investigating anchor movements at given water depths under waves/currents.
The motions of anchor drift, heave, and pitch are of special focus. The study further addresses anchor scour once the anchor is installed on the seabed: the underwater current flowing around the anchor induces vortices, mainly at the front and corners, that drive soil erosion. Scouring of a submerged gravity anchor is an interesting research question, since the flow passes not only around the anchor but also over the structure, forming different flow vortices. The achieved results and the numerical model will be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM and similar software.

Keywords: anchor lowering, anchor towing, gravity anchor, computational fluid dynamics, scour

Procedia PDF Downloads 169
1182 Erasmus+ Program in Vocational Education: Effects of European International Mobility in Portuguese Vocational Schools

Authors: José Carlos Bronze, Carlinda Leite, Angélica Monteiro

Abstract:

The creation of the Erasmus Program in 1987 represented a milestone in promoting and funding international mobility in higher education in Europe. Its effects were so significant that they influenced the creation of the European Higher Education Area through the Bologna Process and ensured the program’s continuation. Over recent decades, the escalating numbers of participants and levels of funding have prompted significant scientific studies of the program’s effects on higher education. More recently, in 2014, the program was renamed “Erasmus+” when it expanded into other fields of education, namely Vocational Education and Training (VET). Despite having now run in this field of education for a decade (2014-2024), its effects on VET remain less studied and less known, while the higher education field keeps attracting researchers’ attention. Given this gap, it becomes relevant to study the effects of E+ on VET, particularly in the priority domains of the program: “Inclusion and Diversity,” “Participation in Democratic Life, Common Values and Civic Engagement,” “Environment and Fight Against Climate Change,” and “Digital Transformation.” The latter has recently been emphasized due to the COVID-19 pandemic, which forced so-called emergency remote teaching, leading schools to transform and adapt quickly to a new reality regardless of the preparedness of teachers and students. Together with the remaining E+ priorities, it relates directly to an emancipatory perspective of education sustained in soft skills such as critical thinking, intercultural awareness, autonomy, active citizenship, teamwork, and problem-solving, among others. Against this background, it is relevant to know the effects of E+ on the VET field, namely how international mobility instigates digitalization processes and supports emancipatory aims therein.
As an education field that more directly connects to hard skills and an instrumental approach oriented to the labor market’s needs, a study was conducted to determine the effects of international mobility on developing digital literacy and soft skills in the VET field. In methodological terms, the study used semi-structured interviews with teaching and non-teaching staff from three VET schools who are strongly active in the E+ Program. The interviewees were three headmasters, four mobility project managers, and eight teachers experienced in international mobility. The data was subjected to qualitative content analysis using the NVivo 14 application. The results show that E+ international mobility promotes and facilitates the use of digital technologies as a pedagogical resource at VET schools and enhances and generates students’ soft skills. In conclusion, E+ mobility in the VET field supports adopting the program’s priorities by increasing the teachers’ knowledge and use of digital resources and amplifying and generating participants’ soft skills.

Keywords: Erasmus international mobility, digital literacy, soft skills, vocational education and training

Procedia PDF Downloads 34
1181 The Conflict of Grammaticality and Meaningfulness of the Corrupt Words: A Cross-lingual Sociolinguistic Study

Authors: Jayashree Aanand Gajjam

Abstract:

The grammatical tradition in Sanskrit literature emphasizes the importance of the correct use of Sanskrit words or linguistic units (sādhu śabda), which brings meritorious value, while denying the same religious merit to incorrect Sanskrit words (asādhu śabda) and to vernacular or corrupt forms (apa-śabda or apabhraṁśa), even though they may serve communication. The current research, the culmination of doctoral research on sentence definition, studies the difference in comprehension between correct and incorrect word forms in the Sanskrit and Marathi languages of India. Based on a total of 19 experiments (both web-based and classroom-controlled) on approximately 900 Indian readers, it was found that while incorrect forms in Sanskrit are comprehended with lower accuracy than correct word forms, no such difference is seen for Marathi. This is interpreted to mean that incorrect word forms in the native language, or in a language spoken daily (such as Marathi), pose a lower cognitive load than in a language that is not spoken daily but only read (such as Sanskrit). The theoretical basis of the research problem is as follows: among the three main schools of language science in ancient India, the Vaiyākaraṇas (Grammarians) hold that corrupt word forms have their own expressive power, since they convey meaning, whereas the Mīmāṁsakas (the Exegesists) and the Naiyāyikas (the Logicians) believe that corrupt forms can only convey meaning indirectly, by recalling their association and similarity with the correct forms. The Grammarians argue that the vernaculars, born of a speaker’s inability to speak proper Sanskrit, were regarded as degenerate versions or fallen forms of the ‘divine’ Sanskrit language, while speakers who could use proper Sanskrit, the standard language, were considered Śiṣṭa (‘elite’).
The different ideas of the different schools strictly adhere to their textual dispositions. For the last few years, sociolinguists have agreed that no variety of language is inherently better than any other; all are equal as long as they serve the needs of the people who use them. Although the standard form of a language may offer its speakers some advantages, the non-standard variety is considered the most natural style of speaking. This is visible in the results. If, as the theory suggests, incorrect word forms trigger recall of the correct forms in the reader, this would add one extra step to sentential cognition, leading to a higher cognitive load and lower accuracy. This has not been the case for the Marathi language. Although speaking and listening to the vernaculars is common practice while reading them is not, Marathi readers readily and accurately comprehended the incorrect word forms in the sentences, in contrast to the Sanskrit readers. The primary reason is that Sanskrit is spoken and read only in the standard form, and vernacular forms of Sanskrit are not found in conversational data.

Keywords: experimental sociolinguistics, grammaticality and meaningfulness, Marathi, Sanskrit

Procedia PDF Downloads 126
1180 Attachment Theory and Quality of Life: Grief Education and Training

Authors: Jane E. Hill

Abstract:

Quality of life is an important concern for many. Everyone will experience some type of loss within his or her lifetime: loss due to a breakup, separation, divorce, estrangement, or death; loss of a job; loss of capacity; or loss caused by human-made or natural disasters. An individual’s response to such a loss is unique, and not everyone will seek services for their grief. Counseling can promote positive outcomes for grieving clients by addressing the client’s personal loss and helping the client process their grief. However, a lack of understanding on the part of counselors of how people grieve may result in negative client outcomes such as poor health, psychological distress, or an increased risk of depression. Education and training in grief counseling can improve counselors’ problem recognition and treatment-planning skills. The purpose of this study was to examine whether master’s degree counseling students in programs accredited by the Council for Accreditation of Counseling and Related Educational Programs (CACREP) view themselves as having been adequately trained in grief theories and skills. Many people deal with grief issues that prevent them from having joy or purpose in their lives and leave them unable to engage in positive opportunities or relationships. This study examined CACREP-accredited master’s counseling students’ self-reported competency, training, and education in providing grief counseling. The implications for positive social change arising from the research include incorporating and promoting education and training in grief theories and skills across counseling programs, and providing motivation to establish professional standards for grief training and practice in the mental health counseling field. The theoretical foundation was modern grief theory based on John Bowlby’s work on attachment theory.
The overall research question was how competent master’s-level counselors consider themselves to be with regard to the education and training in grief theories and counseling skills they received in their CACREP-accredited studies. The author used a non-experimental, one-shot survey, comparative quantitative research design. Cicchetti’s Grief Counseling Competency Scale (GCCS) was administered to CACREP master’s-level counseling students enrolled in their practicum or internship experience, resulting in 153 participants. A MANCOVA found significant relationships between coursework taken and (a) perceived assessment skills (p = .029), (b) perceived treatment skills (p = .025), and (c) perceived conceptual skills and knowledge (p = .003). The results provide insight for CACREP master’s-level counseling programs exploring and discussing the inclusion of education and training in grief theories and skills in their curricula.

Keywords: counselor education and training, grief education and training, grief and loss, quality of life

Procedia PDF Downloads 191
1179 Biocultural Biographies and Molecular Memories: A Study of Neuroepigenetics and How Trauma Gets under the Skull

Authors: Elsher Lawson-Boyd

Abstract:

In the wake of the Human Genome Project, the life sciences have undergone some fascinating changes. In particular, conventional beliefs about gene expression are being challenged by advances in the postgenomic sciences, especially the field of epigenetics. Epigenetics is the modification of gene expression without changes in the DNA sequence. In other words, epigenetics dictates that gene expression – the process by which the instructions in DNA are converted into products like proteins – is not solely controlled by DNA itself. Unlike the gene-centric theories of heredity that characterized much of the 20th century (in which genes were considered to have almost god-like power to create life), epigenetics insists on the role of environmental ‘signals’ or ‘exposures’ in gene expression, a point that radically deviates from gene-centric thinking. Science and Technology Studies (STS) scholars have shown that epigenetic research is having vast implications for the ways in which chronic, non-communicable diseases are conceptualized, treated, and governed. However, to the author’s knowledge, there have not yet been any in-depth sociological engagements with neuroepigenetics that examine how the field is affecting mental health and trauma discourse. In this paper, the author discusses preliminary findings from a doctoral ethnographic study of neuroepigenetics, trauma, and embodiment. Specifically, this study investigates the kinds of causal relations neuroepigenetic researchers are making between experiences of trauma and the development of mental illnesses like complex post-traumatic stress disorder (PTSD), both across a human lifetime and across generations. Using qualitative interviews and non-participant observation, the author focuses on two public-facing research centers based in Melbourne: the Florey Institute of Neuroscience and Mental Health (FNMH) and the Murdoch Children’s Research Institute (MCRI).
Preliminary findings indicate that a great deal of ambiguity characterizes this nascent field, particularly when animal-model experiments are employed and the results are translated into human frameworks. Nevertheless, researchers at the FNMH and MCRI strongly suggest that adverse and traumatic life events have a significant effect on gene expression, especially when experienced during early development. Furthermore, they predict that neuroepigenetic research will have substantial implications for the ways in which mental illnesses like complex PTSD are diagnosed and treated. These preliminary findings shed light on why medical and health sociologists have good reason to engage with, and de-black-box, ideas emerging from the postgenomic sciences, as these may have significant consequences for vulnerable populations not only in Australia but also in the Global South.

Keywords: genetics, mental illness, neuroepigenetics, trauma

Procedia PDF Downloads 125