Search results for: bulk barrier
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1384

64 Explosive Clad Metals for Geothermal Energy Recovery

Authors: Heather Mroz

Abstract:

Geothermal fluids can provide a nearly unlimited source of renewable energy but are often highly corrosive due to dissolved carbon dioxide (CO2), hydrogen sulphide (H2S), ammonia (NH3) and chloride ions. The corrosive environment drives material selection for many components, including piping, heat exchangers and pressure vessels, to higher alloys of stainless steel, nickel-based alloys and titanium. The use of these alloys is cost-prohibitive and does not offer the pressure rating of carbon steel. One solution, explosion cladding, has been proven to reduce the capital cost of geothermal equipment while retaining the mechanical and corrosion properties of both the base metal and the clad surface metal. Explosion cladding is a solid-state welding process that uses precision explosions to bond two dissimilar metals while retaining their mechanical, electrical and corrosion properties. The process is commonly used to clad steel with a thin layer of corrosion-resistant alloy, such as stainless steel, brass, nickel, silver, titanium, or zirconium. Additionally, explosion welding can join a wide array of compatible and non-compatible metals, with more than 260 metal combinations possible. The explosion weld is achieved in milliseconds; therefore, no bulk heating occurs, and the metals experience no dilution. By adhering to a strict set of manufacturing requirements, both the shear strength and tensile strength of the bond will exceed the strength of the weaker metal, ensuring the reliability of the bond. For over 50 years, explosion cladding has been used in the oil and gas and chemical processing industries and has provided significant economic benefit in reduced maintenance and lower capital costs over solid construction. The focus of this paper will be on the many benefits of using explosion-clad metals in process equipment instead of more expensive solid alloy construction. 
The paper will describe the method of clad-plate production by explosion welding, as well as the methods employed to ensure sound bonding of the metals. It will also cover the origins of explosion cladding and recent technological developments. Traditionally, explosion-clad plate was formed into vessels, tube sheets and heads, but recent advances include explosion-welded piping. The final portion of the paper will give examples of the use of explosion-clad metals in geothermal energy recovery. The classes of materials used for geothermal brine will be discussed, including stainless steels, nickel alloys and titanium. These examples will include heat exchangers (tube sheets), high-pressure and horizontal separators, standard-pressure crystallizers, piping and well casings. It is important to educate engineers and designers on material options as they develop equipment for geothermal resources. Explosion cladding is a niche technology that can be successful in many situations, like geothermal energy recovery, where high-temperature, high-pressure and corrosive environments are typical. Applications for explosion-clad metals include vessel and heat exchanger components as well as piping.

Keywords: clad metal, explosion welding, separator material, well casing material, piping material

Procedia PDF Downloads 140
63 Increase in the Shelf Life of Anchovy (Engraulis ringens) from Flaying then Bleeding in a Sodium Citrate Solution

Authors: Santos Maza, Enzo Aldoradin, Carlos Pariona, Eliud Arpi, Maria Rosales

Abstract:

The objective of this study was to investigate the effect of flaying and then bleeding anchovy (Engraulis ringens) immersed in a sodium citrate solution. Anchovy is a pelagic fish that readily deteriorates due to its high content of polyunsaturated fatty acids. As such, within the Peruvian food industry, the shelf life of frozen anchovy is only 6 months; this short duration is a barrier to its use for direct human consumption. Thus, almost all anchovy captured by the fishing industry is eventually used in the production of fishmeal. We offer this as an alternative to the typical production process in order to increase shelf life. In the present study, 100 kg of anchovies were captured and immediately mixed with ice on the ship, maintaining high sensory quality (e.g., a blue-colored back) while still arriving for processing less than 2 h after capture. Anchovies with a fat content of 3% were immediately flayed (i.e., reducing subcutaneous fat), beheaded, gutted and bled (i.e., removing hemoglobin) by immersion in water (control) or in a solution of 2.5% sodium citrate (treatment), then frozen at -30 °C for 8 h in 2 kg batches. Subsequent glazing and storage at -25 °C for 14 months completed the experimental protocol. The peroxide value (PV), acidity (A), fatty acid profile (FAP), thiobarbituric acid reactive substances (TBARS), heme iron (HI), pH and sensory attributes of the samples were evaluated monthly. The results of the PV, TBARS, A, pH and sensory analyses displayed significant differences (p<0.05) between the treatment and control samples, with the sodium citrate-treated samples showing better preservation. Specifically, at the beginning of the study, flayed, beheaded, gutted and bled anchovies displayed a low fat content (1.5%) with moderate PV, A and TBARS values, and were not rejected by sensory analysis. HI values and FAP displayed varying behavior; however, the HI results did not reveal a decreasing trend. 
This result indicates that iron was maintained as HI and did not convert into non-heme iron, which is known to be the primary catalyst of lipid oxidation in fish. According to the FAP results, the largest share of fatty acids was polyunsaturated fatty acids (PFA), followed by saturated fatty acids (SFA) and then monounsaturated fatty acids (MFA). According to sensory analysis, the shelf life of flayed, beheaded and gutted anchovy (control and treatment) was 14 months. This shelf life was reached at the laboratory level because high-quality anchovies were used and immediately flayed, beheaded, gutted, bled and frozen. Therefore, it is possible to maintain the shelf life of anchovies for a long time. Overall, this method displayed a large increase in shelf life relative to that commonly seen for anchovies in this industry. However, these results should be extrapolated to industrial scales to propose better processing conditions and improve the quality of anchovy for direct human consumption.

Keywords: sodium citrate solution, heme iron, polyunsaturated fatty acids, shelf life of frozen anchovy

Procedia PDF Downloads 271
62 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross over classic physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Denis Gabor’s invention: holography. But the major difficulty here is the lack of a suitable recording medium. So some enhancements were essential, and the 2D version of bulk metamaterials was introduced: the so-called metasurface. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell’s equations. In this context, integral methods are emerging as an important tool for studying electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. But solving this kind of equation tends to become more complicated and time-consuming as the structural complexity increases. Here, the equivalent circuits method offers the most scalable way to develop an integral-method formulation. In fact, to ease the resolution of Maxwell’s equations, the method of Generalised Equivalent Circuits was proposed to carry the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electric image of the studied structure using the discontinuity-plane paradigm, taking its environment into account. The electromagnetic state of the discontinuity plane is then described by generalised test functions, which are modelled by virtual sources that do not store energy. 
The environmental effects are included by the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface’s building block consists of a thin gold film, a SiO₂ dielectric spacer and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene’s chemical potential on the unit-cell input impedance. It was found that varying the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we were able to determine that phase modulation is realized by adjusting graphene’s complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers full 2π reflection-phase control.
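The tunability described in this abstract rests on graphene's surface conductivity varying with chemical potential. As an illustrative sketch (not the authors' MoM-GEC formulation), the intraband Kubo term, which dominates at terahertz frequencies, can be evaluated as follows; the relaxation time and temperature are assumed values:

```python
import numpy as np

# Physical constants (SI)
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
kB = 1.380649e-23        # Boltzmann constant, J/K

def graphene_sigma_intra(omega, mu_c_eV, tau=1e-13, T=300.0):
    """Intraband (Drude-like) surface conductivity of graphene from the
    Kubo formula, the dominant contribution at THz frequencies.
    omega: angular frequency (rad/s); mu_c_eV: chemical potential (eV);
    tau: relaxation time (s); T: temperature (K)."""
    mu_c = mu_c_eV * e
    prefactor = 2j * e**2 * kB * T / (np.pi * hbar**2)
    return prefactor / (omega + 1j / tau) * np.log(2.0 * np.cosh(mu_c / (2.0 * kB * T)))

f = 2e12                  # 2 THz operating frequency (assumed)
omega = 2.0 * np.pi * f
for mu in (0.1, 0.3, 0.5):  # sweep of chemical potentials in eV
    s = graphene_sigma_intra(omega, mu)
    print(f"mu_c = {mu} eV -> sigma = {s.real*1e3:.3f} + {s.imag*1e3:.3f}j mS")
```

Sweeping the chemical potential changes both the magnitude and phase of the surface conductivity, which is the physical handle behind the reflection-phase control discussed above.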

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 157
61 Indigenous Children Doing Better through Mother Tongue Based Early Childhood Care and Development Center in Chittagong Hill Tracts, Bangladesh

Authors: Meherun Nahar

Abstract:

Background: The Chittagong Hill Tracts (CHT) is one of the most diverse regions in Bangladesh in terms of geography, ethnicity, culture and traditions, and is home to thirteen indigenous ethnic peoples. In Bangladesh, many indigenous children aged 6-10 years remain out of school, and the majority of those who do enroll drop out before completing primary school. Different studies indicate that the dropout rate of indigenous children is much higher than the estimated national rate, with children dropping out especially in the early years of primary school. One of the most critical barriers for these children is that they do not understand the national language used in government pre-primary schools, and their school readiness and development suffer as a result. In this situation, indigenous children are excluded from mainstream quality education. To address this issue, Save the Children in Bangladesh and other organizations are implementing a community-based Mother Tongue-Based Multilingual Education program (MTBMLE) in the Chittagong Hill Tracts (CHT) to improve the enrolment rate in Government Primary Schools (GPS), reduce the dropout rate, and support quality education. In this connection, Save the Children conducted comparative research in the Chittagong Hill Tracts on children's school readiness in mother tongue-based versus non-mother tongue-based ECCD centers. Objectives of the study: to assess the school readiness and development of children in mother language-based and non-mother language-based ECCD centers, and to assess community perceptions of both types of center. Methodology: The study used FGDs, KIIs, in-depth interviews and observation. Both qualitative and quantitative research methods were followed. The quantitative part had three components (school readiness, classroom observation and head teacher interviews), and the qualitative part followed the FGD technique. 
Findings: The interviews with children under the school readiness component showed that, in general, mother language (ML) based ECCD children are doing noticeably better in all four areas (knowledge, numeracy, fine motor skills and communication) than their peers from non-mother language (NML) based centers. ML students appear far better skilled in concepts about print, as most of them could identify the cover and title of the book that was shown to them. They also knew where to begin reading the book and could correctly point to the letter that was read. A big difference was found in letter identification: 89.3% of ML students could identify letters correctly, whereas only 30% of NML students could do the same. The classroom observation data show that ML children are more active and remain more engaged in the classroom than NML students. Also, ML teachers appeared more engaged in explaining issues relating to general knowledge or leading children in rhyming and singing, rather than only reading from textbooks. The participants of the FGDs were very enthusiastic about using the mother language as the medium of teaching in pre-schools. They opined that this initiative elates children to attend school and enables them to continue primary schooling without facing any language barrier.

Keywords: Chittagong hill tracts, early childhood care and development (ECCD), indigenous, mother language

Procedia PDF Downloads 104
60 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required for solving the complex and global engineering problems that we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve. Such problems are called NP-hard (non-deterministic polynomial-time hard) in the literature. The main reasons for recommending metaheuristic algorithms for various problems are their use of simple concepts, simple mathematical equations and structures, derivative-free mechanisms, avoidance of local optima, and fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices. Accordingly, this approach can also be used in trending application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach. It is based on chaos theory and helps the underlying algorithm improve population diversity and convergence speed. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. This algorithm identifies four types of chimpanzee groups: attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the varied intelligence and sexual motivations of chimpanzees. However, the original algorithm struggles with convergence rate and with escaping local optimum traps when solving high-dimensional problems. 
Although it and some of its variants use strategies to overcome these problems, these are observed to be insufficient. Therefore, in this study, a newly expanded variant is described. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for transitions between phases. This flexible structure addresses the slow convergence of ChOA and improves its accuracy on multidimensional problems, aiming at success on global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow convergence problem of ChOA; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison, since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
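As a sketch of the chaotic ingredient mentioned above (not the Ex-ChOA update rules themselves, which the abstract does not specify), a common way to inject chaos into a metaheuristic is to initialize or perturb the population with a logistic map instead of uniform random draws; the map parameters and search bounds below are illustrative:

```python
import numpy as np

def chaotic_population(n_agents, dim, lower, upper, x0=0.7, r=4.0):
    """Initialize a search population with the logistic map x <- r*x*(1-x),
    a standard chaos-based alternative to uniform random initialization
    that spreads agents over the search space."""
    pop = np.empty((n_agents, dim))
    x = x0
    for i in range(n_agents):
        for j in range(dim):
            x = r * x * (1.0 - x)          # chaotic iterate, stays in (0, 1)
            pop[i, j] = lower + x * (upper - lower)  # scale to the bounds
    return pop

# 10 agents in a 5-dimensional search space over [-100, 100]
pop = chaotic_population(n_agents=10, dim=5, lower=-100.0, upper=100.0)
print(pop.shape)
```

The same map is often also used to replace random coefficients inside the position-update equations; either way, the deterministic-but-aperiodic sequence is what improves population diversity.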

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 57
59 Reinventing Business Education: Filling the Knowledge Gap on the Verge of the 4th Industrial Revolution

Authors: Elena Perepelova

Abstract:

As the world approaches the 4th industrial revolution, income inequality has become one of the major societal concerns. Displacement of workers by technology is becoming a reality, and in return, new skills and competencies are required. More than ever, education needs to help individuals understand the wider world around them and make global connections. The author argues for the necessity of incorporating business, economics and finance studies into primary education and offering access to business education to the general population, with the primary objective of understanding how the world functions. The paper offers a fresh look at existing business theory through an innovative program called 'Usefulnomics'. Realizing that the subjects of economics, finance and business are perceived as overwhelming by a large part of the population, the author has taken a holistic approach and created a program that simplifies the definitions of existing concepts and shifts from the traditional breakdown into subjects and specialties to a teaching method based exclusively on real-life case studies and group debates, in order to better grasp the concepts and put them into context. The paper's findings are the result of a two-year project and experimental work with students from the UK, USA, Malaysia, Russia, and Spain. The author conducted extensive research through online and in-person classes and workshops, as well as in-depth interviews of primary and secondary grade students, to assess their understanding of what a business is, how businesses operate and the role businesses play in their communities. The findings clearly indicate that students of all ages often understood business concepts and processes only in an intuitive way, which resulted in misconceptions and gaps in knowledge. 
While knowledge gaps were easier to identify and correct in primary school students, as students' age increased, the learning process became distorted by career choices, political views, and the students' actual (or perceived) economic status. While secondary school students recognized more concepts, their real understanding was often on par with that of upper primary school students. The research has also shown that a lack of correct vocabulary created a strong barrier to communication and to real-life application or further learning. Based on these findings, each key business concept was practiced and put into context with small groups of students in order to design content and formats that would be well accepted and understood by the target group. As a result, the final learning program package was based on case studies from modern daily life and used a wide range of examples: from popular brands and well-known companies to basic commodities. In the final stage, the content and format were put into practice in larger classrooms. The author would like to share the key findings from the research and the resulting learning program, as well as present new ideas on how the program could be further enriched and adapted so that schools and organizations can deliver it.

Keywords: business, finance, economics, lifelong learning, XXI century skills

Procedia PDF Downloads 101
58 Development and Application of Humidity-Responsive Controlled Release Active Packaging Based on Electrospun Nanofibers and in Situ Grown Polymeric Film in Food Preservation

Authors: Jin Yue

Abstract:

Fresh produce, especially fruits, vegetables, meats and aquatic products, has a limited shelf life and is highly susceptible to deterioration. Essential oils (EOs) extracted from plants have excellent antioxidant and broad-spectrum antibacterial activities, and they can serve as natural food preservatives. But EOs are volatile, water-insoluble, pungent, and decompose easily under light and heat. Many approaches have been developed to improve the solubility and stability of EOs, such as polymeric films, coatings, nanoparticles, nano-emulsions and nanofibers. The construction of active packaging films that can incorporate EOs with high loading efficiency and release them in a controlled manner has received great attention, yet it remains difficult to achieve accurate release of antibacterial compounds at specific target locations. In this research, a relative humidity-responsive packaging material was designed, employing the electrospinning technique to fabricate a nanofibrous film loaded with 4-terpineol/β-cyclodextrin inclusion complexes (4-TA/β-CD ICs). Functioning as an innovative food packaging material, the film demonstrated commendable attributes, including a pleasing appearance, thermal stability, mechanical strength, and effective barrier properties. The incorporation of the inclusion complexes greatly enhanced the antioxidant and antibacterial activity of the film, particularly against Shewanella putrefaciens, with an inhibitory efficiency of up to 65%. Crucially, the film realized controlled release of 4-TA under 98% relative humidity: water molecules plasticize the polymers, swell the polymer chains, and disrupt the hydrogen bonds within the cyclodextrin inclusion complex. This film, with its long-term antimicrobial effect, successfully extended the shelf life of Litopenaeus vannamei shrimp to 7 days at 4 °C. 
To further improve the loading efficiency and long-acting release of EOs, we synthesized γ-cyclodextrin metal-organic frameworks (γ-CD-MOFs) and then anchored them efficiently on a chitosan-cellulose (CS-CEL) composite film by an in situ growth method for the controlled release of carvacrol (CAR). We found that the growth efficiency of γ-CD-MOFs was highest when the concentration of the CEL dispersion was 5%. Anchoring γ-CD-MOFs on the CS-CEL film significantly increased its surface area from 1.0294 m2/g to 43.3458 m2/g. Molecular docking and 1H NMR spectra indicated that γ-CD-MOF has a better ability to complex and stabilize CAR molecules than γ-CD. In addition, under high relative humidity the release of CAR reached 99.71±0.22% on the 10th day, while under 22% RH the release of CAR plateaued at 14.71±4.46%. The inhibition rate of this film against E. coli, S. aureus and B. cinerea was more than 99%, and it extended the shelf life of strawberries to 7 days. By combining the merits of natural biopolymers and MOFs, this active packaging offers great potential as a substitute for traditional packaging materials.
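The contrast between the two humidity regimes can be illustrated with a simple first-order release model, a common empirical fit for cyclodextrin-based carriers. The rate constants below are hypothetical values chosen only so the curves roughly bracket the reported endpoints (about 99.7% released by day 10 under high RH versus ~15% under 22% RH), not fitted values from the study:

```python
import numpy as np

def released_fraction(t_days, k):
    """First-order release model M_t / M_inf = 1 - exp(-k * t).
    t_days: elapsed time in days; k: release rate constant (1/day)."""
    return 1.0 - np.exp(-k * t_days)

# Hypothetical rate constants for the two humidity conditions
k_high_rh = 0.6    # fast release: plasticized, swollen polymer network
k_low_rh = 0.016   # slow release: hydrogen bonds in the complex intact

for day in (1, 5, 10):
    print(f"day {day:>2}: high RH {released_fraction(day, k_high_rh):.2f}, "
          f"low RH {released_fraction(day, k_low_rh):.2f}")
```

A humidity trigger, in this picture, amounts to switching between the two rate constants as water molecules disrupt the host-guest complex.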

Keywords: active packaging, antibacterial activity, controlled release, essential oils, food quality control

Procedia PDF Downloads 41
57 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

Chimneys are generally tall and slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of a chimney, producing unwanted forces. Vortex-induced oscillation is one such excitation, and it can lead to the failure of the chimney. Vortex-induced oscillation of chimneys is therefore of great concern to researchers and practitioners, since many chimney failures due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, the theoretical models developed with the help of laboratory data are used to analyze chimneys for vortex-induced forces. This calls for a reliability analysis of the predicted responses of chimneys to vortex shedding. Although a sizable literature exists on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, a reliability analysis of chimneys against vortex shedding failure is presented, assuming that the uncertainty in the vortex shedding phenomenon is significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency-domain spectral analysis using a matrix approach. 
For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement is determined. The reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m is taken as an illustrative example. The terrain condition is assumed to correspond to a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
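A minimal numerical sketch of the final step shows how a fragility curve follows from marginalizing a Poisson-type crossing probability over a Gumbel type-I wind-speed distribution. The crossing-rate model and all parameter values below are invented for illustration and stand in for the full Vickery-Basu spectral analysis:

```python
import numpy as np

def gumbel_pdf(v, loc, scale):
    """Gumbel type-I PDF for the annual maximum mean wind speed (m/s)."""
    z = (v - loc) / scale
    return np.exp(-z - np.exp(-z)) / scale

def annual_crossing_prob(threshold, crossing_rate, loc=25.0, scale=4.0, T=3600.0):
    """Annual probability that tip displacement crosses `threshold` (m),
    marginalized over the Gumbel wind-speed distribution.
    crossing_rate(b, v): assumed mean up-crossing rate of level b at speed v;
    T: exposure duration (s) at the annual-maximum wind speed."""
    v = np.linspace(5.0, 60.0, 2000)
    # Poisson-crossing approximation: P(cross | v) = 1 - exp(-nu * T)
    p_cond = 1.0 - np.exp(-crossing_rate(threshold, v) * T)
    return float(np.sum(p_cond * gumbel_pdf(v, loc, scale)) * (v[1] - v[0]))

# Illustrative crossing-rate model (NOT the Vickery-Basu spectrum):
# rate grows with wind speed and decays with the threshold level.
demo_rate = lambda b, v: 1e-4 * np.exp(-(b / (0.001 * v**2))**2)

for b in (0.1, 0.3, 0.5):  # threshold tip displacements in meters
    print(f"b = {b} m: annual crossing probability = "
          f"{annual_crossing_prob(b, demo_rate):.4f}")
```

Plotting the probability against the threshold level gives the fragility curve; the reliability estimate is its complement at the design threshold.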

Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration

Procedia PDF Downloads 141
56 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a ‘complex’ and ‘extreme condition’ of cognitive tasks, while consecutive interpreting (CI) does not require sharing processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demand and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-made inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech and original speech texts, with a total running word count of 321,960. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may tax cognitive resources differently, and perhaps more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text into the output in different ways, each minimizing the cognitive load in its own manner. We reason about the results in a framework in which cognitive demand is exerted on both the maintaining and the coordinating components of working memory. 
On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI makes interpreters keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation and are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of the cognitive demand during both interpreting types.
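Several of the lexical indices named above are straightforward to compute once a text is tokenized. The sketch below uses a toy sentence and a stand-in function-word list, not the study's actual corpus or word lists:

```python
from collections import Counter

def lexical_features(tokens, function_words):
    """Compute type-token ratio, hapax legomena count and lexical density
    for a tokenized text.
    tokens: list of word tokens; function_words: set of function words."""
    counts = Counter(tokens)
    ttr = len(counts) / len(tokens)                      # types / tokens
    hapax = sum(1 for c in counts.values() if c == 1)    # words occurring once
    content = sum(1 for w in tokens if w not in function_words)
    density = content / len(tokens)                      # content-word share
    return {"ttr": ttr, "hapax": hapax, "lexical_density": density}

# Stand-in function-word list for the toy example
FUNCTION_WORDS = {"the", "a", "of", "in", "to", "and", "is", "was"}
sample = "the interpreter kept the structure of the source text".split()
print(lexical_features(sample, FUNCTION_WORDS))
```

Lower type-token ratio and lexical density in one interpreting mode relative to the other is what the study reads as lexical simplification.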

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 149
55 Transport of Reactive Carbo-Iron Composite Particles for in situ Groundwater Remediation Investigated at Laboratory and Field Scale

Authors: Sascha E. Oswald, Jan Busch

Abstract:

The in-situ dechlorination of contamination by chlorinated solvents in groundwater via zero-valent iron (nZVI) is potentially an efficient and prompt remediation method. A key requirement is that nZVI has to be introduced in the subsurface in a way that substantial quantities of the contaminants are actually brought into direct contact with the nZVI in the aquifer. Thus it could be a more flexible and precise alternative to permeable reactive barrier techniques using granular iron. However, nZVI are often limited by fast agglomeration and sedimentation in colloidal suspensions, even more so in the aquifer sediments, which is a handicap for the application to treat source zones or contaminant plumes. Colloid-supported nZVI show promising characteristics to overcome these limitations and Carbo-Iron Colloids is a newly developed composite material aiming for that. The nZVI is built onto finely ground activated carbon of about a micrometer diameter acting as a carrier for it. The Carbo-Iron Colloids are often suspended with a polyanionic stabilizer, and carboxymethyl cellulose is one with good properties for that. We have investigated the transport behavior of Carbo-Iron Colloids (CIC) on different scales and for different conditions to assess its mobility in aquifer sediments as a key property for making its application feasible. The transport properties were tested in one-dimensional laboratory columns, a two-dimensional model aquifer and also an injection experiment in the field. Those experiments were accompanied by non-invasive tomographic investigations of the transport and filtration processes of CIC suspensions. The laboratory experiments showed that a larger part of the CIC can travel at least scales of meters for favorable but realistic conditions. Partly this is even similar to a dissolved tracer. 
Under less favorable conditions this distance can be much smaller, and in all cases a certain fraction of the injected CIC is retained, mainly shortly after entering the porous medium. As a field experiment, a horizontal flow field was established between two wells 5 meters apart in a confined, shallow aquifer at a contaminated site in the North German lowlands. First, a tracer test was performed and a basic model was set up to define the design of the CIC injection experiment. Then CIC suspension was introduced into the aquifer at the injection well while the second well was pumped; samples were taken there to observe the breakthrough of CIC, based on direct visual inspection and on total particle and iron concentrations of water samples analyzed later in the laboratory. It could be concluded that at least 12% of the injected CIC amount reached the extraction well in due course, some of it traveling distances larger than 10 meters in the non-uniform dipole flow field. This demonstrated that these CIC particles are mobile enough to reach larger volumes of a contaminated aquifer and to react there with dissolved contaminants in the pore space. They therefore seem well suited for groundwater remediation by in-situ formation of reactive barriers for chlorinated-solvent plumes, or even for source removal.
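The retention pattern reported here (a fraction retained shortly after injection, the rest traveling meters) is often summarized with classical colloid filtration theory, in which mobile-particle concentration decays exponentially with travel distance. The sketch below is purely illustrative: the filter coefficient value is an assumption chosen for demonstration, not a measurement from this study.

```python
import math

def traveling_fraction(distance_m: float, filter_coeff_per_m: float) -> float:
    """Fraction of injected colloids still mobile after a given travel
    distance, assuming first-order (clean-bed) filtration: C/C0 = exp(-lambda*x)."""
    return math.exp(-filter_coeff_per_m * distance_m)

# Hypothetical filter coefficient of 0.4 per meter: over the 5 m well
# spacing of the field test, about 13-14% of the particles would remain
# mobile, the same order as the >= 12% breakthrough reported above.
print(round(traveling_fraction(5.0, 0.4), 3))  # → 0.135
```

A fitted filter coefficient of this kind is how column breakthrough data are usually reduced to a single mobility parameter for scale-up.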

Keywords: carbo-iron colloids, chlorinated solvents, in-situ remediation, particle transport, plume treatment

Procedia PDF Downloads 228
54 Biofuels from Hybrid Poplar: Using Biochemicals and Wastewater Treatment as Opportunities for Early Adoption

Authors: Kevin W. Zobrist, Patricia A. Townsend, Nora M. Haider

Abstract:

Advanced Hardwood Biofuels Northwest (AHB) is a consortium funded by the United States Department of Agriculture (USDA) to research the potential for a system to produce advanced biofuels (jet fuel, diesel, and gasoline) from hybrid poplar in the Pacific Northwest region of the U.S. An Extension team was established as part of the project to examine community readiness and willingness to adopt hybrid poplar as a purpose-grown bioenergy crop. The Extension team surveyed key stakeholder groups, including growers, Extension professionals, policy makers, and environmental groups, to examine attitudes and concerns about growing hybrid poplar for biofuels. The surveys found broad skepticism about the viability of such a system. The top concern for most stakeholder groups was economic viability and the availability of predictable markets. Growers had additional concerns stemming from negative past experience with hybrid poplar as an unprofitable endeavor for pulp and paper production. Additional barriers identified included overall land availability and the availability of water and water rights for irrigation in dry areas of the region. Since the beginning of the project, oil and natural gas prices have plummeted due to rapid increases in domestic production. This has exacerbated the problem of economic viability by making biofuels even less competitive with fossil fuels. However, the AHB project has identified intermediate market opportunities to use poplar as a renewable source for biochemicals otherwise produced by petroleum refineries, such as acetic acid, ethyl acetate, ethanol, and ethylene. These chemicals can be produced at lower cost with higher yields and higher, more stable prices. Despite these promising market opportunities, the survey results suggest that it will still be challenging to induce growers to adopt hybrid poplar. Early adopters will be needed to establish an initial feedstock supply for a budding industry.
Through demonstration sites and outreach events to various stakeholder groups, the project attracted interest from wastewater treatment facilities, which already grow hybrid poplar plantations for applying biosolids and treated wastewater, relying on poplar’s phytoremediation capabilities for further purification, clarification, and nutrient control. Because these facilities already use hybrid poplar, selling the wood as feedstock for a biorefinery would be an added bonus rather than something requiring a high rate of return to compete with other crops and land uses. Through regional workshops and conferences with wastewater professionals, AHB Extension has found strong interest from wastewater treatment operators. In conclusion, there are several significant barriers to developing a successful system for producing biofuels from hybrid poplar, the largest being economic viability. However, there is potential for wastewater treatment facilities to serve as early adopters of hybrid poplar production for intermediate biochemicals and eventually biofuels.

Keywords: hybrid poplar, biofuels, biochemicals, wastewater treatment

Procedia PDF Downloads 250
53 Stability of Porous SiC Based Materials under Relevant Conditions of Radiation and Temperature

Authors: Marta Malo, Carlota Soto, Carmen García-Rosales, Teresa Hernández

Abstract:

SiC-based composites are candidates for structural and functional materials in future fusion reactors, with their main intended role in the blanket modules. In the blanket, the neutrons produced in the fusion reaction slow down, and their energy is transformed into heat in order to finally generate electrical power. In the blanket design named Dual Coolant Lead Lithium (DCLL), a PbLi alloy for power conversion and tritium breeding circulates inside hollow channels called Flow Channel Inserts (FCIs). These FCIs must protect the steel structures against the highly corrosive liquid PbLi and the high temperatures, but also provide electrical insulation in order to minimize magnetohydrodynamic interactions of the flowing liquid metal with the high magnetic field present in a magnetically confined fusion environment. Owing to its nominally high temperature and radiation stability as well as its corrosion resistance, SiC is the main choice for the flow channel inserts. Its significantly lower manufacturing cost makes porous SiC (a dense coating is required to ensure protection against corrosion and to act as a tritium barrier) a firm alternative to SiC/SiC composites for this purpose. This application requires the materials to be exposed to high radiation levels and extreme temperatures, conditions under which previous studies have shown noticeable changes in both the microstructure and the electrical properties of different types of silicon carbide. Both the initial properties and the radiation- and temperature-induced damage depend strongly on the crystal structure, polytype, and impurities/additives, which are determined by the fabrication process, so the development of a suitable material requires full control of these variables.
For this work, several SiC samples with different porosity percentages and sintering additives were manufactured by the so-called sacrificial template method at the Ceit-IK4 Technology Center (San Sebastián, Spain) and characterized at Ciemat (Madrid, Spain). Electrical conductivity was measured as a function of temperature before and after irradiation with 1.8 MeV electrons in the Ciemat HVEC Van de Graaff accelerator up to 140 MGy (~2·10⁻⁵ dpa). Radiation-induced conductivity (RIC) was also examined during irradiation at 550 ºC for different dose rates (from 0.5 to 5 kGy/s). Although no significant RIC was found for any of the samples, an increase in electrical conductivity with irradiation dose, with a linear tendency, was observed for some compositions. First results, however, indicate enhanced radiation resistance for coated samples. Preliminary thermogravimetric tests of selected samples, together with subsequent XRD analysis, allowed the radiation-induced modification of the electrical conductivity to be interpreted in terms of changes in the SiC crystalline structure. Further analysis is needed to confirm this.
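As a quick consistency check on the figures quoted above, the total dose (140 MGy) and the dose-rate range (0.5-5 kGy/s) fix the irradiation times involved; the short sketch below just carries out that arithmetic and assumes nothing beyond the numbers in the abstract.

```python
def irradiation_time_s(total_dose_gy: float, dose_rate_gy_per_s: float) -> float:
    """Time needed to accumulate a total dose at a constant dose rate."""
    return total_dose_gy / dose_rate_gy_per_s

total_dose = 140e6  # 140 MGy expressed in Gy
for rate_kgy_s in (0.5, 5.0):
    t = irradiation_time_s(total_dose, rate_kgy_s * 1e3)
    print(f"{rate_kgy_s} kGy/s -> {t / 3600:.1f} h")
# 0.5 kGy/s requires ~77.8 h of beam time; 5 kGy/s requires ~7.8 h.
```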

Keywords: DCLL blanket, electrical conductivity, flow channel insert, porous SiC, radiation damage, thermal stability

Procedia PDF Downloads 184
52 Improved Morphology in Sequential Deposition of the Inverted Type Planar Heterojunction Solar Cells Using Cheap Additive (DI-H₂O)

Authors: Asmat Nawaz, Ceylan Zafer, Ali K. Erdinc, Kaiying Wang, M. Nadeem Akram

Abstract:

Hybrid halide perovskites with the general formula ABX₃, where X = Cl, Br or I, are considered ideal candidates for the preparation of photovoltaic devices. The most commonly and successfully used hybrid halide perovskite for photovoltaic applications is CH₃NH₃PbI₃ and its analogue prepared from lead chloride, commonly written as CH₃NH₃PbI₃₋ₓClₓ. Some research groups use lead-free (Sn replacing Pb) and mixed-halide perovskites for device fabrication. Both mesoporous and planar structures have been developed. Compared with the mesoporous structure, in which the perovskite material infiltrates a mesoporous metal oxide scaffold, the planar architecture is much simpler and easier for device fabrication. In a typical perovskite solar cell, a perovskite absorber layer is sandwiched between the hole and electron transport layers. Upon irradiation, carriers are created in the absorber layer and can travel through the hole and electron transport layers and the interfaces in between. We fabricated an inverted planar heterojunction solar cell with the structure ITO/PEDOT/perovskite/PCBM/Al via a two-step spin coating method, also called the sequential deposition method. A small amount of a cheap additive, H₂O, was added to PbI₂/DMF to make a homogeneous solution. We prepared four different solutions (without H₂O, and with 1%, 2% and 3% H₂O). After preparation, overnight stirring at 60 ℃ is essential to obtain homogeneous precursor solutions. We observed that the solution with 1% H₂O was much more homogeneous at room temperature than the others, while the solution with 3% H₂O precipitated immediately at room temperature. Four different PbI₂ films were formed on PEDOT substrates by spin coating, and immediately afterwards (before the PbI₂ dried) the substrates were immersed in a methyl ammonium iodide solution (prepared in isopropanol) to complete the desired perovskite film.
After obtaining the desired films, the substrates were rinsed with isopropanol to remove excess methyl ammonium iodide and finally dried on a hot plate for only 1-2 minutes. In this study, we added H₂O to the PbI₂/DMF precursor solution. The concept of additives is widely used in bulk-heterojunction solar cells to manipulate the surface morphology, leading to enhanced photovoltaic performance. There are two important criteria for the selection of an additive: (a) a boiling point higher than that of the host solvent, and (b) good interaction with the precursor materials. We observed that the film morphology improved, yielding denser, more uniform films with fewer cavities and almost full surface coverage, but only for the precursor solution containing 1% H₂O. Therefore, we fabricated the complete perovskite solar cell by the sequential deposition technique with the precursor solution containing 1% H₂O. We conclude that, by adding additives to the precursor solution, one can easily manipulate the morphology of the perovskite film. In the sequential deposition method, the thickness of the perovskite film is on the µm scale, whereas the charge diffusion length in PbI₂ is on the nm scale. Therefore, by controlling the thickness using other deposition methods for the fabrication of solar cells, better efficiency can be achieved.

Keywords: methylammonium lead iodide, perovskite solar cell, precursor composition, sequential deposition

Procedia PDF Downloads 226
51 The Role of Group Interaction and Managers’ Risk-willingness for Business Model Innovation Decisions: A Thematic Analysis

Authors: Sarah Müller-Sägebrecht

Abstract:

Today’s volatile environment challenges executives to make the right strategic decisions to achieve sustainable success. Entrepreneurship scholars postulate mainly positive effects of environmental changes on entrepreneurial behavior, such as developing new business opportunities, promoting ingenuity, and filling resource voids. One strategic approach to overcoming threatening environmental changes and catching new business opportunities is business model innovation (BMI). Although this research stream has gained importance in the last decade, BMI research is still insufficient; in particular, BMI barriers such as inefficient strategic decision-making processes need to be identified. Strategic decisions strongly shape an organization’s future and are therefore usually made in groups. Although groups draw on a more extensive information base than single individuals, group-interaction effects can influence the decision-making process in both favorable and unfavorable ways. Decisions are characterized by uncertainty and risk, whose intensity is perceived differently by each individual, and individual risk-willingness influences which option people choose. The special nature of strategic decisions, such as those in BMI processes, is that, given their broad organizational scope, they are made not individually but in groups. These groups consist of different personalities whose individual risk-willingness can vary considerably. It is known from group decision theory that these individuals influence each other, which is observable in different group-interaction effects. The following research questions arise: i) How does group interaction shape BMI decision-making from the managers’ perspective? ii) What are the potential interrelations among managers’ risk-willingness, group biases, and BMI decision-making?
After 26 in-depth interviews with executives from the manufacturing industry, the applied Gioia methodology revealed the following results: i) Risk-averse decision-makers have an increased need to be guided by facts; the more information available to them, the lower they perceive uncertainty to be and the more willing they are to pursue a specific decision option. However, the results also show that social interaction does not change individual risk-willingness during the decision-making process. ii) In general, during BMI decisions group interaction is valued primarily for increasing the group’s information base for making good decisions, rather than for social exchange. Further, decision-makers focus mainly on information available to all decision-makers in the team and less on personal knowledge. This work contributes to the strategic decision-making literature in two ways. First, it gives insights into how group-interaction effects influence an organization’s strategic BMI decision-making. Second, it enriches risk-management research by highlighting how individual risk-willingness impacts organizational strategic decision-making. To date, BMI research has held that risk aversion is an internal BMI barrier. This study makes clear that it is not risk aversion itself that inhibits BMI; rather, a lack of information prevents risk-averse decision-makers from choosing a riskier option. At the same time, the results show that risk-averse decision-makers are not easily carried away by the higher risk-willingness of their team members; instead, they use social interaction to gather missing information. Executives therefore need to provide sufficient information to all decision-makers to catch promising business opportunities.

Keywords: business model innovation, cognitive biases, group-interaction effects, strategic decision-making, risk-willingness

Procedia PDF Downloads 59
50 Wheat Cluster Farming Approach: Challenges and Prospects for Smallholder Farmers in Ethiopia

Authors: Hanna Mamo Ergando

Abstract:

Climate change is already having a severe influence on agriculture, affecting crop yields, the nutritional content of major grains, and livestock productivity. Significant adaptation investments will be necessary to sustain existing yields and to enhance production and food quality to meet demand. Climate-smart agriculture (CSA) offers numerous possibilities in this regard, combining a focus on enhancing agricultural output and incomes with strengthening resilience and responding to climate change. To improve agricultural production and productivity, the Ethiopian government has adopted and implemented a series of strategies, including the recent agricultural cluster farming practiced as an effort to change, improve, and transform subsistence farming into a modern, productive, market-oriented, and climate-smart approach through farmers’ production clusters. In addition, the government has given greater attention to wheat production and productivity, and wheat is the major crop grown in cluster farming. The objective of this assessment was therefore to examine the various opportunities and challenges farmers face in a cluster farming system. A qualitative research approach was used to generate primary and secondary data, with respondents chosen using purposive sampling. Accordingly, experts from the Federal Ministry of Agriculture, the Ethiopian Agricultural Transformation Institute, the Ethiopian Agricultural Research Institute, and the Ethiopian Environment Protection Authority were interviewed. The assessment revealed that farming in clusters is an economically viable technique for sustaining the agricultural businesses of small, resource-limited, and socially disadvantaged farmers. The method helps farmers consolidate their products and deliver them in bulk, saving transportation costs while increasing income.
Smallholders' negotiating power has improved as a result of cluster membership, as have knowledge and information spillover. The key challenges, on the other hand, were identified as a lack of timely provision of modern inputs, insufficient access to credit services, conflicts of interest in crop selection, and the lack of an output market for agro-processing firms. Furthermore, farmers in the cluster farming approach grow wheat year after year without crop rotation or diversification techniques. Mono-cropping is disadvantageous because it raises the likelihood of disease and insect outbreaks, and the practice may have long-term consequences, including soil degradation, reduced biodiversity, and economic risk for farmers. The government must therefore devote more resources to addressing the issue of environmental sustainability. Farmers' access to complementary services that promote production and marketing efficiency through infrastructure and institutional services also has to be improved. In general, the assessment provides initial evidence that warrants deeper study into the efficiency of the strategy's implementation, upholding existing policy, and scaling up good practices in a sustainable and environmentally viable manner.

Keywords: cluster farming, smallholder farmers, wheat, challenges, opportunities

Procedia PDF Downloads 161
49 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Various calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity also remains a demanding question for thermophysical researchers, as very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of the transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable transport data are also important for the optimized design of processes and apparatus in various engineering and science fields (e.g., thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One promising computational technique, homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed here with special emphasis on its application to transport problems of complex liquids.
This work is, to our knowledge, the first to recast the heat-conduction problem, which leads to polynomial velocity and temperature profiles, into an algorithm for investigating the transport properties, and their nonlinear behavior, of NICDPLs. The aim is to implement a NEMD (Poiseuille flow) algorithm and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output is generated between 3.0×10⁵/ωₚ and 1.5×10⁵/ωₚ simulation time steps for the computation of the λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters and that the position of the minimum, λmin, shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from previously reported plasma λ₀ values by 2%-20%, depending on Γ and κ. The results obtained at normalized force fields are in satisfactory agreement with various earlier simulation results. The algorithm thus provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
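The Yukawa pair interaction underlying such simulations has the screened-Coulomb form; in reduced units, U(r)/(k_BT) = Γ·exp(−κr̃)/r̃ with r̃ the separation in Wigner-Seitz radii, Γ the coupling parameter and κ the screening parameter. The sketch below shows the reduced pair energy and a crude isokinetic velocity-rescaling step as an illustrative stand-in for the Gaussian thermostat mentioned above; it is not the authors' code, and the parameter values are arbitrary examples.

```python
import numpy as np

def yukawa_reduced(r, gamma, kappa):
    """Reduced Yukawa pair energy U/(kB*T) at separation r (in units of
    the Wigner-Seitz radius), for coupling gamma and screening kappa."""
    r = np.asarray(r, dtype=float)
    return gamma * np.exp(-kappa * r) / r

def rescale_velocities(v, target_kinetic):
    """Crude isokinetic step: rescale velocities so the total kinetic
    energy (unit masses) matches target_kinetic. The Gaussian thermostat
    enforces this constraint differentially; rescaling is the simplest
    approximation of the same idea."""
    ke = 0.5 * np.sum(v**2)
    return v * np.sqrt(target_kinetic / ke)

# Stronger screening (larger kappa) suppresses the interaction at fixed r:
print(yukawa_reduced(1.0, gamma=100.0, kappa=1.0))  # ≈ 36.8
print(yukawa_reduced(1.0, gamma=100.0, kappa=3.0))  # ≈ 5.0
```

In a full HNEMD code this pair energy would be differentiated to give forces, and the thermostat constraint applied at every integration step rather than as a one-off rescale.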

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 254
48 Liposome Loaded Polysaccharide Based Hydrogels: Promising Delayed Release Biomaterials

Authors: J. Desbrieres, M. Popa, C. Peptu, S. Bacaita

Abstract:

Because of their favorable properties (non-toxicity, biodegradability, mucoadhesivity, etc.), polysaccharides have been studied as biomaterials and as pharmaceutical excipients in drug formulations. These formulations may be produced in a wide variety of forms, including hydrogels, hydrogel-based particles (or capsules), films, etc. In such formulations, polysaccharide-based materials can provide local delivery of the loaded therapeutic agents, but the delivery can be rapid and difficult to control over time owing, in particular, to the burst effect. This leads to a loss in drug efficiency and lifetime. To overcome the consequences of the burst effect, systems of liposomes incorporated into polysaccharide hydrogels appear to be promising materials for tissue engineering, regenerative medicine and drug loading systems. Liposomes are spherical, self-closed structures composed of curved lipid bilayers that enclose part of the surrounding solvent. Their simplicity of production, biocompatibility, size and composition similar to those of cells, adjustable size for specific applications, and ability to load hydrophilic and/or hydrophobic drugs make them a revolutionary tool in nanomedicine and the biomedical domain. Drug delivery systems were developed as hydrogels containing chitosan or carboxymethylcellulose (CMC) as the polysaccharide and gelatin (GEL) as the polypeptide, together with phosphatidylcholine or phosphatidylcholine/cholesterol liposomes able to accurately control the delivery, without any burst effect. The CMC-based hydrogels were covalently crosslinked using glutaraldehyde, whereas the chitosan-based hydrogels were doubly crosslinked (ionically, using sodium tripolyphosphate or sodium sulphate, and covalently, using glutaraldehyde). It was shown that liposome integrity is well protected during the crosslinking procedure that forms the film network. Calcein was used as a model active substance for the delivery experiments.
Multilamellar vesicles (MLVs) and small unilamellar vesicles (SUVs) were prepared and compared. The liposomes are well distributed throughout the whole area of the film, and the vesicle distribution is equivalent (for both types of liposomes evaluated) on the film surface as well as deeper (100 microns) in the film matrix. An obvious decrease of the burst effect was observed in the presence of liposomes, as well as a uniform increase of calcein release that continues even at long time scales: the liposomes act as an extra barrier to calcein release. Systems containing MLVs release higher amounts of calcein than systems containing SUVs, although MLVs are more stable in the matrix and diffuse with difficulty; this difference comes from the higher quantity of calcein carried within the MLVs, related to their size. The release kinetics curves were modeled, and the release of hydrophilic drugs may be described by a multi-scale mechanism with four distinct phases, each characterized by a different kinetics model (Higuchi equation, Korsmeyer-Peppas model, etc.). Knowledge of such models will be a very useful tool for designing new formulations for tissue engineering, regenerative medicine and drug delivery systems.
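For reference, the Korsmeyer-Peppas model mentioned above expresses the fractional release as M_t/M_∞ = k·tⁿ, so k and n can be estimated by a linear fit in log-log space over the early portion of a release curve. A minimal sketch on synthetic data follows; the k and n values are invented for illustration and are not results from this study.

```python
import numpy as np

def fit_korsmeyer_peppas(t, frac_released):
    """Fit M_t/M_inf = k * t**n by linear regression in log-log space.
    Conventionally applied only to the portion with M_t/M_inf <= 0.6."""
    n, log_k = np.polyfit(np.log(t), np.log(frac_released), 1)
    return np.exp(log_k), n

# Synthetic release curve with k = 0.05, n = 0.45 (Fickian-like diffusion)
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # hours
frac = 0.05 * t**0.45
k, n = fit_korsmeyer_peppas(t, frac)
print(round(k, 3), round(n, 3))  # → 0.05 0.45
```

The fitted exponent n is the diagnostic quantity: values near 0.5 indicate Fickian diffusion, while higher values point to swelling- or erosion-controlled release.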

Keywords: controlled and delayed release, hydrogels, liposomes, polysaccharides

Procedia PDF Downloads 204
47 Ragging and Sludging Measurement in Membrane Bioreactors

Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd

Abstract:

Membrane bioreactor (MBR) technology is challenged by the tendency of membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors affect costs more significantly than membrane surface fouling, which, unlike clogging, is largely mitigated by chemical cleaning. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and clogging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can form within 24-36 hours from dispersed < 5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred both for a cotton wool standard and for samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin (lint from laundering operations formed zero rags) and on the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat-sheet MBR. Sludge samples were provided by two local MBRs, one treating municipal and the other industrial effluent. The bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ̇).
The fouling and sludging propensity of the sludge was determined using the test cell, ‘fouling’ being quantified as the rate of pressure increase against flux via the flux-step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but being less shear-thinning than the municipal one. Fouling, as manifested by the pressure increase Δp/Δt as a function of flux in the classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples of both sludge origins the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contributions of fouling and clogging were appraised by adjusting the clogging propensity via increasing the MLSS, both with and without a commensurate increase in the COD. Results indicated that, whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in the sludging propensity (or cake formation); the clogging rate actually decreased on increasing the MLSS. By contrast, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging did not relate to fouling.
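The sludging metric described above (photographing the channel and computing the ratio of clogged to unclogged regions) amounts to thresholding an image and counting pixels. The sketch below uses a small synthetic grayscale array standing in for the channel photograph; the threshold value is an arbitrary assumption, since the paper does not state the image-processing parameters.

```python
import numpy as np

def clogged_fraction(image: np.ndarray, threshold: float) -> float:
    """Fraction of pixels darker than `threshold`, taken as the clogged
    (sludge-filled) portion of a membrane channel image."""
    return float((image < threshold).mean())

# Synthetic 4x4 'photograph': low values = dark sludge, high = clear channel
img = np.array([[0.1, 0.9, 0.9, 0.9],
                [0.2, 0.9, 0.9, 0.9],
                [0.1, 0.2, 0.9, 0.9],
                [0.1, 0.1, 0.2, 0.9]])
print(clogged_fraction(img, threshold=0.5))  # → 0.4375 (7 of 16 pixels)
```

On real photographs the same idea applies after grayscale conversion, with the threshold chosen, for example, by Otsu's method rather than by hand.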

Keywords: clogging, membrane bioreactors, ragging, sludge

Procedia PDF Downloads 157
46 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships

Authors: Vijaya Dixit, Aasheesh Dixit

Abstract:

The shipbuilding industry operates in an Engineer-Procure-Construct (EPC) context. A shipyard's product mix comprises various types of ships: bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique to the type of ship and the customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, contributing to the knowledge base of the shipyard. Simultaneously, in the short term, during the execution of a project comprising multiple sister ships, the repetition of similar tasks leads to learning at the activity level. This research aims to capture both kinds of learning and to incorporate the learning curve effect into project scheduling and materials procurement to improve project performance. The extant literature supports the existence of such learning in organizations. In shipbuilding, there are sequences of similar activities that are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks that are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity but also a decrease in the uncertainty of the activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships; on the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed.
Thus, the material requirement schedule of each ship differs from that of the previous ship. As more ships are constructed, the compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with project scheduling for long-duration projects manufacturing multiple sister ships. It incorporates the learning curve effect on progressively compressed material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs while satisfying the budget constraints of the various stages of the project. The activity durations and item lead times are not crisp but are available in the form of probability distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm; its output provides the ordering dates of items and the degree of order batching for all item types. A sensitivity analysis determines the threshold number of sister ships required in a project to leverage the learning curve effect in materials management decisions. This analysis will help materials managers gain insight into when, and to what degree, it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when, and to what degree, to practice distinct procurement for each individual ship.
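The schedule compression described here is conventionally modeled with the Wright learning curve, T_n = T₁·n^b with b = log₂(r) for learning rate r, so the build time of each successive sister ship shrinks by a fixed percentage per doubling of output. The following sketch illustrates the mechanism; the 90% learning rate and 100-day baseline are illustrative assumptions, not figures from the study.

```python
import math

def unit_time(t1: float, n: int, learning_rate: float) -> float:
    """Wright learning curve: duration of the n-th repetition, where each
    doubling of cumulative output multiplies unit time by learning_rate."""
    b = math.log2(learning_rate)
    return t1 * n**b

t1 = 100.0   # days for an activity on the first sister ship (assumed)
rate = 0.90  # 90% learning curve (assumed)
for n in (1, 2, 4):
    print(n, round(unit_time(t1, n, rate), 1))
# Ship 2 takes 90.0 days and ship 4 takes 81.0 days: each doubling cuts 10%.
```

Feeding these compressed durations back into the material requirement dates is what opens the batching opportunity the abstract describes: later ships' orders drift earlier and can merge with earlier ships' orders.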

Keywords: learning curve, materials management, shipbuilding, sister ships

Procedia PDF Downloads 482
45 Chemical, Biochemical and Sensory Evaluation of a Quadrimix Complementary Food Developed from Sorghum, Groundnut, Crayfish and Pawpaw Blends

Authors: Ogechi Nzeagwu, Assumpta Osuagwu, Charlse Nkwoala

Abstract:

Malnutrition in infants due to poverty, poor feeding practices, and the high cost of commercial complementary foods, among other factors, is a concern in developing countries. The study evaluated the proximate, vitamin and mineral compositions, antinutrients and functional properties, and the biochemical, haematological and sensory characteristics of a complementary food made from sorghum, groundnut, crayfish and pawpaw flour blends using standard procedures. The blends were formulated on the protein requirement of infants (18 g/day) using Nutrisurvey linear programming software in ratios of sorghum (S), groundnut (G), crayfish (C) and pawpaw (P) flours of 50:25:10:15 (SGCP1), 60:20:10:10 (SGCP2), 60:15:15:10 (SGCP3) and 60:10:20:10 (SGCP4). Plain pap (fermented maize flour) (TCF) and Cerelac (a commercial complementary food) served as the basal and control diets. Thirty weanling male albino rats aged 28-35 days and weighing 33-60 g were purchased and used for the study. After acclimatization, the rats were fed gruel produced from the experimental diets and the control, with water ad libitum, daily for 35 days. The effect of the blends on lipid profile, blood glucose, haematological indices (RBC, HB, PCV, MCV), liver and kidney function and weight gain of the rats was assessed. Acceptability of the gruel was assessed at the end of the rat feeding by forty mothers of infants aged ≥ 6 months, who gave their informed consent to participate, using a 9-point hedonic scale. Data were analyzed for means and standard deviations; analysis of variance was performed with means separated using Duncan's multiple range test and significance judged at 0.05, all using SPSS version 22.0. The results indicated that the crude protein, fibre, ash and carbohydrate contents of the formulated diets were either comparable to or higher than the values in Cerelac. The formulated diets (SGCP1-SGCP4) were significantly (P<0.05) higher in vitamin A and thiamin compared to Cerelac.
The iron content of the formulated diets SGCP1-SGCP4 (4.23-6.36 mg/100 g) was within the recommended iron intake of infants (0.55 mg/day). The phytate (1.56-2.55 mg/100 g) and oxalate (0.23-0.35 mg/100 g) contents of the formulated diets were within the permissible limits of 0-5%. Among the functional properties, bulk density, swelling index, % dispersibility and water absorption capacity increased significantly (P<0.05) and compared favourably with Cerelac. The essential amino acids of the formulated blends were within the amino acid profile of the FAO/WHO/UNU reference protein for children 0.5-2 years of age. The urea concentrations of rats fed SGCP1-SGCP4 (19.48, 23.76, 24.07 and 23.65 mmol/L, respectively) were significantly higher than that of rats fed Cerelac (16.98 mmol/L); plain pap gave the lowest value (9.15 mmol/L). Rats fed SGCP1-SGCP4 (116, 119, 115 and 117 mg/dl, respectively) had significantly higher glucose levels than those fed Cerelac (108 mg/dl). Liver function parameters (AST, ALP and ALT), lipid profile (triglyceride, HDL, LDL, VLDL) and haematological parameters of rats fed the formulated diets were within normal ranges. Rats fed SGCP1 gained more weight (90.45 g) than rats fed SGCP2-SGCP4 (71.65 g, 79.76 g, 75.68 g), TCF (20.13 g) or Cerelac (59.06 g). In all sensory attributes, the control was preferred over the formulated diets. The formulated diets were generally adequate and may have the potential to meet the nutrient requirements of infants as complementary food.
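The ratio-based formulation described above reduces, for any single nutrient, to a weighted sum of component contents. A minimal sketch of that arithmetic; the per-ingredient protein values below are hypothetical placeholders for illustration, not the study's analysed figures:

```python
def blend_nutrient(ratios_pct, nutrient_per_100g):
    # Nutrient content (g per 100 g of blend) from component mixing ratios (%).
    return sum(r / 100.0 * n for r, n in zip(ratios_pct, nutrient_per_100g))

# SGCP1 ratio 50:25:10:15 with hypothetical protein contents (g/100 g)
# for sorghum, groundnut, crayfish and pawpaw flours respectively
protein = blend_nutrient([50, 25, 10, 15], [11.0, 25.0, 60.0, 0.5])
```

A linear programming formulation, as performed in Nutrisurvey, then searches over such ratios subject to the target (e.g., the 18 g/day infant protein requirement) and cost constraints.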

Keywords: biochemical, chemical evaluation, complementary food, quadrimix

Procedia PDF Downloads 146
44 Growth Mechanism and Sensing Behaviour of Sn Doped ZnO Nanoprisms Prepared by Thermal Evaporation Technique

Authors: Sudip Kumar Sinha, Saptarshi Ghosh

Abstract:

While there is a perpetual buzz around zinc oxide (ZnO) superstructures for their unique optical features, this versatile material has constantly been utilized to manifest tailored electronic properties through the rendition of distinct morphologies. And yet, the unorthodox approach of implementing novel 1D nanostructures of ZnO (pristine or doped) for volatile sensing applications has ample scope to accommodate new, unconventional morphologies. In the last two decades, solid-state sensors have attracted much curiosity for their relevance in identifying pollutant, toxic and other industrial gases. In particular, gas sensors based on metal oxide semiconducting (wide Eg) nanomaterials have recently attracted intensive attention owing to their high sensitivity and fast response and recovery times. When these materials are exposed to air, atmospheric O2 dissociates and is adsorbed on the surface of the sensor, trapping the outermost-shell electrons. A depleted zone thus forms on the sensor surface, which enhances the potential barrier height at the grain boundaries. Once a target gas is exposed to the sensor, the chemical interaction between the chemisorbed oxygen and the specific gas liberates the trapped electrons. Therefore, altering the amount of adsorbate is a considerable approach to improving the sensitivity towards any target gas/vapour molecule. Accordingly, this study presents the spontaneous, self-catalytic growth of Sn-doped ZnO hexagonal nanoprisms on Si (100) substrates through the thermal evaporation-condensation method, and their subsequent deployment for volatile sensing. In particular, the sensors were utilized to detect molecules of ethanol, acetone and ammonia below their permissible exposure limits, returning sensitivities of around 85%, 80% and 50%, respectively. The influence of Sn concentration on the growth, microstructural and optical properties of the nanoprisms, along with its role in augmenting the sensing parameters, is detailed.
The single-crystalline nanostructures have a typical diameter ranging from 300 to 500 nm and a length that extends up to a few micrometers. HRTEM images confirmed the hexagonal crystallography of the nanoprisms, while the SAED pattern asserted their single-crystalline nature. The growth habit is along the low-index <0001> directions. The growth mechanism of the as-deposited nanostructures is directly influenced by the varying supersaturation ratio, fairly high substrate temperatures, and specific surface defects in certain crystallographic planes, all acting cooperatively to decide the final product morphology. Room-temperature photoluminescence (PL) spectra of these rod-like structures exhibit a weak ultraviolet (UV) emission peak at around 380 nm and a broad green emission peak in the 505 nm regime. An estimate of the sensing parameters against the dispensed target molecules highlighted the potential of the nanoprisms as an effective volatile sensing material. Sn-doped ZnO nanostructures with this unique prismatic morphology may find important applications in various chemical sensors as well as other potential nanodevices.
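For an n-type oxide such as ZnO exposed to a reducing gas, sensitivity figures like those quoted above are conventionally computed from the resistance drop relative to air. A minimal sketch under that common convention; the resistance values are illustrative, not measurements from the study:

```python
def sensor_response(r_air, r_gas):
    # Percent response of an n-type oxide sensor to a reducing gas,
    # using one common convention: S = (Ra - Rg) / Ra * 100
    return (r_air - r_gas) / r_air * 100.0

# illustrative resistances in ohms: a drop from 1 MOhm in air to
# 150 kOhm under the target vapour corresponds to an 85% response
response = sensor_response(1.0e6, 1.5e5)
```

Conventions vary across the literature (some report Ra/Rg instead), so the definition should always be stated alongside the quoted sensitivity.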

Keywords: gas sensor, HRTEM, photoluminescence, ultraviolet, zinc oxide

Procedia PDF Downloads 221
43 Owning (up to) the 'Art of the Insane': Re-Claiming Personhood through Copyright Law

Authors: Mathilde Pavis

Abstract:

From Schumann to Van Gogh, Frida Kahlo, and Ray Charles, the stories narrating the careers of artists with physical or mental disabilities are becoming increasingly popular. From the emergence of ‘pathography’ at the end of the 18th century to cinematographic portrayals, the work and lives of differently-abled creative individuals continue to fascinate readers, spectators and researchers. The achievements of those artists form the tip of an iceberg composed of complex politico-cultural movements which continue to advocate for wider recognition of disabled artists’ contribution to western culture. This paper envisages copyright law as a potential tool to that end. It investigates the array of rights available to artists with intellectual disabilities to assert their position as authors of their artwork in the twenty-first century, looking at international and national copyright laws (UK and US). Put simply, this paper questions whether an artist’s intellectual disability could be a barrier to asserting their intellectual property rights over their creation. From a legal perspective, basic principles of non-discrimination would contradict any representation of artists’ disability as an obstacle to authorship as granted by intellectual property laws. Yet empirical studies reveal that artists with intellectual disabilities are often denied the opportunity to exercise their intellectual property rights or any form of agency over their work. In practice, it appears that, unlike other non-disabled artists, the prospect for differently-abled creators to make use of their rights is contingent on the context in which the creative process takes place. The management of such rights will often rest with the institution, art therapist or mediator involved in the artists’ work, as those artists will have required greater support than their non-disabled peers for a variety of reasons, either medical or practical.
Moreover, the financial setbacks suffered by medical institutions and private therapy practices have renewed administrators’ and physicians’ interest in monetising the artworks produced under their supervision. Adding to those economic incentives, the rise of criminal and civil litigation in psychiatric cases has also encouraged the retention of patients’ work by therapists who feel compelled to keep comprehensive medical records to shield themselves from liability in the event of a lawsuit. Unspoken transactions, contracts, implied agreements and consent forms have thus progressively made their way into the relationship between those artists and their therapists or assistants, disregarding any notions of copyright. The question of artists’ authorship finds itself caught in an unusually multi-faceted web of issues formed by tightening purse strings, ethical concerns and the fear of civil or criminal liability. Whilst those issues are playing out behind closed doors, the popularity of what was once called the ‘Art of the Insane’ continues to grow and open new commercial avenues. This socio-economic context exacerbates the need to devise a legal framework able to help practitioners, artists and their advocates navigate through those issues in such a way that neither this minority nor our cultural heritage suffers from the fragmentation of the legal protection available to them.

Keywords: authorship, copyright law, intellectual disabilities, art therapy and mediation

Procedia PDF Downloads 132
42 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make staying globally competitive more difficult unless the focus shifts to how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students’ representational competence, the visualization skills necessary to process science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students’ visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional near-infrared spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM education scholars from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy.
Hemodynamic response data from fNIRSoft were exported as an Excel file, with 80 each of the 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner’s TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN’s predictions were accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time and fNIR optode #16, located along the right prefrontal cortex, a region important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements and an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve students' visuospatial literacy, therefore improving science literacy.
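Gradient boosting of the kind described fits each new shallow tree to the residual error of the ensemble built so far. A from-scratch sketch of that idea using single-feature regression stumps on synthetic data; RapidMiner's actual implementation and the study's fNIR features are not reproduced here:

```python
import numpy as np

def fit_stump(x, residual):
    # exhaustively try midpoint splits; keep the one minimising squared error
    best = None
    xs = np.sort(x)
    for i in range(1, len(xs)):
        thr = 0.5 * (xs[i - 1] + xs[i])
        left, right = residual[x <= thr], residual[x > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        pl, pr = left.mean(), right.mean()
        err = ((left - pl) ** 2).sum() + ((right - pr) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, thr, pl, pr)
    return best[1:]

def gradient_boost(x, y, n_trees=140, lr=0.1):
    # start from the mean, then repeatedly correct the current residual
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_trees):
        thr, pl, pr = fit_stump(x, y - pred)
        pred += lr * np.where(x <= thr, pl, pr)
        stumps.append((thr, pl, pr))
    return pred, stumps

# toy task: binary outcome driven by one hypothetical fNIR-like signal
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = (x > 0.5).astype(float)
pred, stumps = gradient_boost(x, y)
accuracy = ((pred > 0.5) == (y > 0.5)).mean()
```

Production implementations split on many features per tree and grow depths beyond one (the study used depth 7), but the residual-fitting loop is the same.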

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 104
41 Dysphagia Tele Assessment Challenges Faced by Speech and Swallow Pathologists in India: Questionnaire Study

Authors: B. S. Premalatha, Mereen Rose Babu, Vaishali Prabhu

Abstract:

Background: Dysphagia must be assessed, either subjectively or objectively, in order to properly address the swallowing difficulty. Providing therapeutic care to patients with dysphagia via tele mode was one approach to providing clinical services during the COVID-19 epidemic. As a result, teleassessment of dysphagia has increased in India. Aim: This study aimed to identify challenges faced by Indian SLPs while providing teleassessment to individuals with dysphagia during the outbreak of COVID-19 from 2020 to 2021. Method: The current study was carried out after receiving approval from the institute's institutional review board and ethics committee. The study was cross-sectional in nature, lasted from 2020 to 2021, and enrolled participants who met its inclusion and exclusion criteria. Based on the sample size calculations, roughly 246 people were to be recruited. The research was done in three stages: questionnaire development, content validation, and questionnaire administration. Five speech and hearing professionals verified the questionnaire content for faults and clarity. Participants received the questionnaire, written in Microsoft Word and then converted to Google Forms, via platforms such as e-mail and WhatsApp. SPSS software was used to examine the data. Results: The study's findings were examined in light of the obstacles that Indian SLPs encounter. Only 135 people responded. During the COVID-19 lockdowns, 38% of participants said they did not deal with dysphagia patients. After the lockdown, 70.4% of SLPs kept working with dysphagia patients, while 29.6% did not. The main problems in completing tele-evaluation of dysphagia were highlighted from the beginning of the oromotor examination.
Around 37.5% of SLPs said they do not undertake the OPME online because of difficulties conducting the evaluation, such as the need for repeated instructions to patients and family members and trouble visualizing structures in various positions. The majority of SLPs' online assessments were inefficient and time-consuming. A larger percentage of SLPs stated that they would not advocate tele-evaluation of dysphagia to their colleagues. SLPs' use of dysphagia assessment has decreased as a result of the epidemic. When it came to the amount of food, the majority proposed a small quantity. Apart from positioning the patient for assessment and receiving less cooperation from the family, most SLPs found that Internet speed was a source of concern and a barrier. Hearing impairment and the presence of a tracheostomy in patients with dysphagia proved to be the most difficult conditions to manage online. For patients on NPO, the majority of SLPs did not advise tele-evaluation. Oral meal residue was more visible in the anterior region of the oral cavity, and the majority of SLPs reported more anterior than posterior leakage. Even though the majority of SLPs could detect aspiration by coughing, many found it difficult to discern a gurgling tone of speech after swallowing. Conclusion: The current study sheds light on the difficulties that Indian SLPs experience when assessing dysphagia via tele mode, indicating that tele-assessment of dysphagia has yet to gain importance in India.

Keywords: dysphagia, teleassessment, challenges, Indian SLP

Procedia PDF Downloads 110
40 Effect of Velocity-Slip in Nanoscale Electroosmotic Flows: Molecular and Continuum Transport Perspectives

Authors: Alper T. Celebi, Ali Beskok

Abstract:

Electroosmotic (EO) slip flows in nanochannels are investigated using non-equilibrium molecular dynamics (MD) simulations, and the results are compared with an analytical solution of the Poisson-Boltzmann and Stokes (PB-S) equations with a slip contribution. The ultimate objective of this study is to show that the well-known continuum flow model can accurately predict EO velocity profiles in nanochannels using the slip lengths and apparent viscosities obtained from force-driven flow simulations performed at various liquid-wall interaction strengths. EO flow of an aqueous NaCl solution in silicon nanochannels is simulated under realistic electrochemical conditions within the validity region of Poisson-Boltzmann theory. A physical surface charge density is determined for the nanochannels based on dissociation of silanol functional groups on the channel surfaces at known salt concentration, temperature and local pH. First, we present density profiles and ion distributions from equilibrium MD simulations, ensuring that the desired thermodynamic state and ionic conditions are satisfied. Next, force-driven nanochannel flow simulations are performed to predict the apparent viscosity of the ionic solution between charged surfaces and the slip lengths. Parabolic velocity profiles obtained from the force-driven flow simulations are fitted to a second-order polynomial equation, and viscosity and slip lengths are quantified by comparing the coefficients of the fitted equation with the continuum flow model. The presence of a charged surface increases the viscosity of the ionic solution while decreasing the velocity-slip at the wall. Afterwards, EO flow simulations are carried out under a uniform electric field for different liquid-wall interaction strengths. Velocity profiles present finite slip near the walls, followed by a conventional viscous flow profile in the electrical double layer that reaches a bulk flow region in the center of the channel.
The EO flow is enhanced with increased slip at the walls, which depends on the wall-liquid interaction strength and the surface charge. MD velocity profiles are compared with the predictions of analytical solutions of the slip-modified PB-S equation, where the slip length and apparent viscosity values are obtained from force-driven flow simulations in charged silicon nanochannels. Our MD results show good agreement with the analytical solutions at various slip conditions, verifying the validity of the PB-S equation in nanochannels as small as 3.5 nm. In addition, the continuum model normalizes the slip length with the Debye length instead of the channel height, which implies that the enhancement in EO flow is independent of the channel height. Further MD simulations performed at different channel heights also show that the flow enhancement due to slip is independent of the channel height. This is important because slip-enhanced EO flow is observable even in micro-channel experiments, by using a hydrophobic channel with large slip and high-conductivity solutions with a small Debye length. The present study provides an advanced understanding of EO flows in nanochannels. Correct characterization of nanoscale EO slip flow is crucial to discovering the extent of well-known continuum models, which is required for various applications ranging from ion separation to drug delivery and bio-fluidic analysis.
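The extraction step described above, fitting the force-driven velocity profile to a quadratic and reading off viscosity and slip length, can be sketched numerically. Here a synthetic, hypothetical slip-Poiseuille profile is used in reduced units (lengths in nm; C lumps the body force and viscosity as C = f/(2μ), so the recovered C gives μ once f is known), and the slip length follows from the Navier condition at the wall:

```python
import numpy as np

# synthetic slip-modified Poiseuille profile: u(y) = C*(h^2/4 + b*h - y^2)
# h = channel height (nm), b_true = Navier slip length (nm); values illustrative
h, b_true, C = 3.5, 0.5, 1.0
y = np.linspace(-h / 2, h / 2, 41)
u = C * (h**2 / 4 + b_true * h - y**2)

a2, a1, a0 = np.polyfit(y, u, 2)        # fit u(y) = a2*y^2 + a1*y + a0
C_fit = -a2                              # apparent viscosity via mu = f/(2*C_fit)
u_wall = np.polyval([a2, a1, a0], h / 2)
dudy_wall = 2 * a2 * (h / 2) + a1
b_fit = -u_wall / dudy_wall              # Navier condition: u_wall = -b * du/dy
```

With MD data the fit coefficients carry thermal noise, but the same two relations recover the apparent viscosity and slip length that are then fed into the slip-modified PB-S prediction.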

Keywords: electroosmotic flow, molecular dynamics, slip length, velocity-slip

Procedia PDF Downloads 131
39 Development of a Psychometric Testing Instrument Using Algorithms and Combinatorics to Yield Coupled Parameters and Multiple Geometric Arrays in Large Information Grids

Authors: Laith F. Gulli, Nicole M. Mallory

Abstract:

The undertaking to develop a psychometric instrument is monumental. Understanding the relationship between variables and events is important in the structural and exploratory design of psychometric instruments. Considering this, we describe a method used to group, pair and combine multiple Philosophical Assumption statements that assisted in the development of a 13-item psychometric screening instrument. We abbreviated our Philosophical Assumptions (PAs) and added parameters, which were then condensed and mathematically modeled in a specific process. This model produced clusters of combinatorics which were utilized in design and development for 1) information retrieval and categorization, 2) item development and 3) estimation of interactions among variables and likelihood of events. The psychometric screening instrument measured Knowledge, Assessment (education) and Beliefs (KAB) of New Addictions Research (NAR), which we called KABNAR. We obtained an overall internal consistency for the seven Likert belief items, as measured by Cronbach's α, of .81 in the final study of 40 clinicians, calculated with SPSS 14.0.1 for Windows. We constructed the instrument to begin with demographic items (degree/addictions certifications) for identification of target populations that practiced within Outpatient Substance Abuse Counseling (OSAC) settings. We then devised education items, belief items (seven items) and a modifiable “barrier from learning” item that consisted of six “choose any” choices. We also conceptualized a close relationship between identifying the various degrees and certifications held by Outpatient Substance Abuse Therapists (OSAT) (the demographics domain) and all aspects of their education related to EB-NAR (past and present education and desired future training). We placed a descriptive (PA)1tx in both the demographic and education domains to trace relationships of therapist education within these two domains.
The two perception domains B1/b1 and B2/b2 represented different but interrelated perceptions from the therapist perspective. The belief items measured therapist perceptions concerning EB-NAR and therapist perceptions of using EB-NAR at the beginning of outpatient addictions counseling. The (PA)s were written in simple words, descriptively accurate and concise. We then devised a list of parameters, appropriately matched them to each PA and devised descriptive parametric (PA)s in a domain-categorized information grid. Descriptive parametric (PA)s were reduced to simple mathematical symbols. This made it easy to incorporate parametric (PA)s into algorithms, combinatorics and clusters to develop larger information grids. Using matching combinatorics, we took paired demographic and education domains with a subscript of 1 and matched them to the column of each B domain with subscript 1. Our algorithmic matching formed larger information grids with organized clusters in columns and rows. We repeated the process using different demographic, education and belief domains and devised multiple information grids with different parametric clusters and geometric arrays. We found benefit in combining clusters by different geometric arrays, which enabled us to trace parametric variables and concepts. We were able to understand potential differences between dependent and independent variables and trace relationships of maximum likelihoods.
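The internal-consistency figure reported above (Cronbach's α = .81 for the seven Likert belief items) follows the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch; the response matrix below is a toy illustration, not the study's 40-clinician dataset:

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) array of Likert scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

# toy responses: 5 respondents on 3 Likert items
scores = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
alpha = cronbach_alpha(scores)
```

This reproduces what SPSS's reliability analysis computes for the same matrix; values above roughly .7-.8 are conventionally read as acceptable internal consistency.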

Keywords: psychometric, parametric, domains, grids, therapists

Procedia PDF Downloads 252
38 Intelligent Materials and Functional Aspects of Shape Memory Alloys

Authors: Osman Adiguzel

Abstract:

Shape-memory alloys are a new class of functional materials with a peculiar property known as the shape memory effect: after deformation in the low-temperature product phase region, these alloys return to a previously defined shape on heating. The origin of this phenomenon lies in the fact that the material changes its internal crystalline structure with changing temperature. The shape memory effect is based on martensitic transitions, which govern the remarkable changes in the internal crystalline structure of the material. Martensitic transformation, a solid-state phase transformation, occurs thermally in the material on cooling from the high-temperature parent phase region. Shape memory alloys cycle between original and deformed shapes at the bulk level on heating and cooling, and can therefore be used as thermal actuators or temperature-sensitive elements. Martensitic transformations usually occur with the cooperative movement of atoms by means of lattice-invariant shears. In the thermally induced case, the ordered parent phase structures turn into twinned structures through this movement, in a crystallographic manner. The twinned martensites turn into twinned or oriented martensite on stressing the material in the low-temperature martensitic phase condition. The detwinned martensite turns into the parent phase structure on first heating (the first cycle), and parent phase structures turn into the twinned and detwinned structures, respectively, in the irreversible and reversible memory cases. Shape memory materials are, moreover, very important and useful in many interdisciplinary fields such as medicine, pharmacy, bioengineering, metallurgy and many engineering fields. The choice of material, as well as of the actuator and sensor to combine with the host structure, is essential to developing main materials and structures.
Copper-based alloys exhibit this property in the metastable beta-phase region, which has bcc-based structures in the high-temperature parent phase field; on cooling, these structures martensitically turn into layered complex structures with lattice twinning, following two ordered reactions. The martensitic transition occurs as self-accommodated martensite with inhomogeneous shears: lattice-invariant shears which occur in two opposite <110>-type directions on the {110}-type plane of the austenite matrix, which is the basal plane of the martensite. This kind of shear can be called a {110}<110>-type mode and gives rise to the formation of layered structures, like 3R, 9R or 18R, depending on the stacking sequences on the close-packed planes of the ordered lattice. In the present contribution, x-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based alloys with the chemical compositions (by weight) Cu-26.1%Zn-4%Al and Cu-11%Al-6%Mn. X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase due to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the locations and intensities of the diffraction peaks change with aging time at room temperature. In particular, some of the successive peak pairs providing a special relation between Miller indices come close to each other.

Keywords: shape memory effect, martensite, twinning, detwinning, self-accommodation, layered structures

Procedia PDF Downloads 411
37 A Quasi-Systematic Review on Effectiveness of Social and Cultural Sustainability Practices in Built Environment

Authors: Asif Ali, Daud Salim Faruquie

Abstract:

With the advancement of knowledge about the utility and impact of sustainability, its feasibility has been explored in different walks of life. Scientists, however, have established their knowledge in four areas, viz. environmental, economic, social and cultural, popularly termed the four pillars of sustainability. The environmental and economic aspects of sustainability have been rigorously researched and practiced, and a huge volume of strong evidence of effectiveness has been established for these two sub-areas. For the social and cultural aspects of sustainability, dependable evidence of effectiveness is still to be instituted, as researchers and practitioners are developing and experimenting with methods across the globe. Therefore, the present research aimed to identify globally used practices of social and cultural sustainability and, through evidence synthesis, assess their outcomes to determine the effectiveness of those practices. A PICO format steered the methodology, which included all populations; popular sustainability practices including walkability/cycle tracks, social/recreational spaces, privacy, health & human services and barrier-free built environments; comparators including ‘Before’ and ‘After’, ‘With’ and ‘Without’, ‘More’ and ‘Less’; and outcomes including social well-being, cultural co-existence, quality of life, ethics and morality, social capital, sense of place, education, health, recreation and leisure, and holistic development. The literature search included major electronic databases, search websites, organizational resources, the directory of open access journals and subscribed journals. Grey literature, however, was not included. Inclusion criteria filtered studies on the basis of research designs such as total randomization, quasi-randomization, cluster randomization, observational or single studies and certain types of analysis. Studies with combined outcomes were considered, but studies focusing only on environmental and/or economic outcomes were rejected.
Data extraction, critical appraisal and evidence synthesis were carried out using customized tabulation, a reference manager and the CASP tool. A partial meta-analysis was carried out, with calculation of pooled effects and forest plotting. The 13 studies finally included in the synthesis explained the impact of the targeted practices on health, behavioural and social dimensions. Objectivity in the measurement of health outcomes facilitated quantitative synthesis of studies, which highlighted the impact of sustainability methods on physical activity, Body Mass Index, perinatal outcomes and child health. Studies synthesized qualitatively (and also quantitatively) showed outcomes such as routines, family relations, citizenship, trust in relationships, social inclusion, neighbourhood social capital, wellbeing, habitability and families' social processes. The synthesized evidence indicates slight effectiveness and efficacy of social and cultural sustainability practices on the targeted outcomes. Further synthesis revealed that these results are due to weak research designs and disintegrated implementations. If architects and other practitioners deliver their interventions in collaboration with research bodies and policy makers, a stronger evidence base in this area could be generated.
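The pooled-effect calculation mentioned above is typically an inverse-variance weighted average of study effect sizes. A minimal fixed-effect sketch; the three effect sizes and variances below are hypothetical illustrations, not values drawn from the 13 included studies:

```python
import math

def pooled_effect(effects, variances):
    # fixed-effect inverse-variance pooling: weight each study by 1/variance
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))   # standard error of the pooled estimate
    return pooled, se

# hypothetical standardised mean differences and their variances
effect, se = pooled_effect([0.30, 0.10, 0.25], [0.04, 0.09, 0.06])
ci = (effect - 1.96 * se, effect + 1.96 * se)   # 95% confidence interval
```

A forest plot then displays each study's effect and interval alongside this pooled diamond; a random-effects model would additionally add a between-study variance component to each weight.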

Keywords: built environment, cultural sustainability, social sustainability, sustainable architecture

Procedia PDF Downloads 386
36 Assessing Brain Targeting Efficiency of Ionisable Lipid Nanoparticles Encapsulating Cas9 mRNA/sgGFP Following Different Routes of Administration in Mice

Authors: Meiling Yu, Nadia Rouatbi, Khuloud T. Al-Jamal

Abstract:

Background: Treatment of neurological disorders with modern medical and surgical approaches remains difficult. Gene therapy, allowing the delivery of genetic material that encodes potential therapeutic molecules, represents an attractive option. The treatment of brain diseases with gene therapy requires the gene-editing tool to be delivered efficiently to the central nervous system. In this study, we explored the efficiency of different delivery routes, namely intravenous (i.v.), intra-cranial (i.c.), and intra-nasal (i.n.), to deliver stable nucleic acid-lipid particles (SNALPs) containing gene-editing tools, namely Cas9 mRNA and sgRNA targeting GFP as a reporter protein. We hypothesise that SNALPs can reach the brain and perform gene-editing to different extents depending on the administration route. Intranasal administration (i.n.) offers an attractive and non-invasive way to access the brain, circumventing the blood-brain barrier. Successful delivery of gene-editing tools to the brain offers a great opportunity for therapeutic target validation and nucleic acid therapeutics delivery to improve treatment options for a range of neurodegenerative diseases. In this study, we utilised Rosa26-Cas9 knock-in mice expressing GFP to study brain distribution and gene-editing efficiency of SNALPs after i.v., i.c. and i.n. routes of administration. Methods: A single guide RNA (sgRNA) against GFP was designed and validated by an in vitro nuclease assay. SNALPs were formulated and characterised using dynamic light scattering. The encapsulation efficiency of nucleic acids (NA) was measured by RiboGreen™ assay. SNALPs were incubated in serum to assess their ability to protect NA from degradation. Rosa26-Cas9 knock-in mice were administered SNALPs i.v., i.n., or i.c. to test in vivo gene-editing (GFP knockout) efficiency. SNALPs were given as three doses of 0.64 mg/kg sgGFP following i.v. and i.n. or a single dose of 0.25 mg/kg sgGFP following i.c.
Knockout efficiency was assessed after seven days using Sanger sequencing and Inference of CRISPR Edits (ICE) analysis. In vivo biodistribution of DiR-labelled SNALPs (SNALPs-DiR) was assessed at 24 h post-administration using an IVIS Lumina Series III. Results: The serum-stable SNALPs produced were 130-140 nm in diameter with ~90% nucleic acid loading efficiency. SNALPs could reach and remain in the brain for up to 24 h following i.v., i.n. and i.c. administration. Decreasing GFP expression (around 50% after i.v. and i.c. and 20% following i.n.) was confirmed by optical imaging. Despite the small number of mice used, ICE analysis confirmed GFP knockout in mouse brains. Additional studies are currently taking place to increase mouse numbers. Conclusion: The results confirmed efficient gene knockout achieved by SNALPs in GFP-expressing Rosa26-Cas9 knock-in mice following different routes of administration, in the order i.v. = i.c. > i.n. Each administration route has its pros and cons. The next stages of the project involve assessing gene-editing efficiency in wild-type mice and replacing GFP as a model target with therapeutic target genes implicated in Motor Neuron Disease pathology.
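The ~90% loading efficiency reported above comes from a RiboGreen-type assay, which compares fluorescence from free (accessible) nucleic acid against total nucleic acid released by detergent lysis. A minimal sketch of that arithmetic, with illustrative fluorescence readings rather than values from the study:

```python
def encapsulation_efficiency(f_intact: float, f_lysed: float) -> float:
    """Percent of nucleic acid encapsulated in the nanoparticles.

    f_intact: fluorescence of the intact sample (dye reaches only free NA)
    f_lysed:  fluorescence after detergent lysis (dye reaches all NA)
    """
    free_fraction = f_intact / f_lysed
    return (1.0 - free_fraction) * 100.0

# Illustrative readings: if only 10% of total fluorescence comes from
# unencapsulated nucleic acid, the encapsulation efficiency is ~90%,
# consistent with the loading efficiency reported in the abstract.
ee = encapsulation_efficiency(120.0, 1200.0)
print(f"Encapsulation efficiency: {ee:.1f}%")
```

In practice both readings are blank-corrected and taken against a standard curve; this sketch keeps only the core ratio.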

Keywords: CRISPR, nanoparticles, brain diseases, administration routes

Procedia PDF Downloads 72
35 Texture Characteristics and Depositional Environment of the Lower Mahi River Sediment, Mainland Gujarat, India

Authors: Shazi Farooqui, Anupam Sharma

Abstract:

The Mahi River (~600 km long) is an important west-flowing river of central India. It originates in Madhya Pradesh, flows NW into Rajasthan, crosses southern Rajasthan, then enters Gujarat and finally debouches into the Gulf of Cambay. In Gujarat it flows through all four geomorphic zones, i.e. the eastern upland zone, the shallow buried piedmont zone, the alluvial zone and the coastal zone. In its lower reaches, and particularly where it flows under the coastal regime, it provides an opportunity to study (1) land-sea interaction and the role of relative sea-level changes, (2) coastal/estuarine geological processes, and (3) landscape evolution in marginal areas. The Late Quaternary deposits of mainland Gujarat have been studied in detail by Chamyal and his group at MS University of Baroda, who established that the 30-35 m thick sediment package of mainland Gujarat comprises marine, fluvial and aeolian sediments. It is also established that in the estuarine zone the upper few metres of the sediment package are marine in nature; however, their thickness, character and depositional environment, including the role of climate and tectonics, are still not clearly defined. To address some of these questions, in the present study a 17 m subsurface sediment core was retrieved from the estuarine zone of the Mahi River basin. Multiproxy studies, including textural (grain-size) analysis, loss on ignition (LOI), bulk and clay mineralogy, and geochemical analysis, have been carried out. Across the sedimentary sequence the grain size varies largely from coarse sand to clay; a solitary gravel bed is also present. The lower part (depth 9-17 m) is composed of sub-equal proportions of sand and silt. These sediments have a bimodal, leptokurtic distribution and were deposited as alternating sand-silt packages, probably indicating flood deposits.
Relatively low moisture (1.8%) and organic carbon (2.4%) with increased carbonate values (12%) indicate that conditions must have remained oxidizing. The middle part (depth 6-9 m) has a 1 m thick gravel bed at its base, overlain by coarse to very fine sand in a fining-upward sequence. The presence of the gravel bed suggests some kind of tectonic activity resulting in a change in base level, or enhanced precipitation in the catchment region. The upper part (depth 0-6 m; the top of the sequence) is composed mainly of fine sand- to silt-sized grains with appreciable clay content. The sediment of this part is unimodal and very leptokurtic in nature, suggesting wave and winnowing processes and deposition in a low-energy suspension environment. This part has relatively high moisture (2.1%) and organic carbon (2.7%) with decreased carbonate content (4.2%), indicating a change in the depositional environment, probably to estuarine conditions. The presence of chlorite along with smectite further supports a significant marine contribution to the formation of the upper part of the sequence.
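Terms such as "leptokurtic" and "very leptokurtic" in grain-size studies are conventionally assigned from Folk and Ward (1957) graphical statistics computed on phi-scale percentiles of the cumulative grain-size curve. A sketch of those formulas, using illustrative percentile values rather than data from the Mahi core:

```python
def folk_ward(p5, p16, p25, p50, p75, p84, p95):
    """Folk & Ward graphical statistics from phi-scale percentiles.

    Returns (graphic mean, inclusive graphic standard deviation (sorting),
    inclusive graphic skewness, graphic kurtosis).
    """
    mean = (p16 + p50 + p84) / 3
    sorting = (p84 - p16) / 4 + (p95 - p5) / 6.6
    skewness = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
                + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))
    kurtosis = (p95 - p5) / (2.44 * (p75 - p25))
    return mean, sorting, skewness, kurtosis

# Illustrative phi percentiles for a peaked distribution whose tails are
# spread relative to its centre. In the Folk & Ward classification,
# kurtosis > 1.11 is leptokurtic and > 1.50 is very leptokurtic.
_, _, _, k = folk_ward(1.0, 2.0, 2.5, 3.0, 3.4, 3.9, 5.5)
print("very leptokurtic" if k > 1.50 else
      "leptokurtic" if k > 1.11 else "meso-/platykurtic")
```

Modality (unimodal vs. bimodal), by contrast, is read directly from peaks in the frequency curve rather than from these summary statistics.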

Keywords: grain size, statistical analysis, clay minerals, late quaternary, LOI

Procedia PDF Downloads 162