Search results for: optical forces
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2927

257 The Concept of Dharma under Hindu, Buddhist and Sikh Religions: A Comparative Analysis

Authors: Venkateswarlu Kappara

Abstract:

The term ‘Dharma’ is complex and ubiquitous; it has no exact equivalent in English and was initially applied to the Aryans. In the Rig Veda it appears in a number of places with different meanings. The word Dharma comes from the root word ‘dhr’ (Dhri – Dharayatetiiti Dharmaha). The principles of Dharma are all-pervading, and its closest English synonym is ‘righteousness.’ The holy book Mahabharata says that Dharma destroys those who destroy it and protects those who protect it; Dharma might be overshadowed now and then by evil forces, but in the end Dharma always triumphs. This line embodies the eternal victory of good over evil. In the Mahabharata, Lord Krishna says that Dharma upholds both this-worldly and other-worldly affairs, and the Rig Veda says, ‘O Indra! Lead us on the path of Rta, on the right path over all evils.’ For Buddhists, Dharma most often means the body of teachings expounded by the Buddha. The Dharma is one of the Three Jewels (Tri Ratnas) of Buddhism under which followers take refuge: the Buddha, meaning the mind’s perfection or enlightenment; the Dharma, meaning the teachings and methods of the Buddha; and the Sangha, meaning those awakened people who provide guidance and support to followers. The Buddha denies a separate, permanent ‘I’ and accepts suffering (Dukkha), change or impermanence (Anicca) and not-self (Anatta). In the Buddhist scriptures, Dharma has a variety of meanings, including ‘phenomenon,’ ‘nature’ and ‘characteristic.’ For Sikhs, the word Dharma means the ‘path of righteousness,’ and the Sikh scriptures attempt an exposition of Dharma. The main holy scripture of the Sikh religion is the Guru Granth Sahib. The faithful are fully bound to do whatever Dharma asks of them; such is the name of the Immaculate Lord, and only one who has faith comes to know such a state of mind. The righteous judge of Dharma, by the Hukam of God’s command, sits and administers true justice.
From Dharma flow wealth and pleasure. The study indicates that in the Sikh religion Dharma is the path of righteousness; in Buddhism, the mind’s perfection or enlightenment; and in Hinduism, non-violence, purity, truth, control of the senses and not coveting the property of others. The comparative study implies that all three religions treat Dharma as serving the welfare of mankind. The methodology adopted is theoretical, analytical and comparative. The study indicates how far Indian philosophical systems influence present circumstances and how far the present system is incompatible with the ancient philosophical systems. A tentative generalization would be that the present system, largely shaped by British governance, may not fully reflect the ancient norms; however, the mental make-up continues to be influenced by the ancient philosophical systems.

Keywords: Dharma, Dukkha (suffering), Rakshati, righteous

Procedia PDF Downloads 164
256 Review on Recent Dynamics and Constraints of Affordable Housing Provision in Nigeria: A Case of Growing Economic Precarity

Authors: Ikenna Stephen Ezennia, Sebnem Onal Hoscara

Abstract:

Successive governments in Nigeria face the pressing problem of how to house an ever-expanding urban population, chiefly low-income earners. Housing and affordability present a complex challenge for these governments, as the commodification of housing links it inextricably to markets and capital flows, thereby placing it at the center of the government’s agenda. Nevertheless, the provision of decent and affordable housing for average Nigerians has remained an illusion, despite copious schemes, policies and programs initiated and carried out by successive governments. This phenomenon has also been observed in many African countries, largely as a result of economic unpredictability, lack of housing finance, insecurity and other factors peculiar to a struggling economy. This study reviews recent dynamics and factors challenging the provision and development of affordable housing for the low-income urban populace of Nigeria. The aim of the study is thus to present a comprehensive approach for understanding recent trends in the provision of affordable housing for Nigerians. The approach is based on a new research paradigm, transdisciplinarity: a form of inquiry that crosses the boundaries of different disciplines. The review therefore takes a retrospective look at the various housing development programs, schemes and policies of successive Nigerian governments within the last few decades and examines recent efforts geared towards eradicating the problems of housing delivery. Sources of data included relevant English-language articles and the results of a literature search of Elsevier ScienceDirect, ISI Web of Knowledge, ProQuest Central, Scopus, and Google Scholar.
The findings reveal that factors such as rapid urbanization, inadequate planning and land-use control, lack of adequate and favorable finance, high prices of land and building materials, harassment of developers by youths and touts, poor urban infrastructure, multiple taxation, and risk sharing are the major hindrances to adequate housing delivery. The results show that the majority of Nigeria’s affordable housing schemes, programs and policies are poorly implemented and abandoned without proper coordination. Consequently, the study concludes that affordable housing delivery strategies in Nigeria epitomize the lip-service politics of successive governments, and that the current trend of leaving housing provision to the vagaries of market forces cannot be expected to support affordable housing, especially for the low-income urban populace.

Keywords: affordable housing, housing delivery, national housing policy, urban poor

Procedia PDF Downloads 215
255 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition

Authors: M. Beusink, E. W. C. Coenen

Abstract:

The increased application of novel structural materials, such as high-grade asphalt, concrete and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path. Moreover, it is also significantly influenced by the microstructure of the material, e.g. embedded fibers, voids and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied over a wide range of engineering cases, e.g. cases involving first-order and second-order continua, thin shells and cohesive zone models. Most of these homogenization methods rely on a Representative Volume Element (RVE), which models the relevant microstructural details in a confined volume. Through kinematic constraints or boundary conditions, an RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of an RVE gives insight into fine-scale phenomena such as microstructural damage and its evolution. Several authors have reported that the type of boundary conditions applied to the RVE affects the resulting homogenized stress-strain response; as a consequence, dedicated boundary conditions have been proposed to deal appropriately with this concern. For the specific case of a planar assumption for the analyzed structure, e.g. plane strain, axisymmetric or plane stress, this assumption needs to be addressed consistently at all considered scales. Although many multiscale studies employ a planar condition, its impact on the multiscale solution has not been explicitly investigated.
This work therefore focuses on the influence of the planar assumption in multiscale modeling. In particular, the plane stress case is highlighted by proposing three different implementation strategies compatible with a first-order computational homogenization framework. The first method applies classical plane stress theory at the microscale, whereas the second assumes a generalized plane stress condition at the RVE level. The third method applies the plane stress condition at the macroscale by requiring that the resulting macroscopic out-of-plane forces equal zero. These strategies are assessed through a numerical study of a thin-walled structure, and the resulting effective macroscale stress-strain responses are compared. It is shown that there is a clear influence of the length scale at which the planar condition is applied.
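The macroscale plane-stress enforcement described above (requiring zero out-of-plane stress) can be illustrated, at the level of a single effective stiffness, by static condensation of a 3-D stiffness matrix. The sketch below is illustrative only and not the authors' implementation; it assumes an isotropic effective medium and standard Voigt notation.

```python
import numpy as np

def isotropic_stiffness_3d(E, nu):
    """6x6 isotropic stiffness in Voigt notation (xx, yy, zz, yz, xz, xy)."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    C = np.zeros((6, 6))
    C[:3, :3] = lam
    C[0, 0] = C[1, 1] = C[2, 2] = lam + 2 * mu
    C[3, 3] = C[4, 4] = C[5, 5] = mu
    return C

def condense_plane_stress(C):
    """Statically condense the out-of-plane components (zero zz, yz, xz stress),
    leaving a 3x3 stiffness for the in-plane components (xx, yy, xy)."""
    ip = [0, 1, 5]   # in-plane Voigt indices
    op = [2, 3, 4]   # out-of-plane indices whose stresses are forced to zero
    Cii = C[np.ix_(ip, ip)]
    Cio = C[np.ix_(ip, op)]
    Coi = C[np.ix_(op, ip)]
    Coo = C[np.ix_(op, op)]
    return Cii - Cio @ np.linalg.solve(Coo, Coi)
```

For an isotropic material the condensed matrix reproduces the classical plane-stress stiffness E/(1-nu^2) * [[1, nu, 0], [nu, 1, 0], [0, 0, (1-nu)/2]], which provides a quick consistency check of the condensation.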

Keywords: first-order computational homogenization, planar analysis, multiscale, microstructures

Procedia PDF Downloads 232
254 High Strength, High Toughness Polyhydroxybutyrate-Co-Valerate Based Biocomposites

Authors: S. Z. A. Zaidi, A. Crosky

Abstract:

Biocomposites are a field that has gained much scientific attention due to the current substantial consumption of non-renewable resources and the environmentally harmful disposal methods required for traditional polymer composites. Research on natural fiber reinforced polyhydroxyalkanoates (PHAs) has gained considerable momentum over the past decade, yet there is little work on PHAs reinforced with unidirectional (UD) natural fibers and little work on using epoxidized natural rubber (ENR) as a toughening agent for PHA-based biocomposites. In this work, we prepared polyhydroxybutyrate-co-valerate (PHBV) biocomposites reinforced with UD 30 wt.% flax fibers and evaluated the use of ENR with 50% epoxidation (ENR50) as a toughening agent for PHBV biocomposites. Quasi-unidirectional flax/PHBV composites were prepared by hand layup and powder impregnation followed by compression molding. The toughening agents, polybutylene adipate-co-terephthalate (PBAT) and ENR50, were cryogenically ground into powder and mechanically mixed with the main PHBV matrix to maintain the powder impregnation process. The tensile, flexural and impact properties of the biocomposites were measured, and the morphology of the composites was examined using optical microscopy (OM) and scanning electron microscopy (SEM). The UD biocomposites showed exceptionally high mechanical properties compared with previous results obtained using only short fibers. The improved tensile and flexural properties were attributed to the continuous nature of the fiber reinforcement and the increased proportion of fibers in the loading direction. The improved impact properties were attributed to a larger surface area for fiber-matrix debonding and for the subsequent sliding and fiber pull-out mechanisms to act on, allowing more energy to be absorbed.
Coating cryogenically ground ENR50 particles with PHBV powder successfully inhibits the self-healing nature of ENR50, preventing the particles from coalescing and overcoming problems in mechanical mixing, compounding and molding. Cryogenic grinding, followed by powder impregnation and subsequent compression molding, is an effective route to the production of high-mechanical-property biocomposites based on renewable resources for high-obsolescence applications such as plastic casings for consumer electronics.

Keywords: natural fibers, natural rubber, polyhydroxyalkanoates, unidirectional

Procedia PDF Downloads 285
253 Influence of Thermal Ageing on Microstructural Features and Mechanical Properties of Reduced Activation Ferritic/Martensitic Grades

Authors: Athina Puype, Lorenzo Malerba, Nico De Wispelaere, Roumen Petrov, Jilt Sietsma

Abstract:

Reduced Activation Ferritic/Martensitic (RAFM) steels like EUROFER are of interest for first wall application in the future demonstration (DEMO) fusion reactor. Depending on the final design codes for the DEMO reactor, the first wall material will have to function in low-temperature or high-temperature mode, i.e. around 250-300°C or above 550°C, respectively. However, the use of RAFM steels is limited to temperatures of about 550°C. For the low-temperature application, the material suffers from irradiation embrittlement due to a shift of the ductile-to-brittle transition temperature (DBTT) towards higher temperatures upon irradiation. The high-temperature response of the material is equally insufficient for long-term use in fusion reactors, due to the instability of the matrix phase and coarsening of the precipitates under prolonged high-temperature exposure. The objective of this study is to investigate the influence of thermal ageing for 1000 hrs and 4000 hrs on the microstructural features and mechanical properties of lab-cast EUROFER. Additionally, the ageing behavior of the lab-cast EUROFER is compared with that of standard EUROFER97-2 and T91. The microstructural features were investigated with light optical microscopy (LOM), electron back-scattered diffraction (EBSD) and transmission electron microscopy (TEM). Additionally, hardness measurements, tensile tests at elevated temperatures and Charpy V-notch impact testing of KLST-type MCVN specimens were performed to study the microstructural features and mechanical properties of the four different F/M grades, i.e. T91, EUROFER97-2 and two lab-cast EUROFER grades. After ageing for 1000 hrs, the microstructures exhibit similar martensitic block sizes, independent of the grain size before ageing. With respect to the initially coarser microstructures, the aged microstructures displayed a dislocation structure partially fragmented by polygonization.
On the other hand, the initially finer microstructures tend to be more stable up to 1000 hrs, resulting in similar grain sizes for the four different steels. Increasing the ageing time to 4000 hrs resulted in an increase of lath thickness and coarsening of M23C6 precipitates, leading to a deterioration of tensile properties.
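Isothermal ageing exposures such as the 1000 hrs and 4000 hrs used here are often compared through a time-temperature parameter. A generic sketch, not the method of this study, using the Hollomon-Jaffe (tempering) parameter with the commonly assumed constant C of about 20 for steels:

```python
import math

def hollomon_jaffe(T_kelvin, t_hours, C=20.0):
    """Hollomon-Jaffe parameter (in kK), commonly used to rank
    time-temperature exposures of tempered/aged steels; C ~ 20 is a
    typical assumed constant."""
    return T_kelvin * (C + math.log10(t_hours)) / 1000.0

def equivalent_hours(T1_K, t1_h, T2_K, C=20.0):
    """Hours at T2 giving the same parameter value as t1 hours at T1."""
    return 10 ** (T1_K * (C + math.log10(t1_h)) / T2_K - C)
```

At a fixed temperature the parameter grows with the logarithm of time, so quadrupling the ageing time from 1000 hrs to 4000 hrs yields a modest but real increase in equivalent exposure, consistent with the observed lath coarsening.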

Keywords: ageing experiments, EUROFER, ferritic/martensitic steels, mechanical properties, microstructure, T91

Procedia PDF Downloads 259
252 Different Processing Methods to Obtain a Carbon Composite Element for Cycling

Authors: Maria Fonseca, Ana Branco, Joao Graca, Rui Mendes, Pedro Mimoso

Abstract:

The present work is focused on the production of a carbon composite element for cycling through different techniques, namely blow-molding and high-pressure resin transfer molding (HP-RTM). The main objective of this work is to compare both processes for producing carbon composite elements for the cycling industry. It is well known that carbon composite components for cycling are produced mainly through blow-molding; however, this technique depends strongly on manual labour, resulting in a time-consuming production process. By comparison, HP-RTM offers a more automated process, which should lead to higher production rates. Nevertheless, the elements produced by both techniques must be compared in order to assess whether the final products comply with the required standards of the industry. The main difference between these techniques lies in the material used. Blow-molding uses carbon prepreg (carbon fibres pre-impregnated with a resin system); the material is laid up by hand, piece by piece, on a mould or a hard male tool and then cured at high temperature. In the HP-RTM technique, by contrast, dry carbon fibres are placed in a mould and resin is injected at high pressure. After research into the best material systems (prepregs and braids) and suppliers, an element similar to a handlebar was designed for construction. The next step was to perform FEM simulations to determine the best layup of the composite material. The simulations were done for the prepreg material, and the obtained layup was transposed to the braids. The material selected for the blow-molding technique was a prepreg with T700 carbon fibre (24K) and an epoxy resin system. For HP-RTM, carbon fibre elastic UD tubes and ±45° braids were used, with both 3K and 6K filaments per tow, and the resin system was likewise an epoxy.
After the simulations for the prepreg material, the optimized layup was [45°, -45°, 45°, -45°, 0°, 0°]. For HP-RTM, the transposed layup was [±45° (6K); 0° (6K); partial ±45° (6K); partial ±45° (6K); ±45° (3K); ±45° (3K)]. The mechanical tests showed that both elements can withstand the maximum load (in this case, 1000 N); however, the one produced through blow-molding can support higher loads (≈1300 N against 1100 N for HP-RTM). Regarding the fibre volume fraction (FVF), the HP-RTM element has a slightly higher value (>61% compared to 59% for the blow-molding technique). Optical microscopy showed that both elements have a low void content. In conclusion, the elements produced using HP-RTM are comparable to those produced through blow-molding, both in mechanical testing and in visual aspect. Nevertheless, there is still room for improvement in the HP-RTM elements, since the layup of the braids and UD tubes could be optimized.
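The reported fibre volume fractions (>61% for HP-RTM versus 59% for blow-molding) translate into a small difference in the expected longitudinal ply stiffness. As a rough illustration under the rule of mixtures, with assumed handbook-style moduli (not measured in this work) of roughly 230 GPa for a T700-class fibre and 3 GPa for a generic epoxy matrix:

```python
def rule_of_mixtures_E1(Vf, Ef, Em):
    """Longitudinal modulus of a UD ply (Voigt bound / rule of mixtures):
    fibre and matrix act in parallel along the fibre direction."""
    return Vf * Ef + (1.0 - Vf) * Em

# Assumed illustrative values, in GPa:
Ef, Em = 230.0, 3.0
E_hprtm = rule_of_mixtures_E1(0.61, Ef, Em)   # ~141.5 GPa
E_blow  = rule_of_mixtures_E1(0.59, Ef, Em)   # ~136.9 GPa
```

The roughly 3% stiffness gain from the higher FVF is consistent with the HP-RTM element remaining competitive despite its lower failure load.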

Keywords: HP-RTM, carbon composites, cycling, FEM

Procedia PDF Downloads 130
251 Computational Code for Solving the Navier-Stokes Equations on Unstructured Meshes Applied to the Leading Edge of the Brazilian Hypersonic Scramjet 14-X

Authors: Jayme R. T. Silva, Paulo G. P. Toro, Angelo Passaro, Giannino P. Camillo, Antonio C. Oliveira

Abstract:

An in-house C++ code has been developed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies (Brazil) to estimate the aerothermodynamic properties around the Hypersonic Vehicle Integrated to the Scramjet. In the future, this code will be applied to the design of the Brazilian Scramjet Technological Demonstrator 14-X B. The first step towards accomplishing this objective is to apply the in-house C++ code to the leading edge of a flat plate, simulating the leading edge of the 14-X Hypersonic Vehicle and making it possible to analyze the wave phenomena of the oblique shock and the boundary layer. The development of modern hypersonic space vehicles requires knowledge of the characteristics of hypersonic flows in the vicinity of the leading edge of lifting surfaces. The strong interaction between a shock wave and a boundary layer in a high supersonic, Mach number 4 viscous flow close to the leading edge of the plate, considering the no-slip condition, is numerically investigated; the small slip region is neglected. The study consists of solving the fluid flow equations on unstructured meshes applying the SIMPLE algorithm within the Finite Volume Method. Unstructured meshes are generated by the in-house software ‘Modeler’, developed at the Virtual Engineering Laboratory of the Institute of Advanced Studies; initially developed for finite element problems, it was adapted in this work to the resolution of the Navier-Stokes equations based on the SIMPLE pressure-correction scheme for all-speed flows in a finite volume formulation. The in-house C++ code is based on the two-dimensional Navier-Stokes equations for unsteady flow, with no body forces, no volumetric heating, and no mass diffusion. Air is considered a calorically perfect gas, with constant Prandtl number and Sutherland's law for the viscosity.
Solutions of the flat plate problem at Mach number 4 include pressure, temperature, density and velocity profiles as well as 2-D contours. The boundary layer thickness, boundary conditions, and mesh configurations are also presented. The same problem has been solved with an academic license of the software Ansys Fluent and with another in-house C++ code, which solves the fluid flow equations on structured meshes applying the MacCormack method within the Finite Difference Method, and the results will be compared.
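The abstract states that the code models air as a calorically perfect gas with Sutherland's law for the viscosity. A minimal sketch of that viscosity model, using the standard reference constants for air (assumed here, since the abstract does not list them):

```python
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity [Pa s] from Sutherland's law,
    mu = mu_ref * (T/T_ref)^1.5 * (T_ref + S) / (T + S).
    The defaults are the commonly tabulated reference constants for air."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)
```

At the reference temperature the law returns mu_ref by construction, and viscosity grows monotonically with temperature, which is what drives the thickening of the boundary layer near the heated leading edge.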

Keywords: boundary layer, scramjet, SIMPLE algorithm, shock wave

Procedia PDF Downloads 484
250 Queer Anti-Urbanism: An Exploration of Queer Space Through Design

Authors: William Creighton, Jan Smitheram

Abstract:

Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In doing so, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring’s work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection and the desire to destroy the city towards a mode of queer critique that counters the normative ideals of homonormative and metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame in which those who do not fit are persecuted or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a “design as research” methodology, the design outputs become a vehicle for asking how we might live otherwise in architectural space. The design-as-research methodology, a non-linear, iterative process of questioning, designing and reflecting, establishes itself here through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship: the project began by exploring the relations between body and body, body and known others, and body and unknown others. Moving through increasing scales was not meant to privilege the objective, the public and the large scale; instead, ‘intra-scaling’ acts as a tool to rethink how scale reproduces normative ideas of the identity of space. There was a queering of scale.
Through this approach, the result was an installation that brings two people together to co-author space; the installation distorts the sensory experience and forces a more intimate and interconnected experience, challenging our socialized proxemics: knees might touch. To queer the home, the installation was used as a drawing device: a tool to study and challenge spatial perception and drawing convention, and a way to process practical information about the site and the existing house; the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through “private” and “public” to support kinship through communal labour, queer relationality and mooring. The resulting design works to set bodies adrift in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way to consider place, space and belonging. The projects lend themselves to a queer relationality and interdependence by making spaces that support the unsettled and out-of-place; but is it queer enough?

Keywords: queer, queer anti-urbanism, design as research, design

Procedia PDF Downloads 171
249 Unmanned Aerial System Development for the Remote Reflectance Sensing Using Above-Water Radiometers

Authors: Sunghun Jung, Wonkook Kim

Abstract:

Because satellites and aircraft are difficult to deploy, conventional ocean color remote sensing has the disadvantage that images of desired places at desired times are hard to obtain. This makes it difficult to capture anomalies such as red tide occurrences, which require immediate observation, and to understand phenomena such as the resuspension-precipitation process of suspended solids and the spread of low-salinity water originating in coastal areas. For the remote sensing reflectance of seawater, above-water radiometers (AWRs) have been used either by carrying portable AWRs on a ship or by installing them at fixed observation points such as the Ieodo ocean research station and the Socheongcho base. However, measuring remote reflectance in various seawater environments at various times is costly, and it is not even possible to measure at the desired frequency in the desired sea area at the desired time. Stationary observation has the advantage that data are obtained continuously, but the disadvantage that data from various sea areas cannot be obtained. An unmanned aerial system (UAS) including vertical takeoff and landing (VTOL) unmanned aerial vehicles (UAVs) can instantly capture various marine phenomena occurring along the coast, since it can move, hover at one location, and acquire data of the desired form at high resolution. Remotely estimating seawater constituents requires an ultra-spectral sensor, and calculating the light reflected from the sea surface while accounting for the sun's incident light requires a total of three sensors to be installed on the UAV.
The remote sensing reflectance of seawater is the most basic optical property for remotely estimating color components in seawater; from it, the chlorophyll concentration, the suspended solids concentration, and the dissolved organic matter can be estimated remotely. Estimating seawater physics from the remote sensing reflectance requires developing algorithms from accumulated seawater reflectivity data under various seawater and atmospheric conditions. A UAS with three AWRs was developed for remote reflectance sensing over the sea surface. Throughout the paper, we explain the details of each UAS component, the system operation scenarios, and the simulation and experiment results. The UAS consists of a UAV, a solar tracker, a transmitter, a ground control station (GCS), three AWRs, and two gimbals.
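The three-sensor arrangement (total water-surface radiance, sky radiance, and downwelling irradiance) is typically reduced to remote sensing reflectance through the standard above-water relation. A sketch of that reduction, with the sea-surface reflectance factor rho of about 0.028 assumed here (its actual value depends on wind speed and viewing geometry, and the paper does not state which value is used):

```python
def remote_sensing_reflectance(Lt, Lsky, Es, rho=0.028):
    """Rrs [1/sr]: the water-leaving radiance (Lt - rho*Lsky) normalized
    by the downwelling irradiance Es. Lt is the total radiance measured
    above the surface, Lsky the sky radiance, and rho the assumed
    sea-surface reflectance factor."""
    return (Lt - rho * Lsky) / Es
```

Constituent retrievals (chlorophyll, suspended solids, dissolved organics) then operate on the Rrs spectrum rather than on the raw sensor radiances.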

Keywords: above-water radiometers (AWR), ground control station (GCS), unmanned aerial system (UAS), unmanned aerial vehicle (UAV)

Procedia PDF Downloads 159
248 An Exploratory Study in Nursing Education: Factors Influencing Nursing Students’ Acceptance of Mobile Learning

Authors: R. Abdulrahman, A. Eardley, A. Soliman

Abstract:

The proliferation of mobile learning (m-learning) has played a vital role in the rapidly growing electronic learning market. This relatively new technology can help to encourage learning and to aid knowledge transfer in a number of areas by familiarizing students with innovative information and communications technologies (ICT). M-learning plays a substantial role in the deployment of learning methods for nursing students by using the Internet and portable devices to access learning resources ‘anytime and anywhere’. However, acceptance of m-learning by students is critical to the successful use of m-learning systems, so there is a need to study the factors that influence students’ intention to use m-learning. This paper addresses this issue. It outlines the outcomes of a study that evaluates the unified theory of acceptance and use of technology (UTAUT) model as applied to user acceptance of m-learning activity in nurse education. The model integrates the significant components of eight prominent user acceptance models, thereby providing a standard measure with core determinants of user behavioural intention. The research model extends the UTAUT in the context of m-learning acceptance by modifying the original structure of UTAUT and adding individual innovativeness (II) and quality of service (QoS). The paper goes on to add the factors of previous experience (of using mobile devices in similar applications) and the nursing students’ readiness (to use the technology) as influences on their behavioural intention to use m-learning. The study uses convenience sampling, with student volunteers as participants, to collect numerical data. A quantitative method of data collection was selected, involving an online survey questionnaire containing 33 questions that measure the six constructs using a 5-point Likert scale.
A total of 42 respondents participated, all from the Nursing Institute at the Armed Forces Hospital in Saudi Arabia. The gathered data were then tested against the research model using structural equation modelling (SEM), including confirmatory factor analysis (CFA). The CFA results show that the UTAUT model can predict student behavioural intention and can be adapted to the specific m-learning activities. It also demonstrates satisfactory, dependable and valid scales for the model constructs, suggesting further analysis to confirm the model as a valuable instrument for evaluating user acceptance of m-learning activity.
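Before fitting a CFA/SEM model, the internal consistency of each Likert-scale construct is commonly checked with Cronbach's alpha. A generic sketch (not the authors' code) for an n-respondents by k-items score matrix:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) Likert score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly correlated items give alpha of exactly 1, and values around 0.7 or higher are conventionally read as acceptable reliability for a construct scale.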

Keywords: mobile learning, nursing institute students’ acceptance of m-learning activity in Saudi Arabia, unified theory of acceptance and use of technology model (UTAUT), structural equation modelling (SEM)

Procedia PDF Downloads 182
247 Social Mobility and Urbanization: Case Study of Well-Educated Urban Migrant's Life Experience in the Era of China's New Urbanization Project

Authors: Xu Heng

Abstract:

Since the financial crisis of 2008 and the resulting Great Recession, the number of China’s unemployed college graduates reached over 500 thousand in 2011. Following this severe situation in college graduate employment, there has been growing public concern about college graduates, especially those from less-privileged backgrounds, and their working and living conditions in the metropolises. Previous studies indicate that well-educated urban migrants with less-privileged backgrounds tend to obtain temporary occupations with lower financial income and lower social status; some scholars describe these vulnerable young migrants as the ‘Ant Tribe’. However, since the implementation of the new urbanization project, together with the relaxed Hukou system and the acceleration of socio-economic development in middle and small cities, some researchers have described well-educated urban migrants’ situation and prospects of upward social mobility in urban areas in an overly optimistic light. In order to shed more light on the underlying tensions encountered by China’s well-educated urban migrants in their pursuit of upward social mobility, this research focuses on the life trajectories of 10 well-educated urban migrants between their university-to-work transition and their current situation. All selected migrants are young adults with rural backgrounds who received higher education qualifications from first-tier universities in Wuhan City (capital of Hubei Province). Drawing on in-depth interviews with the 10 participants and inspired by Lahire’s theory of the plural actor, this study yields the following preliminary findings: 1) For those migrants who move to super-mega cities (i.e., Beijing, Shenzhen, Guangzhou) or stay in Wuhan after graduation, inadequate economic and social capital is the structural factor that negatively influences their living conditions and further shapes their plans for career development.
The incompatibility between the sub-fields of urban life and the dispositions generated in their early socialization is the main cause of their marginalized position in the metropolises. 2) For those migrants who move back to middle and small cities in their hometown regions, the inconsistency between the dispositions generated during college life and the organizational habitus of the workplace is the main cause of their sense of being a ‘fish out of water’, even though they have obtained stable occupations in local government or state-owned enterprises. On the whole, this research illuminates how underlying structural forces shape well-educated urban migrants’ life trajectories and hinder their upward social mobility in the context of the new urbanization project.

Keywords: life trajectory, social mobility, urbanization, well-educated urban migrant

Procedia PDF Downloads 211
246 Valorization of Mineralogical Byproduct TiO₂ Using Photocatalytic Degradation of Organo-Sulfur Industrial Effluent

Authors: Harish Kuruva, Vedasri Bai Khavala, Tiju Thomas, K. Murugan, B. S. Murty

Abstract:

Industries are growing rapidly to boost national economies, but wastewater treatment remains their biggest problem. Releasing this wastewater directly into rivers is harmful to human life and a threat to aquatic life. Industrial effluents contain many dissolved solids, organic/inorganic compounds, salts, toxic metals, etc. Phenols, pesticides, dioxins, herbicides, pharmaceuticals, and textile dyes are common classes of industrial effluents and are particularly challenging to degrade in an eco-friendly manner. Many advanced techniques, such as electrochemical treatment, oxidation processes, and valorization, have been applied to industrial wastewater treatment, but they are not cost-effective. Industrial effluent degradation is complicated compared with that of commercially available pollutants (dyes) like methylene blue, methyl orange, rhodamine B, etc. TiO₂ is one of the most widely used photocatalysts; it can degrade organic compounds using sunlight and the moisture available in the environment (organic compounds are converted to CO₂ and H₂O). TiO₂ is widely studied in photocatalysis because it is low-cost, non-toxic, highly available, and chemically and physically stable in the atmosphere. This study mainly focused on valorizing the mineralogical product TiO₂ (IREL, India). This mineralogical-grade TiO₂ was characterized, and its structural and photocatalytic properties (industrial effluent degradation) were compared with those of the commercially available Degussa P-25 TiO₂. It was found that this mineralogical TiO₂ has excellent photocatalytic properties (particle shape - spherical, size - 30±5 nm, surface area - 98.19 m²/g, bandgap - 3.2 eV, phase - 95% anatase and 5% rutile). The industrial effluent was characterized by TDS (total dissolved solids), ICP-OES (inductively coupled plasma - optical emission spectroscopy), CHNS (carbon, hydrogen, nitrogen, and sulfur) analysis, and FT-IR (Fourier-transform infrared spectroscopy).
It was observed that the effluent contains high sulfur (S=11.37±0.15%), organic compounds (C=4±0.1%, H=70.25±0.1%, N=10±0.1%), heavy metals, and other dissolved solids (60 g/L). The organo-sulfur industrial effluent was then degraded by photocatalysis with the industrial mineralogical product TiO₂. In this study, the effluent pH (2.5 to 10) and catalyst loading (50 to 150 mg) were varied, while the effluent concentration (0.5 Abs) and light exposure time (2 h) were kept constant. The best degradation, about 80% of the industrial effluent, was achieved at pH 5 with 150 mg of TiO₂. The FT-IR and CHNS results confirmed that the sulfur and organic compounds were degraded.
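The reported efficiency is a simple absorbance ratio; a minimal sketch, assuming Beer-Lambert linearity between absorbance and concentration (the final absorbance value below is hypothetical, not the authors' data):

```python
def degradation_efficiency(a0, at):
    """Photocatalytic degradation efficiency (%) from initial and final
    absorbance of the effluent at its absorption maximum, assuming
    absorbance is proportional to concentration (Beer-Lambert)."""
    return (a0 - at) / a0 * 100.0

# Initial absorbance 0.5 (held constant in the study); a hypothetical
# reading of 0.1 after 2 h of light exposure gives the reported ~80%.
eff = degradation_efficiency(0.5, 0.1)
print(f"degradation: {eff:.0f}%")  # degradation: 80%
```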

Keywords: wastewater treatment, industrial mineralogical product TiO₂, photocatalysis, organo-sulfur industrial effluent

Procedia PDF Downloads 111
245 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments face challenges which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limit on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images.
Similar training sessions with improved membrane image quality, in which clear membrane outlines showed the boundaries of each cell, proportionally improved the nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
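The normalization step described in the Methods can be sketched as follows; the abstract does not specify the exact scheme, so this assumes a simple per-slice rescaling to the target mean:

```python
import numpy as np

def normalize_stack(stack, target_mean=0.5):
    """Rescale each image in a confocal z-stack so that its mean pixel
    intensity equals target_mean (one simple normalization scheme; the
    paper does not state which scheme was actually used)."""
    stack = stack.astype(np.float64)
    out = np.empty_like(stack)
    for z, img in enumerate(stack):
        out[z] = img * (target_mean / img.mean())
    return out

# A 20-slice stack of 64x64 random images, matching the described
# 20-image z-stacks (data here is synthetic, not microscope output).
rng = np.random.default_rng(0)
stack = rng.uniform(0, 255, size=(20, 64, 64))
norm = normalize_stack(stack)
print(norm[0].mean())  # ~0.5 for every slice
```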

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 200
244 Synthesis of Methanol through Photocatalytic Conversion of CO₂: A Green Chemistry Approach

Authors: Sankha Chakrabortty, Biswajit Ruj, Parimal Pal

Abstract:

Methanol is one of the most important chemical products and intermediates. It can be used as a solvent, intermediate or raw material for a number of higher-value products, fuels or additives. Over the last decade, global demand for methanol has increased drastically, which compels scientists to produce large amounts of methanol from renewable sources to meet global demand in a sustainable way. Various non-renewable raw materials have been used for the synthesis of methanol on a large scale, which makes the process unsustainable. In these circumstances, photocatalytic conversion of CO₂ into methanol under solar/UV excitation becomes a viable, sustainable production approach which not only addresses the environmental crisis by recycling CO₂ into fuels but also reduces the amount of CO₂ in the atmosphere. Developing such a sustainable approach for CO₂ conversion into methanol still remains a major research challenge compared with conventional, energy-expensive processes. Against this backdrop, the development of environmentally friendly materials such as photocatalysts has gained great importance for methanol synthesis. Scientists in this field are constantly seeking improved photocatalysts to enhance photocatalytic performance. Graphene-based hybrid and composite materials with improved properties could be better nanomaterials for the selective conversion of CO₂ to methanol under visible light (solar energy) or UV light. The present work concerns the synthesis of an improved heterogeneous graphene-based photocatalyst with enhanced catalytic activity and surface area. Graphene with an enhanced surface area is used as a coupling material for copper-loaded titanium oxide to improve electron capture and transport properties, which substantially increases the photoinduced charge transfer and extends the lifetime of the photogenerated charge carriers.
A fast reduction method using H₂ purging was adopted to synthesize the improved graphene, whereas an ultrasonication-based sol-gel method was applied for the preparation of the graphene-coupled, copper-loaded titanium oxide with enhanced properties. The prepared photocatalysts were exhaustively characterized using different characterization techniques. The effects of catalyst dose, CO₂ flow rate, reaction temperature and stirring time on the efficacy of the system, in terms of methanol yield and productivity, were studied. The study showed that the newly synthesized photocatalyst, with its enhanced surface area, sustained a methanol productivity and yield of 0.14 g/(L·h) and 0.04 g/g_cat, respectively, after 3 h of illumination under UV (250 W) at an optimum catalyst dosage of 10 g/L with a 1:2:3 (graphene:TiO₂:Cu) weight ratio.
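The reported productivity and per-catalyst yield figures are mutually consistent, as a quick back-of-the-envelope check shows (the function and variable names here are ours, for illustration):

```python
def methanol_yield_per_gcat(productivity_g_per_Lh, hours, catalyst_g_per_L):
    """Cumulative methanol produced per gram of catalyst implied by a
    volumetric productivity, an illumination time, and a catalyst dosage."""
    return productivity_g_per_Lh * hours / catalyst_g_per_L

# Reported values: 0.14 g/(L*h) over 3 h at 10 g/L catalyst dosage
y = methanol_yield_per_gcat(0.14, 3.0, 10.0)
print(f"{y:.3f} g/g_cat")  # 0.042 g/g_cat, consistent with the reported 0.04
```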

Keywords: renewable energy, CO₂ capture, photocatalytic conversion, methanol

Procedia PDF Downloads 105
243 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂

Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang

Abstract:

CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, cost reduction is crucial, and every part of the CCS chain must contribute. By increasing the heat transfer efficiency during liquefaction of CO₂, a necessary step for, e.g., ship transportation, the costs associated with the process are reduced. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface. The vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, in turn, exposes fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves have a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation, due to its impact on the solid-liquid contact angle. A low surface tension usually results in a low contact angle and hence spreading of the condensed liquid on the surface. CO₂ has a very low surface tension compared to water; however, at temperatures and pressures relevant for CO₂ condensation, its surface tension is comparable to that of organic compounds such as pentane. Dropwise condensation of CO₂ is a completely new field of research, so knowledge of several important parameters, such as contact angle and drop size distribution, must be gained in order to understand the nature of the condensation. A new setup has been built to measure these parameters. The main parts of the experimental setup are a pressure chamber in which the condensation occurs and a high-speed camera.
The process of CO₂ condensation is visually monitored, and one can determine the contact angle, contact angle hysteresis and, hence, the surface adhesion of the liquid. CO₂ condensation on different surfaces can be analysed, e.g., copper, aluminium and stainless steel. The experimental setup is built for accurate measurement of the temperature difference between the surface and the condensing vapour, and for accurate pressure measurement in the vapour. The temperature is measured directly underneath the condensing surface. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the threshold for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards organic non-polar liquids are candidate surface structures for dropwise condensation of CO₂.
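Under a spherical-cap assumption (reasonable for drops small enough that gravitational flattening is negligible), the contact angle follows directly from the droplet height and base radius visible in the camera images; a minimal sketch, not the authors' analysis code:

```python
import math

def contact_angle_deg(height, base_radius):
    """Contact angle of a sessile drop from its height and base radius,
    assuming a spherical-cap profile: theta = 2 * atan(h / r)."""
    return math.degrees(2.0 * math.atan(height / base_radius))

# A hemispherical drop (height equals base radius) gives 90 degrees;
# superlyophobicity requires angles above 150 degrees.
print(contact_angle_deg(1.0, 1.0))   # 90.0
print(contact_angle_deg(3.73, 1.0))  # ~150
```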

Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces

Procedia PDF Downloads 275
242 The Ideal Memory Substitute for Computer Memory Hierarchy

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Computer system components such as the CPU, the controllers, and the operating system work together as a team, and storage or memory is, apart from the processor, the most essential part of this team. The memory and storage system, including processor caches, main memory, and storage, forms the basic storage component of a computer system. The characteristics of the different types of storage are inherent in the design and the technology employed in manufacturing. These characteristics define the speed, compatibility, cost, volatility, and density of the various storage types. Most computers rely on a hierarchy of storage devices for performance. The effective and efficient use of the memory hierarchy is therefore the single most important aspect of computer system design and use. The memory hierarchy is becoming a fundamental performance and energy bottleneck, due to the widening gap between the increasing demands of modern computer applications and the limited performance and energy efficiency provided by traditional memory technologies. With the dramatic development of computer systems, computer storage has had a difficult time keeping up with processor speed. Computer architects therefore face constant challenges in developing high-speed, high-performance computer storage which is energy-efficient, cost-effective and reliable enough to intercept processor requests. It is clear that substantial advances in redesigning the existing physical and logical memory structures to keep up with processor potential are crucial. This research work investigates the importance of the computer memory (storage) hierarchy in the design of computer systems. The constituent storage types of today's hierarchy were investigated, looking at the design technologies and how they affect the memory characteristics: speed, density, stability and cost.
The investigation considered how these characteristics could best be harnessed for the overall efficiency of the computer system. The research revealed that the best single type of storage, which we refer to as the ideal memory, is a logical single physical memory which would combine the best attributes of each memory type that makes up the memory hierarchy. It is a single memory with an access speed as high as that of CPU registers, combined with the highest storage capacity, offering excellent stability in the presence or absence of power, as found in magnetic and optical disks, as opposed to volatile DRAM, and yet with a cost-effectiveness far removed from expensive SRAM. The research suggests that overcoming these barriers may mean that memory manufacturing takes a total departure from present technologies and adopts one that overcomes the challenges associated with traditional memory technologies.
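The performance argument for a hierarchy is commonly quantified via average memory access time (AMAT); a minimal sketch with hypothetical latencies and miss rates (not figures from this study):

```python
def amat(hit_times, miss_rates):
    """Average memory access time for a multi-level hierarchy:
    AMAT = t1 + m1 * (t2 + m2 * (t3 + ...)).
    hit_times: access time of each level, last entry = backing store;
    miss_rates: miss rate of every level except the last."""
    total = hit_times[-1]
    for t, m in zip(reversed(hit_times[:-1]), reversed(miss_rates)):
        total = t + m * total
    return total

# Hypothetical: L1 cache 1 ns / 5% miss, L2 10 ns / 20% miss, DRAM 100 ns
print(amat([1.0, 10.0, 100.0], [0.05, 0.20]))  # 1 + 0.05*(10 + 0.2*100) = 2.5
```

The formula makes the "widening gap" concrete: even a fast cache hierarchy is dominated by the slowest level unless miss rates stay very low.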

Keywords: cache, memory-hierarchy, memory, registers, storage

Procedia PDF Downloads 157
241 Bioleaching of Metals Contained in Spent Catalysts by Acidithiobacillus thiooxidans DSM 26636

Authors: Andrea M. Rivas-Castillo, Marlenne Gómez-Ramirez, Isela Rodríguez-Pozos, Norma G. Rojas-Avelizapa

Abstract:

Spent catalysts are considered hazardous residues of major concern, mainly due to the simultaneous presence of several metals in elevated concentrations. Although hydrometallurgical, pyrometallurgical and chelating-agent methods are available to remove and recover some metals contained in spent catalysts, these procedures generate potentially hazardous wastes and emit harmful gases. Thus, biotechnological treatments are currently gaining importance as a way to avoid the negative impacts of chemical technologies. To this end, diverse microorganisms have been used to assess the removal of metals from spent catalysts, comprising bacteria, archaea and fungi, whose resistance and metal uptake capabilities differ depending on the microorganism tested. Acidophilic sulfur-oxidizing bacteria, namely Acidithiobacillus thiooxidans and Acidithiobacillus ferrooxidans, have been used to investigate the biotreatment and extraction of valuable metals from spent catalysts, as they are able to produce leaching agents such as sulfuric acid and sulfur oxidation intermediates. In the present work, the ability of A. thiooxidans DSM 26636 to bioleach the metals contained in five different spent catalysts was assessed by growing the culture in modified Starkey mineral medium (with elemental sulfur at 1%, w/v) and 1% (w/v) pulp density of each residue for up to 21 days at 30 °C and 150 rpm. Sulfur-oxidizing activity was periodically evaluated by determining the sulfate concentration in the supernatants according to the NMX-k-436-1977 method. The production of sulfuric acid in the supernatants was assessed as well, by titration with 0.5 M NaOH using bromothymol blue as an acid-base indicator, and by measuring pH with a digital potentiometer. In addition, inductively coupled plasma - optical emission spectrometry was used to analyze metal removal from the five spent catalysts by A. thiooxidans DSM 26636.
The results show that, as could be expected, sulfuric acid production is directly related to the decrease in pH and also to the highest metal removal efficiencies. It was observed that Al and Fe are recurrently removed from refinery spent catalysts regardless of their origin and previous usage, although these removals may vary from 9.5 ± 2.2 to 439 ± 3.9 mg/kg for Al, and from 7.13 ± 0.31 to 368.4 ± 47.8 mg/kg for Fe, depending on the spent catalyst tested. Besides, bioleaching of metals like Mg, Ni, and Si was also obtained from automotive spent catalysts, with removals of up to 66 ± 2.2, 6.2 ± 0.07, and 100 ± 2.4 mg/kg, respectively. Hence, the data presented here exhibit the potential of A. thiooxidans DSM 26636 for the simultaneous bioleaching of metals contained in spent catalysts of diverse provenance.
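The titration step converts a NaOH endpoint volume into a sulfuric acid concentration via the 2:1 stoichiometry of the neutralization (H₂SO₄ + 2 NaOH → Na₂SO₄ + 2 H₂O); a minimal sketch with hypothetical volumes (the study specifies only the 0.5 M NaOH titrant):

```python
def h2so4_molarity(naoh_molarity, naoh_ml, sample_ml):
    """Sulfuric acid concentration from an acid-base titration.
    H2SO4 is diprotic, so moles of H2SO4 = moles of NaOH / 2."""
    return naoh_molarity * naoh_ml / (2.0 * sample_ml)

# Hypothetical endpoint: 8 mL of 0.5 M NaOH neutralizes a 10 mL
# supernatant sample.
print(h2so4_molarity(0.5, 8.0, 10.0))  # 0.2 (mol/L)
```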

Keywords: bioleaching, metal removal, spent catalysts, Acidithiobacillus thiooxidans

Procedia PDF Downloads 137
240 Enhanced Photocatalytic Activities of TiO₂/Ag₂O Heterojunction Nanotube Arrays Obtained by Electrochemical Method

Authors: Magdalena Diaka, Paweł Mazierski, Joanna Żebrowska, Michał Winiarski, Tomasz Klimczuk, Adriana Zaleska-Medynska

Abstract:

In recent years, TiO₂ nanotubes have been widely studied due to their unique, highly ordered array structure, unidirectional charge transfer, and higher specific surface area compared to conventional TiO₂ powder. These photoactive materials, in the form of thin layers, can be activated by low-powered, low-cost irradiation sources (such as LEDs) to remove VOCs and microorganisms and to deodorize air streams. This is possible due to their direct growth on a support material and their high surface area, which guarantee enhanced photon absorption together with extensive adsorption of reactant molecules on the photocatalyst surface. TiO₂ nanotubes also exhibit many other attractive properties, such as potential enhancement of electron percolation pathways, light conversion, and ion diffusion at the semiconductor-electrolyte interface. Pure TiO₂ nanotubes have previously been used to remove organic compounds from the gas phase as well as in the water splitting reaction. The major factors limiting the use of TiO₂ nanotubes, which have not been fully overcome, are their relatively large band gap (3-3.2 eV) and the high recombination rate of photogenerated electron-hole pairs. Many different strategies have been proposed to solve this problem; titania nanostructures containing incorporated metal oxides like Ag₂O show very promising new optical and photocatalytic properties. Unfortunately, there is still a very limited number of reports on the application of TiO₂/MxOy nanostructures. In the present work, we prepared TiO₂/Ag₂O nanotubes by anodization of Ti-Ag alloys containing 5, 10 and 15 wt.% Ag. The photocatalysts prepared in this way were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), luminescence spectroscopy and UV-Vis spectroscopy.
The activities of the new TiO₂/Ag₂O nanotubes were examined by photocatalytic degradation of toluene in the gas phase and of phenol in the aqueous phase, using a 1000 W xenon lamp (Oriel) and light-emitting diodes (LEDs) as irradiation sources. Additionally, the efficiency of bacteria (Pseudomonas aeruginosa) removal from the gas phase was estimated. The number of surviving bacteria was determined by the serial twofold dilution microtiter plate method in Tryptic Soy Broth medium (TSB, GibcoBRL).
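The band-gap limitation noted above maps directly onto the usable part of the spectrum via λ = hc/E (hc ≈ 1239.84 eV·nm); a quick conversion:

```python
def absorption_edge_nm(band_gap_ev):
    """Optical absorption edge (nm) corresponding to a band gap (eV):
    lambda = h*c / E, with h*c ~ 1239.84 eV*nm."""
    return 1239.84 / band_gap_ev

# A ~3.2 eV band gap means absorption only below ~387 nm (UV), which is
# why visible-light sensitization (e.g., Ag2O coupling) is attractive.
print(round(absorption_edge_nm(3.2)))  # 387
```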

Keywords: photocatalysis, antibacterial properties, titania nanotubes, new TiO2/MxOy nanostructures

Procedia PDF Downloads 289
239 Restructuration of the Concept of Empire in the Social Consciousness of Modern Americans

Authors: Maxim Kravchenko

Abstract:

The paper looks into the structure and contents of the concept of empire in the social consciousness of modern Americans. To construct a model of this socially and politically relevant concept, we conducted an experiment with respondents born and living in the USA. Empire is seen as a historical notion describing such entities as the British Empire, the Russian Empire, the Ottoman Empire and others. It seems that the democratic regime adopted by most countries worldwide is incompatible with imperial status. Yet there are countries which tend to dominate in the contemporary world, and though they are not routinely referred to as empires, in many respects they are reminiscent of historical empires. Thus, the central hypothesis of the study is that the concept of empire is cultivated in some states through the intermediary of the mass media, though it undergoes a certain transformation to meet the expectations of a democratic society. The transformation implies that certain components which were historically embedded in its structure are drawn to the margins of the hierarchical structure of the concept, whereas other components tend to become central to it. This process can be referred to as restructuration of the concept of empire. To verify this hypothesis, we conducted a study which falls into two stages. First, we looked into the definitions of empire featured in dictionaries; the dominant conceptual components of empire are: importance, territory/lands, recognition, independence, authority/power, supreme/absolute. However, the analysis of 100 articles from American newspapers chosen at random revealed that authors rarely use the word 'empire' in its basic meaning (7%). More often, 'empire' is used when speaking about countries which no longer exist or about corporations (like Apple or Google). At the second stage of the study, we conducted an associative experiment with citizens of the USA aged 19 to 45.
The purpose of the experiment was to find out the dominant components of the concept of empire and to construct a model of the transformed concept. The experiment stipulated that respondents should give the first association that crossed their mind on reading such stimulus phrases as 'strong military', 'strong economy' and others. The list of stimuli features various words and phrases associated with empire, including the words representing the dominant components of the concept. The associations provided by the respondents were then classified into thematic clusters. For instance, the associations to the stimulus 'strong military' were compartmentalized into three groups: 1) a country with strong military forces (North Korea, the USA, Russia, China); 2) negative impressions of a strong military (war, anarchy, conflict); 3) positive impressions of a strong military (peace, safety, responsibility). The experiment findings suggest that the concept of empire is currently undergoing a transformation which brings about a number of changes: the predominance of positively assessed components of the concept; the emergence of two poles in the structure of the concept, that is, 'hero' vs. 'enemy'; and the marginalization of negatively assessed components.

Keywords: associative experiment, conceptual components, empire, restructuration of the concept

Procedia PDF Downloads 310
238 Migration, Labour Market, Capital Formation, and Social Security: A Study of Livelihoods of the Urban Poor in Two Different Cities of West Bengal in India

Authors: Arup Pramanik

Abstract:

Most cities in developing countries, like the Siliguri Municipal Corporation Area (SMCA) and Raiganj Municipality (RM) in West Bengal, India, are changing rapidly in terms of demographic, economic and social relationships due to the fast pace of urbanization. The mushrooming growth of slums in SMCA and RM is a direct consequence of urbanization and of migration driven by regional imbalance and an unbalanced growth process, which poses a serious threat to the sustainable development of the country. Almost all slums are breeding grounds for poverty, neglect, and disease. The unpredictable growth of slums and the task of poverty alleviation have become serious challenges for global and national policy makers concerned with the development of slum dwellers. The ethical dimension of urban poverty in cities like SMCA and RM rests on equal opportunity and inclusive, harmonious living without discrimination of any kind. However, the migrant slum dwellers in SMCA and RM do not possess the skills or education that would enable them to find well-paid employment in the formal sector, and the surplus urban labour force is compelled to generate its own means of employment and survival in the informal sector. The household survey data have been analysed in terms of percentages and descriptive statistics, including means, standard deviations (SD), ANOVA (mean differences), etc., covering the socio-economic variables of the households. The study shows that the migrant labour force living in the slums is deprived of social security measures in both municipal areas. The urban poor in SMCA and RM rely heavily on social capital, among all the capital assets, to help them 'get by' and 'get ahead'. Nevertheless, the slum dwellers in the study areas remain vulnerable with respect to the other determinants of capital assets.
It is noteworthy that India's anti-poverty programmes remained in place even after the neo-liberal turn, the basic idea being a massive shift from various welfare- and service-oriented strategies to poverty reduction strategies intended to benefit the urban poor through trickle-down effects. However, the overall impact of the trickle-down effect has been unsatisfactory. The objective of the paper is to assess the magnitude of migration and absorption into the urban labour market, along with issues relating to capital formation, social security measures and the support of the welfare state in meeting the Sustainable Development Goals. The study also highlights the quality of life of poor urban migrants in terms of capital formation and livelihoods.

Keywords: migration, slums, labour market, capital formation, social security

Procedia PDF Downloads 114
237 Study of the Combinatorial Impact of Substrate Properties on Mesenchymal Stem Cell Migration Using Microfluidics

Authors: Nishanth Venugopal Menon, Chuah Yon Jin, Samantha Phey, Wu Yingnan, Zhang Ying, Vincent Chan, Kang Yuejun

Abstract:

Cell migration is a vital phenomenon that cells undergo in various physiological processes like wound healing, disease progression, embryogenesis, etc. Cell migration depends primarily on the chemical and physical cues available in the cellular environment. Chemical cues involve the chemokines secreted and the gradients generated in the environment, while physical cues reflect the impact of matrix properties like nanotopography and stiffness on the cells. Mesenchymal stem cells (MSCs) have been shown to play a role in wound healing in vivo, and their migration to the site of a wound has been shown to have a therapeutic effect. In the field of stem cell based tissue regeneration of bone and cartilage, one approach has been to introduce scaffolds laden with MSCs into the site of injury to enable tissue regeneration. In this work, we have studied the combinatorial impact of substrate physical properties on MSC migration. A microfluidic in vitro model was created to perform the migration studies: a three-compartment device consisting of two cell seeding compartments and one migration compartment. Four different PDMS substrates with varying roughness, stiffness and hydrophobicity were created. Surface roughness and stiffness were measured using atomic force microscopy (AFM), while hydrophobicity was determined from the water contact angle using an optical tensiometer. These PDMS substrates were sealed to the microfluidic chip, following which the MSCs were seeded and cell migration was studied over a period of one week. Cell migration was quantified using fluorescence imaging of the cytoskeleton (F-actin) to find the area covered by cells inside the migration compartment. The impact of adhesion proteins on cell migration was also quantified using real-time polymerase chain reaction (qRT-PCR).
These results suggested that the optimal substrate for cell migration is one with intermediate levels of roughness, stiffness and hydrophobicity; higher or lower values of these properties affected cell migration negatively. These observations have helped us understand that different substrate properties need to be considered in tandem, especially when designing scaffolds for tissue regeneration, as cell migration is normally governed by the combined effect of the matrix properties. These observations may lead to scaffold optimization in future tissue regeneration applications.
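The abstract does not state which quantification model was used for the qRT-PCR data; a common choice is the 2^(-ΔΔCt) method, sketched here with hypothetical Ct values (the gene names and numbers are illustrative only):

```python
def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_control, ct_ref_control):
    """Relative gene expression via the 2^-(ddCt) method often used to
    analyze qRT-PCR data (assumes ~100% primer efficiency)."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: an adhesion-protein gene vs. a housekeeping
# gene, on a test substrate vs. a control substrate.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0 (four-fold up-regulation)
```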

Keywords: cell migration, microfluidics, in vitro model, stem cell migration, scaffold, substrate properties

Procedia PDF Downloads 553
236 Acrylic Microspheres-Based Microbial Bio-Optode for Nitrite Ion Detection

Authors: Siti Nur Syazni Mohd Zuki, Tan Ling Ling, Nina Suhaity Azmi, Chong Kwok Feng, Lee Yook Heng

Abstract:

Nitrite (NO₂⁻) ion is used prevalently as a preservative in processed meat. Elevated levels of nitrite are also found in edible bird's nests (EBNs). Consumption of NO₂⁻ ion at levels above the health-based limit may cause cancer in humans. The spectrophotometric Griess test is the simplest established standard method for NO₂⁻ ion detection; however, it requires careful pH control at each reaction step and is susceptible to interference from strong oxidants and dyes. Other traditional methods rely on laboratory-scale instruments such as GC-MS, HPLC and ion chromatography, which cannot give a real-time response. There is therefore a significant need for devices capable of measuring nitrite concentration in situ and rapidly, without reagents, sample pretreatment or extraction steps. Herein, we constructed a microspheres-based microbial optode for visual quantitation of NO₂⁻ ion. Raoultella planticola, a bacterium expressing the NAD(P)H-dependent nitrite reductase (NiR) enzyme, was successfully isolated by microbial techniques from an EBN collected from a local birdhouse. The whole cells and the lipophilic Nile Blue chromoionophore were physically adsorbed on photocurable poly(n-butyl acrylate-N-acryloxysuccinimide) [poly(nBA-NAS)] microspheres, whilst the reduced coenzyme NAD(P)H was covalently immobilized on the succinimide-functionalized acrylic microspheres to produce a reagentless biosensing system. As the NiR enzyme catalyzes the oxidation of NAD(P)H to NAD(P)⁺, the NO₂⁻ ion is reduced to ammonium hydroxide, and a colour change of the immobilized Nile Blue chromoionophore from blue to pink is perceived as a result of the deprotonation reaction, which increases the local pH in the microsphere membrane. The microspheres-based optosensor was optimized with a reflectance spectrophotometer at 639 nm and pH 8. The resulting microbial bio-optode membrane could quantify NO₂⁻ ion at 0.1 ppm and had a linear response up to 400 ppm.
The large surface-area-to-mass ratio of the acrylic microspheres allows efficient solid-state diffusional mass transfer of the substrate to the bio-recognition phase, achieving a steady-state response in as little as 5 min. The proposed optical microbial biosensor requires no sample pre-treatment step and possesses high stability, as the whole-cell biocatalyst protects the enzymes from interfering substances; hence it is suitable for measurements in contaminated samples.
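Assuming a simple linear calibration over the reported 0.1-400 ppm linear range, a reflectance reading at 639 nm can be mapped back to a concentration estimate. The sketch below uses hypothetical reflectance values as placeholders for real optode readings; the fitted line and the sample reading are illustrative, not the authors' data.

```python
import numpy as np

# Hypothetical calibration standards (ppm) and relative reflectance
# at 639 nm -- placeholders for measurements from the optode itself.
conc = np.array([0.1, 1.0, 10.0, 50.0, 100.0, 200.0, 400.0])  # ppm
reflectance = np.array([0.92, 0.88, 0.79, 0.62, 0.48, 0.31, 0.10])

# Least-squares linear fit over the reported linear range
slope, intercept = np.polyfit(conc, reflectance, 1)

def nitrite_ppm(r):
    """Invert the calibration line to estimate NO2- concentration."""
    return (r - intercept) / slope

# A sample reading maps back to a concentration estimate in ppm
estimate = nitrite_ppm(0.55)
```

Reflectance decreases as the chromoionophore turns from blue to pink, so the fitted slope is negative and the inversion recovers a positive concentration within the calibrated range.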

Keywords: acrylic microspheres, microbial bio-optode, nitrite ion, reflectometric

Procedia PDF Downloads 441
235 Analysis and Modeling of Graphene-Based Percolative Strain Sensor

Authors: Heming Yao

Abstract:

Graphene-based percolative strain gauges could find applications in many places, such as touch panels, artificial skins or human motion detection, because of their advantages over conventional strain gauges, such as flexibility and transparency. These strain gauges rely on a novel sensing mechanism that depends on strain-induced morphology changes. When a compressive or tensile strain is applied to a graphene-based percolative strain gauge, the overlap area between neighboring flakes becomes smaller or larger, which is reflected in a considerable change of resistance. A tiny strain change thus acts as a lever that greatly increases the resistance of the sensor, equipping graphene-based percolative strain gauges with a higher gauge factor. Despite ongoing research into the underlying sensing mechanism and the limits of sensitivity, no suitable understanding has been obtained of which intrinsic factors play the key role in setting the gauge factor, nor of how the sensitivity can be enhanced; such an understanding would be considerably meaningful and would provide guidelines for designing novel, easily produced strain sensors with high gauge factors. We here simulated the straining process by modeling graphene flakes and their percolative networks. We constructed a 3D resistance network by simulating the overlapping process of graphene flakes and interconnecting the large number of resistance elements obtained by discretizing each flake. As strain increases, the overlapping flakes are displaced on the stretched simulated film, and a new resistance network forms with a smaller flake number density. By solving the resistance network, we obtain the resistance of the simulated film under different strains. 
Furthermore, by simulating the effect of the variable parameters, such as out-of-plane resistance, in-plane resistance and flake size, we obtained the trend of the gauge factor with each of them. Comparison with experimental data verified the feasibility of our model and analysis. Increasing the out-of-plane resistance of the graphene flakes and the initial resistance of the flake-network sensor both improved the gauge factor, while smaller graphene flakes gave a greater gauge factor. This work can not only serve as a guideline to improve the sensitivity and applicability of graphene-based strain sensors in the future, but also provides a method to find the limit of the gauge factor for strain sensors based on graphene flakes. Moreover, our method can be easily transferred to predict the gauge factor of strain sensors based on other nano-structured transparent conductors, such as nanowires and carbon nanotubes, or their hybrids with graphene flakes.
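The core mechanism — junction resistance growing as flake overlap shrinks under strain — can be sketched with a one-dimensional chain of series junctions. This is a toy stand-in for the authors' 3D network model; all parameter values and the assumed overlap-strain relation are illustrative, not fitted to any experiment.

```python
import numpy as np

# Toy 1-D percolative strip: flakes of length L overlap by d; the
# out-of-plane junction resistance scales inversely with overlap, and
# tensile strain reduces the overlap. All numbers are illustrative.
L_flake = 1.0        # flake length (arbitrary units)
overlap0 = 0.2       # initial overlap between neighbouring flakes
n_junctions = 100    # junctions in series along the strip
r_sheet = 1.0        # in-plane resistance per flake (held constant)

def strip_resistance(strain):
    # Assumption: the stretch of each flake period is absorbed
    # entirely by the overlap, which therefore shrinks with strain.
    overlap = overlap0 - strain * (L_flake - overlap0)
    r_junction = 1.0 / overlap          # junction R ~ 1/overlap
    return n_junctions * (r_sheet + r_junction)

eps = 0.01                               # 1% applied tensile strain
r0, r1 = strip_resistance(0.0), strip_resistance(eps)
gauge_factor = (r1 - r0) / r0 / eps      # GF = (dR/R0)/strain
```

Even this crude series model reproduces the qualitative finding above: a larger junction (out-of-plane) resistance relative to the in-plane term raises the gauge factor above that of a plain resistive strip (GF near 1-2).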

Keywords: graphene, gauge factor, percolative transport, strain sensor

Procedia PDF Downloads 415
234 Tuning the Emission Colour of Phenothiazine by Introduction of Withdrawing Electron Groups

Authors: Andrei Bejan, Luminita Marin, Dalila Belei

Abstract:

Phenothiazine, with electron-rich nitrogen and sulfur heteroatoms, has a high electron-donating ability which promotes good conjugation and therefore a low band gap, improving charge-carrier mobility and shifting light emission into the visible domain. Moreover, its non-planar butterfly conformation inhibits molecular aggregation and thus preserves the fluorescence quantum yield in the solid state quite well compared to solution. Phenothiazine and its derivatives are therefore promising hole-transport materials for use in organic electronic and optoelectronic devices such as light-emitting diodes, photovoltaic cells, integrated circuit sensors or driving circuits for large-area display devices. The objective of this paper was to obtain a series of new phenothiazine derivatives by introducing different electron-withdrawing substituents, such as formyl, carboxyl and cyanoacryl units, in order to create a push-pull system with the potential to improve the electronic and optical properties. A bromine atom was used as an electron-donor moiety to extend the existing conjugation further. The compounds under study were structurally characterized by FTIR and 1H-NMR spectroscopy and single-crystal X-ray diffraction; the latter also brought information regarding the supramolecular architecture of the compounds. Photophysical properties were monitored by UV-vis and photoluminescence spectroscopy, while the electrochemical behavior was established by cyclic voltammetry. The absorption maxima of the studied compounds vary over a large range (322-455 nm), reflecting different degrees of electronic delocalization depending on the nature of the substituent. In a similar manner, the emission spectra reveal different colors of emitted light, a red shift being evident for the groups with higher electron-withdrawing ability. 
The emitted light is pure and saturated for the compounds containing the strongly withdrawing formyl or cyanoacryl units and reaches the highest quantum yield of 71% for the compound containing bromine and cyanoacrylic units. Electrochemical studies show reversible oxidation and reduction processes for all the compounds and a close correlation of the HOMO-LUMO band gap with the nature of the substituent. All these findings suggest that the obtained compounds are promising materials for optoelectronic devices.
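As a rough cross-check on such band-gap trends, the photon energy corresponding to an absorption wavelength follows from E = hc/λ, i.e. E(eV) ≈ 1240/λ(nm). Applied to the extremes of the reported 322-455 nm range, this gives only a crude estimate, since an optical gap is properly taken from the absorption onset rather than the absorption maximum:

```python
# Photon energy from wavelength via E(eV) = hc/(e * lambda),
# where hc/e ~ 1239.84 eV*nm. Applied to the extremes of the
# reported absorption range (322-455 nm) as a crude estimate.
def optical_gap_ev(wavelength_nm):
    return 1239.84 / wavelength_nm

gap_blue = optical_gap_ev(322.0)   # higher-energy end, ~3.85 eV
gap_red = optical_gap_ev(455.0)    # lower-energy end,  ~2.72 eV
```

The roughly 1.1 eV spread between the two extremes is consistent with the strong substituent dependence of the HOMO-LUMO gap seen in the cyclic voltammetry.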

Keywords: electrochemical properties, phenothiazine derivatives, photoluminescence, quantum yield

Procedia PDF Downloads 328
233 Tuning the Surface Roughness of Patterned Nanocellulose Films: An Alternative to Plastic Based Substrates for Circuit Printing in High-Performance Electronics

Authors: Kunal Bhardwaj, Christine Browne

Abstract:

With the increase in global awareness of the environmental impacts of plastic-based products, there has been a massive drive to reduce our use of these products. The use of plastic-based substrates in electronic circuits has recently become a matter of concern. Plastics provide a very smooth and cheap surface for printing high-performance electronics due to their non-permeability to ink and easy mouldability. In this research, we explore the use of nanocellulose (NC) films in electronics, as they offer the advantage of being 100% recyclable and eco-friendly. The main hindrance to the mass adoption of NC film as a substitute for plastic is its higher surface roughness, which leads to ink penetration and dispersion in the channels on the film. This research was conducted to tune the RMS roughness of NC films to the range where they can replace plastics in electronics (310-470 nm). We studied the dependence of the surface roughness of the NC film on the following tunable aspects: 1) the composition by weight of the NC suspension sprayed on a silicon wafer, and 2) the width and depth of the channels on the silicon wafer used as a base. Silicon wafers with channel depths ranging from 6 to 18 µm and channel widths ranging from 5 to 500 µm were used as bases. The spray-coating method was used for NC film production, and two suspensions, 1.5 wt% NC and a 50-50 NC-CNC (cellulose nanocrystal) mixture in distilled water, were sprayed through a Wagner sprayer system model 117 at an angle of 90 degrees. The silicon wafer was kept on a conveyor moving at a velocity of 1.3 ± 0.1 cm/s. Once the suspension was uniformly sprayed, the mould was left to dry in an oven at 50°C overnight. Images of the films were taken with an optical profilometer, Olympus OLS 5000, converted into the ‘.lext’ format, and analyzed using Gwyddion, a data and image analysis software. 
The lowest measured RMS roughness, 291 nm, was obtained with the 50-50 CNC-NC mixture sprayed on a silicon wafer with a channel width of 5 µm and a channel depth of 12 µm. Surface roughness values of 320 ± 17 nm were achieved at the lower (5 to 10 µm) channel widths. This research opens the possibility of using 100% recyclable NC films with an additive (50% CNC) in high-performance electronics. The possibility of using additives like carboxymethyl cellulose (CMC) is also being explored, on the hypothesis that CMC would reduce friction amongst fibers, which in turn would lead to better conformations amongst the NC fibers. CMC addition could thus help tune the surface roughness of the NC film to an even greater extent in the future.
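The RMS roughness reported above is the root-mean-square deviation of the surface height about its mean plane (Sq in areal-roughness terminology, as computed by Gwyddion). A minimal sketch, using a synthetic height map in place of the exported profilometer scans:

```python
import numpy as np

# Synthetic height map standing in for a profilometer scan exported
# from Gwyddion; heights are drawn with ~320 nm spread to mimic the
# roughness scale reported for the NC films.
rng = np.random.default_rng(0)
height_nm = rng.normal(loc=0.0, scale=320.0, size=(256, 256))

def rms_roughness(z):
    """Sq: root-mean-square height deviation about the mean plane."""
    z = z - z.mean()              # simple levelling (remove mean)
    return np.sqrt(np.mean(z**2))

sq = rms_roughness(height_nm)     # close to 320 nm for this surface
```

Real scans would additionally need plane or polynomial levelling to remove sample tilt before computing Sq; the flat synthetic surface here skips that step.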

Keywords: nanocellulose films, electronic circuits, nanocrystals, surface roughness

Procedia PDF Downloads 121
232 Poly(ε-caprolactone)/Halloysite Nanotube Nanocomposites Scaffolds for Tissue Engineering

Authors: Z. Terzopoulou, I. Koliakou, D. Bikiaris

Abstract:

Tissue engineering offers a new approach to regenerating diseased or damaged tissues such as bone. Great effort is devoted to eliminating the need to remove non-degradable implants at the end of their life span, with biodegradable polymers playing a major part. Poly(ε-caprolactone) (PCL) is one of the best candidates for this purpose due to its high permeability, good biodegradability and exceptional biocompatibility, which has stimulated extensive research into its potential application in the biomedical field. However, PCL degrades much more slowly than other known biodegradable polymers, with a total degradation time of 2-4 years depending on the initial molecular weight of the device, owing to its relatively hydrophobic character and high crystallinity. Consequently, much attention has been given to tuning the degradation of PCL to meet the diverse requirements of biomedicine. Moreover, PCL lacks bioactivity, so when it is used in bone tissue engineering, new bone tissue cannot bond tightly to the polymeric surface. It is therefore important to incorporate reinforcing fillers into the PCL matrix in order to achieve a promising combination of bioactivity, biodegradability, and strength. Natural clay halloysite nanotubes (HNTs) were incorporated into the PCL matrix via in situ ring-opening polymerization of ε-caprolactone, at concentrations of 0.5, 1 and 2.5 wt%. Both unmodified HNTs and HNTs modified with aminopropyltrimethoxysilane (APTES) were used in this study. The effects of nanofiller concentration and of functionalization with end-amino groups on the physicochemical properties of the prepared nanocomposites were studied. Mechanical properties were found to be enhanced after the incorporation of the nanofillers, while the modification further increased the tensile and impact strength. 
The thermal stability of PCL was not affected by the presence of the nanofillers, while the crystallization rate, studied by Differential Scanning Calorimetry (DSC) and Polarized Light Optical Microscopy (POM), increased. All materials were subjected to enzymatic hydrolysis in phosphate buffer in the presence of lipases. Due to the hydrophilic nature of HNTs, the biodegradation rate of the nanocomposites was higher than that of neat PCL. To confirm the effect of hydrophilicity, contact angle measurements were also performed. An in vitro biomineralization test confirmed that all samples were bioactive, as mineral deposits were detected by X-ray diffractometry after incubation in simulated body fluid (SBF). All scaffolds were tested in relevant cell culture using osteoblast-like cells (MG-63) to demonstrate their biocompatibility.

Keywords: biomaterials, nanocomposites, scaffolds, tissue engineering

Procedia PDF Downloads 309
231 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5%-damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that these algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the others; the conventional method, however, remains the better tool when only limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) 
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
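The ground-motion modeling step can be sketched as a regression from source and site features to an intensity measure. The example below trains a random forest on synthetic records generated from an illustrative attenuation relation; the functional form, coefficients, and feature ranges are assumptions for demonstration, not the study's data or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic catalogue: magnitude, source-to-site distance (km) and
# site stiffness Vs30 (m/s). The attenuation relation generating
# ln(PGA) below is illustrative only, with aleatory scatter added.
rng = np.random.default_rng(1)
n = 2000
mag = rng.uniform(4.0, 7.5, n)
dist = rng.uniform(5.0, 200.0, n)
vs30 = rng.uniform(180.0, 760.0, n)
ln_pga = (1.1 * mag - 1.5 * np.log(dist) - 0.002 * vs30
          + rng.normal(0.0, 0.3, n))

X = np.column_stack([mag, dist, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

# Nonparametric ground-motion model: no pre-defined equation form
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
score = model.score(X_te, y_te)   # R^2 on held-out records
```

Because the forest learns magnitude scaling and distance attenuation directly from the records, it captures the nonlinearity without a prescribed equation, which is the advantage the study reports when data are plentiful.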

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 102
230 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to shrink in size and grow in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspection to automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and must therefore be precisely identified. The narrative then charts the technological evolution in defect detection, from rudimentary methods like optical microscopy and basic electrical tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advance towards more adaptive, accurate, and rapid defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the high costs associated with advanced imaging technologies, and the demand for processing speeds that align with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocity. 
Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 42
229 Study of Open Spaces in Urban Residential Clusters in India

Authors: Renuka G. Oka

Abstract:

From chowks to streets to verandahs to courtyards, residential open spaces occupy a very significant place in traditional urban neighborhoods of India. At various levels of intersection, these open spaces, with attributes such as juxtaposition with the built fabric, scale, climate sensitivity and response, and multi-functionality, reflect and respond to patterns of human interaction, and they tend to be quite well utilized. On the other hand, it is common to see imbalanced utilization of open spaces in recently planned residential clusters. This may be due to a lack of activity generators around them, wrong locations, excess provision, or improper incorporation of the aforementioned design attributes. These casual observations suggest the necessity for a systematic study of current residential open spaces. This exploratory study thus attempts to draw lessons through a structured inspection of residential open spaces, to understand the effective environment as revealed through their use patterns. Here, residential open spaces are considered in a wide sense, incorporating all the un-built fabric around: both use spaces and access spaces. For the study, open spaces in ten exemplary housing clusters/societies built during the last ten years across India are examined. A threefold inquiry is attempted. The first relates to identifying and determining the effects of various physical factors, such as spatial organization, size, hierarchy, and thermal and optical comfort, on the performance of residential open spaces. The second sets out to understand socio-cultural variations in values, lifestyles, and beliefs which determine the activity choices and behavioral preferences of users of these spaces. The third applies the research findings to the design process to derive meaningful, qualitative design advice. 
The study also emphasizes developing a suitable framework of analysis and carving out appropriate methods and approaches to probe these aspects of the inquiry. Given this emphasis, a considerable portion of the research details the conceptual framework for the study, supported by an in-depth search of the available literature. The findings are worked into design solutions that integrate open space systems with the overall design process for residential clusters. Open spaces in residential areas present great complexities both in their use patterns and in the determinants of their functional responses. The broad aim of the study is, therefore, to arrive at a reconsideration of the standards and qualitative parameters used by designers, on the basis of a more substantial inquiry into the use patterns of open spaces in residential areas.

Keywords: open spaces, physical and social determinants, residential clusters, use patterns

Procedia PDF Downloads 147
228 Enhancement of Fracture Toughness for Low-Temperature Applications in Mild Steel Weldments

Authors: Manjinder Singh, Jasvinder Singh

Abstract:

Failure analyses of the Titanic and the Liberty ships, the Sydney bridge accident, and practical experience generated an interest in developing weldments that retain high toughness under sub-zero temperature conditions. The purpose was to protect the joint from undergoing the ductile-to-brittle transition (DBT) when the ambient temperature reaches sub-zero levels. Metallurgical improvements such as lowering the carbon content or adding deoxidizing elements like Mn and Si have been effective in preventing fracture (cracking) in weldments at low temperature. In the present research, an attempt has been made to investigate the reason behind the ductile-to-brittle transition of mild steel weldments subjected to sub-zero temperatures and a method for its mitigation. Nickel was added to the weldments using manual metal arc welding (MMAW) to prevent the DBT and counter the progressive reduction in Charpy impact values as the temperature is lowered. The variation in toughness with respect to the nickel content added to the weld pool was analyzed quantitatively to evaluate the rise in toughness with increasing nickel amount. The impact performance of the welded specimens was evaluated by Charpy V-notch impact tests at various temperatures (20 °C, 0 °C, -20 °C, -40 °C, -60 °C). A notch was made in the weldments, as notch-sensitive failure is particularly likely to occur at zones of high stress concentration caused by a notch. The effect of nickel on the weldments at various temperatures was then studied by mechanical and metallurgical tests. It was noted that a large gain in impact toughness could be achieved by adding nickel. The highest yield strength (462 J) in combination with good impact toughness (over 220 J at -60 °C) was achieved with an alloying content of 16 wt.% nickel. Based on the metallurgical behavior, it was concluded that the weld metals solidify as austenite with increasing nickel. The microstructure was characterized using optical microscopy and high-resolution SEM (scanning electron microscopy). 
At the inter-dendritic regions, mainly martensite was found. In the dendrite core regions of the low-carbon weld metals, a mixture of upper bainite, lower bainite and a novel constituent, coalesced bainite, formed. Coalesced bainite was characterized by large bainitic ferrite grains with cementite precipitates and is believed to form when the bainite and martensite start temperatures are close to each other. The mechanical properties could be rationalized in terms of the microstructural constituents as a function of nickel content.
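The Charpy energy-versus-temperature behavior underlying the DBT is commonly described by an empirical sigmoid (tanh) curve between a lower (brittle) and an upper (ductile) shelf, with the transition temperature at its midpoint. The sketch below evaluates such a curve at the test temperatures used above; the shelf energies, transition temperature, and width are illustrative assumptions, not fitted to these weldments.

```python
import numpy as np

# Empirical DBT curve: Charpy impact energy as a tanh of temperature.
# lower/upper are the brittle/ductile shelf energies (J), T_mid the
# transition temperature (deg C), width the transition sharpness.
# All parameter values are illustrative, not fitted data.
def charpy_energy(T, lower=20.0, upper=220.0, T_mid=-40.0, width=25.0):
    return lower + (upper - lower) * 0.5 * (1 + np.tanh((T - T_mid) / width))

temps = np.array([20.0, 0.0, -20.0, -40.0, -60.0])  # test temps, deg C
energies = charpy_energy(temps)
# At T_mid the energy is exactly midway between the two shelves
```

In practice the four parameters would be fitted to the measured Charpy data for each nickel content; nickel additions shift T_mid downward, which is what keeps the weldment on the ductile shelf at sub-zero service temperatures.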

Keywords: MMAW, toughness, DBT, notch, SEM, coalesced bainite

Procedia PDF Downloads 523