Bio-Hub Ecosystems: Expansion of Traditional Life Cycle Analysis Metrics to Include Zero-Waste Circularity Measures
Authors: Kimberly Samaha
Abstract:
In order to attract new types of investors into the emerging bio-economy, a new set of metrics and a measurement system are needed to better quantify the environmental, social and economic impacts of circular zero-waste design. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. The lack of an economically viable business model for bioenergy facilities has resulted in a growing stock of idled and decommissioned plants, in particular forestry-based plants, which have been an invaluable outlet for woody biomass surplus and have supported forest health improvement, timber production enhancement, and especially reduction of wildfire risk. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. The Bio-Hub model first targets a 'whole-tree' approach and then examines the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. It proposes not only models for the integration of forestry, aquaculture, and agriculture in cradle-to-cradle linkages of what have typically been linear systems, but also the early measurement of circularity, resource-use impact, and investment risk mitigation for these systems. Typically, life cycle analyses measure the environmental impacts of different industrial production stages and are not integrated with indicators of material-use circularity. This concept paper proposes the further development of a new set of metrics that would illustrate not only the typical life-cycle analysis (LCA), which shows the reduction in greenhouse gas (GHG) emissions, but also zero-waste circularity measures based on the mass balance of the full value chain of the raw material and its energy content/caloric value. These new measures quantify key impacts in making hyper-efficient use of natural resources and eliminating waste to landfills. The project utilized traditional LCA using the GREET model, where the standalone biomass energy plant case was contrasted with the integration of a jet-fuel biorefinery. The methodology was then expanded to include combinations of co-hosts that optimize the life cycle of woody biomass from tree to energy, CO₂, heat and wood ash, both in terms of energy/caloric value and of mass balance, to include the reuse of waste streams which are typically landfilled. The major findings of the formal LCA study informed the masterplan for the first Bio-Hub, to be built in West Enfield, Maine. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. If proven as a model, the expedited roll-out of these innovative scenarios can set a new standard for circular zero-waste projects that advance the critical transition from the current 'take-make-dispose' paradigm inherent in the energy, forestry and food industries to a more sustainable bio-economy paradigm where waste streams become valuable inputs, supporting local and rural communities in simple, sustainable ways.
Keywords: bio-economy, biomass energy, financing, metrics
Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics
Authors: Varun Kumar, Chandra Shakher
Abstract:
Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), beam homogenization for high-power lasers, critical components in Shack-Hartmann sensors, and fiber-optic coupling and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and reduction of alignment and packaging costs are necessary. Compliance with high-quality standards in the manufacturing of micro-optical components is a precondition for competitiveness on worldwide markets; therefore, high demands are put on quality assurance. For quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. It should provide non-contact, non-invasive and full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of micro-lenses; however, these techniques need more experimental effort and are also time-consuming. Digital holography (DH) overcomes the above-described problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. One can also reconstruct the complex object wavefront at different depths thanks to numerical reconstruction. Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a digital hologram is first recorded in the absence of the sample (microlens array) as a reference hologram. A second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed by using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported.
The measured sag of the microlenses agrees, within experimental limits, with the specification provided by the manufacturer.
Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy
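As an illustration of the phase-to-profile step described above, the short sketch below converts a reconstructed phase-difference map into a surface height (sag) profile using the standard relation h = Δφ·λ / (2π·(n_lens − n_air)). The wavelength, refractive index and the synthetic phase map are assumptions for demonstration only, not values from the experiment.

```python
import numpy as np

# Hypothetical parameters (not from the paper): HeNe wavelength and a typical
# fused-silica refractive index; the phase map is synthetic.
wavelength = 632.8e-9          # m
n_lens, n_air = 1.457, 1.000

def height_from_phase(delta_phi: np.ndarray) -> np.ndarray:
    """Surface height profile from the unwrapped phase difference (radians)."""
    return delta_phi * wavelength / (2.0 * np.pi * (n_lens - n_air))

# Synthetic unwrapped phase map of a single spherical micro-lenslet
y, x = np.mgrid[-64:64, -64:64]
delta_phi = 40.0 * np.exp(-(x**2 + y**2) / (2 * 30.0**2))   # radians

profile = height_from_phase(delta_phi)
sag = profile.max() - profile.min()
print(f"estimated sag: {sag * 1e6:.2f} µm")
```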
Exploring the Neural Mechanisms of Communication and Cooperation in Children and Adults
Authors: Sara Mosteller, Larissa K. Samuelson, Sobanawartiny Wijeakumar, John P. Spencer
Abstract:
This study was designed to examine how humans are able to teach and learn semantic information as well as cooperate in order to jointly achieve sophisticated goals. Specifically, we are measuring individual differences in how these abilities develop from foundational building blocks in early childhood. The current study adapts a paradigm for novel noun learning developed by Samuelson, Smith, Perry, and Spencer (2011) to a hyperscanning paradigm [Cui, Bryant and Reiss, 2012]. This project measures coordinated brain activity between a parent and child using simultaneous functional near-infrared spectroscopy (fNIRS) in pairs of 2.5-, 3.5- and 4.5-year-old children and their parents. We are also separately testing pairs of adult friends. Children and parents, or adult friends, are seated across from one another at a table. The parent (in the developmental study) then teaches their child the names of novel toys. An experimenter then tests the child by presenting the objects in pairs and asking the child to retrieve one object by name. Children are asked to choose from both pairs of familiar objects and pairs of novel objects. In order to explore individual differences in cooperation with the same participants, each dyad plays a cooperative game of Jenga, in which their joint score is based on how many blocks they can remove from the tower as a team. A preliminary analysis of the noun-learning task showed that, when presented with 6 word-object mappings, children learned an average of 3 new words (50%) and that the number of objects learned by each child ranged from 2 to 4. Adults initially learned all of the new words but were variable in their later retention of the mappings, which ranged from 50-100%. We are currently examining differences in cooperative behavior during the Jenga game, including time spent discussing each move before it is made. Ongoing analyses are examining the social dynamics that might underlie the differences between words that were successfully learned and unlearned words for each dyad, as well as the developmental differences observed in the study. Additionally, the Jenga game is being used to better understand individual and developmental differences in social coordination during a cooperative task. At a behavioral level, the analysis maps periods of joint visual attention between participants during the word learning and the Jenga game, using head-mounted eye trackers to assess each participant's first-person viewpoint during the session. We are also analyzing the coherence in brain activity between participants during novel word-learning and Jenga playing. The first hypothesis is that visual joint attention during the session will be positively correlated with both the number of words learned and the number of blocks moved during Jenga before the tower falls. The next hypothesis is that successful communication of new words and success in the game will each be positively correlated with synchronized brain activity between the parent and child/the adult friends in cortical regions underlying social cognition, semantic processing, and visual processing. This study probes both the neural and behavioral mechanisms of learning and cooperation in a naturalistic, interactive and developmental context.
Keywords: communication, cooperation, development, interaction, neuroscience
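The inter-brain synchrony analysis mentioned above is often operationalized as a windowed correlation between the two participants' fNIRS signals. The sketch below is a minimal, hypothetical illustration of that idea (synthetic signals, an assumed 10 Hz sampling rate and arbitrary window length); it is not the analysis pipeline used in the study.

```python
import numpy as np

# Synthetic oxy-hemoglobin time series for one channel of each participant
rng = np.random.default_rng(0)
fs, n = 10, 3000
shared = np.sin(2 * np.pi * 0.05 * np.arange(n) / fs)        # a common slow component
parent = shared + 0.5 * rng.standard_normal(n)
child = shared + 0.5 * rng.standard_normal(n)

def windowed_correlation(a, b, win=200, step=50):
    """Pearson correlation of a and b in sliding windows."""
    return np.array([np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
                     for i in range(0, len(a) - win + 1, step)])

sync = windowed_correlation(parent, child)
print(f"mean windowed correlation: {sync.mean():.2f}")
```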
Ragging and Sludging Measurement in Membrane Bioreactors
Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd
Abstract:
Membrane bioreactor (MBR) technology is challenged by the tendency for the membrane permeability to decrease due to 'clogging'. Clogging includes 'sludging', the filling of the membrane channels with sludge solids, and 'ragging', the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors impact costs more significantly than membrane surface fouling which, unlike clogging, is largely mitigated by the chemical clean. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and clogging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can be formed within 24-36 hours from dispersed < 5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred for both a cotton wool standard and samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin (lint from laundering operations formed zero rags) and on the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat-sheet MBR. Sludge samples were provided from two local MBRs, one treating municipal and the other industrial effluent. Bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ). The fouling and sludging propensity of the sludge was determined using the test cell, 'fouling' being quantified as the pressure incline rate against flux via the flux step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of the clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but being less shear-thinning than the municipal. Fouling, as manifested by the pressure increase Δp/Δt as a function of flux from classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples of both sludge origins, the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contribution of fouling and clogging was appraised by adjusting the clogging propensity via increasing the MLSS both with and without a commensurate increase in the COD. Results indicated that whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in the sludging propensity (or cake formation); the clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging did not relate to fouling.
Keywords: clogging, membrane bioreactors, ragging, sludge
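The sludging metric described above, the ratio of clogged to unclogged channel area obtained from a photograph of the test cell, can be illustrated with a short image-processing sketch. The thresholding approach, kernel size and file name below are assumptions for demonstration, not the authors' procedure.

```python
import cv2
import numpy as np

def clogged_area_ratio(image_path: str) -> float:
    """Fraction of the channel area covered by sludge in a grayscale photo.
    Assumes darker pixels correspond to deposited solids (illustrative only)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                 # suppress sensor noise
    # Otsu's method picks a global threshold separating sludge from clean channel
    _, sludge_mask = cv2.threshold(gray, 0, 255,
                                   cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return float(np.count_nonzero(sludge_mask)) / sludge_mask.size

# Example usage with a hypothetical channel photograph:
# print(clogged_area_ratio("channel_after_flux_step.png"))
```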
Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface
Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves
Abstract:
In this work, the wideband performance under mesh discretization impoverishment of the Conformal Finite-Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey and Wenhua Yu for the Finite-Difference Time-Domain (FDTD) method is analyzed. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces require dense meshes to reduce the error introduced by surface staircasing. Referred to in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement gained from their application has not been broadly quantified over wide frequency bands and poorly discretized meshes. Both approaches improve the accuracy of the simulation without requiring dense meshes, also making it possible to exploit poorly discretized meshes, which reduces simulation time and computational expense while retaining a desired accuracy. However, their application presents limitations regarding the degree of mesh impoverishment and the frequency range desired. Therefore, the goal of this work is to explore both the wideband and the mesh impoverishment performance of the approaches, to bring wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach consists in modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the mesh cell edges. By taking the intersections into account, D-FDTD-Diel provides an accuracy improvement at the cost of computational preprocessing, which is a fair trade-off since the update modification is quite simple. Likewise, the D-FDTD-PEC approach consists in modifying the magnetic field update, taking into account the PEC curved surface intersections within the mesh cells and, considering a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. Like D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to PEC and dielectric spherical scattering surfaces with meshes presenting different levels of discretization, with polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing the approaches' wideband performance drop along with the mesh impoverishment. The benefits in computational efficiency, simulation time and accuracy are also shown and discussed according to the frequency range desired, showing that poorly discretized mesh FDTD simulations can be exploited more efficiently while retaining the desired accuracy. The results obtained provide a broader insight into the limitations of the C-FDTD approaches in poorly discretized and wide-frequency-band simulations for dielectric and PEC curved surfaces, which are not clearly defined or detailed in the literature and are, therefore, a novelty. These approaches are also expected to be applied to the modeling of curved RF components for wideband and high-speed communication devices in future works.
Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment
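To make the D-FDTD-Diel idea concrete, the 1-D sketch below weights the permittivity used in the electric field update of any cell cut by the dielectric boundary according to its fill fraction. Grid size, PTFE permittivity, slab position and source are assumptions chosen for illustration; the paper's implementation is three-dimensional and is not reproduced here.

```python
import numpy as np

c0, eps0, mu0 = 299792458.0, 8.854187817e-12, 4e-7 * np.pi
nz, dx = 400, 1e-3                 # 400 cells, 1 mm resolution (assumed)
dt = 0.99 * dx / c0                # Courant-stable time step in 1-D

eps_r = np.ones(nz)                # background: air/vacuum
eps_ptfe = 2.1                     # typical PTFE relative permittivity
slab_start, slab_end = 200.4, 300.0   # slab boundary falls inside cell 200 (cell units)
for k in range(nz):
    # fraction of the dual cell [k-0.5, k+0.5] around the Ez node filled with PTFE
    f = np.clip(min(slab_end, k + 0.5) - max(slab_start, k - 0.5), 0.0, 1.0)
    eps_r[k] = f * eps_ptfe + (1.0 - f) * 1.0   # conformal (weighted) permittivity

ez, hy = np.zeros(nz), np.zeros(nz)
for n in range(1000):
    hy[:-1] += dt / (mu0 * dx) * (ez[1:] - ez[:-1])             # H update
    ez[1:] += dt / (eps0 * eps_r[1:] * dx) * (hy[1:] - hy[:-1]) # E update uses eps_r
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)                   # soft Gaussian source
```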
Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection
Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten
Abstract:
Purpose: Applying solder paste to printed circuit boards (PCB) with stencils has been the method of choice over the past years. A newer method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge. This can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in, real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process through immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips are employed to capture images of the printed circuit board under four different illuminations (white, red, green and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher-quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds which divide the main components are obtained from the multimodal histogram using three probability density functions; determining their intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds from the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. The overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis.
Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection
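A compact sketch of the segmentation steps described in the Methods section is given below: background estimation by morphological opening, illumination correction by subtraction, sharpening, thresholding, small-area removal, and the Dice coefficient. Kernel sizes are assumed values, and Otsu's method stands in for the paper's histogram-intersection thresholds, so this is an illustrative simplification rather than the authors' implementation.

```python
import cv2
import numpy as np

def segment_solder(gray: np.ndarray) -> np.ndarray:
    """Binary mask of solder deposits in a grayscale PCB image (illustrative)."""
    bg_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))
    background = cv2.morphologyEx(gray, cv2.MORPH_OPEN, bg_kernel)   # estimate illumination
    corrected = cv2.subtract(gray, background)                       # flatten the background
    blur = cv2.GaussianBlur(corrected, (0, 0), 3)
    sharpened = cv2.addWeighted(corrected, 1.5, blur, -0.5, 0)       # simple unsharp mask
    # Otsu's threshold replaces the histogram-intersection thresholds of the paper
    _, mask = cv2.threshold(sharpened, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    clean_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, clean_kernel)      # remove small error areas

def dice(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient between a segmentation and a manual reference mask."""
    seg, ref = seg > 0, ref > 0
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
```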
Multi-Dimensional Experience of Processing Textual and Visual Information: Case Study of Allocations to Places in the Mind's Eye Based on Individual's Semantic Knowledge Base
Authors: Joanna Wielochowska, Aneta Wielochowska
Abstract:
Whilst the relationship between scientific areas such as cognitive psychology, neurobiology and philosophy of mind has been emphasized in recent decades of scientific research, concepts and discoveries made in these fields overlap and complement each other in their quest for answers to similar questions. The object of the following case study is to describe, analyze and illustrate the nature and characteristics of a certain cognitive experience which appears to display features of synaesthesia, or rather high-level synaesthesia (ideasthesia). The research has been conducted on the two authors, monozygotic twins (both polysynaesthetes) experiencing involuntary associations of identical nature. The authors attempted to identify which cognitive and conceptual dependencies may guide this experience. Operating on self-introduced nomenclature, the described phenomenon, multi-dimensional processing of textual and visual information, aims to define a relationship that involuntarily and immediately couples the content introduced by means of text or image with a sensation of appearing in a certain place in the mind's eye. More precisely: (I) defining a concept introduced by means of textual content during the activity of reading or writing, or (II) defining a concept introduced by means of visual content during the activity of looking at image(s), with the simultaneous sensation of being allocated to a given place in the mind's eye. A place can then be defined as a cognitive representation of a certain concept. During the activity of processing information, a person has an immediate and involuntary feeling of appearing in a certain place themselves, just like a character in a story, 'observing' a venue or scenery from one or more perspectives and angles. That forms a unique and unified experience, constituting a background mental landscape of the text or image being looked at. We came to the conclusion that semantic allocations to a given place can be divided and classified into categories and subcategories and are naturally linked with an individual's semantic knowledge base. A place can be defined as a representation of one's unique idea of a given concept that has been established in their semantic knowledge base. The multi-level structure of selectivity of places in the mind's eye, as a reaction to given information (one stimulus), draws comparisons to structures and patterns found in botany. Double-flowered varieties of flowers and the whorl arrangement characteristic of the components of some flower species were given as an illustrative example. A composition of petals that fan out from one single point and wrap around a stem inspired the idea that, just as in nature, in philosophy of mind there are patterns driven by the logic specific to a given phenomenon. The study intertwines terms perceived through the philosophical lens, such as the definition of meaning, the subjectivity of meaning, the mental atmosphere of places, and others. Analysis of this rare experience aims to contribute to the constantly developing theoretical framework of the philosophy of mind and to influence the way the human semantic knowledge base and the processing of given content (in terms of distinguishing between information and meaning) are researched.
Keywords: information and meaning, information processing, mental atmosphere of places, patterns in nature, philosophy of mind, selectivity, semantic knowledge base, senses, synaesthesia
Tip-Enhanced Raman Spectroscopy with Plasmonic Lens Focused Longitudinal Electric Field Excitation
Authors: Mingqian Zhang
Abstract:
Tip-enhanced Raman spectroscopy (TERS) is a scanning probe technique for the investigation of individual objects and structured surfaces that provides a wealth of enhanced spectral information with nanoscale spatial resolution and high detection sensitivity. It has become a powerful and promising method for detecting chemical and physical information at the nanometer scale. The TERS technique uses a sharp metallic tip regulated in the near field of a sample surface, which is illuminated with an incident beam meeting the wave-vector-matching excitation conditions. The local electric field, and consequently the Raman scattering from the sample in the vicinity of the tip apex, are both greatly tip-enhanced owing to the excitation of localized surface plasmons and the lightning-rod effect. Typically, a TERS setup is composed of a scanning probe microscope, excitation and collection optical configurations, and a Raman spectroscope. In the illumination configuration, an objective lens or a parabolic mirror is usually used as the most important component, in order to focus the incident beam on the tip apex for excitation. In this research, a novel TERS setup was built by introducing a plasmonic lens into the excitation optics as a focusing device. A plasmonic lens with symmetry-breaking semi-annular slits corrugated on a gold film was designed to generate concentrated sub-wavelength light spots with a strong longitudinal electric field. Compared to conventional far-field optical components, the designed plasmonic lens not only focuses an incident beam to a sub-wavelength light spot, but also realizes a strong z-component that dominates the electric field illumination, which is ideal for the excitation of tip enhancement. Therefore, using a plasmonic lens in the illumination configuration of TERS helps improve the detection sensitivity both by reducing the far-field background and by effectively exciting the localized electric field enhancement. The FDTD method was employed to investigate the optical near-field distribution resulting from the light-nanostructure interaction, and the optical field distribution was characterized using a scattering-type scanning near-field optical microscope to demonstrate the focusing performance of the lens. The experimental result agrees with the theoretically calculated one and verifies the focusing performance of the plasmonic lens. The optical field distribution shows a bright elliptic spot in the lens center and several arc-like side lobes on both sides. After the focusing performance was experimentally verified, the designed plasmonic lens was used as a focusing component in the excitation configuration of the TERS setup to concentrate the incident energy and generate a longitudinal optical field. A collimated, linearly polarized laser beam, with polarization along the x-axis, was incident from the bottom glass side onto the plasmonic lens. The incident light focused by the plasmonic lens interacted with the silver-coated tip apex and locally enhanced the Raman signal of the sample. The scattered Raman signal was gathered by a parabolic mirror and detected with the Raman spectroscope. The plasmonic-lens-based setup was then employed to investigate carbon nanotubes, and a TERS experiment was performed. Experimental results indicate that the Raman signal is considerably enhanced, which proves that the novel TERS configuration is feasible and promising.
Keywords: longitudinal electric field, plasmonics, raman spectroscopy, tip-enhancement
Modern Hybrid of Older Black Female Stereotypes in Hollywood Film
Authors: Frederick W. Gooding, Jr., Mark Beeman
Abstract:
Nearly a century ago, the groundbreaking 1915 film 'The Birth of a Nation' popularized the way Hollywood made movies with its avant-garde, feature-length style. The movie's subjugating and demeaning depictions of African American women (and men) reflected popular racist beliefs held during the time of slavery and the early Jim Crow era. Although much has changed in race relations over the past century, American sociologist Patricia Hill Collins theorizes that the disparaging images of African American women originating in the era of plantation slavery are adaptable and endure as controlling images today. In this context, a comparative analysis of the successful contemporary film 'Bringing Down the House', starring Queen Latifah, is relevant, as this 2004 film was designed purposely to defy and ridicule classic stereotypes of African American women. However, the film is still tied to the controlling images of the past, although in a modern hybrid form. Scholars of race and film have noted that the pervasive filmic imagery of the African American woman as the loyal mammy stereotype faded from the screen in the post-civil rights era in favor of more sexualized characters (i.e., the Jezebel trope). When scenes and dialogue are analyzed through the lens of sociological and critical race theory, the troubling persistence of African American controlling images stubbornly emerges in a movie like 'Bringing Down the House.' Thus, these controlling images, like racism itself, can adapt to new social and economic conditions. Although the classic controlling images appeared in the first feature-length film focusing on race relations a century ago, 'The Birth of a Nation,' this black-and-white rendition of the mammy figure was later updated in 1939 with the classic hit 'Gone with the Wind', in living color. These popular controlling images have loomed quite large in the minds of international audiences: 'Gone with the Wind' is still shown in American theaters today, and experts at the British Film Institute in 2004 rated 'Gone with the Wind' as the number one movie of all time in UK movie history based upon the total number of actual viewings. Critical analysis of character patterns demonstrates that images which appear superficially benign contribute to a broader and quite persistent pattern of marginalization in the aggregate. This approach allows experts and viewers alike to detect more subtle and sophisticated strands of racial discrimination that are 'hidden in plain sight' despite a Hollywood industry that appears more voluminous and diverse than it was three or four decades ago. In contrast to white characters, non-white or minority characters are likely to be subtly compromised or marginalized relative to white characters if and when seen within mainstream movies, rather than be subjected to obvious and offensive racist tropes. The hybrid form of both the older Jezebel and Mammy stereotypes exhibited by lead actress Queen Latifah in 'Bringing Down the House' represents a more suave and sophisticated merging of past imagery deemed problematic in the past as well as the present.
Keywords: African Americans, Hollywood film, hybrid, stereotypes
Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships
Authors: Vijaya Dixit, Aasheesh Dixit
Abstract:
The shipbuilding industry operates in an Engineer-Procure-Construct (EPC) context. The product mix of a shipyard comprises various types of ships, such as bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique, based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during the execution of a project comprising multiple sister ships, the repetition of similar tasks leads to learning at the activity level. This research aims to capture both learnings of a shipyard and incorporate the learning curve effect into project scheduling and materials procurement to improve project performance. Extant literature supports the existence of such learning in an organization. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity, but also a decrease in the uncertainty of activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On the one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships; on the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed. Thus, the material requirement schedule of each ship differs from that of its predecessor. As more and more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with project scheduling of long-duration projects for the manufacture of multiple sister ships. It incorporates the learning curve effect on progressively compressing material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs while satisfying the budget constraints of the various stages of the project. The activity durations and lead times of items are not crisp and are available in the form of probability distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm. Its output provides ordering dates and the degree of order batching for all types of items. A sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insight into when, and to what degree, it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when, and to what degree, to practice distinct procurement for individual ships.
Keywords: learning curve, materials management, shipbuilding, sister ships
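The activity-level learning described above is commonly modeled with Wright's learning curve, where the duration of the n-th repetition is T_n = T_1 · n^(log2(r)) for a learning rate r. The sketch below is a hypothetical illustration of how such compression shifts material requirement dates across sister ships; the first-ship duration and 90% learning rate are assumed values, not parameters from the study.

```python
import numpy as np

def activity_duration(first_ship_days: float, ship_index: int, learning_rate: float = 0.9) -> float:
    """Duration of the activity on the n-th sister ship (n = 1, 2, ...).
    A 90% learning rate means each doubling of repetitions cuts duration to 90%."""
    b = np.log(learning_rate) / np.log(2.0)
    return first_ship_days * ship_index ** b

durations = [activity_duration(100.0, n) for n in range(1, 6)]
# e.g. roughly [100.0, 90.0, 84.6, 81.0, 78.3] days: compressed schedules shift
# material requirement dates earlier for each subsequent ship, which widens or
# narrows the window for batching orders across ships.
```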
A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure
Authors: Sara Saboonian, Pierre Filion
Abstract:
The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies the incompatibilities between, and the possibility of amalgamating, the two alternatives in an attempt to develop a comprehensive alternative to the suburban model, one that advocates density, multi-functionality and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centres, where intensification is the current planning practice and awareness of green infrastructure benefits is on the rise. These three centres are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents' preferences for alternative models, and interviewing officials who deal with local planning for the centres. Moreover, the research draws on debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centres that accommodate green infrastructure while adhering to intensification principles. First, the dominant status of intensification and the obstacles confronting it have monopolized planners' concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted social-economic benefits of green infrastructure reduces residents' participation. The results from the research also provide insight into the predominant urbanization theories, New Urbanism and Landscape/Ecological Urbanism. In order to understand the political, planning, and ecological dynamics of such blending, dexterous, context-specific planning is required. Findings suggest the influence of the following factors on amalgamating intensification and green infrastructure. First, producing ecosystem-services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for implementation programs; this knowledge base should be translated effectively to engage different urban stakeholders. Moreover, given the limited greenfields in intensified areas, the spatial distribution and development of multi-level corridors, such as pedestrian-hospitable settings and transportation networks along green infrastructure measures, are required. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.
Keywords: ecosystem services, green infrastructure, intensification, planning
Nanocomposite Effect Based on Silver Nanoparticles and Anemopsis Californica Extract as Skin Restorer
Authors: Maria Zulema Morquecho Vega, Fabiola Carolina Miranda Castro, Rafael Verdugo Miranda, Ignacio Yocupicio Villegas, Ana Lidia Barron Raygoza, Martin Enrique Marquez Cordova, Jose Alberto Duarte Moller
Abstract:
Background: Anemopsis californica, also called 'tame grass', is a small, green plant belonging to the Saururaceae family. Its blade is long and wide, and it bears a white flower. The plant is found only in humid, swampy habitats; it grows where there is water, along the banks of streams and water holes, and dries up in the winter. The leaves, rhizomes, or roots of this plant have been used to treat a range of diseases. Its healing properties are used to treat wounds, cold and flu symptoms, spasmodic cough, infection, pain and inflammation, burns, swollen feet, lung ailments, asthma, circulatory problems (varicose veins), and rheumatoid arthritis; it purifies the blood, helps in urinary and digestive tract diseases, sores and healing, and is used for headache, sore throat, diarrhea, and kidney pain. The tea made from the leaves and roots is used to treat uterine and womb cancer, relieves menstrual pain, and stops excessive bleeding after childbirth. It is also used as a gynecological treatment for infections, hemorrhoids, candidiasis and vaginitis. Objective: To study the cytotoxicity of gels prepared with silver nanoparticles in A. californica (AC) extract combined with chitosan, collagen and hyaluronic acid as an alternative therapy for skin conditions. Methods: The Ag NPs were synthesized as follows. A 0.3 mg/mL solution is prepared in 10 mL of deionized water and adjusted to pH 12 with NaOH, under constant magnetic stirring at 80 °C. Subsequently, 100 µL of a 0.1 M AgNO₃ solution is added, and stirring is maintained for 15 min. Once the reaction is complete, measurements are performed by UV-Vis. A gel was prepared in a 5% acetic acid solution with the respective silver nanoparticles in AC extract. Chitosan is added until gelation begins. At that time, collagen is added (3 to 5 drops) and, later, hyaluronic acid at 2% of the total compound formed. Finally, after resting for 24 hours, the cytotoxic effect of the gels was studied in the presence of the Gram-positive bacterium Staphylococcus aureus and the Gram-negative Escherichia coli. Cultures were incubated for 24 hours in the presence of the compound and compared with the reference. Results: The silver nanoparticles obtained had a spherical shape and sizes between 20 and 30 nm. UV-Vis spectra confirm the presence of silver nanoparticles, showing a surface plasmon around 420 nm. Finally, the tests in the presence of bacteria showed a good antibacterial property of this nanocompound, and tests in people were successful. Conclusion: The gel prepared by biogenic synthesis showed beneficial effects in severe acne, acne vulgaris, and wound healing in diabetic patients.
Keywords: anemopsis californica, nanomedicine, biotechnology, biomedicine
Impact of School Environment on Socio-Affective Development: A Quasi-Experimental Longitudinal Study of Urban and Suburban Gifted and Talented Programs
Authors: Rebekah Granger Ellis, Richard B. Speaker, Pat Austin
Abstract:
This study used two psychological scales to examine the level of social and emotional intelligence and moral judgment of over 500 gifted and talented high school students in various academic and creative arts programs in a large metropolitan area in the southeastern United States. For decades, numerous models and programs purporting to encourage the socio-affective characteristics of adolescent development have been explored in curriculum theory and design. The socio-affective domain merges the social, emotional, and moral domains. It encompasses interpersonal relations and social behaviors; the development and regulation of emotions; personal and gender identity construction; empathy development; and moral development, thinking, and judgment. Examining development in these socio-affective domains can provide insight into why some gifted and talented adolescents are not successful in adulthood despite advanced IQ scores: in particular, whether nonintellectual characteristics of gifted and talented individuals, such as emotional, social and moral capabilities, are as advanced as their intellectual abilities, and how these are related to each other. Unique characteristics distinguish gifted and talented individuals; these may appear as strengths, but there is the potential for problems to accompany them. Although many thrive in their school environments, some gifted students struggle rather than flourish. In the socio-affective domain, these adolescents face special intrapersonal, interpersonal, and environmental problems. Gifted individuals' cognitive, psychological, and emotional development occurs asynchronously, in multidimensional layers, at different rates and unevenly across ability levels. Therefore, it is important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of gifted and talented adolescents. This quasi-experimental longitudinal study examined students in several gifted and talented education programs (a creative arts school, urban charter schools, and suburban public schools) for (1) socio-affective development level and (2) whether a particular gifted and talented program encourages developmental growth. The following research questions guided the study: (1) How do academically and artistically talented gifted 10th and 11th grade students perform on psychometric scales of social and emotional intelligence and moral judgment? Do they differ from their age or grade normative sample? Are there gender differences among gifted students? (2) Does school environment impact 10th and 11th grade gifted and talented students' socio-affective development? Do gifted adolescents who participate in a particular school gifted program differ in their developmental profiles of social and emotional intelligence and moral judgment? Students' performances on psychometric instruments were compared over time and by type of program. Participants took pre-, mid-, and post-tests over the course of an academic school year, with the Defining Issues Test (DIT-2) assessing moral judgment and the BarOn EQ-i: YV assessing social and emotional intelligence. Based on these assessments, quantitative differences in growth on the psychological scales (individual and school) were examined, and change scores between schools were compared. If a school showed change, artifacts (culture, curricula, instructional methodology) provided insight into the environmental qualities that produced this difference.
Keywords: gifted and talented education, moral development, socio-affective development, socio-affective education
Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley
Authors: Bijit Kalita, K. V. N. Surendra
Abstract:
The pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt-pulley assembly presents a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism is performed, and finite element analysis (FEA) is conducted for the pulley to investigate the stresses experienced on its inner and outer periphery. In heavy-duty applications such as automotive engines and industrial machines, the belt drive is one of the most frequently used mechanisms to transmit power, and very heavy circular disks are usually used as pulleys. A pulley can be regarded as a drum and may have a groove between two flanges around the circumference. A rope, belt, cable or chain can be the driving element of a pulley system that runs over the pulley inside the groove. A pulley experiences normal and shear tractions on its contact regions in the process of motion transmission; the region may be the belt-pulley contact surface or the pulley-shaft contact surface. In 1882, Hertz solved the elastic contact problem for point contact and line contact of ideally smooth objects, and this solution has since been generally utilized for computing the actual contact zone. Detailed stress analysis in the contact regions of such pulleys is necessary to prevent early failure. In this paper, the results of finite element analyses carried out on the compressed disk of a belt-pulley arrangement using fracture mechanics concepts are shown. Based on the literature on contact stress problems in a wide range of applications, the stress distribution generated at the shaft-pulley and belt-pulley interfaces due to the application of high tension and torque was evaluated in this study using FEA. Finally, the results obtained from ANSYS (APDL) were compared with Hertzian contact theory. The study is mainly focused on the fatigue life estimation of a rotating part, as a component of an engine assembly, using the well-known Paris equation. Digital Image Correlation (DIC) analyses have been performed using open-source software. From the displacements computed from the images acquired at minimum and maximum force, the displacement field amplitude is obtained; from these fields, the crack path is defined and the stress intensity factors and crack tip position are extracted. A non-linear least-squares projection is used for the estimation of fatigue crack growth. Further studies will be extended to various applications of rotating machinery, such as rotating flywheel disks, jet engines, compressor disks, roller disk cutters, etc., where Stress Intensity Factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, this study will be extended to predict crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed-mode fracture.
Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor
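For the fatigue life estimation step, the Paris equation da/dN = C(ΔK)^m is integrated over the crack length, with ΔK = Y·Δσ·√(πa). The sketch below is an illustrative numerical integration with assumed material constants C, m and geometry factor Y, not the values used in the study.

```python
import numpy as np

def cycles_to_grow(a0_m, af_m, dsigma_mpa, C=1e-11, m=3.0, Y=1.12, da_m=1e-6):
    """Integrate Paris-law crack growth from a0 to af; returns the cycle count.
    Assumed units: a in metres, stress range in MPa, so dK is in MPa*sqrt(m)."""
    cycles, a = 0.0, a0_m
    while a < af_m:
        dK = Y * dsigma_mpa * np.sqrt(np.pi * a)   # stress intensity factor range
        dadN = C * dK ** m                          # crack growth per cycle (m/cycle)
        cycles += da_m / dadN                       # cycles spent on this increment
        a += da_m
    return cycles

# e.g. an edge crack growing from 1 mm to 10 mm under a 100 MPa stress range:
print(f"{cycles_to_grow(1e-3, 10e-3, 100.0):.3e} cycles")
```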
Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game
Authors: Steven W. Carruthers
Abstract:
The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated 'good' model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and post-tests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences from post-test scores on test Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%; mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resultant skewness and kurtosis worsened compared to the raw score parameters; form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating
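The raw-score methods compared above have simple closed forms. Linear equating places a new-form score on the reference scale by matching means and standard deviations, and the mean-sigma method derives the analogous slope and intercept from anchor-item difficulty estimates. The sketch below illustrates both with hypothetical score and parameter vectors; nothing here comes from the study's data.

```python
import numpy as np

def linear_equate(x_new, mean_ref, sd_ref, mean_new, sd_new):
    """Place a new-form raw score x_new on the reference-form scale."""
    return mean_ref + (sd_ref / sd_new) * (x_new - mean_new)

def mean_sigma_constants(b_ref, b_new):
    """Mean-sigma slope A and intercept B such that b_ref ≈ A * b_new + B,
    computed from anchor-item difficulty estimates on both forms."""
    A = np.std(b_ref, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_ref) - A * np.mean(b_new)
    return A, B

form_q = np.array([12, 15, 18, 20, 22, 25])   # hypothetical reference-form scores
form_r = np.array([10, 13, 17, 19, 21, 23])   # hypothetical new-form scores
print(linear_equate(19, form_q.mean(), form_q.std(ddof=1),
                    form_r.mean(), form_r.std(ddof=1)))
```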
Forming Form, Motivation and Their Biolinguistic Hypothesis: The Case of Consonant Iconicity in Tashelhiyt Amazigh and English
Authors: Noury Bakrim
Abstract:
When dealing with motivation/arbitrariness, forming form (Forma Formans) and morphodynamics are to be grasped as relevant implications of enunciation/enactment and schematization within the specificity of language as a sound/meaning articulation. Thus, the fact that a language is a form does not contradict stasis/dynamic enunciation (reflexivity vs double articulation). Moreover, some languages exemplify the role of the forming form, uttering, and schematization (roots in Semitic languages, the Chinese case). Beyond the evolutionary biosemiotic process (form/substance bifurcation, the split between realization/representation), non-isomorphism/asymmetry between linguistic form/norm and linguistic realization (phonetics, for instance) opens up a new horizon, problematizing the role of the brain and the sensorimotor contribution in the continuous forming form. We therefore hypothesize biotization as both a process and a trace co-constructing motivation and the forming form. Henceforth, referring to our findings concerning distribution and motivation patterns within Berber written texts (pulse-based obstruents and nasal-lateral levels in poetry) and oral storytelling (consonant intensity clustering in quantitative and semantic/prosodic motivation), we understand consonant clustering, motivation and schematization as a complex phenomenon partaking in patterns of oral/written iconic prosody and reflexive metalinguistic representation opening the stable form. We focus our inquiry on both Amazigh and English clusters (/spl/, /spr/) and on iconic consonant iteration in [gnunnuy] (to roll/tumble) and [smummuy] (to moan sadly or crankily). For instance, the syllabic structures of /splæʃ/ and /splæt/ imply an anamorphic representation of the state of the world: splash, impact on aquatic surfaces; splat, impact on the ground. The pair has stridency and distribution as distinctive features which specify its phonetic realization (and a part of its meaning): /ʃ/ is [+ strident] and /t/ is [+ distributed] on the vocal tract. Schematization is then a process relating physiology and code as an arthron, a vocal/bodily and vocal/practical shaping of the motor-articulatory system, leading to syntactic/semantic thematization (agent/patient roles in /spl/, /sm/ and other clusters, or the tense uvular /qq/ in initial position in Berber). Furthermore, the productivity of serial syllable sequencing in Berber points out different forms of expressivity. We postulate two components of motivated formalization: i) the process of memory paradigmatization relating to sequence modeling under sensorimotor/verbal specific categories (production/perception), and ii) the process of phonotactic selection, a prosodic unconscious/subconscious distribution by virtue of iconicity. Based on multiple tests, including a questionnaire, phonotactic/visual recognition, and oral/written reproduction, we aim at patterning and conceptualizing consonant schematization and motivation among EFL and Amazigh (Berber) learners and speakers, integrating biolinguistic hypotheses.
Keywords: consonant motivation and prosody, language and order of life, anamorphic representation, represented representation, biotization, sensori-motor and brain representation, form, formalization and schematization
Procedia PDF Downloads 142188 Environmentally Sustainable Transparent Wood: A Fully Green Approach from Bleaching to Impregnation for Energy-Efficient Engineered Wood Components
Authors: Francesca Gullo, Paola Palmero, Massimo Messori
Abstract:
Transparent wood is considered a promising structural material for the development of environmentally friendly, energy-efficient engineered components. To obtain transparent wood from natural wood materials two approaches can be used: i) bottom-up and ii) top-down. Through the second method, the color of natural wood samples is lightened through a chemical bleaching process that acts on chromophore groups of lignin, such as the benzene ring, quinonoid, vinyl, phenolics, and carbonyl groups. These chromophoric units form complex conjugate systems responsible for the brown color of wood. There are two strategies to remove color and increase the whiteness of wood: i) lignin removal and ii) lignin bleaching. In the lignin removal strategy, strong chemicals containing chlorine (chlorine, hypochlorite, and chlorine dioxide) and oxidizers (oxygen, ozone, and peroxide) are used to completely destroy and dissolve the lignin. In lignin bleaching methods, a moderate reductive (hydrosulfite) or oxidative (hydrogen peroxide) is commonly used to alter or remove the groups and chromophore systems of lignin, selectively discoloring the lignin while keeping the macrostructure intact. It is, therefore, essential to manipulate nanostructured wood by precisely controlling the nanopores in the cell walls by monitoring both chemical treatments and process conditions, for instance, the treatment time, the concentration of chemical solutions, the pH value, and the temperature. The elimination of wood light scattering is the second step in the fabrication of transparent wood materials, which can be achieved through two-step approaches: i) the polymer impregnation method and ii) the densification method. For the polymer impregnation method, the wood scaffold is treated with polymers having a corresponding refractive index (e.g., PMMA and epoxy resins) under vacuum to obtain the transparent composite material, which can finally be pressed to align the cellulose fibers and reduce interfacial defects in order to have a finished product with high transmittance (>90%) and excellent light-guiding. However, both the solution-based bleaching and the impregnation processes used to produce transparent wood generally consume large amounts of energy and chemicals, including some toxic or pollutant agents, and are difficult to scale up industrially. Here, we report a method to produce optically transparent wood by modifying the lignin structure with a chemical reaction at room temperature using small amounts of hydrogen peroxide in an alkaline environment. This method preserves the lignin, which results only deconjugated and acts as a binder, providing both a strong wood scaffold and suitable porosity for infiltration of biobased polymers while reducing chemical consumption, the toxicity of the reagents used, polluting waste, petroleum by-products, energy and processing time. The resulting transparent wood demonstrates high transmittance and low thermal conductivity. Through the combination of process efficiency and scalability, the obtained materials are promising candidates for application in the field of construction for modern energy-efficient buildings.Keywords: bleached wood, energy-efficient components, hydrogen peroxide, transparent wood, wood composites
Procedia PDF Downloads 53187 Formulation of a Submicron Delivery System including a Platelet Lysate to Be Administered in Damaged Skin
Authors: Sergio A. Bernal-Chavez, Sergio Alcalá-Alcalá, Doris A. Cerecedo-Mercado, Adriana Ganem-Rondero
Abstract:
The prevalence of people with chronic wounds has increased dramatically owing to many factors, including smoking, obesity, and chronic diseases such as diabetes, that can slow the healing process and increase the risk of wounds becoming chronic. Because of this situation, the improvement of chronic wound treatments is a necessity, which has led the scientific community to focus on improving the effectiveness of current therapies and on developing new treatments. Wound formation is a complex physiological process characterized by an inflammatory stage with the presence of proinflammatory cells that create a proteolytic microenvironment during the healing process, which includes the degradation of important growth factors and cytokines. This depletion of growth factors and cytokines suggests an interesting strategy for wound healing if they are administered externally. The use of nanometric drug delivery systems, such as polymer nanoparticles (NP), also offers an interesting alternative for dermal systems. An interesting strategy would be to propose a formulation based on a thermosensitive hydrogel loaded with polymeric nanoparticles that allows the inclusion and application of a platelet lysate (PL) on damaged skin, with the aim of promoting wound healing. In this work, NP were prepared by a double emulsion-solvent evaporation technique, using polylactic-co-glycolic acid (PLGA) as the biodegradable polymer. Firstly, an aqueous solution of PL was emulsified into a PLGA organic solution, previously prepared in dichloromethane (DCM). Then, this disperse system (W/O) was poured into a polyvinyl alcohol (PVA) solution to obtain the double emulsion (W/O/W), and finally the DCM was evaporated by magnetic stirring, resulting in the formation of NP containing PL. Once the NP were obtained, these systems were characterized by morphology, particle size, Z-potential, encapsulation efficiency (%EE), physical stability, infrared spectrum, calorimetric studies (DSC), and in vitro release profile. The optimized nanoparticles were included in a thermosensitive gel formulation of Pluronic® F-127. The gel was prepared by the cold method at 4 °C and 20% polymer concentration. Viscosity, sol-gel phase transition, time of no-flow solid-gel at wound temperature, changes in particle size by temperature effect using dynamic light scattering (DLS), occlusive effect, gel degradation, infrared spectrum and micellar point by DSC were evaluated in all gel formulations. PLGA NP of 267 ± 10.5 nm and Z-potential of -29.1 ± 1 mV were obtained. TEM micrographs verified the size of the NP and evidenced their spherical shape. The %EE for the system was around 99%. Thermograms and infrared spectra confirm the presence of PL in the NP. The systems did not show significant changes in the parameters mentioned above during the stability studies. Regarding the gel formulation, the sol-gel transition occurred at 28 °C with a time of no-flow solid-gel of 7 min at 33 °C (common wound temperature). Calorimetric, DLS and infrared studies corroborated the physical properties of a thermosensitive gel, such as the micellar point. In conclusion, the thermosensitive gel described in this work contains therapeutic amounts of PL and fulfills the technological requirements to be used on damaged skin, with potential application in wound healing and tissue regeneration.Keywords: growth factors, polymeric nanoparticles, thermosensitive hydrogels, tissue regeneration
Procedia PDF Downloads 170186 Nanoscale Photo-Orientation of Azo-Dyes in Glassy Environments Using Polarized Optical Near-Field
Authors: S. S. Kharintsev, E. A. Chernykh, S. K. Saikin, A. I. Fishman, S. G. Kazarian
Abstract:
Recent advances in improving information storage performance are inseparably linked with circumvention of fundamental constraints such as the supermagnetic limit in heat assisted magnetic recording, charge loss tolerance in solid-state memory and the Abbe’s diffraction limit in optical storage. A substantial breakthrough in the development of nonvolatile storage devices with dimensional scaling has been achieved due to phase-change chalcogenide memory, which nowadays, meets the market needs to the greatest advantage. A further progress is aimed at the development of versatile nonvolatile high-speed memory combining potentials of random access memory and archive storage. The well-established properties of light at the nanoscale empower us to use them for recording optical information with ultrahigh density scaled down to a single molecule, which is the size of a pit. Indeed, diffraction-limited optics is able to record as much information as ~1 Gb/in2. Nonlinear optical effects, for example, two-photon fluorescence recording, allows one to decrease the extent of the pit even more, which results in the recording density up to ~100 Gb/in2. Going beyond the diffraction limit, due to the sub-wavelength confinement of light, pushes the pit size down to a single chromophore, which is, on average, of ~1 nm in length. Thus, the memory capacity can be increased up to the theoretical limit of 1 Pb/in2. Moreover, the field confinement provides faster recording and readout operations due to the enhanced light-matter interaction. This, in turn, leads to the miniaturization of optical devices and the decrease of energy supply down to ~1 μW/cm². Intrinsic features of light such as multimode, mixed polarization and angular momentum in addition to the underlying optical and holographic tools for writing/reading, enriches the storage and encryption of optical information. In particular, the finite extent of the near-field penetration, falling into a range of 50-100 nm, gives the possibility to perform 3D volume (layer-to-layer) recording/readout of optical information. In this study, we demonstrate a comprehensive evidence of isotropic-to-homeotropic phase transition of the azobenzene-functionalized polymer thin film exposed to light and dc electric field using near-field optical microscopy and scanning capacitance microscopy. We unravel a near-field Raman dichroism of a sub-10 nm thick epoxy-based side-chain azo-polymer films with polarization-controlled tip-enhanced Raman scattering. In our study, orientation of azo-chromophores is controlled with a bias voltage gold tip rather than light polarization. Isotropic in-plane and homeotropic out-of-plane arrangement of azo-chromophores in glassy environment can be distinguished with transverse and longitudinal optical near-fields. We demonstrate that both phases are unambiguously visualized by 2D mapping their local dielectric properties with scanning capacity microscopy. The stability of the polar homeotropic phase is strongly sensitive to the thickness of the thin film. We make an analysis of α-transition of the azo-polymer by detecting a temperature-dependent phase jump of an AFM cantilever when passing through the glass temperature. Overall, we anticipate further improvements in optical storage performance, which approaches to a single molecule level.Keywords: optical memory, azo-dye, near-field, tip-enhanced Raman scattering
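As a rough arithmetic check of the storage densities quoted above, the sketch below converts an assumed pit size into bits per square inch. The pit dimensions are illustrative assumptions (square packing, one bit per pit, no encoding overhead), not measured values from the study; they simply reproduce the orders of magnitude cited (≈1 Gb/in² diffraction-limited, ≈100 Gb/in² for nonlinear recording, ≈1 Pb/in² for a single ~1 nm chromophore).

```python
# Back-of-envelope check of the quoted areal densities. Pit sizes are
# illustrative assumptions, not values reported by the authors.
INCH_NM = 2.54e7  # nanometres per inch

def bits_per_sq_inch(pit_nm: float) -> float:
    """One bit per square pit of side pit_nm (square packing, no overhead)."""
    return (INCH_NM / pit_nm) ** 2

for label, pit_nm in [("diffraction-limited", 500.0),
                      ("two-photon recording", 50.0),
                      ("single chromophore", 1.0)]:
    print(f"{label:>22}: ~{bits_per_sq_inch(pit_nm):.1e} bits/in^2")
```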
Procedia PDF Downloads 176185 Official Seals on the Russian-Qing Treaties: Material Manifestations and Visual Enunciations
Authors: Ning Chia
Abstract:
Each of the three different language texts (Manchu, Russian, and Latin) of the 1689 Treaty of Nerchinsk bore official seals from Imperial Russia and Qing China. These seals have received no academic attention, yet they can reveal a site of a layered and shared material, cultural, political, and diplomatic world of the time in Eastern Eurasia. The very different seal selections from both empires while ratifying the Treaty of Beijing in 1860 have obtained no scholarly advertency either; they can also explicate a tremendously changed relationship with visual and material manifestation. Exploring primary sources in Manchu, Russian, and Chinese languages as well as the images of the visual seals, this study investigates the reasons and purposes of utilizing official seals for the treaty agreement. A refreshed understanding of Russian-Qing diplomacy will be developed by pursuing the following aspects: (i) Analyzing the iconographic meanings of each seal insignia and unearthing a competitive, yet symbols-delivered and seal-generated, 'dialogue' between the two empires (ii) Contextualizing treaty seals within the historical seal cultures, and discovering how domestic seal system in each empire’s political institution developed into treaty-defined bilateral relations (iii) Expounding the seal confiding in each empire’s daily governing routines, and annotating the trust in the seal as a quested promise from the opponent negotiator to fulfill the treaty terms (iv) Contrasting the two seal traditions along two civilization-lines, Eastern vs. Western, and dissecting how the two styles of seal emblems affected the cross-cultural understanding or misunderstanding between the two empires (v) Comprehending the history-making events from the substantial resources such as the treaty seals, and grasping why the seals for the two treaties, so different in both visual design and symbolic value, were chosen in the two relationship eras (vi) Correlating the materialized seal 'expression' and the imperial worldviews based on each empire’s national/or power identity, and probing the seal-represented 'rule under the Heaven' assumption of China and Russian rising role in 'European-American imperialism … centered on East Asia' (Victor Shmagin, 2020). In conclusion, the impact of official seals on diplomatic treaties needs profound knowledge in seal history, insignia culture, and emblem belief to be able to comprehend. The official seals in both Imperial Russia and Qing China belonged to a particular statecraft art in a specific material and visual form. Once utilized in diplomatic treaties, the meticulously decorated and politically institutionalized seals were transformed from the determinant means for domestic administration and social control into the markers of an empire’s sovereign authority. Overlooked in historical practice, the insignia seal created a wire of 'visual contest' between the two rival powers. Through this material lens, the scholarly knowledge of the Russian-Qing diplomatic relationship will be significantly upgraded. Connecting Russian studies, Qing/Chinese studies, and Eurasian studies, this study also ties material culture, political culture, and diplomatic culture together. It promotes the study of official seals and emblem symbols in worldwide diplomatic history.Keywords: Russia-Qing diplomatic relation, Treaty of Beijing (1860), Treaty of Nerchinsk (1689), Treaty seals
Procedia PDF Downloads 206184 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing
Authors: Arjun Kumar Rath, Titus Dhanasingh
Abstract:
Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing at larger quantities. As the components are integrated, the devices are tested for their full functionality using advanced software tools. Benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, these tests are custom-built for every product and remain unusable for other variants. A majority of these tests go undocumented, are not updated, and become unusable when the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all of these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, AutoTest, KernelCI, etc.) designed for testing embedded system devices, each with several unique strengths, but no single tool or framework satisfies all of the testing needs of embedded systems; hence the need for an extensible framework that can draw on a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods include developing test libraries and support components for every new hardware platform that belongs to the same domain and shares an identical hardware architecture. This approach has drawbacks: non-reusability, since platform-specific libraries cannot be reused; the need to maintain source infrastructure for individual hardware platforms; and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in test environment setup, scalability, and maintenance. A desirable strategy is certainly one that is focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle, during all phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, is designed that can be deployed in embedded system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing different hardware, including microprocessors and microcontrollers. It offers benefits such as (1) Time-to-market: it accelerates board bring-up time with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing, and device drivers); (2) Reusability: framework components isolated from platform-specific hardware initialization and configuration make adapting test cases across various platforms quick and simple; (3) An effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) Continuous integration: pre-integration with Jenkins enables continuous testing and an automated software update feature.
Applying the embedded test framework accelerator throughout the design and development phase enables the development of well-tested systems before functional verification and improves time to market to a large extent.Keywords: board diagnostics software, embedded system, hardware testing, test frameworks
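A minimal sketch of the reusability idea described above is given below: a test case is written once against a small hardware-abstraction interface, and each board supplies its own initialization and access routines. The class and function names are hypothetical illustrations and do not correspond to the ETF's actual API.

```python
from abc import ABC, abstractmethod

# Illustrative sketch only: names are hypothetical, not the ETF's API.
# Test cases depend on a small hardware-abstraction interface; each board
# supplies its own peripheral initialization and access.
class Board(ABC):
    @abstractmethod
    def init_peripherals(self) -> None:
        ...

    @abstractmethod
    def read_gpio(self, pin: int) -> int:
        ...

class BoardA(Board):
    def init_peripherals(self) -> None:
        print("BoardA: configuring clocks and GPIO bank")

    def read_gpio(self, pin: int) -> int:
        return 1  # stubbed reading for the sketch

def test_gpio_loopback(board: Board) -> bool:
    """Reusable test case: platform specifics stay behind the Board interface."""
    board.init_peripherals()
    return board.read_gpio(pin=7) == 1

if __name__ == "__main__":
    result = test_gpio_loopback(BoardA())
    print("PASS" if result else "FAIL")  # this result can feed a CI job (e.g., Jenkins)
```

Adding support for a new board then means implementing the interface once, while the existing test cases run unchanged, which is the reusability benefit the framework targets.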
Procedia PDF Downloads 143183 A Case for Strategic Landscape Infrastructure: South Essex Estuary Park
Authors: Alexandra Steed
Abstract:
Alexandra Steed URBAN was commissioned to undertake the South Essex Green and Blue Infrastructure Study (SEGBI) on behalf of the Association of South Essex Local Authorities (ASELA): a partnership of seven neighboring councils within the Thames Estuary. Located on London’s doorstep, the 70,000-hectare region is under extraordinary pressure for regeneration, further development, and economic expansion, yet faces extreme challenges: sea-level rise and inadequate flood defenses, stormwater flooding and threatened infrastructure, loss of internationally important habitats, significant existing community deprivation, and lack of connectivity and access to green space. The brief was to embrace these challenges in the creation of a document that would form a key part of ASELA’s Joint Strategic Framework and feed into local plans and master plans. Thus, helping to tackle climate change, ecological collapse, and social inequity at a regional scale whilst creating a relationship and awareness between urban communities and the surrounding landscapes and nature. The SEGBI project applied a ‘land-based’ methodology, combined with a co-design approach involving numerous stakeholders, to explore how living infrastructure can address these significant issues, reshape future planning and development, and create thriving places for the whole community of life. It comprised three key stages, including Baseline Review; Green and Blue Infrastructure Assessment; and the final Green and Blue Infrastructure Report. The resulting proposals frame an ambitious vision for the delivery of a new regional South Essex Estuary (SEE) Park – 24,000 hectares of protected and connected landscapes. This unified parkland system will drive effective place-shaping and “leveling up” for the most deprived communities while providing large-scale nature recovery and biodiversity net gain. Comprehensive analysis and policy recommendations ensure best practices will be embedded within planning documents and decisions guiding future development. Furthermore, a Natural Capital Account was undertaken as part of the strategy showing the tremendous economic value of the natural assets. This strategy sets a pioneering precedent that demonstrates how the prioritisation of living infrastructure has the capacity to address climate change and ecological collapse, while also supporting sustainable housing, healthier communities, and resilient infrastructures. It was only achievable through a collaborative and cross-boundary approach to strategic planning and growth, with a shared vision of place, and a strong commitment to delivery. With joined-up thinking and a joined-up region, a more impactful plan for South Essex was developed that will lead to numerous environmental, social, and economic benefits across the region, and enhancing the landscape and natural environs on the periphery of one of the largest cities in the world.Keywords: climate change, green and blue infrastructure, landscape architecture, master planning, regional planning, social equity
Procedia PDF Downloads 95182 Synthesis and Luminescent Properties of Barium-Europium (III) Silicate Systems
Authors: A. Isahakyan, A. Terzyan, V. Stepanyan, N. Zulumyan, H. Beglaryan
Abstract:
The involvement of silica hydrogel derived from serpentine minerals (Mg(Fe))6[Si4O10](OH)8 as a source of silicon dioxide in the SiO2–NaOH–BaCl2–H2O system results in the precipitation, via one-hour stirring of the boiling suspension, of intermediates that, on heating up to 800 °C, crystallize into a product composed of barium orthosilicate Ba2SiO4 and metasilicate BaSiO3. Based on these positive results, it was decided to adapt this approach to inserting europium (III) ions into the structure of the synthesized compounds. Intermediates previously precipitated in the silica hydrogel–NaOH–BaCl2–Eu(NO3)3 system via one-hour stirring at room temperature underwent one-hour heat treatment at different temperatures (600–1200 °C). Prior to calcination, the suspension produced in the mixer was heated on a boiling-water bath until a powder-like sample was obtained. When the silica hydrogel was metered, the SiO2 content of the silica hydrogel, which is 5.8%, was taken into consideration in order to guarantee molar ratios of both SiO2 to BaO and SiO2 to Na2O equal to 1:2. The BaCl2 and Eu(NO3)3 reagents were weighed so that the formation of the appropriate compositions was guaranteed. Samples including various concentrations of Eu3+ ions (1.25, 2.5, 3.75, 5, 6.35, 8.65, 10, 17.5, 18.75 and 20 mol%) were synthesized by the described method. Luminescence excitation and emission spectra of the products were recorded on an Agilent Cary Eclipse fluorescence spectrophotometer using an Agilent xenon flash lamp (80 Hz) as the excitation source (scanning rate = 30 nm/min, excitation and emission slit widths = 5 nm, excitation filter set to auto, emission filter set to auto, and PMT detector voltage = 800 V). Prior to the optical measurements, each powder sample was placed in a solid sample holder. X-ray powder diffraction (XRPD) measurements were made on a SmartLab SE diffractometer. Emission spectra recorded for all the samples at an excitation wavelength of 394 nm exhibit peaks centered at around 536, 555, 587, 614, 653, 690 and 702.5 nm. The most intense emission peak is observed at 614 nm due to the 5D0→7F2 transition of europium (III) ions. Luminescence intensity achieves its maximum for 17.5 mol% Eu3+ and heat treatment at 1200 °C. The XRPD patterns revealed that the diffraction peaks recorded for this sample are identical to NaBa6Nd(SiO4)4 reflections. As Nd-containing reagents were not involved in the synthesis, the maximum luminescence intensity is most likely conditioned by the formation of NaBa6Eu(SiO4)4, whose reflections are not available in the ICDD-JCPDS crystallographic database 2024. Up to 2.5 mol% Eu3+, the samples demonstrate phases corresponding to the Ba2SiO4 and BaSiO3 standards. A further increase of the europium (III) concentration in the system leads to NaBa6Eu(SiO4)4 formation along with Ba2SiO4 and BaSiO3. The NaBa6Eu(SiO4)4 share gradually increases, and from 17.5 mol% onwards only the NaBa6Eu(SiO4)4 phase is registered. Thus, varying the europium (III) concentration in the silica hydrogel–NaOH–BaCl2–Eu(NO3)3 system allows the precipitation method to produce products composed of europium (III)-doped Ba2SiO4 and BaSiO3 and/or NaBa6Eu(SiO4)4, distinguished by different luminescent properties. The work was supported by the Science Committee of RA, in the frames of the research projects № 21T-1D131 and № 21SCG-1D013.Keywords: europium (III)-doped barium orthosilicate Ba2SiO4 and metasilicate BaSiO₃, NaBa₆Eu(SiO₄)₄, luminescence, precipitation method
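As a worked example of how the 5.8 wt% SiO2 content of the hydrogel and the 1:2 molar ratios fix the reagent amounts, the sketch below computes masses for a hypothetical batch. The batch size, the use of anhydrous BaCl2 and NaOH as the Na2O source, and the omission of the Eu(NO3)3 dopant are assumptions made for illustration only.

```python
# Worked stoichiometry example (assumptions: 0.05 mol SiO2 batch, anhydrous
# BaCl2, NaOH as the Na2O source, Eu(NO3)3 dopant omitted).
M_SIO2, M_BACL2, M_NAOH = 60.08, 208.23, 40.00  # molar masses, g/mol
SIO2_WT_FRACTION = 0.058                        # SiO2 share of the hydrogel

n_sio2 = 0.05                       # mol SiO2 in the batch (assumed)
hydrogel_g = n_sio2 * M_SIO2 / SIO2_WT_FRACTION
bacl2_g = 2 * n_sio2 * M_BACL2      # SiO2 : BaO = 1 : 2, BaCl2 supplies BaO
naoh_g = 4 * n_sio2 * M_NAOH        # SiO2 : Na2O = 1 : 2, i.e. 4 mol NaOH per mol SiO2

print(f"hydrogel: {hydrogel_g:.1f} g, BaCl2: {bacl2_g:.1f} g, NaOH: {naoh_g:.1f} g")
```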
Procedia PDF Downloads 38181 Analysis of Potential Associations of Single Nucleotide Polymorphisms in Patients with Schizophrenia Spectrum Disorders
Authors: Tatiana Butkova, Nikolai Kibrik, Kristina Malsagova, Alexander Izotov, Alexander Stepanov, Anna Kaysheva
Abstract:
Relevance. The genetic risk of developing schizophrenia is determined by two factors: single nucleotide polymorphisms and gene copy number variations. The search for serological markers for early diagnosis of schizophrenia is driven by the fact that the first five years of the disease are accompanied by significant biological, psychological, and social changes. It is during this period that pathological processes are most amenable to correction. The aim of this study was to analyze single nucleotide polymorphisms (SNPs) that are hypothesized to potentially influence the onset and development of the endogenous process. Materials and Methods. Seventy-three single nucleotide polymorphism variants were analyzed. The study included 48 patients undergoing inpatient treatment at "Psychiatric Clinical Hospital No. 1" in Moscow, comprising 23 females and 25 males. Inclusion criteria: - Patients aged 18 and above. - Diagnosis according to ICD-10: F20.0, F20.2, F20.8, F21.8, F25.1, F25.2. - Voluntary informed consent from patients. Exclusion criteria included: - The presence of concurrent somatic or neurological pathology, neuroinfections, epilepsy, organic central nervous system damage of any etiology, and regular use of medication. - Substance abuse and alcohol dependence. - Women who were pregnant or breastfeeding. Clinical and psychopathological assessment was complemented by psychometric evaluation using the PANSS scale at the beginning and end of treatment. The duration of observation during therapy was 4-6 weeks. Total DNA extraction was performed using a QIAamp DNA kit. Blood samples were processed on an Illumina HiScan and genotyped for 652,297 markers on the Infinium Global Screening Array-24 v2.0; imputation was performed with the IMPUTE2 program using parameters Ne=20,000 and k=90. Additional filtration was performed based on INFO>0.5 and genotype probability>0.5. Quality control of the obtained DNA was conducted using agarose gel electrophoresis, with each tested sample having a volume of 100 µL. Results. It was observed that several SNPs exhibited gender dependence. We identified groups of single nucleotide polymorphisms with a membership of 80% or more in either the female or the male gender. These SNPs included rs2661319, rs2842030, rs4606, rs11868035, rs518147, rs5993883, and rs6269. Another noteworthy finding was the limited combination of SNPs sufficient to manifest clinical symptoms leading to hospitalization. Among all 48 patients, each of whom was analyzed for deviations in 73 SNPs, it was discovered that the combination of SNPs involved in the manifestation of pronounced clinical symptoms of schizophrenia was 19±3 out of 73 possible. In our study, the frequency of occurrence of single nucleotide polymorphisms also varied. The most frequently observed SNPs were rs4849127 (in 90% of cases), rs1150226 (86%), rs1414334 (75%), rs10170310 (73%), rs2857657, and rs4436578 (71%). Conclusion. The results of this study provide additional evidence that these genes may be associated with the development of schizophrenia spectrum disorders. However, we cannot rule out the hypothesis that these polymorphisms may be in linkage disequilibrium with other functionally significant polymorphisms that may actually be involved in schizophrenia spectrum disorders.
It has been shown that missense SNPs by themselves are likely not causative of the disease but are in strong linkage disequilibrium with non-functional SNPs that may indeed contribute to disease predisposition.Keywords: gene polymorphisms, genotyping, single nucleotide polymorphisms, schizophrenia.
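The post-imputation filtering step described above (INFO > 0.5 and genotype probability > 0.5) can be sketched as follows; the variant records and their values are invented for illustration and the code is not the study's actual IMPUTE2 output pipeline.

```python
# Minimal sketch of the post-imputation QC filter: keep variants with
# INFO > 0.5 and calls whose best genotype probability exceeds 0.5.
# The records below are hypothetical; the rsIDs are reused from the text
# but the INFO values and probabilities are invented.
variants = [
    {"rsid": "rs4849127", "info": 0.93, "probs": (0.02, 0.08, 0.90)},
    {"rsid": "rs1150226", "info": 0.41, "probs": (0.30, 0.40, 0.30)},
    {"rsid": "rs1414334", "info": 0.77, "probs": (0.55, 0.35, 0.10)},
]

def passes_qc(variant, info_min=0.5, prob_min=0.5):
    """Apply the INFO and genotype-probability thresholds named in the text."""
    return variant["info"] > info_min and max(variant["probs"]) > prob_min

kept = [v["rsid"] for v in variants if passes_qc(v)]
print(kept)  # rs4849127 and rs1414334 pass; rs1150226 fails the INFO filter
```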
Procedia PDF Downloads 78180 Challenges and Proposals for Public Policies Aimed At Increasing Energy Efficiency in Low-Income Communities in Brazil: A Multi-Criteria Approach
Authors: Anna Carolina De Paula Sermarini, Rodrigo Flora Calili
Abstract:
Energy Efficiency (EE) needs investments, new technologies, greater awareness and management on the side of citizens and organizations, and more planning. However, this issue is usually remembered and discussed only in moments of energy crises, and opportunities are missed to take better advantage of the potential of EE in the various sectors of the economy. In addition, there is little concern about the subject among the less favored classes, especially in low-income communities. Accordingly, this article presents suggestions for public policies that aim to increase EE for low-income housing and communities based on international and national experiences. After reviewing the literature, eight policies were listed, and to evaluate them; a multicriteria decision model was developed using the AHP (Analytical Hierarchy Process) and TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) methods, combined with fuzzy logic. Nine experts analyzed the policies according to 9 criteria: economic impact, social impact, environmental impact, previous experience, the difficulty of implementation, possibility/ease of monitoring and evaluating the policies, expected impact, political risks, and public governance and sustainability of the sector. The results found in order of preference are (i) Incentive program for equipment replacement; (ii) Community awareness program; (iii) EE Program with a greater focus on low income; (iv) Staggered and compulsory certification of social interest buildings; (v) Programs for the expansion of smart metering, energy monitoring and digitalization; (vi) Financing program for construction and retrofitting of houses with the emphasis on EE; (vii) Income tax deduction for investment in EE projects in low-income households made by companies; (viii) White certificates of energy for low-income. First, the policy of equipment substitution has been employed in Brazil and the world and has proven effective in promoting EE. For implementation, efforts are needed from the federal and state governments, which can encourage companies to reduce prices, and provide some type of aid for the purchase of such equipment. In second place is the community awareness program, promoting socio-educational actions on EE concepts and with energy conservation tips. This policy is simple to implement and has already been used by many distribution utilities in Brazil. It can be carried out through bids defined by the government in specific areas, being executed by third sector companies with public and private resources. Third on the list is the proposal to continue the Energy Efficiency Program (which obliges electric energy companies to allocate resources for research in the area) by suggesting the return of the mandatory investment of 60% of the resources in projects for low income. It is also relatively simple to implement, requiring efforts by the federal government to make it mandatory, and on the part of the distributors, compliance is needed. The success of the suggestions depends on changes in the established rules and efforts from the interested parties. For future work, we suggest the development of pilot projects in low-income communities in Brazil and the application of other multicriteria decision support methods to compare the results obtained in this study.Keywords: energy efficiency, low-income community, public policy, multicriteria decision making
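The ranking step can be illustrated with a compact TOPSIS sketch on an invented policy-by-criterion matrix; the scores, weights, and criterion directions below are placeholders, and the AHP-derived weights and fuzzy scoring used in the study are omitted for brevity.

```python
import numpy as np

# Compact TOPSIS sketch for ranking policies against criteria. The matrix,
# weights, and benefit/cost directions are invented for illustration.
X = np.array([          # rows: policies, columns: criteria (e.g. impact, cost, risk)
    [8.0, 6.0, 3.0],
    [7.0, 4.0, 2.0],
    [6.0, 7.0, 5.0],
])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])   # higher is better only for column 0

R = X / np.linalg.norm(X, axis=0)          # vector normalization
V = R * weights                            # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)   # distance to the anti-ideal solution
closeness = d_neg / (d_pos + d_neg)        # higher closeness = better rank

print(np.argsort(-closeness))              # policy indices ordered best to worst
```

In the study, the expert judgments would first be converted into criterion weights (via AHP) and fuzzified scores before a step of this kind produces the preference ordering reported above.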
Procedia PDF Downloads 116179 Coil-Over Shock Absorbers Compared to Inherent Material Damping
Authors: Carina Emminger, Umut D. Cakmak, Evrim Burkut, Rene Preuer, Ingrid Graz, Zoltan Major
Abstract:
Damping accompanies us in everyday life and is used to protect (e.g., in shoes) and to make our lives more comfortable (damping of unwanted motion) and calm (noise reduction). In general, damping is the absorption of energy, which is either stored in the material (vibration isolation systems) or converted into heat (vibration absorbers). In the case of the latter, the damping mechanism can be split into active, passive, and semi-active (a combination of active and passive). Active damping is required to enable almost perfect damping over the whole application range and is used, for instance, in sports cars. In contrast, passive damping is a response of the material to external loading. Consequently, the material composition has a huge influence on the damping behavior. For elastomers, the material behavior is inherently viscoelastic and both temperature- and frequency-dependent. However, passive damping is not adjustable during application. Therefore, it is important to understand the fundamental viscoelastic behavior and the dissipation capability under external loading. The objective of this work is to assess the limitations and applicability of viscoelastic material damping for applications in which coil-over shock absorbers are currently utilized. Coil-over shock absorbers are usually made of various mechanical parts and incorporate fluids within the damper. These shock absorbers are well known and studied in the industry, and when needed, they can be easily adjusted during their product lifetime. In contrast, dampers made of – ideally – a single material are more resource-efficient, easier to service, and easier to manufacture. However, they lack adaptability and adjustability in service. Therefore, a case study with a remote-controlled sport car was conducted. The original shock absorbers were redesigned, and the spring-dashpot system was replaced by an elastomer and a thermoplastic elastomer, respectively. Here, five different elastomer formulations were used, including a pure and an iron-particle-filled thermoplastic poly(urethane) (TPU) and blends of two different poly(dimethyl siloxane) (PDMS) materials. In addition, the TPUs were investigated as full and hollow dampers to examine the difference between solid and structured material. To obtain comparative results, each material formulation was comprehensively characterized by monotonic uniaxial compression tests, dynamic thermomechanical analysis (DTMA), and rebound resilience. Moreover, the new material-based shock absorbers were compared with spring-dashpot shock absorbers. The shock absorbers were analyzed under monotonic and cyclic loading. In addition, an impact load was applied to the remote-controlled car to measure the damping properties in operation. A servo-hydraulic high-speed linear actuator was utilized to apply the loads. The acceleration of the car and the displacement of specific measurement points were recorded during testing by a sensor and a high-speed camera, respectively. The results show that elastomers are suitable for damping applications, but they are temperature- and frequency-dependent. This is a limitation on the applicability of viscous material dampers. Feasible fields of application may lie in micromobility, e.g., bicycles, e-scooters, and e-skateboards. Furthermore, viscous material damping could be used to increase the inherent damping of a whole structure, e.g., in bicycle frames.Keywords: damper structures, material damping, PDMS, TPU
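The comparison between a coil-over (spring-dashpot) damper and an idealized viscoelastic material damper can be sketched with a single-degree-of-freedom model, since for small motions both reduce to the same linear equation m·x'' + c·x' + k·x = 0; only the origin of the damping coefficient c differs. The parameters below are invented and do not correspond to the tested car or materials.

```python
import math

# Single-degree-of-freedom sketch of passive damping: an idealized coil-over
# (spring + dashpot) and a Kelvin-Voigt viscoelastic element both reduce to
# m*x'' + c*x' + k*x = 0 for small motions. All parameters are invented.
m, k = 0.5, 4000.0                                          # mass [kg], stiffness [N/m]
dampers = {"lightly damped": 5.0, "heavily damped": 60.0}   # c [N*s/m]

dt, n_steps = 1e-4, 5000                 # 0.5 s of simulated free vibration
for label, c in dampers.items():
    x, v = 0.01, 0.0                     # 10 mm initial deflection (impact-like)
    residual = 0.0
    for i in range(n_steps):
        a = -(c * v + k * x) / m         # Newton's second law
        v += a * dt                      # semi-implicit Euler integration
        x += v * dt
        if i * dt > 0.05:                # largest excursion after 50 ms
            residual = max(residual, abs(x))
    zeta = c / (2 * math.sqrt(k * m))    # dimensionless damping ratio
    print(f"{label}: damping ratio {zeta:.2f}, residual amplitude {residual * 1000:.2f} mm")
```

For a viscoelastic damper, c is governed by the material's loss behavior and therefore shifts with temperature and frequency, which is exactly the limitation the study highlights.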
Procedia PDF Downloads 113178 The 5-HT1A Receptor Biased Agonists, NLX-101 and NLX-204, Elicit Rapid-Acting Antidepressant Activity in Rat Similar to Ketamine and via GABAergic Mechanisms
Authors: A. Newman-Tancredi, R. Depoortère, P. Gruca, E. Litwa, M. Lason, M. Papp
Abstract:
The N-methyl-D-aspartic acid (NMDA) receptor antagonist, ketamine, can elicit rapid-acting antidepressant (RAAD) effects in treatment-resistant patients, but it requires parenteral co-administration with a classical antidepressant under medical supervision. In addition, ketamine can also produce serious side effects that limit its long-term use, and there is much interest in identifying RAADs based on ketamine’s mechanism of action but with safer profiles. Ketamine elicits GABAergic interneuron inhibition, glutamatergic neuron stimulation, and, notably, activation of serotonin 5-HT1A receptors in the prefrontal cortex (PFC). Direct activation of the latter receptor subpopulation with selective ‘biased agonists’ may therefore be a promising strategy to identify novel RAADs and, consistent with this hypothesis, the prototypical cortical biased agonist, NLX-101, exhibited robust RAAD-like activity in the chronic mild stress model of depression (CMS). The present study compared the effects of a novel, selective 5-HT1A receptor-biased agonist, NLX-204, with those of ketamine and NLX-101. Materials and methods: CMS procedure was conducted on Wistar rats; drugs were administered either intraperitoneally (i.p.) or by bilateral intracortical microinjection. Ketamine: 10 mg/kg i.p. or 10 µg/side in PFC; NLX-204 and NLX-101: 0.08 and 0.16 mg/kg i.p. or 16 µg/side in PFC. In addition, interaction studies were carried out with systemic NLX-204 or NLX-101 (each at 0.16 mg/kg i.p.) in combination with intracortical WAY-100635 (selective 5-HT1A receptor antagonist; 2 µg/side) or muscimol (GABA-A receptor agonist, 12.5 ng/side). Anhedonia was assessed by CMS-induced decrease in sucrose solution consumption; anxiety-like behavior was assessed using the Elevated Plus Maze (EPM), and cognitive impairment was assessed by the Novel Object Recognition (NOR) test. Results: A single administration of NLX-204 was sufficient to reverse the CMS-induced deficit in sucrose consumption, similarly to ketamine and NLX-101. NLX-204 also reduced CMS-induced anxiety in the EPM and abolished CMS-induced NOR deficits. These effects were maintained (EPM and NOR) or enhanced (sucrose consumption) over a subsequent 2-week period of treatment. The anti-anhedonic response of the drugs was also maintained for several weeks Following treatment discontinuation, suggesting that they had sustained effects on neuronal networks. A single PFC administration of NLX-204 reversed deficient sucrose consumption, similarly to ketamine and NLX-101. Moreover, the anti-anhedonic activities of systemic NLX-204 and NLX 101 were abolished by coadministration with intracortical WAY-100635 or muscimol. Conclusions: (i) The antidepressant-like activity of NLX-204 in the rat CMS model was as rapid as that of ketamine or NLX-101, supporting targeting cortical 5-HT1A receptors with selective, biased agonists to achieve RAAD effects. (ii)The anti-anhedonic activity of systemic NLX-204 was mimicked by local administration of the compound in the PFC, confirming the involvement of cortical circuits in its RAAD-like effects. (iii) Notably, the effects of systemic NLX-204 and NLX-101 were abolished by PFC administration of muscimol, indicating that they act by (indirectly) eliciting a reduction in cortical GABAergic neurotransmission. This is consistent with ketamine’s mechanism of action and suggests that there are converging NMDA and 5-HT1A receptor signaling cascades in PFC underlying the RAAD-like activities of ketamine and NLX-204. 
Acknowledgements: The study was financially supported by NCN grant no. 2019/35/B/NZ7/00787.Keywords: depression, ketamine, serotonin, 5-HT1A receptor, chronic mild stress
Procedia PDF Downloads 110177 Impact of COVID-19 on Study Migration
Authors: Manana Lobzhanidze
Abstract:
The COVID-19 pandemic has brought significant changes to migration processes, notably to study migration. The constraints caused by the COVID-19 pandemic led to changes in the studying process, which negatively affected its efficiency. The educational process has partially or completely shifted to distance learning; both labor and study migration have increased significantly worldwide. The employment and education market has become global, and consequently a number of challenges have arisen for employers, researchers, and businesses. The role of preparing qualified personnel in achieving high productivity is substantiated; the benefits for employers and employees are assessed on the one hand, and the role of study migration in the country’s development is examined on the other. Research methods. The research is based on methods of analysis and synthesis, quantitative and qualitative approaches, groupings, relative and mean quantities, graphical representation, and comparison. In-depth interviews were conducted with experts to determine quantitative and qualitative indicators. Research findings. Factors affecting study migration are analyzed in the paper, and the environment that stimulates migration is explored. One of the driving forces of migration is considered to be the desire to receive higher pay. Levels and indicators of study migration are studied by country. Comparative analysis has found that study migration rates are high in countries where the price of skilled labor is high. The productivity of individuals with low skills is low, which negatively affects the economic development of countries. It has been revealed that students leave the country to improve their skills during study migration. The process mentioned in the article is evaluated as a positive event for a developing country, as individuals are given the opportunity to share the technology of developed countries, gain knowledge, and then introduce it in their own country. The downside of study migration is that only a small proportion of graduates return from developed economies to their home countries. The article concludes that countries with emerging economies devote fewer resources to research and development, while this is a priority in developed countries, allowing highly skilled individuals to use their skills efficiently. The paper studies the national education system and examines the level of competition in the education market and the indicators of educational migration. The paper also analyzes the level of competition in the education and labor markets and identifies indicators of study migration. During the pandemic period, there was great demand for digital technologies. Open access to a variety of comprehensive platforms will significantly reduce study migration to other countries. As a forecast, it can be said that the use of e-learning platforms will increase significantly in the post-pandemic period.
The paper analyzes the positive and negative effects of study migration on economic development, examines the challenges of study migration in light of the COVID-19 pandemic, suggests ways to avoid negative consequences, and develops recommendations for improving the study migration process in the post-pandemic period.Keywords: study migration, COVID-19 pandemic, factors affecting migration, economic development, post-pandemic migration
Procedia PDF Downloads 125176 Audience Members' Perspective-Taking Predicts Accurate Identification of Musically Expressed Emotion in a Live Improvised Jazz Performance
Authors: Omer Leshem, Michael F. Schober
Abstract:
This paper introduces a new method for assessing how audience members and performers feel and think during live concerts, and how audience members' recognized and felt emotions are related. Two hypotheses were tested in a live concert setting: (1) that audience members’ cognitive perspective taking ability predicts their accuracy in identifying an emotion that a jazz improviser intended to express during a performance, and (2) that audience members' affective empathy predicts their likelihood of feeling the same emotions as the performer. The aim was to stage a concert with audience members who regularly attend live jazz performances, and to measure their cognitive and affective reactions during the performance as non-intrusively as possible. Pianist and Grammy nominee Andy Milne agreed, without knowing details of the method or hypotheses, to perform a full-length solo improvised concert that would include an ‘unusual’ piece. Jazz fans were recruited through typical advertising for New York City jazz performances. The event was held at the New School’s Glass Box Theater, the home of leading NYC jazz venue ‘The Stone.’ Audience members were charged typical NYC jazz club admission prices; advertisements informed them that anyone who chose to participate in the study would be reimbursed their ticket price after the concert. The concert, held in April 2018, had 30 attendees, 23 of whom participated in the study. Twenty-two minutes into the concert, the performer was handed a paper note with the instruction: ‘Perform a 3-5-minute improvised piece with the intention of conveying sadness.’ (Sadness was chosen based on previous music cognition lab studies, where solo listeners were less likely to select sadness as the musically-expressed emotion accurately from a list of basic emotions, and more likely to misinterpret sadness as tenderness). Then, audience members and the performer were invited to respond to a questionnaire from a first envelope under their seat. Participants used their own words to describe the emotion the performer had intended to express, and then to select the intended emotion from a list. They also reported the emotions they had felt while listening using Izard’s differential emotions scale. The concert then continued as usual. At the end, participants answered demographic questions and Davis’ interpersonal reactivity index (IRI), a 28-item scale designed to assess both cognitive and affective empathy. Hypothesis 1 was supported: audience members with greater cognitive empathy were more likely to accurately identify sadness as the expressed emotion. Moreover, audience members who accurately selected ‘sadness’ reported feeling marginally sadder than people who did not select sadness. Hypotheses 2 was not supported; audience members with greater affective empathy were not more likely to feel the same emotions as the performer. If anything, members with lower cognitive perspective-taking ability had marginally greater emotional overlap with the performer, which makes sense given that these participants were less likely to identify the music as sad, which corresponded with the performer’s actual feelings. Results replicate findings from solo lab studies in a concert setting and demonstrate the viability of exploring empathy and collective cognition in improvised live performance.Keywords: audience, cognition, collective cognition, emotion, empathy, expressed emotion, felt emotion, improvisation, live performance, recognized emotion
Procedia PDF Downloads 131175 Green Building Risks: Limits on Environmental and Health Quality Metrics for Contractors
Authors: Erica Cochran Hameen, Bobuchi Ken-Opurum, Mounica Guturu
Abstract:
The United States (U.S.) populace spends the majority of its time indoors, in spaces where building codes and voluntary sustainability standards provide clear Indoor Environmental Quality (IEQ) metrics. The existing sustainable building standards and codes are aimed at improving IEQ and occupant health and at reducing the negative impacts of buildings on the environment. While they address the post-occupancy stage of buildings, there are fewer standards for the pre-occupancy stage, thereby placing a large labor population in far less regulated environments. Construction personnel are often exposed to a variety of uncomfortable and unhealthy elements while on construction sites, primarily thermal, visual, acoustic, and air-quality related. Construction site power generators, equipment, and machinery generate noise on average 9 decibels (dBA) above U.S. OSHA limits, creating uncomfortable noise levels. Research has shown that frequent exposure to high noise levels leads to chronic physiological issues and increases noise-induced stress, yet beyond OSHA no other metric focuses directly on the impacts of noise on contractors’ well-being. Research has also associated natural light with higher productivity and attention span and fewer cases of fatigue in construction workers. However, daylight is not always available, as construction workers often perform tasks in cramped spaces, dark areas, or at nighttime. In these instances, the use of artificial light is necessary, yet lighting standards for lengthy tasks and arduous activities are not specified. Additionally, ambient air, contaminants, and material off-gassing expelled at construction sites are among the causes of serious health effects in construction workers. Coupled with extreme hot and cold temperatures in different climate zones, health and productivity can be seriously compromised. This research evaluates the impact of existing green building metrics on construction and risk management by analyzing two codes and nine standards, including LEED, WELL, and BREEAM. These metrics were chosen based on their relevance to the U.S. construction industry. This research determined that less than 20% of the sustainability content within the standards and codes (texts) is related to the pre-occupancy building sector. The research also investigated the impact of construction personnel’s health and well-being on construction management through two surveys capturing project managers’ and on-site contractors’ perceptions of how their work environment affects productivity. To fully understand the risks of limited Environmental and Health Quality (EHQ) metrics for contractors, this research evaluated the effect of EHQ factors, such as inefficient lighting, on construction workers and investigated the correlation between various on-site coping strategies, comfort, and productivity. Outcomes from this research are three-pronged. The first includes fostering a discussion about the existing conditions of EHQ elements, i.e., thermal, lighting, ergonomic, acoustic, and air quality, on the construction labor force. The second identifies gaps in sustainability standards and codes during the pre-occupancy stage of building construction, from ground-breaking to substantial completion.
The third identifies opportunities for improvements and mitigation strategies to improve EHQ, such as increased monitoring of effects on contractors’ productivity and health and increased inclusion of the pre-occupancy stage in green building standards.Keywords: construction contractors, health and well-being, environmental quality, risk management
Procedia PDF Downloads 131