Search results for: thermo-mechanical model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16885


625 Real-Time Monitoring of Complex Multiphase Behavior in a High Pressure and High Temperature Microfluidic Chip

Authors: Renée M. Ripken, Johannes G. E. Gardeniers, Séverine Le Gac

Abstract:

Controlling the multiphase behavior of aqueous biomass mixtures is essential when working in the biomass conversion industry. Here, the vapor/liquid equilibria (VLE) of ethylene glycol, glycerol, and xylitol were studied for temperatures between 25 and 200 °C and pressures of 1 to 10 bar. These experiments were performed in a microfluidic platform, which exhibits excellent heat transfer properties so that equilibrium is reached quickly. Firstly, the saturated vapor pressure as a function of temperature and substrate mole fraction was calculated using AspenPlus with a Redlich-Kwong-Soave Boston-Mathias (RKS-BM) model. Secondly, we developed a high-pressure and high-temperature microfluidic set-up for experimental validation. Furthermore, we studied the multiphase flow pattern that occurs after the saturation temperature is reached. A glass-silicon microfluidic device containing a 0.4 or 0.2 m long meandering channel with a depth of 250 μm and a width of 250 or 500 μm was fabricated using standard microfabrication techniques. This device was placed in a dedicated chip-holder, which includes a ceramic heater on the silicon side. The temperature was controlled and monitored by three K-type thermocouples: two were located between the heater and the silicon substrate, one to set the temperature and one to measure it, and the third was placed in a 300 μm wide and 450 μm deep groove on the glass side to determine the heat loss over the silicon. An adjustable back-pressure regulator and a pressure meter were added to control and evaluate the pressure during the experiment. Aqueous biomass solutions (10 wt%) were pumped at a flow rate of 10 μL/min using a syringe pump, and the temperature was slowly increased until the theoretical saturation temperature for the pre-set pressure was reached. First, and surprisingly, a significant difference was observed between the theoretical saturation temperatures and the experimental results. 
The experimental values were tens of degrees higher than the calculated ones and, in some cases, saturation could not be achieved. This discrepancy can be explained in different ways. Firstly, the pressure in the microchannel is locally higher due to both the thermal expansion of the liquid and the Laplace pressure that has to be overcome before a gas bubble can form. Secondly, superheating effects are likely to be present. Next, once saturation was reached, the flow pattern of the gas/liquid multiphase system was recorded. In our device, the point of nucleation can be controlled by taking advantage of the pressure drop across the channel and the accurate control of the temperature. Specifically, a higher temperature resulted in nucleation further upstream in the channel. As the void fraction increases downstream, the flow regime changes along the channel from bubbly flow to Taylor flow and later to annular flow. All three flow regimes were observed simultaneously. The findings of this study are key for the development and optimization of a microreactor for hydrogen production from biomass.
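As a rough illustration of the two pressures discussed above, the sketch below compares the saturated vapor pressure of pure water (Antoine equation, used here only as a simple stand-in for the RKS-BM model of the abstract) with the Laplace pressure a bubble must overcome in a 250 μm deep channel. The Antoine coefficients, surface tension, and bubble radius are textbook assumptions, not values from the study.

```python
import math

def antoine_psat_bar(t_celsius):
    # Antoine equation for water (coefficients from standard tables,
    # valid roughly 1-100 degC); pressure returned in bar.
    p_mmhg = 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))
    return p_mmhg / 750.062

def laplace_pressure_bar(sigma, radius):
    # Extra pressure (bar) a nucleating bubble of the given radius [m]
    # must overcome; sigma is the surface tension [N/m].
    return 2 * sigma / radius / 1e5

# Example: a bubble constrained by the 250 um channel depth (r = 125 um),
# with sigma ~0.059 N/m for water near 100 degC (assumed value).
dp = laplace_pressure_bar(0.059, 125e-6)
```

For a bubble filling the channel depth, the Laplace contribution is small (under 0.01 bar), which suggests that the tens-of-degrees shift observed experimentally is dominated by the much smaller critical radius of the initial nucleus and by superheating, consistent with the abstract's second explanation.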

Keywords: biomass conversion, high pressure and high temperature microfluidics, multiphase, phase diagrams, superheating

Procedia PDF Downloads 218
624 Influence of Structured Capillary-Porous Coatings on Cryogenic Quenching Efficiency

Authors: Irina P. Starodubtseva, Aleksandr N. Pavlenko

Abstract:

Quenching is a term generally accepted for the process of rapid cooling of a solid that is overheated above the thermodynamic limit of liquid superheat. The main objective of many previous studies on quenching is to find a way to reduce the total time of the transient process. Computational experiments were performed to simulate the quenching, by a falling liquid nitrogen film, of an extremely overheated vertical copper plate with a structured capillary-porous coating. The coating was produced by directed plasma spraying. Due to the complexity of the physical pattern of quenching, ranging from chaotic processes to phase transitions, the mechanism of heat transfer during quenching is still not sufficiently understood. To our best knowledge, no information exists on when and how the first stable liquid-solid contact occurs and how the local contact area begins to expand. Here we have more models and hypotheses than firmly established facts. The peculiarities of the quench front dynamics and heat transfer in the transient process are studied. The numerical model created determines the quench front velocity and the temperature fields in the heater, varying in space and time. The dynamic pattern of the running quench front obtained numerically correlates satisfactorily with the pattern observed in experiments. Capillary-porous coatings with straight and reverse orientation of crests are investigated. The results show that the cooling rate is influenced by the thermal properties of the coating as well as the structure and geometry of the protrusions. The presence of a capillary-porous coating significantly affects the dynamics of quenching and reduces the total quenching time more than threefold. This effect is due to the fact that the initialization of a quench front on a plate with a capillary-porous coating occurs at a temperature significantly higher than the thermodynamic limit of liquid superheat, at which a stable solid-liquid contact is thermodynamically impossible. 
Waves present on the liquid-vapor interface and protrusions on the complex micro-structured surface destabilize the vapor film and cause local liquid-solid micro-contacts to appear, even though the average integral surface temperature is much higher than the liquid superheat limit. The reliability of the results is confirmed by direct comparison with experimental data on the quench front velocity, the quench front geometry, and the change in surface temperature over time. Knowledge of the quench front velocity and the total time of the transient process is required for solving practically important problems of nuclear reactor safety.
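The authors' model itself is not given in the abstract; purely as a minimal illustration of how a transient conduction calculation yields a cooling time, the sketch below runs a 1-D explicit finite-difference (FTCS) simulation of a slab whose wetted face is held at the coolant temperature. All material and geometric numbers are generic assumptions, not values from the study.

```python
# Minimal 1-D explicit finite-difference sketch of transient conduction
# in a copper plate (illustrative only; NOT the authors' 2-D model).
import numpy as np

def quench_time(alpha=1.1e-4, L=0.01, n=51, T0=400.0, Tc=77.0, T_target=110.0):
    """Time [s] for the back face of a slab (initially at T0 [K], with the
    front face quenched to Tc) to cool below T_target, via FTCS."""
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / alpha            # below the 0.5 stability limit
    T = np.full(n, T0)
    t = 0.0
    while T[-1] > T_target:
        T[0] = Tc                       # rewetted (liquid-contact) face
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
        T[-1] = T[-2]                   # insulated back face
        t += dt
    return t
```

For a 1 cm copper slab this gives a cooling time of order one second; lowering the effective diffusivity (as a poorly conducting coating layer would) lengthens it, which is the qualitative sensitivity to coating thermal properties the abstract reports.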

Keywords: capillary-porous coating, heat transfer, Leidenfrost phenomenon, numerical simulation, quenching

Procedia PDF Downloads 130
623 The Effect of Slum Neighborhoods on Pregnancy Outcomes in Tanzania: Secondary Analysis of the 2015-2016 Tanzania Demographic and Health Survey Data

Authors: Luisa Windhagen, Atsumi Hirose, Alex Bottle

Abstract:

Global urbanization has resulted in the expansion of slums, leaving over 10 million Tanzanians in urban poverty and at risk of poor health. Whilst rural residence has historically been associated with an increased risk of adverse pregnancy outcomes, recent studies found higher perinatal mortality rates in urban Tanzania. This study aims to understand to what extent slum neighborhoods may account for the spatial disparities seen in Tanzania. We generated a slum indicator based on UN-HABITAT criteria to identify slum clusters within the 2015-2016 Tanzania Demographic and Health Survey. Descriptive statistics, disaggregated by urban slum, urban non-slum, and rural areas, were produced. Simple and multivariable logistic regression examined the association between cluster residence type and neonatal mortality and stillbirth. For neonatal mortality, we additionally built a multilevel logistic regression model, adjusting for confounding and clustering. The neonatal mortality rate was highest in slums (38.3 deaths per 1000 live births); the stillbirth rate was three times higher in slums (32.4 deaths per 1000 births) than in urban non-slums. Neonatal death was more likely to occur in slums than in urban non-slums (aOR=2.15, 95% CI=1.02-4.56) and rural areas (aOR=1.78, 95% CI=1.15-2.77). Odds of stillbirth were over five times higher among rural than urban non-slum residents (aOR=5.25, 95% CI=1.31-20.96). The results suggest that slums contribute to the urban disadvantage in Tanzanian neonatal health. Higher neonatal mortality in slums may be attributable to lack of education, lower socioeconomic status, poor healthcare access, and environmental factors, including indoor and outdoor air pollution and unsanitary conditions from inadequate housing. However, further research is required to ascertain specific causalities as well as significant associations between residence type and other pregnancy outcomes. 
The high neonatal mortality, stillbirth, and slum formation rates in Tanzania signify that considerable change is necessary to achieve international goals for health and human settlements. Disparities in access to adequate housing, safe water and sanitation, high-standard antenatal, intrapartum, and neonatal care, and maternal education urgently need to be addressed. This study highlights the spatial shift in neonatal mortality from rural settings to urban informal settlements in Tanzania. Importantly, other low- and middle-income countries experiencing overwhelming urbanization and slum expansion may also be at risk of a reversing trend in residential neonatal health differences.
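The adjusted odds ratios quoted above come from the authors' regression models; for readers unfamiliar with the measure, the sketch below computes a crude (unadjusted) odds ratio with a Woolf-type 95% confidence interval from a hypothetical 2x2 table. The counts are invented for illustration and are not taken from the DHS data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical example: 20/500 deaths in slum clusters vs 10/500 elsewhere
or_, lo, hi = odds_ratio_ci(20, 480, 10, 490)
```

If the interval excludes 1, the association is statistically significant at the 5% level; the paper's multilevel model additionally adjusts for confounders and for the clustering of births within survey clusters.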

Keywords: urban health, slum residence, neonatal mortality, stillbirth, global urbanisation

Procedia PDF Downloads 64
622 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. 
The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
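The additivity that makes each window-doubling pass cheap can be sketched as follows. This is a 1-D illustration of the idea only (the sufficient statistics of a merged window are the sums of its children's statistics, so the regression never revisits raw points), not the authors' 2-D implementation; all names are ours.

```python
# Sketch of the additive sufficient-statistics idea behind window-based
# regression (1-D elevation profile for brevity; the DEM case adds a
# second coordinate, but the additivity argument is identical).
import numpy as np

def stats(x, z):
    """Regression sufficient statistics for elevation z over position x."""
    return np.array([len(x), x.sum(), z.sum(),
                     (x * x).sum(), (x * z).sum(), (z * z).sum()])

def slope_and_var(s):
    n, sx, sz, sxx, sxz, szz = s
    b = (n * sxz - sx * sz) / (n * sxx - sx * sx)   # least-squares slope
    a = (sz - b * sx) / n
    rss = szz - a * sz - b * sxz                    # residual sum of squares
    return b, rss / n

x = np.arange(8.0)
z = 0.5 * x + np.array([0.1, -0.1, 0.0, 0.2, -0.2, 0.1, 0.0, -0.1])
# Aggregating two half-window statistics equals one pass over all points:
merged = stats(x[:4], z[:4]) + stats(x[4:], z[4:])
```

Because merging is a plain vector addition, doubling the window in each direction takes a single pass over the previous level's statistics, giving the logarithmic scaling in window size described above.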

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 129
621 The Effect of Nanocomposite on the Release of Imipenem on Bacteria Causing Infections with Implants

Authors: Mohammad Hossein Pazandeh, Monir Doudi, Sona Rostampour Yasouri

Abstract:

The prudent administration of antibiotics aims to avoid side effects and the development of microbial resistance. Methods for the local administration of antibiotics are especially needed for localized infections caused by bacterial colonization of medical devices or implant materials. Among the wide variety of materials used as drug delivery systems, bioactive glasses (BG) are widely used in regenerative medicine. This work comprised, firstly, the production of a bioactive glass/nickel oxide/tin dioxide nanocomposite using the sol-gel method; secondly, the controlled release of imipenem from the double metal oxide/bioactive glass nanocomposite; and finally, the investigation of the antibacterial properties of the nanocomposite against a number of implant-related infectious agents. In this study, BG/SnO2 and BG/NiO single systems with different metal oxide contents, as well as BG/NiO/SnO2 nanocomposites, were synthesized by sol-gel as drug carriers for tetracycline and imipenem. These two antibiotics are widely used for osteomyelitis because of their favorable penetration and bactericidal effect on all the probable osteomyelitis pathogens. The antibacterial activity of the synthesized samples was evaluated against Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa as model bacteria using the disk diffusion method. Modifying the BG with metal oxides imparted antibacterial properties to the metal oxide-containing samples, with the highest efficiency observed for the nanocomposite. The bioactivity of all samples was assessed by determining the surface morphology and the structural and compositional changes using scanning electron microscopy (SEM), FTIR, and X-ray diffraction (XRD) spectroscopy, respectively, after soaking in simulated body fluid (SBF) for 28 days. Hydroxyapatite formation was clearly observed as a measure of bioactivity. The BG nanocomposite sample was then loaded with the two antibiotics separately, and their release profiles were studied. 
The BG nanocomposite sample showed slow and continuous drug release over a period of 72 hours, which is desirable for a drug delivery system. The antibiotic-loaded nanocomposite sample retained its antibacterial properties and showed an inactivating effect against the bacteria under test. The modified bioactive glass, which forms hydroxyapatite, releases the drug in a controlled manner, and is effective against bacterial infections, can be introduced as a scaffold for bone implants for biomedical applications after clinical trials. Considering the formation of biofilms by infectious bacteria after adhesion to the surfaces of implants and medical devices, as well as the complications of traditional methods, solving the problems caused by these microorganisms in the technical and biomedical industries was one of the motivations for this research.

Keywords: antibacterial, bioglass, drug delivery system, sol-gel

Procedia PDF Downloads 62
620 Enterprises and Social Impact: A Review of the Changing Landscape

Authors: Suzhou Wei, Isobel Cunningham, Laura Bradley McCauley

Abstract:

Social enterprises play a significant role in resolving social issues in the modern world. In contrast to traditional commercial businesses, their main goal is to address social concerns rather than primarily maximize profits. This phenomenon in entrepreneurship is presenting new opportunities and different operating models and resulting in modified approaches to measuring success beyond traditional market share and margins. This paper explores social enterprises to clarify their roles and approaches in addressing grand challenges related to social issues. In doing so, it analyses the key differences between traditional businesses and social enterprises, such as their operating model and value proposition, to understand their contributions to society. The research presented in this paper responds to calls for research to better understand social enterprises and entrepreneurship, but also to explore the dynamics between profit-driven and socially-oriented entities to deliver mutual benefits. This paper, which examines the features of commercial businesses, suggests that their primary focus is profit generation, economic growth and innovation. Beyond the chase of profit, it highlights the critical role of innovation typical of successful businesses. This, in turn, promotes economic growth, creates job opportunities and makes a major positive impact on people's lives. In contrast, the motivations upon which social enterprises are founded relate to a commitment to address social problems rather than maximizing profits. These entities combine entrepreneurial principles with commitments to deliver social impact and grand challenge changes, creating a distinctive category within the broader enterprise and entrepreneurship landscape. The motivations for establishing a social enterprise are diverse, encompassing personal fulfillment, a genuine desire to contribute to society and a focus on achieving impactful accomplishments. 
The paper also discusses the collaboration between commercial businesses and social enterprises, which is viewed as a strategic approach to addressing grand challenges more comprehensively and effectively. Finally, this paper highlights the evolving and diverse expectations placed on all businesses to actively contribute to society beyond profit-making. We conclude that there is an unrealized and underdeveloped potential for collaboration between commercial businesses and social enterprises to produce greater and long-lasting social impacts. Overall, the aim of this research is to encourage more investigation of the complex relationship between economic and social objectives and contributions through a better understanding of how and why businesses might address social issues. Ultimately, the paper positions itself as a tool for understanding the evolving landscape of business engagement with social issues and advocates for collaborative efforts to achieve sustainable and impactful outcomes.

Keywords: business, social enterprises, collaboration, social issues, motivations

Procedia PDF Downloads 53
619 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

A plethora of methods exists in the scientific literature for tackling the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. 
A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, of which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score and Kolmogorov-Smirnov statistic, respectively. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
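The paper builds C# LINQ expression trees; the sketch below is a language-agnostic Python illustration of the same representation (operator nodes, constant/variable leaves), its evaluation against a client record, and the pre-order flattening on which the mutation and crossover operators act. The node encoding and the example formula are our own invented stand-ins.

```python
# Illustrative expression-tree scoring sketch (not the paper's C# code).
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def evaluate(node, record):
    """node is ('op', left, right), ('var', name), or ('const', value)."""
    kind = node[0]
    if kind == 'var':
        return record[node[1]]
    if kind == 'const':
        return node[1]
    return OPS[kind](evaluate(node[1], record), evaluate(node[2], record))

def flatten(node, out=None):
    """Pre-order traversal: the flat view the genetic operators work on."""
    if out is None:
        out = []
    out.append(node)
    if node[0] in OPS:
        flatten(node[1], out)
        flatten(node[2], out)
    return out

def score(tree, record, threshold=0.0):
    """Binary GOOD/BAD decision from the evolved formula."""
    return 'GOOD' if evaluate(tree, record) >= threshold else 'BAD'

# Invented example formula: age * 0.1 - loan_duration
tree = ('-', ('*', ('var', 'age'), ('const', 0.1)), ('var', 'loan_duration'))
```

Because every variable that survives evolution appears as a leaf of the final tree, reading off the leaves immediately reveals which client properties the formula ignores, which is the dimensionality-reduction side effect described above.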

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 120
618 Development and Experimental Evaluation of a Semiactive Friction Damper

Authors: Juan S. Mantilla, Peter Thomson

Abstract:

Seismic events may result in discomfort for building occupants, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing construction costs and design forces. Structural control systems arise as an alternative to reduce these dynamic responses. Passive friction dampers are commonly used control systems in buildings; they add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage that they are optimal only for a range of sliding forces, and outside that range their efficiency decreases. This implies that each passive friction damper is designed, built, and commercialized for a specific sliding/clamping force at which the damper shifts from a locked state to a slip state, where it dissipates energy through friction. The risk of a variation in the efficiency of the device with sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In this case, the expected forces in the building can change and thus considerably reduce the efficiency of a damper designed for a specific sliding force. It is also evident that when a seismic event occurs, the forces on each floor vary over time, which means that the damper's efficiency is not optimal at all times. Semiactive friction devices adapt their sliding force to maintain motion in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost semiactive variable friction damper (SAVFD), in reduced scale, to reduce vibrations of structures subject to earthquakes. 
The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor, which is controlled by (3) an Arduino board that acquires accelerations or displacements from (4) sensors on the floors immediately above and below, and (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed. The SAVFD and the structure were experimentally characterized, and a numerical model of both was developed based on this dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally on a shaking table using earthquake and frequency-chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations compared to the uncontrolled structure.
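The abstract does not specify the decentralized control law, so purely as an illustration of how a semiactive friction device can modulate its clamping force from local inter-storey measurements, here is one classic on-off rule (an assumed textbook law, not necessarily the authors'): clamp hard when drift and drift velocity share a sign (energy flowing into the device) and release otherwise, so the device stays in the energy-dissipating slip phase.

```python
# Illustrative on-off semiactive friction law; all numbers are arbitrary.
def friction_force(drift, drift_vel, mu=0.4, n_max=100.0, n_min=5.0):
    """Friction force [N] opposing the relative (inter-storey) velocity.

    drift      : relative displacement between adjacent floors [m]
    drift_vel  : relative velocity between adjacent floors [m/s]
    mu         : friction coefficient of the brake pads
    n_max/n_min: clamping normal forces commanded to the servomotor [N]
    """
    clamp = n_max if drift * drift_vel > 0 else n_min
    if drift_vel == 0:
        return 0.0                      # no sliding, no friction force
    return -mu * clamp * (1 if drift_vel > 0 else -1)
```

Each damper needs only its own two sensors, which is what makes the scheme decentralized; the servomotor-driven hydraulic brake then realizes the commanded clamping force.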

Keywords: earthquake response, friction damper, semiactive control, shaking table

Procedia PDF Downloads 378
617 Investigation of Permeate Flux Through Direct Contact Membrane Distillation Module by Inserting S-Ribs Carbon-Fiber Promoters with Ascending and Descending Hydraulic Diameters

Authors: Chii-Dong Ho, Jian-Har Chen

Abstract:

The decline in permeate flux across membrane modules is attributed to the increase in temperature polarization resistance in flat-plate direct contact membrane distillation (DCMD) modules for pure water production. Researchers have found that this effect can be diminished by embedding turbulence promoters, which augment turbulence intensity at the cost of increased power consumption, thereby improving vapor permeate flux. The device performance of DCMD modules was further enhanced by shrinking the hydraulic diameters of the inserted S-ribs carbon-fiber promoters, while accounting for the increment in energy consumption. A mass-balance formulation, based on the resistance-in-series model with energy conservation in one-dimensional governing equations, was developed theoretically and tested experimentally on a flat-plate polytetrafluoroethylene/polypropylene (PTFE/PP) membrane module to predict permeate flux and temperature distributions. The ratio of permeate flux enhancement to energy consumption increment, which serves as an assessment of economic and technical feasibility, was calculated to determine suitable design parameters for DCMD operations with inserted S-ribs carbon-fiber turbulence promoters. An economic analysis was also performed, weighing permeate flux improvement against energy consumption increment for modules with promoter-filled channels in different array configurations and with various hydraulic diameters of turbulence promoters. Results showed that the ratio of permeate flux improvement to energy consumption increment in descending hydraulic-diameter modules is higher than in uniform hydraulic-diameter modules. The fabrication details of the DCMD module implementing the S-ribs carbon-fiber filaments and the schematic configuration of the flat-plate DCMD experimental setup, with acrylic plates as external walls, are presented in this study. 
The S-ribs carbon fibers act as turbulence promoters in the hot saline feed stream, which was prepared by adding inorganic salt (NaCl) to distilled water. Theoretical predictions and experimental results showed considerable permeate flux enhancement in this new design of the DCMD module with inserted S-ribs carbon-fiber promoters. Additionally, the Nusselt number for the membrane module with inserted S-ribs carbon-fiber promoters was generalized into a simplified expression to predict the heat transfer coefficient and permeate flux.
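As an illustration of the resistance-in-series idea and of temperature polarization, the sketch below iterates the membrane-surface temperatures and the permeate flux for a simple film model that neglects membrane conduction. The membrane coefficient, film heat-transfer coefficient, and latent heat are generic assumed values, not the study's parameters; the Antoine equation stands in for the water vapor-pressure model.

```python
import math

def p_sat(tc):
    # Antoine equation for water; returns saturation pressure in Pa.
    return 133.322 * 10 ** (8.07131 - 1730.63 / (233.426 + tc))

def dcmd_flux(th, tc, cm=3e-7, h=4000.0, dh_v=2.33e6, iters=50):
    """Fixed-point iteration of the film heat balance:
    J = cm * (p_sat(T1) - p_sat(T2)), with the membrane-surface
    temperatures T1, T2 shifted from the bulk values th, tc by the
    latent-heat flux through each boundary film (coefficient h)."""
    t1, t2 = th, tc
    for _ in range(iters):
        j = cm * (p_sat(t1) - p_sat(t2))    # permeate flux [kg m^-2 s^-1]
        t1 = th - j * dh_v / h              # hot-side film cooling
        t2 = tc + j * dh_v / h              # cold-side film heating
    return j, t1, t2
```

The converged surface temperatures sit between the bulk values (T2 > tc and T1 < th), which is exactly the temperature polarization that reduces the driving vapor-pressure difference; raising h, as the turbulence promoters do, pushes T1 and T2 back toward the bulk temperatures and recovers flux.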

Keywords: permeate flux, Nusselt number, DCMD module, temperature polarization, hydraulic diameters

Procedia PDF Downloads 12
616 Destruction of Colon Cells by Nanocontainers of Ferromagnetic

Authors: Lukasz Szymanski, Zbigniew Kolacinski, Grzegorz Raniszewski, Slawomir Wiak, Lukasz Pietrzak, Dariusz Koza, Karolina Przybylowska-Sygut, Ireneusz Majsterek, Zbigniew Kaminski, Justyna Fraczyk, Malgorzata Walczak, Beata Kolasinska, Adam Bednarek, Joanna Konka

Abstract:

The aim of this work is to investigate the influence of an electromagnetic field in the radio frequency range on nanoparticles designed for cancer therapy. In this article, the development and demonstration of the method and a model device for the hyperthermic selective destruction of cancer cells are presented. The method is based on the synthesis and functionalization of carbon nanotubes serving as nanocontainers for ferromagnetic material. The methodology for producing carbon ferromagnetic nanocontainers (FNCs) includes: the synthesis of carbon nanotubes, chemical and physical characterization, increasing the content of ferromagnetic material, and biochemical functionalization involving the attachment of key addressing molecules. The ferromagnetic nanocontainers were synthesized in CVD and microwave plasma systems. Biochemical functionalization of the ferromagnetic nanocontainers is necessary to increase their selective binding to receptors present on the surface of tumour cells. A multi-step modification procedure was used to attach folic acid to the surface of the ferromagnetic nanocontainers, since pristine ferromagnetic carbon nanotubes are not suitable for application in medicine and biotechnology, whereas appropriate functionalization yields materials useful in medicine. The final product contains folic acid on the surface of the FNCs. Folic acid is a ligand of folate receptor α, which is overexpressed on the surface of epithelial tumour cells; it is therefore expected that the folic acid will be recognized and selectively bound by receptors present on the surface of tumour cells. In our research, FNCs were covalently functionalized in a multi-step procedure. The ferromagnetic carbon nanotubes were first oxidized using different oxidative agents; for this purpose, strong acids such as HNO3, or a mixture of HNO3 and H2SO4, were used. 
Reactive carbonyl and carboxyl groups were formed at the open ends and at defects on the sidewalls of the FNCs. These groups allow further modification of the FNCs, such as amidation and the introduction of appropriate linkers that separate the solid surface of the FNCs from the ligand (folic acid). In our studies, amino acids and peptides were applied as ligands. The last step of the chemical modification was a condensation reaction with folic acid; in all reactions, derivatives of 1,3,5-triazine were used as coupling reagents. The first trials with the hyperthermia device, an RF generator, have been performed. The frequency of the RF generator was in the ranges from 10 to 14 MHz and from 265 to 621 kHz. The functionalized nanoparticles obtained made it possible to reach the denaturation temperature of tumor cells at the given frequencies.

Keywords: cancer colon cells, carbon nanotubes, hyperthermia, ligands

Procedia PDF Downloads 313
615 Work-Family Conflict and Family and Job Resources among Women: The Role of Negotiation

Authors: Noa Nelson, Meitar Moshe, Dana Cohen

Abstract:

Work-family conflict (WFC) is a significant source of stress for contemporary employees, with research indicating its heightened severity for women. The conservation of resources theory argues that individuals experience stress when their resources fall short of demands, and attempt to reach balance by obtaining resources. Presumably, then, to achieve work-family balance, women would need to negotiate for resources such as spouse support, employer support, and work flexibility. The current research tested the hypotheses that competent negotiation at home and at work is associated with increased family and job resources and with decreased WFC, as well as with higher work, marital, and life satisfaction. In the first study, 113 employed mothers, married or cohabiting, reported to what extent they conducted satisfactory negotiation with their spouse over the division of housework, and their actual housework load compared to their spouse's. They answered a WFC questionnaire measuring how much work interferes with family (WIF) and how much family interferes with work (FIW), and finally, measures of satisfaction. In the second study, 94 employed mothers, married or cohabiting, reported to what extent they conducted satisfactory negotiation with their boss over balancing work demands with family needs. They reported the levels of three job resources: flexibility, control, and a family-friendly organizational culture. Finally, they answered the same WFC and satisfaction measures as in study 1. Statistical analyses (t-tests, correlations, and hierarchical linear regressions) showed that in both studies, women reported higher WIF than FIW. Negotiations were associated with increased resources: support from spouse, work flexibility and control, and a family-friendly culture; negotiation with spouse was also associated with the satisfaction measures. However, negotiations and resources (except family-friendly culture) were not associated with reduced conflict. 
The studies demonstrate the role of negotiation in obtaining family and job resources. Causation cannot be determined, but employed mothers who enjoyed more support (both at home and at work), flexibility, and control were more likely to engage in active interactions to increase them. This finding has theoretical and practical implications, especially in view of research on female avoidance of negotiation. It is intriguing that negotiation and resources generally were not associated with reduced WFC. This finding might reflect the severity of the conflict, especially of work interfering with family, which characterizes many contemporary jobs. It might also suggest that employed mothers have high expectations of themselves and, even under supportive circumstances, experience the challenge of balancing two significant and demanding roles. The research contributes to the fields of negotiation, gender, and work-life balance. It calls for further studies to test its model in additional populations and to validate the role employees have in actively negotiating for the balance that they need. It also calls for further research to understand the contributions of job and family resources to reducing work-family conflict, and the circumstances under which they contribute.
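The hierarchical regression step the abstract describes (entering negotiation after control variables and inspecting the R² increment) can be sketched as follows; the data below are synthetic and the predictor names hypothetical, not the study's actual variables or results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 113  # sample size of study 1; all values below are synthetic

# Step 1 predictors (hypothetical controls, e.g. age and weekly work hours);
# step 2 adds satisfaction with spousal negotiation.
controls = rng.normal(size=(n, 2))
negotiation = rng.normal(size=n)
outcome = (0.2 * controls[:, 0] - 0.1 * controls[:, 1]
           + 0.4 * negotiation + rng.normal(size=n))

def r_squared(X, y):
    """Fit OLS with an intercept and return R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared(controls, outcome)
r2_step2 = r_squared(np.column_stack([controls, negotiation]), outcome)
print(f"Step 1 R^2 = {r2_step1:.3f}, Step 2 R^2 = {r2_step2:.3f}, "
      f"delta R^2 = {r2_step2 - r2_step1:.3f}")
```

For nested OLS models on the same data, the step 2 R² can never fall below step 1, so the reported quantity of interest is the size and significance of the increment.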

Keywords: work-family conflict, work-life balance, negotiation, gender, job resources, family resources

Procedia PDF Downloads 228
614 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans

Authors: Jian Wu

Abstract:

For several years, some French companies have integrated CSR (corporate social responsibility) criteria into their variable remuneration plans to 'restore a good working atmosphere' and 'preserve the natural environment'. These CSR criteria are based on concerns about environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was published jointly by ORSE (the French Observatory on CSR) and PricewaterhouseCoopers. Facing this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach in our study. First, we examine the debate between the 'orthodox' point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain 'invisible hand' which helps to resolve all problems. When companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility that firms should have is profit-seeking while respecting the minimum legal requirements. However, the CSR school considers that, as long as the economic system is not perfect, there is no 'invisible hand' which can arrange everything in good order. This means that we cannot count on any 'divine force' to make corporations responsible towards society. Something more needs to be done in addition to firms' economic and legal obligations. Then, we rely on financial theories and empirical evidence to examine the soundness of the foundations of CSR. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, the environment, and society. Social contract theory tells us that there are tacit 'social contracts' between a company and society itself.
A firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or a bad reputation. Legitimacy theory tells us that corporations have to 'legitimize' their actions towards society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between the CSR performance and the financial performance of a firm. We note that, due to difficulties in defining these performances, this relationship remains ambiguous despite the numerous research works carried out in the field. Finally, we are curious to know whether the integration of CSR criteria in variable remuneration plans (practiced so far in big companies) should be extended to other ones. After investigation, we note that two groups of firms have the greatest need. The first involves industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies. The second involves companies which are under pressure in terms of returns as they deal with international competition.

Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory

Procedia PDF Downloads 187
613 High Efficiency Double-Band Printed Rectenna Model for Energy Harvesting

Authors: Rakelane A. Mendes, Sandro T. M. Goncalves, Raphaella L. R. Silva

Abstract:

The concepts of energy harvesting and wireless energy transfer have been widely discussed in recent times. There are several ways to build autonomous systems that collect ambient energy, such as solar, vibratory, thermal, electromagnetic, and radiofrequency (RF), among others. In the case of RF, it is possible to collect up to 100 μW/cm². To collect and/or transfer energy in RF systems, a device called a rectenna is used, which is defined as the junction of an antenna and a rectifier circuit. The rectenna presented in this work is resonant at the frequencies of 1.8 GHz and 2.45 GHz. Frequencies in the 1.8 GHz band are part of the GSM/LTE band. GSM (Global System for Mobile Communications) is a frequency band for mobile telephony, also called second-generation (2G) mobile networks; it standardized mobile telephony worldwide and was originally developed for voice traffic. LTE (Long Term Evolution), or fourth generation (4G), emerged to meet the demand for wireless access to services such as Internet access, online games, VoIP, and video conferencing. The 2.45 GHz frequency is part of the ISM (Industrial, Scientific and Medical) band; this band is internationally reserved for industrial, scientific, and medical use with no need for licensing, and its only restrictions relate to maximum transmitted power and bandwidth, which must be kept within certain limits (in Brazil the band is 2.4-2.4835 GHz). The rectenna presented in this work was designed to present efficiency above 50% for an input power of -15 dBm. It is known that for wireless energy-harvesting systems the signal power is very low and varies greatly; for this reason this ultra-low input power was chosen. The rectenna was built on the low-cost FR4 (Flame Retardant 4) substrate; the selected antenna is a microstrip antenna consisting of a meandered dipole, which was optimized using the CST Studio software.
This antenna has high efficiency, high gain, and high directivity. Gain describes how efficiently an antenna captures the signals transmitted by another antenna and/or station; directivity describes an antenna's ability to capture energy preferentially in a certain direction. The rectifier circuit has a series topology and was optimized using Keysight's ADS software. The rectifier circuit is the most complex part of the rectenna, since it includes the diode, which is a non-linear component. The chosen diode is the SMS7630 Schottky diode, which presents a low barrier voltage (between 135-240 mV) and a wider band compared to other diode types; these attributes make it well suited to this type of application. The rectifier circuit also uses an inductor and a capacitor, which form part of its input and output filters. The inductor reduces the dispersion effect on the efficiency of the rectifier circuit. The capacitor removes the AC component of the rectified signal, smoothing its ripple.
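As a rough sanity check on the power budget, the -15 dBm input and the above-50% efficiency target quoted above translate into the following DC output (a minimal sketch; the 0.50 efficiency figure is the design target from the text, not a measured value):

```python
def dbm_to_watts(p_dbm):
    """Convert power in dBm to watts: P[W] = 1 mW * 10**(P[dBm] / 10)."""
    return 1e-3 * 10 ** (p_dbm / 10)

p_rf = dbm_to_watts(-15)  # RF input power: -15 dBm is roughly 31.6 uW
efficiency = 0.50         # RF-to-DC conversion efficiency target

p_dc = efficiency * p_rf  # DC power delivered to the load at that efficiency
print(f"RF input: {p_rf * 1e6:.1f} uW -> DC output: {p_dc * 1e6:.1f} uW")
```

At such microwatt levels, even small rectifier losses matter, which is why the diode's barrier voltage dominates the design choice.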

Keywords: dipole antenna, double-band, high efficiency, rectenna

Procedia PDF Downloads 125
612 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities

Authors: Shaurya Chauhan, Sagar Gupta

Abstract:

Prominent urbanizing centres across the globe, like Delhi, Dhaka, or Manila, have exhibited that development often faces a challenge in bridging the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of an ever-diversifying population. When this exclusion is intertwined with rapid urbanization and diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automatic responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application through its prototypical nature and an inclusive approach with mediation between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a three-tiered model: user needs, design ideology, adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter, OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region's cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature.
This framework, based on a vision-feedback-execution loop, is used for hypothetical development at the five prevalent scales in design: master planning, urban design, architecture, tectonics, and modularity, in a chronological manner. At each of these scales, the possible approaches and avenues for open-sourcing are identified, validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and the Five-Point Mental Map by Kevin Lynch, among others, are deeply rooted in the research process. In addition to the five-part OSPDF, a two-part subsidiary process is suggested after each cycle of application, for continued appraisal and refinement of the framework and urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.

Keywords: open source, public participation, urbanization, urban development

Procedia PDF Downloads 150
611 The Cultural Shift in Pre-owned Fashion as Sustainable Consumerism in Vietnam

Authors: Lam Hong Lan

Abstract:

The textile industry is said to be the second-largest polluter, responsible for 92 million tonnes of waste annually. There is an urgent need to practice the circular economy to increase use and reuse around the world. By its nature, the pre-owned fashion business is part of the circular economy, as it helps to eliminate waste and keep products in circulation. Second-hand clothes and accessories used to be associated with a 'cheap image' that carried 'old energy' in Vietnam. This perception has shifted, especially amongst the younger generation. Vietnamese consumers are spending more on products and services that increase self-esteem, and they are moving away from a collectivist social identity towards a 'me, not we' outlook as they look for ways to express their individual identity. Pre-owned fashion is one of their solutions, as it offers value for money, can create a unique personal style for the wearer, and links with sustainability. The design of this study is based on second-hand shopping motivation theory: a semi-structured online survey was conducted with 100 consumers from one pre-owned clothing community and one pre-owned e-commerce site in Vietnam. The findings show that, in contrast with older Vietnamese consumers (55+ yo) who, in a previous study, generally associated pre-owned fashion with a 'low-cost', 'cheap image' that carried 'old energy', young customers (20-30 yo) actively promoted their pre-owned fashion items to the public via outlets' social platforms and their own social media. This cultural shift stems from global and local discourse around sustainable fashion and from the growth of digital platforms in the pre-owned fashion business over the last five years, which has supported wider interest in pre-owned fashion in Vietnam. It can be summarised in three areas: (1) global and local celebrity influencers. A number of celebrities have been photographed wearing vintage items in music videos, photoshoots, or at red carpet events.
(2) E-commerce and intermediaries. International e-commerce sites (e.g., Vinted, TheRealReal) and local apps (e.g., Re.Loved) can influence attitudes and behaviors towards pre-owned consumption. (3) Eco-awareness. The increased online coverage of climate change and environmental pollution has encouraged customers to adopt a more eco-friendly approach to their wardrobes. While sustainable biomaterials and designs are still finding their way into mainstream fashion, sustainable consumerism via pre-owned fashion seems to be an immediate way to lengthen the clothes' lifecycle. This study has found that young consumers primarily seek value for money and/or a unique personal style from pre-owned/vintage fashion, while using these purchases to promote their own 'eco-awareness' via their social media networks. This is a good indication for fashion designers to keep in mind in their design process, and for fashion enterprises when choosing a business model that does not overproduce fashion items.

Keywords: cultural shift, pre-owned fashion, sustainable consumption, sustainable fashion

Procedia PDF Downloads 84
610 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India

Authors: Vinu Elias Jacob, Manoj Kumar Kini

Abstract:

Disasters of all types are occurring more frequently and are becoming more costly than ever due to various man-made factors, including climate change. A better utilisation of the concepts of governance and management within disaster risk reduction is inevitable and of utmost importance. There is a need to explore the role of pre- and post-disaster public policies, the role of urban planning/design in shaping the opportunities of households, individuals, and, collectively, settlements for achieving recovery, and the governance strategies that can better support the integration of disaster risk reduction and management. The main aim is thereby to build the resilience of individuals and communities and, thus, of the states too. Resilience is a term usually linked to the fields of disaster management and mitigation, but it has today become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts through improved mitigation, preparedness, response, and recovery. The growing population of the world has resulted in an inflow and use of resources that creates pressure on natural systems and inequity in the distribution of resources. This makes cities vulnerable to multiple attacks by both natural and man-made disasters. Each urban area needs elaborate studies and study-based strategies to proceed in the discussed direction. Cochin, in Kerala, is the fastest- and largest-growing city in the state, with a population of more than 26 lakh. The main concern addressed in this paper is making cities resilient by designing a framework of strategies, based on urban design principles, for an immediate response system, focussing especially on the city of Cochin, Kerala, India. The paper discusses understanding the spatial transformations due to disasters and the role of spatial planning in the context of significant disasters.
The paper also aims at developing a model, taking into consideration various factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density, and ecology, that can be implemented in any city in any context. Guidelines are made, using the tools of urban design, for the smooth evacuation of people through hassle-free transport networks, protecting vulnerable areas in the city, providing adequate open spaces for shelters and gatherings, making basic amenities available to the affected population within reachable distance, etc. Strategies at the city level and neighbourhood level have been developed with inferences from the vulnerability analysis and case studies.

Keywords: disaster management, resilience, spatial planning, spatial transformations

Procedia PDF Downloads 297
609 Predictors of Sexually Transmitted Infection of Korean Adolescent Females: Analysis of Pooled Data from Korean Nationwide Survey

Authors: Jaeyoung Lee, Minji Je

Abstract:

Objectives: Adolescents are curious about sex, but sexual experience before adulthood carries a high risk of sexually transmitted infection. It is therefore very important to prevent sexually transmitted infections so that adolescents can grow up healthy. Adolescent females, especially, show sexual behavior distinct from that of male adolescents, and protecting their reproductive health is even more important since it is directly related to the childbirth of the next generation. This study thus investigated the predictors of sexually transmitted infection in adolescent females with sexual experience, based on the National Health Statistics in Korea. Methods: This study was conducted based on the National Health Statistics in Korea. The 11th Korea Youth Behavior Web-based Survey in 2016 was conducted as an anonymous self-report survey to investigate the health behavior of adolescents. The target population was middle and high school students nationwide as of April 2016, and 65,528 students from a total of 800 middle and high schools participated. The present study analyzed 537 female high school students (Grades 10-12) among them. The collected data were analyzed under a complex sampling design using SPSS Statistics 22. The strata, clusters, weights, and finite population correction provided by the Korea Centers for Disease Control & Prevention (KCDC) were reflected to constitute the complex sample design files used in the statistical analysis. The analysis methods included the Rao-Scott chi-square test, a complex samples general linear model, and complex samples multiple logistic regression analysis. Results: Of the 537 female adolescents, 11.9% (53 adolescents) had experienced a sexually transmitted infection. The predictors of sexually transmitted infection were 'age at first intercourse' and 'sexual intercourse after drinking'.
Compared to subjects whose first sexual experience occurred at elementary school age or earlier, the odds of sexually transmitted infection were 0.31 times as high (p=.006, 95% CI=0.13-0.71) when the first experience occurred in middle school, and 0.13 times as high (p<.001, 95% CI=0.05-0.32) when it occurred in high school. In addition, the odds of sexually transmitted infection were 3.54 times higher (p<.001, 95% CI=1.76-7.14) for those with experience of sexual intercourse after drinking alcohol, compared to those without such experience. Conclusions: Female adolescents had a high probability of sexually transmitted infection if their age at first sexual experience was low. Therefore, female adolescents who start sexual experience earlier should receive practical sex education appropriate to their developmental stage. In addition, since sexually transmitted infection increases when they have sexual relations after drinking alcohol, interventions to prevent alcohol use, or sex education addressing it, are required. When health education interventions are conducted to promote the health of female adolescents in the future, it is necessary to reflect the results of this study.
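The reported odds ratios can be illustrated with a small sketch. The 2x2 counts below are hypothetical, chosen only to yield an odds ratio of a similar magnitude to the one reported; they are not the survey's cell counts, and the actual analysis used complex-samples logistic regression rather than this crude unweighted Woolf method:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% confidence interval (Woolf log method) for a
    2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts (STI by sex-after-drinking status), for illustration only.
or_, lo, hi = odds_ratio_ci(30, 120, 23, 364)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A survey-weighted analysis would additionally incorporate the strata, clusters, and weights of the complex sampling design, which shifts both the point estimate and the interval.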

Keywords: adolescent, coitus, female, sexually transmitted diseases

Procedia PDF Downloads 192
608 Evaluation of Cyclic Steam Injection in Multi-Layered Heterogeneous Reservoir

Authors: Worawanna Panyakotkaew, Falan Srisuriyachai

Abstract:

Cyclic steam injection (CSI) is a thermal recovery technique performed by periodically injecting heated steam into a heavy oil reservoir. Oil viscosity is substantially reduced by means of the heat transferred from the steam; together with gas pressurization, oil recovery is greatly improved. Nevertheless, predicting the effectiveness of the process is difficult when the reservoir contains a degree of heterogeneity. Therefore, heterogeneity together with the reservoir properties of interest must be evaluated prior to field implementation. In this study, a thermal reservoir simulation program is utilized. The reservoir model is first constructed as multi-layered with a coarsening-upward sequence: the highest permeability is located in the top layer, with permeability values descending in the lower layers. Steam is injected from two wells located diagonally in a quarter five-spot pattern. Heavy oil is produced by adjusting operating parameters, including soaking period and steam quality. After selecting the best conditions for both parameters, yielding the highest oil recovery, the effects of the degree of heterogeneity (represented by the Lorenz coefficient), vertical permeability, and permeability sequence are evaluated. Surprisingly, simulation results show that reservoir heterogeneity benefits the CSI technique. Increasing reservoir heterogeneity results in a more uneven permeability distribution. High permeability contrast results in steam intruding into the upper layers. Once the temperature cools down during the back-flow period, condensed water percolates downward, resulting in high oil saturation in the top layers. Gas saturation appears at the top after a while, causing better propagation of steam in the following cycle due to the high compressibility of gas. A large steam chamber therefore covers most of the area in the upper zone. Oil recovery reaches approximately 60%, about 20% higher than in the homogeneous case. Vertical permeability also benefits CSI.
Expansion of the steam chamber from the upper to the lower zone occurs within a shorter time. For a fining-upward permeability sequence, where the permeability values are reversed from the previous case, steam does not override into the top layers due to their low permeability. Propagation of the steam chamber occurs in the middle of the reservoir, where permeability is high enough. The rate of oil recovery is slower compared to the coarsening-upward case due to the lower permeability at the location where the steam chamber propagates. Even though the CSI technique produces oil quite slowly in early cycles, once the steam chamber is formed deep in the reservoir, heat is delivered to the formation quickly in later cycles. Since reservoir heterogeneity is unavoidable, a thorough understanding of its effect is required. This study shows that the CSI technique might be a suitable solution for highly heterogeneous reservoirs. The technique also shows benefits in terms of heat consumption, as steam is injected periodically.
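A minimal sketch of the Lorenz coefficient used above to quantify the degree of heterogeneity; the layer properties below are hypothetical values for a coarsening-upward stack, not the simulation's actual inputs:

```python
import numpy as np

def lorenz_coefficient(perm, poro, h):
    """Lorenz coefficient of a layered reservoir (0 = homogeneous,
    approaching 1 = extremely heterogeneous). Layers are ranked by k/phi;
    Lc = 2 * (area under the flow-storage capacity curve - 0.5)."""
    order = np.argsort(-(perm / poro))  # best flow units first
    kh = np.concatenate([[0.0], np.cumsum((perm * h)[order])]) / np.sum(perm * h)
    ph = np.concatenate([[0.0], np.cumsum((poro * h)[order])]) / np.sum(poro * h)
    area = np.sum(np.diff(ph) * (kh[1:] + kh[:-1]) / 2)  # trapezoid rule
    return 2 * (area - 0.5)

# Hypothetical coarsening-upward stack: permeability decreasing downward.
perm = np.array([500.0, 200.0, 80.0, 30.0])  # mD, top to bottom
poro = np.array([0.25, 0.22, 0.20, 0.18])    # fraction
h = np.array([5.0, 5.0, 5.0, 5.0])           # layer thickness, m
print(f"Lorenz coefficient = {lorenz_coefficient(perm, poro, h):.2f}")
```

For a perfectly uniform stack the flow-storage curve collapses onto the diagonal and the coefficient is zero, which is a convenient check on the implementation.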

Keywords: cyclic steam injection, heterogeneity, reservoir simulation, thermal recovery

Procedia PDF Downloads 459
607 The Effect of Vibration Amplitude on Tissue Temperature and Lesion Size When Using a Vibrating Cardiac Catheter

Authors: Kaihong Yu, Tetsui Yamashita, Shigeaki Shingyochi, Kazuo Matsumoto, Makoto Ohta

Abstract:

During cardiac ablation, high power delivery for deeper lesion formation is limited by overheating of the electrode-tissue interface, which can cause serious complications such as thrombus. To prevent this overheating, temperature control and open irrigation are often used. In temperature control, the radiofrequency generator is adjusted to deliver the maximum output power that maintains the electrode temperature at a target value (commonly 55°C or 60°C), which also limits the electrode-tissue interface temperature. The electrode temperature results from heating by the contacted tissue and cooling by the surrounding blood. Because cooling from blood is reduced under conditions of low blood flow, the generator needs to decrease the output power; thus, temperature control cannot deliver high power under low-flow conditions. In open irrigation, room-temperature saline is flushed through holes arranged in the electrode. The electrode-tissue interface is cooled by this environmental cooling, and high power can be delivered even under conditions of low blood flow. However, the large amount of saline infused during irrigation (approximately 1500 ml) can cause other serious complications. When open irrigation cannot be used under conditions of low blood flow, a new means of overheating prevention may be required. The authors have proposed a new electrode cooling method based on vibrating the catheter. Previous work has shown that vibration has a cooling effect on the electrode, which may result from the vibration increasing the flow velocity around the catheter, and that increasing the vibration frequency increases this cooling. However, the effect of the vibration amplitude is still unknown. Thus, the present study investigated the effect of vibration amplitude on tissue temperature and lesion size.
An agar phantom model was used as a tissue-equivalent material for measuring tissue temperature; thermocouples were inserted into the agar to measure the internal temperature. Porcine myocardium was used for lesion size measurement. A standard ablation catheter was set perpendicular to the tissue (agar or porcine myocardium) with 10 gf contact force in 37°C saline without flow. Vibration amplitudes of ±0.5, ±0.75, and ±1.0 mm at a constant frequency (31 or 63 Hz) were used. A temperature control protocol (45°C for the agar phantom, 60°C for the porcine myocardium) was used for the radiofrequency applications. Larger amplitudes produced larger lesion sizes, and higher tissue temperatures in the agar phantom were also observed with higher amplitudes. At the same frequency, a larger amplitude gives a higher vibrating speed, which further increases the flow velocity around the electrode and leads to a larger decrease in electrode temperature. To maintain the electrode at the target temperature, the generator has to increase the output power. With higher output power over the same duration, the released energy also increases; consequently, the tissue temperature rises, leading to larger lesion sizes.

Keywords: cardiac ablation, electrode cooling, lesion size, tissue temperature

Procedia PDF Downloads 372
606 High Strain Rate Behavior of Harmonic Structure Designed Pure Nickel: Mechanical Characterization, Microstructure Analysis and 3D Modelling

Authors: D. Varadaradjou, H. Kebir, J. Mespoulet, D. Tingaud, S. Bouvier, P. Deconick, K. Ameyama, G. Dirras

Abstract:

The development of new-architecture metallic alloys with controlled microstructures is one of the strategic routes for designing materials with high innovation potential and, particularly, with the improved mechanical properties required of structural materials. Indeed, unlike conventional counterparts, metallic materials with a so-called harmonic structure display a strength-ductility synergy. The latter occurs due to a unique microstructure design: coarse grains surrounded by a 3D continuous network of ultra-fine grains, known as the 'core' and 'shell,' respectively. In the present study, pure harmonic-structured (HS) nickel samples were processed via controlled mechanical milling followed by spark plasma sintering (SPS). The present work aims at characterizing the mechanical properties of HS pure nickel under room-temperature dynamic loading through Split Hopkinson Pressure Bar (SHPB) tests, and the underlying microstructure evolution. A stopper ring was used to maintain the strain at a fixed value of about 20%. Five samples (named B1 to B5) were impacted using different striker bar velocities from 14 m/s to 28 m/s, yielding strain rates in the range 4000-7000 s⁻¹. Results were considered up to a 10% deformation value, which is the deformation threshold for the constant strain rate assumption. The non-deformed (INIT, i.e., post-SPS) and post-SHPB microstructures (B1 to B5) were investigated by EBSD. It was observed that as the strain rate increases, the average grain size within the core decreases. An in-depth analysis of grains and grain boundaries was made to highlight the thermal (such as dynamic recrystallization) or mechanical (such as grain fragmentation by dislocations) contributions within the 'core' and 'shell.' One of the most widely used methods for determining the dynamic behavior of materials is the SHPB technique developed by Kolsky. A 3D simulation of the SHPB test was created in ABAQUS using the dynamic explicit solver.
This 3D simulation takes all modes of vibration into account. An inverse approach was used to identify the material parameters of the Johnson-Cook (JC) equation by minimizing the difference between the numerical and experimental data. The JC parameters were identified using the B1 and B5 sample configurations; the identified parameters show good predictive results for the other sample configurations. Furthermore, the mean temperature rise within the harmonic nickel sample can be obtained through ABAQUS and shows an elevation of about 35°C for all five samples. At this temperature, a thermal mechanism cannot be activated. Therefore, grain fragmentation within the core is mainly due to mechanical phenomena for a fixed final strain of 20%.
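For reference, the Johnson-Cook flow stress whose parameters the inverse approach identifies has the form sketched below. The parameter values here are illustrative placeholders for a nickel-like metal, not the values identified for harmonic-structure Ni in this work:

```python
import math

def johnson_cook_stress(eps_p, eps_dot, temp, *,
                        A, B, n, C, m, eps_dot0, T_room, T_melt):
    """Johnson-Cook flow stress (MPa):
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T_star**m),
    where T_star = (temp - T_room) / (T_melt - T_room)."""
    T_star = (temp - T_room) / (T_melt - T_room)
    strain_term = A + B * eps_p ** n          # strain hardening
    rate_term = 1 + C * math.log(eps_dot / eps_dot0)  # strain-rate sensitivity
    thermal_term = 1 - T_star ** m            # thermal softening
    return strain_term * rate_term * thermal_term

# Illustrative parameter set (placeholders, NOT the identified values).
params = dict(A=163.0, B=648.0, n=0.33, C=0.006, m=1.44,
              eps_dot0=1.0, T_room=20.0, T_melt=1455.0)

# Flow stress at 10% plastic strain, 5000 1/s, after a ~35 degC temperature rise.
sigma = johnson_cook_stress(0.10, 5000.0, 55.0, **params)
print(f"sigma = {sigma:.0f} MPa")
```

The multiplicative structure is what makes the two extreme configurations (B1 and B5) sufficient to pin down the rate-sensitivity term, with the remaining samples serving as a predictive check.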

Keywords: 3D simulation, fragmentation, harmonic structure, high strain rate, Johnson-Cook model, microstructure

Procedia PDF Downloads 231
605 Chemical vs Visual Perception in Food Choice Ability of Octopus vulgaris (Cuvier, 1797)

Authors: Al Sayed Al Soudy, Valeria Maselli, Gianluca Polese, Anna Di Cosmo

Abstract:

Cephalopods are considered model organisms with a rich behavioral repertoire. Sophisticated behaviors have been widely studied and described in different species such as Octopus vulgaris, which has evolved the largest and most complex nervous system among invertebrates. In O. vulgaris, cognitive abilities in problem-solving tasks and learning abilities are associated with long-term memory and spatial memory, mediated by highly developed sensory organs. They are equipped with sophisticated eyes able to discriminate colors even with a single photoreceptor type, a vestibular system, a 'lateral line analogue', a primitive 'hearing' system, and olfactory organs. They can recognize chemical cues either through direct contact with odor sources using the suckers or at a distance through the olfactory organs. Cephalopods are able to detect widespread waterborne molecules with the olfactory organs. However, many volatile odorant molecules are insoluble or have very low solubility in water and must be perceived by direct contact. O. vulgaris, equipped with many chemosensory neurons located in its suckers, exhibits a peculiar behavior that can be provocatively described as 'smell by touch'. The aim of this study is to establish the priority given to chemical vs. visual perception in food choice. Materials and methods: Three different types of food (anchovies, clams, and mussels) were used, and all sessions were recorded with a digital camera. During the acclimatization period, octopuses were exposed to the three types of food to test their natural food preferences. Later, to verify whether the food preference is maintained, food was provided in transparent screw-jars with pierced lids to allow both visual and chemical recognition of the food inside. Subsequently, we alternately tested octopuses with food in sealed transparent screw-jars and with food in blind screw-jars with pierced lids. As a control, we used blind sealed jars with the same lid color to verify a random choice among food types.
Results and discussion: During the acclimatization period, O. vulgaris showed a higher preference for anchovies (60%), followed by clams (30%) and mussels (10%). After acclimatization, using the transparent and pierced screw-jars, the octopuses' food choices split 50-50 between anchovies and clams, avoiding mussels. Later, guided by the visual sense alone, with transparent but not pierced jars, their food preference was 100% anchovies. With pierced but not transparent jars, their first food choice was 100% anchovies, with clams as a second choice (33.3%). With no possibility to select food either by vision or by chemoreception, the results were 20% anchovies, 20% clams, and 60% mussels. We conclude that O. vulgaris uses both the chemical and visual senses in an integrative way in food choice, but when one of them is excluded, it appears clear that its food preference relies on the chemical sense more than on visual perception.

Keywords: food choice, Octopus vulgaris, olfaction, sensory organs, visual sense

Procedia PDF Downloads 221
604 Regional Rates of Sand Supply to the New South Wales Coast: Southeastern Australia

Authors: Marta Ribo, Ian D. Goodwin, Thomas Mortlock, Phil O’Brien

Abstract:

Coastal behavior is best investigated using a sediment budget approach, based on the identification of sediment sources and sinks. Grain size distribution over the New South Wales (NSW) continental shelf has been widely characterized since the 1970s. Coarser sediment has generally accumulated on the outer shelf and/or nearshore zones, with the latter related to the presence of nearshore reefs and bedrock. The central part of the NSW shelf is characterized by fine sediments distributed parallel to the coastline. This study presents new grain size distribution maps along the NSW continental shelf, built using all available NSW and Commonwealth Government holdings. All available seabed bathymetric data from prior projects, single- and multibeam sonar, and aerial LiDAR surveys were integrated into a single bathymetric surface for the NSW continental shelf. Grain size information was extracted from sediment sample data collected in more than 30 studies. The information extracted from the sediment collections varied between reports; thus, given the inconsistency of the grain size data, a common grain size classification was here defined using the phi scale. The new sediment distribution maps, together with the new detailed seabed bathymetric data, enabled us to revise the delineation of sediment compartments to more accurately reflect the true nature of sediment movement on the inner shelf and nearshore. Accordingly, nine primary mega coastal compartments were delineated along the NSW coast and shelf. The sediment compartments are bounded by prominent nearshore headlands and reefs, and by major river and estuarine inlets that act as sediment sources and/or sinks. The new sediment grain size distribution was used as an input to morphological modelling to quantify sediment transport patterns (and indicative rates of transport), which were used to investigate sand supply rates and processes from the lower shoreface to the NSW coast. 
The rate of sand supply to the NSW coast from deep water is a major uncertainty in projecting future coastal response to sea-level rise. Offshore transport of sand is generally expected as beaches respond to rising sea levels, but an onshore supply from the lower shoreface has the potential to offset some of the impacts of sea-level rise, such as coastline recession. Sediment exchange between the lower shoreface and the sub-aerial beach has been modelled across the south, central, mid-north and far-north coasts of NSW. Our modelling approach assumes that high-energy storm events are the primary agents of sand transport in deep water, while non-storm conditions are responsible for redistributing sand within the beach and surf zone.
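As an aside, the phi-scale harmonization mentioned above can be sketched as follows. This is an illustrative simplification using the standard Krumbein phi transform and coarse Wentworth class boundaries; it is not the study's actual classification scheme:

```python
import math

def grain_size_phi(d_mm):
    """Krumbein phi scale: phi = -log2(d / d0), with reference d0 = 1 mm."""
    return -math.log2(d_mm)

def wentworth_class(phi):
    """Coarse Wentworth size classes on the phi scale (simplified bins)."""
    if phi < -1:
        return "gravel"
    elif phi < 4:
        return "sand"
    elif phi < 8:
        return "silt"
    return "clay"

d = 0.25                                     # fine sand, diameter in mm
print(grain_size_phi(d))                     # 2.0
print(wentworth_class(grain_size_phi(d)))    # sand
```

Because the transform is logarithmic, coarser grains map to smaller (more negative) phi values, which is why a single phi cut-off can harmonize reports that quoted sizes in millimetres, microns, or verbal classes.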

Keywords: New South Wales coast, off-shore transport, sand supply, sediment distribution maps

Procedia PDF Downloads 228
603 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

There is dispersed energy in radio frequency (RF) signals that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wire connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by a combination of one or more Schottky diodes connected in series or in shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies; therefore, low values of series resistance, junction capacitance and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations such as voltage doublers or modified bridge converters are used. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and for the analysis of quasi-static electromagnetic field interactions, i.e., at low frequency, these simulators are limited in that they cannot properly model microwave hybrid circuits, which contain both lumped and distributed elements. This work therefore proposes the electromagnetic modelling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application in rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. 
For this purpose, the numerical FDTD (Finite-Difference Time-Domain) method is applied, and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to the equations of current density and electric field within the FDTD method, establishing its circuital relation with the voltage drop across the modeled component for the lumped-parameter case, using the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) formulations previously proposed in the literature for the passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
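To give a flavor of the technique, here is a generic sketch of a semi-implicit lumped-resistor update embedded in a 1D Yee grid. All parameters (grid size, resistor value, cell cross-section, source) are assumptions for illustration; this is not the authors' formulation for the diode or the measured rectifier:

```python
import numpy as np

# 1D LE-FDTD sketch: a lumped resistor R is embedded at one Yee cell via the
# semi-implicit update E^{n+1}(1+b) = E^n(1-b) + (dt/(eps*dz))*curlH, where
# b = dt*dz / (2*R*A*eps) follows from I = V/R with V = E*dz averaged in time.
c0 = 3e8
eps0 = 8.854e-12
mu0 = 4e-7 * np.pi
nz, nt, dz = 200, 600, 1e-3
dt = dz / (2 * c0)                       # Courant-stable time step (CFL = 0.5)

ez = np.zeros(nz)
hy = np.zeros(nz - 1)

k_load, R = 150, 50.0                    # resistor cell and value (assumed)
area = dz * dz                           # assumed cell cross-section
b = dt * dz / (2 * R * area * eps0)      # semi-implicit load coefficient

for n in range(nt):
    hy += dt / (mu0 * dz) * (ez[1:] - ez[:-1])
    e_old = ez[k_load]
    ez[1:-1] += dt / (eps0 * dz) * (hy[1:] - hy[:-1])
    # redo the load cell with the lumped-resistor (semi-implicit) update
    curl = hy[k_load] - hy[k_load - 1]
    ez[k_load] = ((1 - b) * e_old + dt / (eps0 * dz) * curl) / (1 + b)
    ez[50] += np.exp(-((n - 40) / 12) ** 2)   # soft Gaussian source

print(np.isfinite(ez).all())             # passive load: field stays bounded
```

The same structure accommodates other lumped elements by changing the extra term in the load-cell update, which is the essential idea behind coupling circuit components to a full-wave grid.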

Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems

Procedia PDF Downloads 135
602 Thermal Imaging of Aircraft Piston Engine in Laboratory Conditions

Authors: Lukasz Grabowski, Marcin Szlachetka, Tytus Tulwin

Abstract:

The main task of the engine cooling system is to maintain its average operating temperatures within strictly defined limits. Too high or too low average temperatures result in accelerated wear or even damage to the engine or its individual components. In order to avoid local overheating or significant temperature gradients, which lead to high stresses in a component, the aim is to ensure an even flow of air. In analyses related to heat exchange, one of the main problems is the comparison of temperature fields, because standard measuring instruments such as thermocouples or thermistors only provide information about the temperature at a given point. Thermal imaging tests can be helpful in this case: with appropriate camera settings and taking environmental conditions into account, accurate temperature fields can be obtained in the form of thermograms. The emission of heat from the engine to the engine compartment is an important issue when designing a cooling system, and it is also important when redesigning the engine compartment ventilation system. In the case of liquid cooling, the main sources of heat in the form of emissions from the engine block, cylinders, etc. should likewise be identified. Ensuring proper cooling of an aircraft reciprocating engine is difficult not only because of its variable operating range but mainly because of the different cooling conditions related to changes in flight speed or altitude. Engine temperature also has a direct and significant impact on the properties of the engine oil, in particular its viscosity; too low or too high a viscosity can result in fast wear of engine parts. One way to determine the temperatures occurring on individual parts of the engine is the use of thermal imaging measurements. The article presents the results of preliminary thermal imaging tests of an aircraft piston diesel engine with a maximum power of about 100 HP. 
In order to perform the heat emission tests on the engine, the ThermaCAM S65 thermovision monitoring system from FLIR (Forward-Looking Infrared) together with the ThermaCAM Researcher Professional software was used. The measurements were carried out after engine warm-up. The engine speed was 5300 rpm. The measurements were taken for the following environmental parameters: air temperature 17 °C, ambient pressure 1004 hPa, relative humidity 38%. The temperature distributions on the engine cylinder and on the exhaust manifold were analysed. Thermal imaging tests made it possible to relate the results of simulation tests to the real object by measuring the rib temperature of the cylinders. The results obtained are necessary to develop a CFD (Computational Fluid Dynamics) model of heat emission from the engine bay. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).

Keywords: aircraft, piston engine, heat, emission

Procedia PDF Downloads 119
601 Multimodal Analysis of News Magazines' Front-Page Portrayals of the US, Germany, China, and Russia

Authors: Alena Radina

Abstract:

On the global stage, national image is shaped by historical memory of wars and alliances, by government ideology and, particularly, by media stereotypes which represent countries in positive or negative ways. News magazine covers are a key site for national representation. The object of analysis in this paper is the portrayal of the US, Germany, China, and Russia on the front pages and in the cover stories of “Time”, “Der Spiegel”, “Beijing Review”, and “Expert”. Political comedy helps people learn about current affairs even if politics is not their area of interest, and thus satire indirectly sets the public agenda. Coupled with satirical messages, cover images and the linguistic messages embedded in the covers become persuasive visual and verbal factors, known to drive about 80% of magazine sales. Preliminary analysis identified satirical elements in the magazine covers, which are known to influence and frame understandings and to attract younger audiences. Multimodal and transnational comparative framing analyses lay the groundwork for investigating why journalists, editors and designers deploy certain frames rather than others. This research investigates to what degree the frames used on covers correlate with the frames within the cover stories, and what these framings can tell us about media professionals’ representations of their own and other nations. The study sample includes 32 covers: two covers representing each of the four chosen countries from each of the four magazines. The sampling framework covers two time periods, to compare each country's representation under two different presidents, and between male and female leaders where present. The countries selected for analysis represent each category of the international news flows model: the core nations are the US and Germany; China is a semi-peripheral country; and Russia is peripheral. 
Examining textual and visual design elements on the covers and images in the cover stories reveals not only what editors believe visually attracts the reader’s attention to the magazine but also how the magazines frame and construct national images and national leaders. The cover is the most powerful editorial and design page in a magazine because images incorporate less intrusive framing tools. Thus, covers require less cognitive effort of audiences who may therefore be more likely to accept the visual frame without question. Analysis of design and linguistic elements in magazine covers helps to understand how media outlets shape their audience’s perceptions and how magazines frame global issues. While previous multimodal research of covers has focused mostly on lifestyle magazines or newspapers, this paper examines the power of current affairs magazines’ covers to shape audience perception of national image.

Keywords: framing analysis, magazine covers, multimodality, national image, satire

Procedia PDF Downloads 103
600 The Role of Piceatannol in Counteracting Glyceraldehyde-3-Phosphate Dehydrogenase Aggregation and Nuclear Translocation

Authors: Joanna Gerszon, Aleksandra Rodacka

Abstract:

In the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease, protein and peptide aggregation processes play a vital role, contributing to the formation of intracellular and extracellular protein deposits. One of the major components of these deposits is oxidatively modified glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Therefore, the purpose of this research was to answer the question of whether piceatannol, a stilbene derivative, counteracts and/or slows down oxidative stress-induced GAPDH aggregation. The study also aimed to determine whether this naturally occurring compound prevents the unfavorable nuclear translocation of GAPDH in hippocampal cells. Isothermal titration calorimetry (ITC) analysis indicated that one molecule of GAPDH can bind up to 8 molecules of piceatannol (7.3 ± 0.9). As a consequence of piceatannol binding to the enzyme, a loss of activity was observed. In parallel with GAPDH inactivation, changes in zeta potential and a loss of free thiol groups were noted. Nevertheless, ligand-protein binding does not influence the secondary structure of GAPDH. Precise molecular docking analysis of the interactions inside the active center suggests that these effects are due to the ability of piceatannol to form a covalent bond with the nucleophilic cysteine residue (Cys149) which is directly involved in the catalytic reaction. Molecular docking also showed that up to 11 ligand molecules can be bound to the dehydrogenase simultaneously. Taking the obtained data into consideration, the influence of piceatannol on the level of GAPDH aggregation induced by excessive oxidative stress was examined. The applied methods (thioflavin-T binding-dependent fluorescence as well as microscopy methods: transmission electron microscopy and Congo red staining) revealed that piceatannol significantly diminishes the level of GAPDH aggregation. 
Finally, studies involving a cellular model (Western blot analyses of nuclear and cytosolic fractions and confocal microscopy) indicated that piceatannol-GAPDH binding prevents GAPDH from undergoing nuclear translocation induced by excessive oxidative stress in hippocampal cells; in consequence, it counteracts cell apoptosis. These studies demonstrate that, by binding to GAPDH, piceatannol blocks the cysteine residue and counteracts its oxidative modifications, which induce oligomerization and GAPDH aggregation, and that it prevents hippocampal cells from apoptosis by retaining GAPDH in the cytoplasm. All these findings provide new insight into the role of the piceatannol-GAPDH interaction and present a potential therapeutic strategy for some neurological disorders related to GAPDH aggregation. This work was supported by the National Science Centre, Poland (grant number 2017/25/N/NZ1/02849).

Keywords: glyceraldehyde-3-phosphate dehydrogenase, neurodegenerative disease, neuroprotection, piceatannol, protein aggregation

Procedia PDF Downloads 167
599 Wood as a Climate Buffer in a Supermarket

Authors: Kristine Nore, Alexander Severnisen, Petter Arnestad, Dimitris Kraniotis, Roy Rossebø

Abstract:

Natural materials like wood absorb and release moisture; thus, wood can buffer the indoor climate. When used wisely, this buffer potential can counteract the influence of the outdoor climate on the building. The mass of moisture used in the buffer is defined as the potential hygrothermal mass, which can serve as energy storage in a building. This works like a natural heat pump, where the moisture actively damps the diurnal changes. In Norway, the ability of wood to buffer the climate is being tested in several buildings with extensive use of wood, including supermarkets. This paper quantifies the potential of hygrothermal mass in a supermarket building, including the chosen ventilation strategy and how the climate impact of the building is reduced. The building is located above the Arctic Circle, 50 m from the coastline, in Valnesfjord. It was built in 2015 and has a shopping area, including toilet and entrance, of 975 m². The climate of the area is polar according to the Köppen classification, but the supermarket still needs cooling on hot summer days. In order to contribute to the total energy balance, wood needs a dynamic climate influence to activate its hygrothermal mass. Drying and moistening of the wood are energy intensive, and this energy potential can be exploited: examples are using solar heat for drying instead of heating the indoor air, and admitting raw air with high enthalpy so that dry wooden surfaces absorb moisture and release latent heat. Weather forecasts are used to define the need for future cooling or heating; thus, the potential energy buffering of the wood can be optimized with intelligent ventilation control. The ventilation control in Valnesfjord includes the weather forecast and historical data: a five-day forecast and a two-day history, which prevents adjustments to minor weather changes. The ventilation control has three zones. During summer, moisture is retained so that subsequent drying damps the solar heat gains. 
In the wintertime, moist air is let into the shopping area to contribute to the heating. When the temperature is lowered during the night, the moisture absorbed in the wood slows down the cooling; the ventilation system is shut down during the closing hours of the supermarket in this period. During autumn and spring, a regime of either storing moisture or drying out, according to the weather prognosis, is defined. To ensure indoor climate quality, measurements of CO₂ and VOC overrule the low-energy control if needed. Verified simulations of the Valnesfjord building will form a basic model for investigating wood as a climate-regulating material in other climates as well. Future knowledge on the hygrothermal mass potential of materials is promising: by including the time-dependent buffer capacity of materials, building operators can achieve optimal efficiency of their ventilation systems. The use of wood as a climate-regulating material, through its potential hygrothermal mass and connected to weather prognoses, may provide up to 25% energy savings related to heating, cooling, and ventilation of a building.
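The seasonal control logic described above could be sketched roughly as follows. The mode names, thresholds, and forecast rule are hypothetical illustrations, not the actual Valnesfjord control parameters:

```python
# Hypothetical sketch of forecast-driven ventilation control: seasonal
# moisture regimes, with indoor air quality always overruling the
# low-energy strategy (thresholds are assumed values for illustration).
def ventilation_mode(season, forecast_temp_c, co2_ppm, voc_index,
                     co2_limit=1000, voc_limit=3):
    # CO2/VOC measurements overrule everything else
    if co2_ppm > co2_limit or voc_index > voc_limit:
        return "full_ventilation"
    if season == "summer":
        return "retain_moisture"      # later drying damps solar heat gains
    if season == "winter":
        return "admit_moist_air"      # absorbed moisture releases latent heat
    # spring/autumn: store or dry according to the weather prognosis
    return "store_moisture" if forecast_temp_c >= 15 else "dry_out"

print(ventilation_mode("summer", 22, 650, 1))   # retain_moisture
```

A real controller would of course blend the five-day forecast with the two-day history rather than switch on a single temperature, but the priority ordering (air quality first, then the seasonal moisture regime) is the core of the strategy.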

Keywords: climate buffer, energy, hygrothermal mass, ventilation, wood, weather forecast

Procedia PDF Downloads 218
598 Glycemic Control in Rice Consumption among Households with Diabetes Patients: The Role of Food Security

Authors: Chandanee Wasana Kalansooriya

Abstract:

Dietary behaviour is a crucial factor affecting diabetes control. With increasing rates of diabetes prevalence in Asian countries, examining dietary patterns there, which are largely based on rice, is timely. It has been identified that higher consumption of some rice varieties is associated with an increased risk of type 2 diabetes. Although diabetes patients are advised to consume healthier, low-glycemic rice varieties, several conditions, one of which is food insecurity, make it difficult for them to follow those healthy dietary guidelines. Hence this study investigates how food security affects the rice consumption decisions of households with diabetes patients, using a sample from Sri Lanka, a country in which rice is the staple food and which records the highest diabetes prevalence rate in South Asia. The study uses data from the Household Income and Expenditure Survey 2016, a nationally representative survey conducted by the Department of Census and Statistics, Sri Lanka. The survey used a two-stage stratified sampling method to cover the different sectors and districts of the country and collected micro-data on demographics, health, income and expenditures. The study uses data from 2547 households which include one or more diabetes patients, based on self-recorded health status. The Household Dietary Diversity Score (HDDS), constructed from twelve food groups, is used to measure the level of food security. Rice is categorized into three groups according to glycemic index (GI): high GI, medium GI and low GI, and the likelihood and magnitude of the effect of food security on each rice consumption category are estimated using a two-part model. The share of each rice category in total rice consumption is taken as the dependent variable to avoid the endogeneity issue between rice consumption and the HDDS. 
The results indicate that the consumption of medium-GI rice is likely to increase with increasing household food security, but that of low-GI varieties is not. Households in rural and estate sectors are less likely, and the Tamil ethnic group is more likely, to consume low-GI rice varieties. Further, an increase in food security significantly decreases the consumption share of low-GI rice, while it increases the share of medium-GI varieties. The consumption share of low-GI rice is largely affected by ethnic variability. The effects of food security on the likelihood of consuming high-GI rice varieties and on their consumption share are statistically insignificant. Accordingly, the study concludes that a higher level of food security does not ensure that diabetes patients consume healthy rice varieties or reduce their consumption of unhealthy varieties. Hence, policy attention must be directed towards educating people to make healthy dietary choices. Further, the study leaves room for further research, as it reveals considerable ethnic and sectoral differences in healthy dietary decisions.
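A minimal numerical sketch of a two-part model of this kind is shown below, using simulated data. The data-generating process, coefficients, and the use of plain least squares for the second part are assumptions for illustration; the study's actual estimation is not reproduced:

```python
import numpy as np

# Two-part model sketch: part 1 models whether a household consumes a rice
# category at all; part 2 models the consumption share conditional on
# consuming it (simulated data, illustrative coefficients).
rng = np.random.default_rng(0)
n = 1000
hdds = rng.integers(3, 13, n).astype(float)              # dietary diversity
p = 1 / (1 + np.exp(-(0.3 * hdds - 2)))                  # logistic DGP
consumes = rng.random(n) < p                             # part-1 outcome
share = np.where(
    consumes,
    np.clip(0.2 + 0.03 * hdds + rng.normal(0, 0.1, n), 0, 1),
    0.0,
)

# Part 2: least-squares fit of the share on HDDS, consuming households only
X = np.column_stack([np.ones(consumes.sum()), hdds[consumes]])
beta, *_ = np.linalg.lstsq(X, share[consumes], rcond=None)

# Unconditional expected share: P(consume) * E[share | consume]
p_consume = consumes.mean()
expected_share = p_consume * (beta[0] + beta[1] * hdds.mean())
print(round(beta[1], 3))    # slope of the share in HDDS among consumers
```

The key property motivating the two-part structure is visible in the last lines: the zero shares of non-consuming households are kept out of the conditional regression, and the unconditional expectation recombines both parts.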

Keywords: diabetes, food security, glycemic index, rice consumption

Procedia PDF Downloads 103
597 Impact of Experiential Learning on Executive Function, Language Development, and Quality of Life for Adults with Intellectual and Developmental Disabilities (IDD)

Authors: Mary Deyo, Zmara Harrison

Abstract:

This study reports the outcomes of an 8-week experiential learning program for 6 adults with Intellectual and Developmental Disabilities (IDD) at a day habilitation program. The intervention foci for this program include executive function, language learning in the domains of expressive, receptive, and pragmatic language, and quality of life. Interprofessional collaboration aimed at supporting adults with IDD to reach person-centered, functional goals across skill domains is critical. This study is a significant addition to the speech-language pathology literature in that it examines a therapy method that potentially meets this need while targeting domains within the speech-language pathology scope of practice. Communication therapy was provided during highly valued and meaningful hands-on learning experiences, referred to as the Garden Club, which incorporated all aspects of planting and caring for a garden as well as related journaling, sensory, cooking, art, and technology-based activities. Direct care staff and an undergraduate research assistant were trained by the SLP to be impactful language guides during their interactions with participants in the Garden Club; the SLP also provided direct therapy and modeling during Garden Club. Research methods used in this study included a mixed-methods design: a literature review, a quasi-experimental implementation of communication therapy in the context of experiential learning activities, quality-of-life participant surveys, quantitative pre- and post-intervention data collection with linear mixed model analysis, and qualitative data collection with content analysis and coding for themes. Outcomes indicated overall positive changes in expressive vocabulary, following multi-step directions, sequencing, problem-solving, planning, skills for building and maintaining meaningful social relationships, and participant perception of the Garden Project’s impact on their own quality of life. 
Implementation of this project also highlighted supports and barriers that must be taken into consideration when planning similar projects. Overall, the findings support the use of experiential learning projects in day habilitation programs for adults with IDD, as well as additional research to deepen understanding of best practices, supports, and barriers for implementing experiential learning with this population. This research contributes to the fields of speech-language pathology and other professions serving adults with IDD by describing an interprofessional experiential learning program with positive outcomes for executive function, language learning, and quality of life.

Keywords: experiential learning, adults, intellectual and developmental disabilities, expressive language, receptive language, pragmatic language, executive function, communication therapy, day habilitation, interprofessionalism, quality of life

Procedia PDF Downloads 127
596 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulating the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations, the main issue being degraded DNA, which leads to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for gathering information, especially on endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two different algorithms for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity of their pairwise-likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods. 
The different clustering algorithm and error model of ETLM seem to lead to a more rigorous selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently across the datasets; there was a consensus between the different estimators for only one dataset. BayesN produced both higher and lower estimates than Capwire, depending on the dataset. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, here meaning different capture rates between individuals. In these examples, data homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimations, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
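For intuition, the simplest closed-form relative of these population estimators is the Lincoln-Petersen estimator, shown here in Chapman's bias-corrected form. This is a textbook illustration only, far simpler than Capwire or BayesN, which model multiple recapture events and capture heterogeneity:

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen population estimate.

    n1: individuals identified (genotyped) in the first sampling session
    n2: individuals identified in the second session
    m2: individuals from the first session recaptured in the second
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# e.g., 40 genotypes in session one, 50 in session two, 20 recaptures
print(lincoln_petersen(40, 50, 20))   # ~98.57 estimated individuals
```

The estimator assumes equal capture probability for all individuals, which is exactly the homogeneity assumption whose violation, as noted above, destabilizes recapture-based estimates.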

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 144