Search results for: Paolo Biancone
42 Cost-Benefit Analysis for the Optimization of Noise Abatement Treatments at the Workplace
Authors: Paolo Lenzuni
Abstract:
Cost-effectiveness of noise abatement treatments at the workplace has not yet received adequate consideration. Furthermore, most of the published work focuses on productivity, despite the poor correlation of this quantity with noise levels. There is currently no tool to estimate the social benefit associated with a specific noise abatement treatment, and accordingly no comparison among different options is possible. In this paper, we present an algorithm developed to predict the cost-effectiveness of any planned noise control treatment in a workplace. This algorithm is based on the estimates of hearing threshold shifts included in ISO 1999, and on the compensations that workers are entitled to once their work-related hearing impairments have been certified. The benefits of a noise abatement treatment are estimated by means of the lower compensation costs paid to the impaired workers. Although such benefits have no real meaning in strictly monetary terms, they allow a reliable comparison between different treatments, since actual social costs can be assumed to be proportional to compensation costs. The existing European legislation on occupational exposure to noise mandates that the noise exposure level be reduced below the upper action limit (85 dBA). There is accordingly little or no motivation for employers to sustain the extra costs required to lower the noise exposure below the lower action limit (80 dBA). In order to make this goal more appealing for employers, the algorithm proposed in this work also includes an ad-hoc element that promotes actions which bring the noise exposure down below 80 dBA. 
The algorithm has a twofold potential: 1) it can be used as a quality index to promote cost-effective practices; 2) it can be added to the existing criteria used by workers’ compensation authorities to evaluate the cost-effectiveness of technical actions, and to support dedicated employers.
Keywords: cost-effectiveness, noise, occupational exposure, treatment
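The scoring logic described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the quadratic compensation proxy, the 30-year horizon, and the bonus multiplier are all assumptions introduced here; the actual method uses the ISO 1999 hearing-threshold-shift estimates and certified compensation schedules.

```python
def compensation_proxy(leq_dba, years=30):
    """Hypothetical proxy for expected compensation cost: grows with
    exposure above the 80 dBA lower action limit, loosely echoing the
    ISO 1999 idea that threshold shift rises with level and duration."""
    excess = max(0.0, leq_dba - 80.0)
    return excess ** 2 * years  # arbitrary units


def cost_effectiveness(level_before, level_after, treatment_cost,
                       bonus_below_lower_limit=1.2):
    """Benefit = avoided compensation; an ad-hoc multiplier rewards
    treatments that bring exposure below the 80 dBA lower action limit."""
    benefit = compensation_proxy(level_before) - compensation_proxy(level_after)
    if level_after < 80.0:
        benefit *= bonus_below_lower_limit
    return benefit / treatment_cost
```

With this shape, two candidate treatments of equal cost can be ranked: the one reaching 79 dBA scores higher than the one stopping at 84 dBA, which is exactly the comparison the abstract says the tool enables.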
Procedia PDF Downloads 317
41 Theoretical Analysis and Design Consideration of Screened Heat Pipes for Low-Medium Concentration Solar Receivers
Authors: Davoud Jafari, Paolo Di Marco, Alessandro Franco, Sauro Filippeschi
Abstract:
This paper summarizes the results of an investigation into heat pipe heat transfer for solar collector applications. The study aims to show the feasibility of a concentrating solar collector coupled with a heat pipe. Particular emphasis is placed on the capillary and boiling limits in capillary porous structures with different mesh numbers and wick thicknesses. A mathematical model of a cylindrical heat pipe is applied to study its behaviour when it is exposed to higher heat input at the evaporator. The steady-state analytical model includes two-dimensional heat conduction in the HP’s wall, the liquid flow in the wick, and vapor hydrodynamics. A sensitivity analysis was conducted by considering different design criteria and working conditions. Different wicks (mesh 50, 100, 150, 200, 250, and 300) and different porosities (0.5, 0.6, 0.7, 0.8, and 0.9) with different wick thicknesses (0.25, 0.5, 1, 1.5, and 2 mm) are analyzed with water as the working fluid. Results show that it is possible to improve the heat transfer capability (HTC) of a HP by selecting the appropriate wick thickness, effective pore radius, and lengths for a given HP configuration, and that optimal design criteria exist (optimal wick thickness and optimal lengths of the evaporator, adiabatic, and condenser sections). It is shown that the boiling and wicking limits are connected and occur in dependence on each other. As different parts of the HP external surface collect different fractions of the total incoming insolation, the analysis of non-uniform heat flux distribution indicates that the peak heat flux is not an influential parameter. The parametric investigations aim to determine the working limits and thermal performance of HPs for medium-temperature SC applications.
Keywords: screened heat pipes, analytical model, boiling and capillary limits, concentrating collector
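The capillary limit discussed above has a standard closed form for screened wicks, Q_cap ≈ (ρ_l σ h_fg / μ_l)(K A_w / L_eff)(2 / r_eff) for a horizontal pipe (gravity term neglected). The sketch below evaluates it with illustrative water properties near saturation and assumed wick parameters; none of the numbers are taken from the paper.

```python
def capillary_limit(sigma, rho_l, h_fg, mu_l, K, A_w, L_eff, r_eff):
    """Classic capillary heat-transport limit for a screened wick:
    Q_cap = (rho_l * sigma * h_fg / mu_l) * (K * A_w / L_eff) * (2 / r_eff).
    sigma [N/m], rho_l [kg/m^3], h_fg [J/kg], mu_l [Pa s], K [m^2],
    A_w [m^2], L_eff [m], r_eff [m]; returns watts."""
    return (rho_l * sigma * h_fg / mu_l) * (K * A_w / L_eff) * (2.0 / r_eff)


# Illustrative values: saturated water at ~100 C and a fine-mesh wick
# (permeability, wick area, lengths and pore radius are all assumptions).
q_max = capillary_limit(sigma=0.059, rho_l=958.0, h_fg=2.26e6, mu_l=2.8e-4,
                        K=1e-10, A_w=1e-5, L_eff=0.2, r_eff=5e-5)
```

With these assumed inputs the limit lands in the tens-of-watts range; halving the effective pore radius raises the available capillary pressure and, with permeability held fixed, the transport limit, which is the kind of trade-off the sensitivity analysis explores.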
Procedia PDF Downloads 558
40 Shape Memory Alloy Structural Damper Manufactured by Selective Laser Melting
Authors: Tiziana Biasutti, Daniela Rigamonti, Lorenzo Palmiotti, Adelaide Nespoli, Paolo Bettini
Abstract:
The aerospace industry is based on the continuous development of new technologies and solutions that allow constant improvement of its systems. Shape Memory Alloys are smart materials that can be used as dampers due to their pseudoelastic effect. The purpose of the research was to design a passive damper in Nitinol, manufactured by Selective Laser Melting, for space applications, to reduce vibration between different structural parts in space structures. The powder is NiTi (50.2 at.% Ni). The structure manufactured by additive technology allows the elimination of joints and moving parts and yields a compact solution with high structural strength. The designed dampers had single- or double-cell structures with three different internal angles (30°, 45° and 60°). This particular shape has damping properties even without the pseudoelastic effect. For this reason, the geometries were reproduced in different materials, SS316L and Ti6Al4V, to test the loss factor of the geometry alone. The mechanical performances of these specimens were compared to those of the NiTi structures, pointing out the good damping properties of the designed structure and the superior performance provided by the NiTi pseudoelastic effect. The NiTi damper was mechanically characterized by static and dynamic tests and by DSC and microscope observations. The experimental results were verified with numerical models and with some scaled steel specimens in which optical fibers were embedded. The realized structure presented good mechanical and damping properties. It was observed that the loss factor and the dissipated energy increased with the angles of the cells.
Keywords: additive manufacturing, damper, nitinol, pseudoelastic effect, selective laser melting, shape memory alloys
Procedia PDF Downloads 104
39 Turin, from Factory City to Talents Power Player: The Role of Private Philanthropy Agents of Innovation in the Revolution of Human Capital Market in the Contemporary Socio-Urban Scenario
Authors: Renato Roda
Abstract:
With the emergence of the so-called 'Knowledge Society', the implementation of policies to attract, grow and retain talents, in the academic context as well, has become critical, both from the perspective of didactics and research and as far as administration and institutional management are concerned. At the same time, contemporary philanthropic entities/organizations, which are evolving from traditional types of social support towards new styles of aid, envisaged to go beyond mere monetary donations, face the challenge of brand-new forms of complexity in supporting such specific dynamics of the global human capital market. In this sense, it becomes unavoidable for philanthropic foundations, while carrying out their daily charitable tasks, to resort to innovative ways to facilitate the acquisition and promotion of talents by academic and research institutions. In order to deepen such a specific perspective, this paper features the case of Turin, the former 'factory city' of Italy’s North West, headquarters, and main reference territory, of Italy’s largest and richest private, formerly bank-based philanthropic foundation, the Fondazione Compagnia di San Paolo. While it was assessed and classified as 'medium' in the city Global Talent Competitiveness Index (GTCI) of 2020, Turin has nevertheless acquired over the past months the status of an impact laboratory for a whole series of innovation strategies in the competition for the acquisition of excellent human capital. The leading actors of this new city vision are the foundations, with their specifically adjusted financial engagement and their consistent role in stimulating innovation in research and education institutions.
Keywords: human capital, post-Fordism, private foundation, war on talents
Procedia PDF Downloads 170
38 Advancement in Scour Protection with Flexible Solutions: Interpretation of Hydraulic Tests Data for Reno Mattresses in Open Channel Flow
Authors: Paolo Di Pietro, Matteo Lelli, Kinjal Parmar
Abstract:
Water hazards are consistently identified as among the highest global risks in terms of impact. Riverbank protection plays a key role in flood risk management. For erosion control and scour protection, flexible solutions like gabions and mattresses have been used for quite some time. The efficacy of erosion control systems depends both on the ability to prevent soil loss underneath and on the ability to maintain their integrity under the effects of the water flow. The paper presents the results of research carried out at Colorado State University on the performance of double twisted wire mesh products, known as Reno Mattresses, used as a soil erosion control system. Mattresses were subjected to various flow conditions in a 10 m long flume where they were placed on a 0.30 m thick soil layer. The performance against erosion was evaluated by assessing the effect of stone motion inside the mattress combined with the condition of incipient soil erosion underneath, in relation to the mattress thickness, the filling stone properties, and variable hydraulic flow regimes. While confirming the stability obtained using a conventional design approach (commonly referred to as tractive force theories), the results of the research made it possible to introduce a new performance limit based on incipient soil erosion underneath the revetment. Based on the research results, the authors propose to express the shear resistance of mattresses used as a soil erosion control system as a function of the size of the filling stones, their uniformity, their unit weight, the thickness of the mattress, and the presence of vertical connecting elements between the mattress lid and bottom.
Keywords: Reno Mattress, riverbank protection, hydraulics, full scale tests
Procedia PDF Downloads 23
37 Electrochemotherapy of Portal Vein Tumor Thrombus as Downstaging to Liver Transplantation
Authors: Luciano Tarantino, Emanuele Balzano, Paolo Tarantino, Riccardo Aurelio Nasto, Aurelio Nasto
Abstract:
Liver transplantation (OLT) is contraindicated in portal vein tumor thrombosis (PVTT) from hepatocellular carcinoma at the hepatic hilum (pH-HCC). Surgery, thermal ablation and chemotherapy show poor outcomes. Electrochemotherapy (ECT) has been successfully used in patients with pH-HCC with PVTT. We report the results of ECT as downstaging aimed at definitive cure by OLT. F.P., 53 years old, HBV-related cirrhosis, Child-Pugh class B7; EGDS: F2 esophageal varices. Diabetes. April 2016: enhanced computed tomography (CT) detected HCC (3 nodules in segments VII-VIII-VI; diameter range: 25 cm) and PVTT of the right portal vein. The patient was considered ineligible for OLT. May 2016: first ablation session, with percutaneous radiofrequency ablation (RFA) of the 3 HCC nodules. August 2016: second ablation session, with ECT of the PVTT. CT, October 2016: disappearance of the PVTT and patent right portal vein; no intraparenchymal recurrence. CT, March 2017: no recurrence in the portal vein and in the left lobe; local recurrence in the VII-VIII segments. May 2017: transarterial chemoembolization (TACE) of the right lobe recurrences. CT, October 2017: patent right portal vein, no recurrence. The patient was reconsidered for OLT and underwent OLT in April 2018. At 36-month follow-up, no intrahepatic recurrence of HCC had occurred. March 2021: enhanced CT and PET/CT detected a single small nodule (1.5 cm) uptaking tracer in the left upper pulmonary lobe, with no hepatic recurrence. CT-guided FNB showed metastasis from HCC. June 2021: left lung upper lobectomy. At the current time the patient is alive and recurrence-free at 64 months of follow-up. ECT could be an effective technique for pre-OLT downstaging in HCC with PVTT.
Keywords: liver tumor ablation, interventional ultrasound, electrochemotherapy, liver transplantation
Procedia PDF Downloads 116
36 Carbon, Nitrogen Doped TiO2 Macro/Mesoporous Monoliths with High Visible Light Absorption for Photocatalytic Wastewater Treatment
Authors: Paolo Boscaro, Vasile Hulea, François Fajula, Francis Luck, Anne Galarneau
Abstract:
TiO2-based monoliths with hierarchical macropores and mesopores have been synthesized following a novel one-pot sol-gel synthesis method. Taking advantage of the spinodal separation that occurs between titanium isopropoxide and an acidic solution in the presence of polyethylene oxide polymer, monoliths with homogeneous interconnected macropores of 3 μm in diameter and mesopores of ca. 6 nm (surface area 150 m2/g) are obtained. Furthermore, these monoliths contain some carbon and nitrogen (as shown by XPS and elemental analysis), which considerably reduce the titanium oxide energy gap and enable light to be absorbed up to a 700 nm wavelength. XRD shows that anatase is the dominant phase, with a small amount of brookite. Enhanced light absorption and the high porosity of the monoliths are responsible for a remarkable photocatalytic activity. Wastewater treatment has been performed in a closed reactor under sunlight using orange G dye as the target molecule. Glass reactors guarantee that most of the UV radiation (to almost 300 nm) of the solar spectrum is excluded. TiO2 nanoparticles P25 (usually used in photocatalysis under UV) and un-doped TiO2 monoliths with similar porosity were used for comparison. The C,N-doped TiO2 monolith allowed complete colorant degradation in less than 1 hour, whereas 10 h were necessary for 40% colorant degradation with P25 and the un-doped monolith. An experiment performed in the dark shows that only 3% of the molecules were adsorbed in the C,N-doped TiO2 monolith within 1 hour. The much higher efficiency of the C,N-doped TiO2 monolith in comparison to P25 and the un-doped monolith proves that doping TiO2 is an essential step and that nitrogen and carbon are effective dopants. Monoliths offer multiple advantages with respect to nanometric powders: the sample can be easily removed from the batch (no need to filter or centrifuge). 
Moreover, flow reactions can be set up with cylindrical or flat monoliths by simple sheathing or by locking them with O-rings.
Keywords: C-N doped, sunlight photocatalytic activity, TiO2 monolith, visible absorbance
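The reported degradation times can be compared quantitatively by assuming pseudo-first-order kinetics, C/C0 = exp(-kt), a common simplification for dye photodegradation. This is an assumption for illustration: the abstract does not state a kinetic model, and taking "complete" degradation as 99% in 1 h is likewise an assumption.

```python
import math


def pseudo_first_order_k(fraction_degraded, hours):
    """Rate constant k [1/h] from C/C0 = exp(-k t), i.e. k = -ln(1 - X) / t,
    where X is the degraded fraction after time t."""
    return -math.log(1.0 - fraction_degraded) / hours


k_p25 = pseudo_first_order_k(0.40, 10.0)    # P25 / un-doped: ~40% in 10 h
k_doped = pseudo_first_order_k(0.99, 1.0)   # C,N-doped: near-complete in 1 h (assumed 99%)
```

Under these assumptions the doped monolith's apparent rate constant comes out roughly two orders of magnitude above that of P25, which is consistent with the qualitative comparison in the abstract.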
Procedia PDF Downloads 229
35 River Habitat Modeling for the Entire Macroinvertebrate Community
Authors: Pinna Beatrice, Laini Alex, Negro Giovanni, Burgazzi Gemma, Viaroli Pierluigi, Vezza Paolo
Abstract:
Habitat models rarely consider macroinvertebrates as ecological targets in rivers. Available approaches mainly focus on single macroinvertebrate species, not addressing the ecological needs and functionality of the entire community. This research aimed to provide an approach to model the habitat of the macroinvertebrate community. The approach is based on the recently developed Flow-T index, together with a Random Forest (RF) regression, which is employed to apply the Flow-T index at the meso-habitat scale. Using different datasets gathered from both field data collection and 2D hydrodynamic simulations, the model was calibrated in the Trebbia River (2019 campaign) and then validated in the Trebbia, Taro, and Enza Rivers (2020 campaign). The three rivers are characterized by braided morphology, gravel riverbeds, and summer low flows. The RF model selected 12 mesohabitat descriptors as important for the macroinvertebrate community. These descriptors belong to different frequency classes of water depth, flow velocity, substrate grain size, and connectivity to the main river channel. The cross-validation R² coefficient (R²cv) of the training dataset is 0.71 for the Trebbia River (2019), whereas the R² coefficient for the validation datasets (Trebbia, Taro, and Enza Rivers, 2020) is 0.63. The agreement between the simulated results and the experimental data shows sufficient accuracy and reliability. The outcomes of the study reveal that the model can identify the ecological response of the macroinvertebrate community to possible flow regime alterations and river morphological modifications. Lastly, the proposed approach allows extending the MesoHABSIM methodology, widely used for fish habitat assessment, to a different ecological target community. 
Further applications of the approach can be related to flow design in both perennial and non-perennial rivers, including river reaches in which fish fauna is absent.
Keywords: ecological flows, macroinvertebrate community, mesohabitat, river habitat modeling
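The cross-validated R² reported above can be illustrated with a pooled out-of-fold computation: hold out each fold, predict it from a model fitted on the rest, and score all out-of-fold predictions together. The sketch below uses a simple least-squares line as a stand-in model purely to keep it self-contained; the study itself uses a Random Forest regression, and all names here are illustrative.

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot


def kfold_r2(xs, ys, k, fit, predict):
    """Pool out-of-fold predictions into a single cross-validated R^2."""
    n = len(xs)
    preds = [0.0] * n
    for j in range(k):
        test_idx = set(range(j, n, k))          # every k-th sample held out
        train = [i for i in range(n) if i not in test_idx]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        for i in test_idx:
            preds[i] = predict(model, xs[i])
    return r_squared(ys, preds)


def fit_line(x, y):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx


def predict_line(model, xi):
    b, a = model
    return a + b * xi
```

A model that generalizes well keeps a high pooled R² on held-out folds, which is the sense in which the paper's R²cv of 0.71 (training) and 0.63 (validation) indicate reliability.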
Procedia PDF Downloads 93
34 Tobacco Taxation and the Heterogeneity of Smokers' Responses to Price Increases
Authors: Simone Tedeschi, Francesco Crespi, Paolo Liberati, Massimo Paradiso, Antonio Sciala
Abstract:
This paper aims to contribute to the understanding of smokers’ responses to cigarette price increases, with a focus on heterogeneity both across individuals and across price levels. To do this, a stated-preference quasi-experimental design grounded in a random utility framework is proposed to evaluate the effect on smokers’ utility of the price level and variation, along with social conditioning and health impact perception. The analysis is based on individual-level data drawn from a unique survey gathering very detailed information on Italian smokers’ habits. In particular, qualitative information on the individual reactions triggered by changes in prices of different magnitude and composition is exploited. The main findings stemming from the analysis are the following: the average price elasticity of cigarette consumption is comparable with previous estimates for advanced economies (-0.32). However, the decomposition of this result across five latent classes of smokers reveals extreme heterogeneity in terms of price responsiveness, implying a potential price elasticity that ranges from 0.05 to almost 1. Such heterogeneity is in part explained by observable characteristics such as age, income, gender, and education, as well as (current and lagged) smoking intensity. Moreover, price responsiveness is far from independent of the size of the prospected price increase. Finally, by comparing even and uneven price variations, it is shown that uniform across-brand price increases are able to limit the scope for product substitution and downgrading. The estimated price-response heterogeneity has significant implications for tax policy. Among them, first, it provides evidence and a rationale for why the aggregate price elasticity is likely to follow a strictly increasing pattern as a function of the experienced price variation. This information is crucial for forecasting the effect of a given tax-driven price change on tax revenue. 
Second, it provides some guidance on how to design excise tax reforms that balance public health and revenue goals.
Keywords: smoking behaviour, preference heterogeneity, price responsiveness, cigarette taxation, random utility models
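The aggregation of class-level responsiveness into an overall elasticity can be sketched as a consumption-share-weighted average, with a first-order quantity response to a price change. The five class elasticities and the equal shares below are hypothetical illustrations, not the paper's estimates.

```python
def aggregate_elasticity(class_elasticities, class_shares):
    """Consumption-share-weighted average of class-level price elasticities."""
    assert abs(sum(class_shares) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(e * s for e, s in zip(class_elasticities, class_shares))


def quantity_after_price_change(q0, elasticity, pct_price_change):
    """First-order response: dQ/Q = elasticity * dP/P."""
    return q0 * (1.0 + elasticity * pct_price_change)


# Hypothetical latent classes spanning the 0.05-to-almost-1 range of the paper.
class_elasticities = [-0.05, -0.15, -0.30, -0.55, -0.95]
overall = aggregate_elasticity(class_elasticities, [0.2] * 5)
```

This decomposition makes the revenue-forecasting point concrete: if larger price variations shift consumers into more responsive behaviour, the aggregate elasticity rises with the size of the tax-driven price change.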
Procedia PDF Downloads 162
33 Comparison between the Roller-Foam and Neuromuscular Facilitation Stretching on Flexibility of Hamstrings Muscles
Authors: Paolo Ragazzi, Olivier Peillon, Paul Fauris, Mathias Simon, Raul Navarro, Juan Carlos Martin, Oriol Casasayas, Laura Pacheco, Albert Perez-Bellmunt
Abstract:
Introduction: The use of stretching techniques in the sports world is frequent and widely adopted for its many effects. One of the main benefits is the gain in flexibility, range of motion and facilitation of sporting performance. Recently the use of the Roller-Foam (RF) has spread in sports practice, both at elite and recreational level, for benefits similar to those observed in stretching. The objective of the following study is to compare the results of the Roller-Foam with proprioceptive neuromuscular facilitation stretching (PNF), one of the stretching techniques with the most supporting evidence, on the hamstring muscles. Study design: The design of the study is a single-blind, randomized controlled trial, and the participants are 40 healthy volunteers. Intervention: The subjects were distributed randomly into one of the following groups; PNF stretching intervention group: 4 repetitions of PNF stretching (5 seconds of contraction, 5 seconds of relaxation, 20 seconds of stretch); Roller-Foam intervention group: 2 minutes of Roller-Foam applied to the hamstring muscles. Main outcome measures: hamstring muscle flexibility was assessed at the beginning, during (30'' of intervention) and at the end of the session by using the Modified Sit and Reach test (MSR). Results: The baseline data in both groups are comparable to each other. The PNF group obtained an increase in flexibility of 3.1 cm at 30 seconds (first series) and of 5.1 cm at 2 minutes (the last of all series). The RF group obtained a 0.6 cm difference at 30 seconds and 2.4 cm after 2 minutes of application of roller foam. The results were statistically significant when comparing within groups but not between groups. Conclusions: Despite the fact that the use of roller foam is spreading in the sports and rehabilitation field, the results of the present study suggest that the gain in flexibility of the hamstrings is greater if PNF-type stretches are used instead of RF. 
These results may be due to the fact that the roller foam acts more on the fascial tissue, while stretching acts more on the myotendinous unit. Future studies are needed, increasing the sample size and diversifying the types of stretching.
Keywords: hamstring muscle, stretching, neuromuscular facilitation stretching, roller foam
Procedia PDF Downloads 185
32 Enhancing Green Infrastructure as a Climate Change Adaptation Strategy in Addis Ababa: Unlocking Institutional, Socio-Cultural and Cognitive Barriers for Application
Authors: Eyasu Markos Woldesemayat, Paolo Vincenzo Genovese
Abstract:
In recent years, with the increase in the concentration of Greenhouse Gases (GHG), Climate Change (CC) externalities are mounting, and governments are scrambling to implement mitigation and adaptation measures. With multiple social, economic and environmental benefits, Green Infrastructure (GI) has evolved into a highly valuable policy tool to promote sustainable development and smart growth by meeting multiple objectives related to quality of life. However, despite this wide range of benefits, its uptake in African cities such as Addis Ababa is very low due to several constraining factors. This study, through content analysis and key informant interviews, examined barriers to the uptake of GI among spatial planners in Addis Ababa. The study revealed that the spatial planners had insufficient knowledge of GI planning principles such as multi-functionality, integration, connectivity, and multiscale planning. The practice of implementing these holistic principles in urban spatial planning is virtually nonexistent. The findings also revealed 20 barriers categorized under four themes, i.e., institutional, socio-cultural, resource, and cognitive barriers. Similarly, it was identified that institutional barriers (0.756), socio-cultural barriers (0.730), cognitive barriers (0.700) and resource barriers (0.642), in that order, are the foremost impeding factors for the promotion of GI in Addis Ababa. Resource barriers were thus the least constraining factor for GI uptake in the city. 
Strategies to hasten the adoption of GI in the city mainly focus on improving political will, harmonizing sectoral plans, improving spatial planning and implementation practice, prioritizing GI in all planning activities, enforcing environmental laws, introducing collaborative GI governance, creating strong and stable institutions, and raising awareness of environmental conservation and CC externalities through education and outreach mechanisms.
Keywords: Addis Ababa, climate change, green infrastructure, spatial planning, spatial planners
Procedia PDF Downloads 120
31 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing
Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto
Abstract:
Computational Fluid Dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid flow features as commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon and a Reynolds Stress Model (RSM)) and LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture applying the highly parallel flow solver Code_Saturne and typically using 32768 or more processors in parallel. Visualisations were performed by means of PARAVIEW. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and comparisons with other databases. The study showed that an RSM represents an appropriate choice with respect to modeling high-Reynolds-number flow cases. In particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence
Procedia PDF Downloads 381
30 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications
Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso
Abstract:
The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO RADAR sensor, an optical camera, and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO radar sensor proposed in this work provides an extremely high sensitivity to displacements, making the system able to react to tiny deformations (down to tens of microns) on a time scale which spans from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performing processing unit allows sensing the observed scene with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user of the system with high-frequency three-dimensional motion/vibration estimates for each point of the reconstructed image. 
At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
Keywords: interferometry, MIMO RADAR, SAR, tomography
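In repeat-pass interferometry, a line-of-sight displacement maps to an interferometric phase change via d = λΔφ/(4π), the factor 4π reflecting the two-way radar path. The sketch below assumes a Ku-band wavelength of roughly 17.4 mm, which is an assumption; the paper does not state the operating band.

```python
import math


def los_displacement(delta_phase_rad, wavelength_m):
    """Repeat-pass interferometric displacement along the line of sight:
    d = wavelength * delta_phi / (4 * pi). The two-way propagation path
    doubles the phase sensitivity, hence 4*pi rather than 2*pi."""
    return wavelength_m * delta_phase_rad / (4.0 * math.pi)
```

With a ~17.4 mm wavelength, a full phase cycle corresponds to only ~8.7 mm of motion, and phase can typically be resolved to a small fraction of a cycle, which is how the system reaches the tens-of-microns sensitivity claimed above.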
Procedia PDF Downloads 193
29 Choosing the Green Energy Option: A Willingness to Pay Study of Metro Manila Residents for Solar Renewable Energy
Authors: Paolo Magnata
Abstract:
The energy market in the Philippines has one of the highest electricity rates in the region, averaging US$0.16/kWh (PHP6.89/kWh) excluding VAT, as opposed to the overall regional market average of US$0.13/kWh. The movement towards renewable energy, specifically solar energy, will be an expensive one, with the country’s energy sector providing Feed-in-Tariff rates as high as US$0.17/kWh (PHP8.69/kWh) for solar energy power plants. Under the current energy regulatory framework, increasing the share of renewables would yield a three-fold increase in residential electricity bills. The issue lies in the uniform charge that consumers bear regardless of where the electricity is sourced, resulting in rates that only consider costs and not the consumers. But if consumers are given the option to choose where their electricity comes from, a number of them may choose economically costlier sources of electricity due to the higher utility of, and willingness to pay for, environmentally friendly sourced electricity. A contingent valuation survey was conducted on a sample representative of Metro Manila to elicit willingness to pay for solar energy, and Single-Bounded and Double-Bounded Dichotomous Choice analyses were used to estimate the amount respondents were willing to pay. The results showed that Metro Manila residents are willing to pay a premium on top of their current electricity bill amounting to US$5.71 (PHP268.42) – US$9.26 (PHP435.37) per month, which is approximately 0.97% - 1.29% of their monthly household income. It was also discovered that, besides higher household income, a higher level of self-perceived environmental awareness significantly affected the likelihood of a consumer paying the premium. 
Shifting towards renewable energy is an expensive move not only for the government, because of high capital investment, but also for consumers; however, the Green Energy Option (a policy mechanism which gives consumers the option to decide where their electricity comes from) can potentially balance the shift of the economic burden by transitioning from a uniformly charged electricity rate to equitably charging consumers based on their willingness to pay for renewably sourced energy.
Keywords: contingent valuation, dichotomous choice, Philippines, solar energy
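The double-bounded elicitation described above maps each respondent's two yes/no answers (an initial bid, then a higher follow-up after "yes" or a lower one after "no") to a willingness-to-pay interval. Below is a minimal sketch using a crude interval-midpoint estimator rather than the interval-censored maximum-likelihood fit a full analysis would use; the upper cap on yes-yes responses is an assumption of this illustration.

```python
def wtp_interval(first_bid, follow_up_bid, yes_first, yes_follow_up, cap):
    """Map double-bounded dichotomous-choice answers to a WTP interval.
    follow_up_bid is higher than first_bid after a 'yes', lower after a 'no';
    cap bounds the yes-yes interval from above (assumed, e.g. a survey maximum)."""
    if yes_first and yes_follow_up:
        return (follow_up_bid, cap)          # yes-yes: WTP above the higher bid
    if yes_first and not yes_follow_up:
        return (first_bid, follow_up_bid)    # yes-no: between the two bids
    if not yes_first and yes_follow_up:
        return (follow_up_bid, first_bid)    # no-yes: between the two bids
    return (0.0, follow_up_bid)              # no-no: below the lower bid


def mean_wtp_midpoint(intervals):
    """Crude interval-midpoint estimator of mean WTP (a full analysis would
    instead maximize an interval-censored likelihood)."""
    return sum((lo + hi) / 2.0 for lo, hi in intervals) / len(intervals)
```

Narrowing each respondent's WTP to an interval rather than a single point is what makes the double-bounded design statistically more efficient than the single-bounded one, at the cost of a more involved estimator.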
Procedia PDF Downloads 341
28 Participatory Action Research for Strengthening Health Systems: A Freirian Critique from a Community Based Study Conducted in the Northern Areas of Pakistan
Authors: Sohail Bawani, Kausar S. Khan, Rozina Karmaliani, Shehnaz Mir
Abstract:
Action research (AR) is one of the types of health systems research (HSR), and participatory action research (PAR) is known for being effective in health systems strengthening (HSS). The current literature on PAR for HSS cites numerous examples and case studies that have improved health services, built child health information systems, increased people's knowledge and awareness of health problems, and identified pathways for institutional and policy change by engaging people in research. But examples of marginalized communities being agents of change in health governance are not common in HSR. This approach to PAR is at the heart of Paulo Freire’s Social Transformation Theory and Critical Consciousness building, which was used to design a community-based PAR study in the Northern/mountainous areas of Pakistan. The purpose of the study was to understand the place and role of marginalized communities in strengthening the existing health governance structure (health facility and village health committees and health boards) by taking marginalized communities as partners. Community meetings were carried out to identify who is living at the social, political, cultural and economic margins in 40 different villages. Participatory reflection and analysis (PRA) tools were used during the meetings to facilitate identification. Focus group discussions were conducted with marginalized groups using PRA tools, and family ethnographies were carried out with marginalized families identified through the group discussions. Findings of the study revealed that, for the marginalized, health systems constitute more than just the delivery of health services; they also embrace the social determinants that surround the systems and their governance. The paper argues that, from a Freirian perspective, people’s participation should not be limited to knowledge generation. 
People must be seen as active users of the knowledge they generate, achieving the better health outcomes they seek in the time to come. PAR provides a pathway for the marginalized to play a role in health governance. The planned study dissemination shall engage the marginalized in a dialogue with service providers so that, together, a role for the marginalized can be outlined.
Keywords: participatory action research, health systems, marginalized, health services
Procedia PDF Downloads 283
27 An Anthropological Reading of the Italian Shockumentary Mondo Cane: Whiteness Made Visible and Racial Discourses
Authors: Claudia Pisano
Abstract:
The Italian shockumentary Mondo cane (1962), directed by Gualtiero Jacopetti, Paolo Cavara, and Franco Prosperi, has often been criticized for its supposedly racist and colonialist stances. Several critics consider it a film that proclaims, without explicitly mentioning it, the superiority of the white Euro-American individual over the people who do not belong to white-western societies. This paper proposes a different interpretation of the way in which Mondo cane engages with the discourse of race. Through an analysis of crucial scenes and of the relationship between images and voice-over, and through a comparison between the representation of non-white societies in Mondo cane and in some popular Italian newsreels of the 1950s-60s, such as 'La Settimana Incom' and 'Mondo Libero,' the paper argues that Mondo cane debunks the western-white superiority that, according to some critics, the film would promote. The continuous and rapid alternation of scenes set in the western world, for example in Europe or in the United States, and scenes set in exotic countries inhabited by non-white peoples highlights the commonalities between these far-away realities, rather than pointing out the superiority of the white-western one. In addition, the subtle irony employed by the voice-over distances Mondo cane from the newsreels that it closely resembles in its documentary style. Mondo cane's treatment and representation of race is analyzed in the light of the work of Australian Aboriginal anthropologist Aileen Moreton-Robinson, which is based on key concepts such as whiteness and whiteness invisibility. Whiteness is defined as the invisible and omnipresent norm on the basis of which everything that does not belong to the white world is labeled as an odd and inferior 'other.' To overcome racial discrimination, it is necessary to make whiteness visible; that is to say, to deprive it of the aura of normalcy and unquestionable righteousness that surrounds it.
This essay argues that Mondo cane participates in the process of making whiteness visible through the confrontation of white people with the visible 'other'. Because the film shows that the common features on which this confrontation is based are violence and bestiality, the paper suggests that the film does not support the idea of the white world being superior to the non-white; on the contrary, it underlines that the entire world is characterized by the same shocking savagery.
Keywords: irony, race, shockumentary, whiteness, whiteness invisibility
Procedia PDF Downloads 123
26 Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver
Authors: Paolo Sassi, Jorge Freiria, Gabriel Usera
Abstract:
In this work, a finite volume fluid flow solver is coupled with a discrete element method module for the simulation of the dynamics of free and elastic bodies in interaction with the fluid and between themselves. The open source fluid flow solver, caffa3d.MBRi, includes the capability to work with nested overlapping grids in order to easily refine the grid in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which the device is moving. The set of overlapping finer grids can be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through a two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through a Gaussian smoothing over the cells near the position of each object. The fishnet used in the demonstration case is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy and the load acting on each element of the system. When solving the system of ordinary differential equations that represents the motion of the elastic and flexible bodies, it was found that the fourth-order Runge-Kutta solver is the best tool in terms of performance, but it requires a finer time step than the fluid solver for the system to converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet and a set of free bodies being captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, are well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry.
Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
Keywords: computational fluid dynamics, discrete element method, fishnets, nested overlapping grids
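The lumped-mass representation and the fourth-order Runge-Kutta integration described in this abstract can be illustrated with a minimal sketch: two lumped masses joined by one elastic line, with a linear drag toward a uniform fluid velocity. The two-mass setup, all parameter values, and the `rk4_step` helper are illustrative assumptions, not the actual caffa3d.MBRi coupling.

```python
import numpy as np

def forces(x, v, k, rest_len, c_drag, u_fluid):
    # Elastic force along the line joining the two lumped masses,
    # plus a linear drag toward the local fluid velocity (illustrative).
    d = x[1] - x[0]
    length = np.linalg.norm(d)
    t = d / length
    f_el = k * (length - rest_len) * t      # spring force on mass 0
    f = np.array([f_el, -f_el])             # equal and opposite on mass 1
    f += c_drag * (u_fluid - v)             # drag on each mass
    return f

def rk4_step(x, v, dt, m, *args):
    # Classical fourth-order Runge-Kutta step for (x, v) with a = F/m.
    def acc(x_, v_):
        return forces(x_, v_, *args) / m
    k1x, k1v = v, acc(x, v)
    k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = v + dt * k3v, acc(x + dt * k3x, v + dt * k3v)
    x_new = x + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    v_new = v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x_new, v_new

# Two masses, initially stretched line (rest length 1.0) in a uniform stream.
x = np.array([[0.0, 0.0], [1.2, 0.0]])
v = np.zeros((2, 2))
for _ in range(4000):  # 4 s of physical time at dt = 1 ms
    x, v = rk4_step(x, v, 1e-3, 0.1, 50.0, 1.0, 0.5, np.array([0.3, 0.0]))
```

After the drag has damped the elastic oscillation, the line relaxes to its rest length and both masses are advected at the fluid velocity, which is the qualitative behavior the abstract describes for the net elements.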
Procedia PDF Downloads 416
25 West Nile Virus in North-Eastern Italy: Overview of Integrated Surveillance Activities
Authors: Laura Amato, Paolo Mulatti, Fabrizio Montarsi, Matteo Mazzucato, Laura Gagliazzo, Michele Brichese, Manlio Palei, Gioia Capelli, Lebana Bonfanti
Abstract:
West Nile virus (WNV) re-emerged in north-eastern Italy in 2008, ten years after its first appearance in Tuscany. In 2009, a national surveillance programme was implemented, and it was re-modulated in north-eastern Italy in 2011. Here, we present the results of surveillance activities in 2008-2016 in the north-eastern Italian regions, with inferences on the WNV epidemiological trend in the area. The re-modulated surveillance programmes aimed at early detection of WNV seasonal reactivation by searching for IgM antibodies in horses. In 2013, the surveillance plans were further modified to include a risk-based approach. Spatial analysis techniques, including Bernoulli space-time scan statistics, were applied to the results of the 2010–2012 surveillance on mosquitoes, equines, and humans to identify areas where WNV reactivation was more likely to occur. From 2008 to 2016, resident horses tested positive for anti-WNV antibodies on a yearly basis (503 cases), also in areas where WNV circulation was not detected in mosquito populations. Surveillance activities detected 26 syndromic cases in horses, 102 infected mosquito pools and WNV in 18 dead wild birds. Human cases were also recurrently detected in the study area during the surveillance period (68 cases of West Nile neuroinvasive disease). The recurrent identification of WNV in animals, mosquitoes, and humans indicates that the virus has likely become endemic in the area. In 2016, findings of WNV positives in horses or mosquitoes were included as triggers for enhancing screening activities in humans. The evolution of the epidemiological situation calls for continuous and accurate surveillance measures. The results of the 2013-2016 surveillance indicate that the risk-based approach was effective in the early detection of seasonal reactivation of WNV, a key factor of the integrated surveillance strategy in endemic areas.
Keywords: arboviruses, horses, Italy, surveillance, West Nile virus, zoonoses
Procedia PDF Downloads 354
24 Define Immersive Need Level for Optimal Adoption of Virtual Worlds with BIM Methodology
Authors: Simone Balin, Cecilia M. Bolognesi, Paolo Borin
Abstract:
In the construction industry, there is a large amount of data and interconnected information. To manage this information effectively, a transition to the immersive digitization of information processes is required. This transition is important to improve knowledge circulation, product quality, production sustainability and user satisfaction. However, there is currently no common definition of immersion in the construction industry, which leads to misunderstandings and limits the use of advanced immersive technologies. Furthermore, the lack of guidelines and of a common vocabulary causes interested actors to abandon the virtual world after the first collaborative steps. This research aims to define the optimal use of immersive technologies in the AEC sector, particularly for collaborative processes based on the BIM methodology. Additionally, the research focuses on creating classes and levels to structure and define guidelines and a vocabulary for the use of the "Immersive Need Level." This concept, matured by recent technological advancements, aims to enable a broader application of state-of-the-art immersive technologies, avoiding misunderstandings, redundancies, or paradoxes. While the concept of "Informational Need Level" has been well clarified by the recent UNI EN 17412-1:2021 standard, when it comes to immersion, current regulations and literature provide only some hints about the technology and related equipment, leaving the procedural approach unexplored and open to the user's free interpretation. Therefore, once the necessary knowledge and information are acquired (Informational Need Level), it is possible to transition to an Immersive Need Level that involves the practical application of the acquired knowledge, exploring scenarios and solutions in a more thorough and detailed manner, with user involvement, via different immersion scales, in the design, construction or management process of a building or infrastructure.
The need for information constitutes the basis for acquiring relevant knowledge and information, while the immersive need can manifest itself later, once a solid information base has been established, using the senses and developing immersive awareness. This new approach could solve the problem of inertia among AEC industry players in adopting and experimenting with new immersive technologies, expanding collaborative iterations and the range of available options.
Keywords: AEC industry, immersive technology (IMT), virtual reality, augmented reality, building information modeling (BIM), decision making, collaborative process, information need level, immersive level of need
Procedia PDF Downloads 97
23 Gabriel Marcel and Friedrich Nietzsche: Existence and Death of God
Authors: Paolo Scolari
Abstract:
Nietzschean thought flows like a current throughout Marcel's philosophy. Marcel is in constant dialogue with him and pays homage to him, making him one of the most eminent representatives of existential thought. His enthusiasm is triggered by Nietzsche's phrase 'God is dead,' the fil rouge that ties together all of the Nietzschean references scattered through Marcelian texts. The death of God is the theme which emphasises both the greatness and, simultaneously, the tragedy of Nietzsche. Marcel wants to restore to the idea 'God is dead' its original meaning: a tragic existential characteristic that imitators of Nietzsche seem to have blurred. An interpretation that Marcel achieves by aiming at a double target. On the one hand, he removes the heavy metaphysical suit that Nietzsche's interpreters – Heidegger especially – have made his aphorisms on the death of God wear. On the other hand, he removes a stratum of trivialisation which takes the aphorisms out of context and transforms them into advertising slogans – here Sartre becomes the target. In the lecture Nietzsche: l'homme devant la mort de dieu, Marcel hurls himself against the metaphysical Heidegger interpretation of the death of God: a hermeneutical proposal that is definitely original, but also a bit too abstract; an interpretation without bite, which does not grasp the tragic existential weight of the original Nietzschean idea. 'We are probably on the wrong road,' he announces, 'when at all costs, like Heidegger, we want to make a metaphysic out of Nietzsche.' Marcel also criticizes Sartre, who lands in Geneva and reacts to the journalists by saying: 'Gentlemen, God is dead.' Marcel only needs this impromptu exclamation to understand how Sartre misinterprets the meaning of the death of God. Sartre misses the existential sense of this idea, losing it in favour of sensationalism and trivialisation.
Marcel then wipes the slate clean of these two limited interpretations of the declaration of the death of God. This is much more than a metaphysical quarrel, and not at all comparable to any advertising slogan. Behind the cry 'God is dead' there is the existence of an anguished man who experiences in his solitude the actual death of God. A man who has killed God with his own hands, haunted by the chill that from now on he will have to live in a completely different way. The death of God, however, is not the end. Marcel spots a new beginning at the point at which nihilism is overcome and the Übermensch is born. Dialoguing with Nietzsche, he realizes he is in the presence of a great spirit that has contributed to the renewal of a spiritual horizon. He descends to the most profound depths of his thought, aware that the way out lies really far below, in the remotest areas of existence. The ambivalence of Nietzsche does not scare him. Rather, such a thought, characterised by contradiction, will simultaneously be infinitely dangerous and infinitely healthy.
Keywords: Nietzsche's Death of God, Gabriel Marcel, Heidegger, Sartre
Procedia PDF Downloads 235
22 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning
Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga
Abstract:
Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (i.e., car engines or thermal power stations). The same process holds for industrial plants as well. What has to be investigated – and this is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking into account NO₂) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board into which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time to have an input for both week and weekend days; in this way it will be possible to see how the situation changes during the week.
The novelty is that the data will be compared to check if there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to fit on a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help to choose the right mitigation solutions to be applied in the area of analysis, because it will make it possible to solve both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, noise and pollution mapping, and the analysis.
Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter
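The conversion of the raw sensor values onto a common 0-100% scale and the correlation check described above can be sketched as follows; the readings are invented for illustration, not actual measurements from the Mestre campaign.

```python
# Hypothetical hourly readings from one coupled sensor pair (values invented
# for illustration): equivalent noise level in dBA and NO2 in ug/m3.
noise_dba = [55.0, 62.0, 70.0, 74.0, 68.0, 58.0]
no2_ugm3  = [18.0, 30.0, 52.0, 60.0, 45.0, 22.0]

def to_percent(series):
    # Min-max normalisation onto a common 0-100 % scale, as described above.
    lo, hi = min(series), max(series)
    return [100.0 * (x - lo) / (hi - lo) for x in series]

def pearson(xs, ys):
    # Plain Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

noise_pct = to_percent(noise_dba)
no2_pct = to_percent(no2_ugm3)
r = pearson(noise_pct, no2_pct)
```

Since Pearson correlation is invariant under the linear rescaling, the percentage conversion matters for the comparative graphs and maps rather than for the correlation value itself; a design choice worth keeping in mind when interpreting the plots.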
Procedia PDF Downloads 211
21 Individual Differences in Affective Neuroscience Personality Traits Predict Several Dimensions of Psychological Wellbeing. A Cross-Sectional Study in Healthy Subjects
Authors: Valentina Colonnello, Paolo Maria Russo
Abstract:
Decades of cross-species affective neuroscience research by Panksepp and others have identified basic evolutionarily preserved subcortical emotional systems that humans share with mammals and many vertebrates. These primary emotional systems encode unconditional affective responses and contribute to the development of personality traits throughout ontogenesis and interactions with the environment. The Affective Neuroscience Personality Scale (ANPS) measures individual differences in affective personality traits associated with the basic emotional systems of CARE, PLAY, SEEKING, SADNESS, FEAR, and ANGER, along with Spirituality, which is a more cognitively and socially refined expression of affectivity. Though the ANPS’s power to predict human psychological distress has been documented, to the best of our knowledge, its predictive power for psychological wellbeing has not been explored. This study therefore investigates the relationship between affective neuroscience traits and psychological wellbeing facets. Because the emotional systems are thought to influence cognitively-mediated mental processes about the self and the world, understanding the relationship between affective traits and psychological wellbeing is particularly relevant to understanding the affective dimensions of health. In a cross-sectional study, healthy participants (n = 402) completed the ANPS and the Psychological Wellbeing scale. Multiple regressions revealed that each facet of wellbeing was explained by two to four affective traits, and each trait was significantly related to at least one aspect of wellbeing. Specifically, SEEKING predicted all the wellbeing facets, except for positive relations; CARE predicted personal growth, positive relations, purpose in life, and self-acceptance; PLAY and, inversely, ANGER predicted positive relations; SADNESS inversely predicted autonomy, while FEAR inversely predicted purpose in life. 
SADNESS and FEAR inversely predicted environmental mastery and self-acceptance. Finally, Spirituality predicted personal growth, positive relations, and self-acceptance. These findings are the first to show the relationship between affective neuroscience personality traits and psychological wellbeing. They also call attention to the distinctive role of the FEAR and SADNESS (PANIC) traits in psychological wellbeing facets, thereby complementing or even overcoming the traditional personality approach to neuroticism as a global trait.
Keywords: affective neuroscience, individual differences, personality, wellbeing
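The kind of multiple regression reported above (one wellbeing facet regressed on the six ANPS traits) can be sketched with ordinary least squares; the data are synthetic, with coefficients chosen only to mimic the reported pattern for the purpose-in-life facet, and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 402  # same sample size as the study; the data below are synthetic

# Hypothetical standardised ANPS trait scores (predictors).
traits = ["SEEKING", "CARE", "PLAY", "SADNESS", "FEAR", "ANGER"]
X = rng.standard_normal((n, len(traits)))

# Synthetic "purpose in life" facet: driven positively by SEEKING and CARE,
# negatively by FEAR, plus noise - mimicking the pattern reported above.
beta_true = np.array([0.5, 0.3, 0.0, 0.0, -0.4, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
estimates = dict(zip(traits, coef[1:]))
```

With n = 402 the standard errors are small, so the fitted coefficients recover the generating pattern: clearly non-zero weights for SEEKING, CARE and FEAR, and near-zero weights for the traits that do not predict the facet.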
Procedia PDF Downloads 118
20 A Cooperative, Autonomous, and Continuously Operating Drone System Offered to Railway and Bridge Industry: The Business Model Behind
Authors: Paolo Guzzini, Emad Samuel M. Ebeid
Abstract:
Bridges and railways are critical infrastructures. Ensuring safety for transports using such assets is a primary goal, as it directly impacts the lives of people. However, improving safety could require increased investments in O&M, and therefore optimizing resource usage for asset maintenance becomes crucial. Drones4Safety (D4S), a European project funded under the H2020 Research and Innovation Action (RIA) program, aims to increase the safety of European civil transport by building a system that relies on 3 main pillars:
• Drones operating autonomously in swarm mode;
• Drones able to recharge themselves using inductive phenomena produced by transmission lines in the vicinity of the bridge and railway assets to be inspected;
• Acquired data analyzed with AI-empowered algorithms for defect detection.
This paper describes the business model behind this disruptive project. The business model is structured in 2 parts:
• The first part is focused on the design of the business model Canvas, to explain the value provided by the Drones4Safety project;
• The second part aims at defining a detailed financial analysis, with the target of calculating the IRR (Internal Rate of Return) and the NPV (Net Present Value) of the investment over a 7-year plan (2 years to run the project + 5 years post-implementation).
As to the financial analysis, 2 different points of view are assumed:
• The point of view of the Drones4Safety company in charge of designing, producing, and selling the new system;
• The point of view of the utility company that will adopt the new system in its O&M practices.
Assuming the point of view of the Drones4Safety company, 3 scenarios were considered:
• Selling the drones > revenues will be produced by the drones' sales;
• Renting the drones > revenues will be produced by the rental of the drones (with a time-based model);
• Selling the data acquisition service > revenues will be produced by the sales of pictures acquired by drones.
Assuming the point of view of a utility adopting the D4S system, a 4th scenario was analyzed, taking into account the decremental costs related to the change of operation and maintenance practices. The paper will show, for both companies, which key parameters most affect the business model and which scenarios are sustainable.
Keywords: a swarm of drones, AI, bridges, railways, drones4safety company, utility companies
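The NPV and IRR calculations at the core of the financial analysis can be sketched as follows; the cash-flow figures and the 8% discount rate are invented placeholders matching only the 7-year structure (2 project years + 5 post-implementation years), not the project's actual numbers.

```python
def npv(rate, cash_flows):
    # Net Present Value of a series of yearly cash flows, year 0 first.
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    # Internal Rate of Return: the discount rate at which NPV = 0,
    # found here by bisection on a bracketing interval.
    assert npv(lo, cash_flows) * npv(hi, cash_flows) < 0, "no sign change"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative 7-year plan: 2 project years of outlays, then 5 years of
# net revenues (all figures invented, in k-euro).
plan = [-400.0, -250.0, 120.0, 180.0, 220.0, 240.0, 260.0]
project_npv = npv(0.08, plan)   # 8 % discount rate, an assumption
project_irr = irr(plan)
```

A positive NPV at the chosen discount rate, or equivalently an IRR above it, is the criterion that makes a scenario sustainable from the investor's point of view; each of the four scenarios above would be run through the same calculation with its own cash-flow profile.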
Procedia PDF Downloads 137
19 Music Listening in Dementia: Current Developments and the Potential for Automated Systems in the Home: Scoping Review and Discussion
Authors: Alexander Street, Nina Wollersberger, Paul Fernie, Leonardo Muller, Ming Hung HSU, Helen Odell-Miller, Jorg Fachner, Patrizia Di Campli San Vito, Stephen Brewster, Hari Shaji, Satvik Venkatesh, Paolo Itaborai, Nicolas Farina, Alexis Kirke, Sube Banerjee, Eduardo Reck Miranda
Abstract:
Escalating neuropsychiatric symptoms (NPS) in people with dementia may lead to earlier care home admission. Music listening has been reported to stimulate cognitive function, potentially reducing agitation in this population. We present a scoping review, reporting on current developments and discussing the potential for music listening with related technology in managing agitation in dementia care. Of two searches for music listening studies, one focused on older people or people living with dementia, where music listening interventions, including technology, were delivered in participants' homes or in institutions to address neuropsychiatric symptoms, quality of life and independence. The second included any population, focusing on the use of music technology for health and wellbeing. In search one, 70/251 full texts were included. The majority reported either statistical significance (6, 8.5%), significance (17, 24.2%) or improvements (26, 37.1%). Agitation was specifically reported in 36 (51.4%). The second search included 51/99 full texts, reporting improvement (28, 54.9%), significance (11, 21.5%), statistical significance (1, 1.9%) and no difference compared to the control (6, 11.7%). The majority in the first search focused on mood and agitation, and in the second on mood and psychophysiological responses. Five studies used AI or machine learning systems to select music, all involving healthy controls and reporting benefits. Most studies in both reviews were not conducted in a home environment (review 1 = 12, 17.1%; review 2 = 11, 21.5%). Preferred music listening may help manage NPS in care home settings. Based on these and other data extracted in the review, a reasonable progression would be to co-design and test music listening systems and protocols for NPS in all settings, including people's homes. Machine learning and automated technology for music selection and arousal adjustment, driven by live biodata, have not been explored in dementia care.
Such approaches may help deliver the right music at the appropriate time in the required dosage, reducing the use of medication and improving quality of life.
Keywords: music listening, dementia, agitation, scoping review, technology
Procedia PDF Downloads 112
18 The Effect of Physical Guidance on Learning a Tracking Task in Children with Cerebral Palsy
Authors: Elham Azimzadeh, Hamidollah Hassanlouei, Hadi Nobari, Georgian Badicu, Jorge Pérez-Gómez, Luca Paolo Ardigò
Abstract:
Children with cerebral palsy (CP) have weak physical abilities, and their limitations may affect the performance of everyday motor activities. One of the most important and common debilitating factors in CP is the malfunction of the upper extremities in performing motor skills, and there is strong evidence that task-specific training may improve general upper limb function in this population. Augmented feedback, moreover, enhances the acquisition and learning of a motor task, and practice conditions may alter its difficulty; e.g., a reduced frequency of PG could make learning a motor task more challenging for this population. So, the purpose of this study was to investigate the effect of physical guidance (PG) on learning a tracking task in children with cerebral palsy (CP). Twenty-five independently ambulant children with spastic hemiplegic CP aged 7-15 years were randomly assigned to five groups. After the pre-test, the experimental groups participated in an intervention for eight sessions, with 12 trials during each session. The 0% PG group received no PG; the 25% PG group received PG for three trials; the 50% PG group received PG for six trials; the 75% PG group received PG for nine trials; and the 100% PG group received PG for all 12 trials. PG consisted of placing the experimenter's hand around the children's hand, guiding them to stay on track and complete the task. Learning was inferred from acquisition and delayed retention tests. The tests involved two blocks of 12 trials of the tracking task performed by all participants without any PG. They were asked to make the movement as accurate as possible (i.e., with fewer errors), and the total number of touches (errors) in the 24 trials was taken as the test score. The results showed that a higher frequency of PG led to more accurate performance during the practice phase. However, the group that received 75% PG performed significantly better than the other groups in the retention phase.
It is concluded that the optimal frequency of PG played a critical role in learning a tracking task in children with CP, and that this population may benefit from an optimal level of PG providing the appropriate amount of information, confirming the challenge point framework (CPF), which states that too much or too little information will retard learning a motor skill. Therefore, an optimum level of PG may help these children to identify appropriate patterns of a motor skill using the extrinsic information they receive through PG, and improve learning by activating intrinsic feedback mechanisms.
Keywords: cerebral palsy, challenge point framework, motor learning, physical guidance, tracking task
Procedia PDF Downloads 67
17 Adaptive Environmental Control System Strategy for Cabin Air Quality in Commercial Aircrafts
Authors: Paolo Grasso, Sai Kalyan Yelike, Federico Benzi, Mathieu Le Cam
Abstract:
The cabin air quality (CAQ) in commercial aircraft is of prime interest, especially in the context of the COVID-19 pandemic. Current Environmental Control Systems (ECS) rely on a prescribed fresh airflow per passenger to dilute contaminants. An adaptive ECS strategy is proposed, leveraging air sensing and filtration technologies to ensure a better CAQ. This paper investigates the CAQ level achieved in a commercial aircraft's cabin during various flight scenarios. The modeling and simulation analysis is performed in a Modelica-based environment describing the dynamic behavior of the system. The model includes the following three main systems: cabin, recirculation loop and air-conditioning pack. The cabin model evaluates the thermo-hygrometric conditions and the air quality in the cabin depending on the number of passengers and crew members, the outdoor conditions and the conditions of the air supplied to the cabin. The recirculation loop includes models of the recirculation fan, ordinary and novel filtration technology, mixing chamber and outflow valve. The air-conditioning pack includes models of the heat exchangers and turbomachinery needed to condition the hot pressurized air bled from the engine, as well as of selected contaminants originating from the outside or bled from the engine. Different ventilation control strategies are modeled and simulated. Currently, a limited understanding of contaminant concentrations in the cabin and the lack of standardized and systematic methods to collect and record data constitute a challenge in establishing a causal relationship between CAQ and passengers' comfort. As a result, contaminants are neither measured nor filtered during flight, and the current sub-optimal way to avoid their accumulation is their dilution with the fresh air flow. However, the use of a prescribed amount of fresh air comes with a cost, making the ECS the most energy-demanding non-propulsive system on an aircraft.
In such a context, this study shows that an ECS based on a reduced and adaptive fresh air flow, relying on air sensing and filtration technologies, provides promising results in terms of CAQ control. The comparative simulation results demonstrate that the proposed adaptive ECS brings substantial improvements to the CAQ, both in controlling the asymptotic values of the contaminant concentration and in mitigating hazardous scenarios, such as fume events. Original architectures allowing for adaptive control of the inlet air flow rate based on monitored CAQ will change the requirements for filtration systems and redefine the ECS operation.
Keywords: cabin air quality, commercial aircraft, environmental control system, ventilation
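The trade-off described above, prescribed fresh-air dilution versus filtered recirculation, can be illustrated with a single-zone, well-mixed mass balance. This toy model and all its numbers (cabin volume, flows, filter efficiency, contaminant source strength) are assumptions for illustration, far simpler than the Modelica model described in the abstract.

```python
def simulate_cabin(q_fresh, q_recirc, filter_eff, source, volume,
                   hours=2.0, dt=1.0):
    # Well-mixed single-zone mass balance (explicit Euler), a deliberately
    # simplified stand-in for the Modelica cabin model described above:
    #   V * dC/dt = S - q_fresh * C - q_recirc * filter_eff * C
    # C in mg/m3, flows in m3/s, source S in mg/s, dt in seconds.
    c = 0.0
    for _ in range(int(hours * 3600 / dt)):
        dc = (source - q_fresh * c - q_recirc * filter_eff * c) / volume
        c += dc * dt
    return c

# Illustrative numbers (not from the paper): 150 m3 cabin, 0.1 mg/s source.
# Baseline: fresh-air dilution only, recirculated air unfiltered.
baseline = simulate_cabin(q_fresh=1.2, q_recirc=1.2, filter_eff=0.0,
                          source=0.1, volume=150.0)
# Adaptive: half the fresh air, more recirculation through a 90 % filter.
adaptive = simulate_cabin(q_fresh=0.6, q_recirc=1.8, filter_eff=0.9,
                          source=0.1, volume=150.0)
```

The steady-state concentration is S/(q_fresh + q_recirc x filter_eff), so in this sketch the adaptive configuration reaches a lower asymptotic concentration despite using half the fresh air, which is the qualitative effect the simulation results above report.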
Procedia PDF Downloads 96
16 Designing Form, Meanings, and Relationships for Future Industrial Products. Case Study Observation of PAD
Authors: Elisabetta Cianfanelli, Margherita Tufarelli, Paolo Pupparo
Abstract:
The dialectical mediation between desires and objects, or between mass production and consumption, continues to evolve over time. This relationship is influenced both by variable geometries of contexts that are distant from the mere design of product form and by aspects rooted in the very definition of industrial design. In particular, the overcoming of macro-areas of innovation in the technological, social, cultural, formal, and morphological spheres, supported by recent theories in critical and speculative design, seems to be moving further and further away from the design of the formal dimension of advanced products. The articulated fabric of theories and practices that feeds the definition of "hyperobjects" rather than objects describes a common tension in all areas of design and production of industrial products. The latter are increasingly detached from the design of form and meaning in mass production, thus losing the quality of products capable of social transformation. For years we have been living in a transformative moment as regards the design process in the definition of the industrial product. We are faced with a dichotomy in which there is, on the one hand, a reactionary aversion to the new techniques of industrial production and, on the other hand, a sterile adoption of the techniques of mass production that we can now consider traditional. This ambiguity becomes even more evident when we talk about industrial products, and we realize that we are moving further and further away from the concept of "form" as a synthesis of a design thought aimed at the aesthetic-emotional component as well as the functional one. The design of forms and their contents, as statutes of social acts, allows us to investigate the tension on mass production that crosses seasons, trends, technicalities, and sterile determinisms.
Design culture has always determined the formal qualities of objects as a sum of aesthetic characteristics and of the functional and structural relationships that define a product as a coherent unit. The contribution proposes a reflection and a series of practical research experiences on the form of advanced products. This form is understood as a kaleidoscope of relationships: the search for an identity, the desire for democratization, and, between these two, the exploration of the aesthetic factor. The study of form also corresponds to the study of production processes, technological innovations, the definition of standards, distribution, advertising, and the vicissitudes of taste and lifestyles. Specifically, we investigate how the genesis of new forms for new meanings introduces a change in the related innovative production techniques. It therefore becomes fundamental to investigate, through the reflections and case studies presented in the contribution, the new techniques of production and elaboration of product forms as a new immanent and determining element of the design process.
Keywords: industrial design, advanced product design, mass production, new meanings
15 Through Additive Manufacturing. A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
The recent evolutions in innovation processes and in the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity that contemporary life describes define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of Additive Manufacturing, but also of IoT and AI technologies, continuously puts us in front of new paradigms regarding design as a social activity. From the point of view of application, the totality of these technologies describes a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of a provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. The contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model: the best-known fields of application are described, and the focus then shifts to specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. 
The trajectory described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact bears a signature that defines it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The envisaged result indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with design culture.
Keywords: decision making, design heuristics, product design, product design process, design paradigms
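As a purely illustrative sketch of such a human-algorithm combination, one can imagine the designer's "signature" encoded as a rule that filters algorithmically generated candidates; every name and constraint below is hypothetical and not drawn from the case studies:

```python
def generate_proportions(widths, heights, signature_rule):
    """Enumerate candidate width/height pairs and keep those that satisfy
    a designer-defined rule (the 'signature' acting inside the algorithm)."""
    return [(w, h) for w in widths for h in heights if signature_rule(w, h)]

# Hypothetical signature: keep only proportions close to the golden ratio.
def golden(w, h):
    return abs(w / h - 1.618) < 0.05

candidates = generate_proportions([89, 100, 120], [55, 60, 75], golden)
# The algorithm explores the quantitative space; the rule carries the
# qualitative, designer-chosen value it must respect.
```

The point of the sketch is only the division of labor: the enumeration is mechanical, while the rule remains a human design decision internal to the algorithm.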
14 Nanomechanical Characterization of Healthy and Tumor Lung Tissues at Cell and Extracellular Matrix Level
Authors: Valeria Panzetta, Ida Musella, Sabato Fusco, Paolo Antonio Netti
Abstract:
The study of the biophysics of living cells has drawn attention to the pivotal role of the cytoskeleton in many cell functions, such as mechanics, adhesion, proliferation, migration, differentiation, and neoplastic transformation. In particular, during the complex process of malignant transformation and invasion, the cell cytoskeleton devolves from a rigid and organized structure to a more compliant state, which confers on cancer cells a great ability to migrate and adapt to the extracellular environment. In order to better understand the malignant transformation process from a mechanical point of view, it is necessary to evaluate the direct crosstalk between the cells and their surrounding extracellular matrix (ECM) in a context close to in vivo conditions. In this study, human biopsy tissues of lung adenocarcinoma were analyzed in order to define their mechanical phenotype at the cell and ECM level, using the particle tracking microrheology (PTM) technique. Polystyrene beads (500 nm) were introduced into the sample slice. The motion of the beads was obtained by tracking their displacements across cell cytoskeleton and ECM structures, and mean squared displacements (MSDs) were calculated from the bead trajectories. It has already been demonstrated that the amplitude of the MSD is inversely related to the mechanical properties of the intracellular and extracellular microenvironment. For this reason, the MSDs of particles introduced into the cytoplasm and ECM of healthy and tumor tissues were compared. PTM analyses showed that cancerous transformation compromises the mechanical integrity of cells and extracellular matrix. In particular, the MSD amplitudes in cells of adenocarcinoma were greater than in cells of normal tissues. The increased motion is probably associated with a less structured cytoskeleton and consequently with an increased deformability of the cells. 
Further, cancer transformation is also accompanied by extracellular matrix stiffening, as confirmed by the decrease of the matrix MSDs in tumor tissue, a process that promotes tumor proliferation and invasiveness by activating typical oncogenic signaling pathways. In addition, a clear correlation between cell MSDs and tumor grade was found. MSDs increase when the tumor grade passes from 2 to 3, indicating that cells undergo a trans-differentiation process during tumor progression. ECM stiffening is not dependent on tumor grade, but the tumor stage was found to be strictly correlated with both cell and ECM mechanical properties. In fact, a higher stage is assigned to tumors that have spread to regional lymph nodes, which are characterized by an up-regulation of different ECM proteins, such as collagen I fibers. These results indicate that PTM can be used to obtain nanomechanical characterization at different scale levels in an interpretative and diagnostic context.
Keywords: cytoskeleton, extracellular matrix, mechanical properties, particle tracking microrheology, tumor
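The MSD computation at the core of PTM is straightforward to reproduce. A minimal sketch, assuming 2D bead positions sampled at a fixed frame rate (the function name and array layout are illustrative, not taken from the study):

```python
import numpy as np

def mean_squared_displacement(trajectory, max_lag):
    """MSD(tau) for one bead trajectory, averaged over all start times.

    trajectory: (T, 2) array of x, y bead positions at successive frames.
    Returns an array of MSD values for lag times 1..max_lag (in frames).
    """
    trajectory = np.asarray(trajectory, dtype=float)
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        # Displacement vectors between all frame pairs separated by `lag`.
        disp = trajectory[lag:] - trajectory[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd
```

In practice, the per-bead MSDs would be ensemble-averaged over many beads per tissue region before comparing cytoplasm against ECM, or healthy against tumor tissue.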
13 A Comparative Study of the Tribological Behavior of Bilayer Coatings for Machine Protection
Authors: Cristina Diaz, Lucia Perez-Gandarillas, Gonzalo Garcia-Fuentes, Simone Visigalli, Roberto Canziani, Giuseppe Di Florio, Paolo Gronchi
Abstract:
During their lifetime, industrial machines are often subjected to extreme chemical, mechanical, and thermal conditions. In some cases, the loss of efficiency comes from the degradation of the surface as a result of its exposure to abrasive environments that can cause wear. This is a common problem in industries of diverse nature, such as the food, paper, or concrete industries, among others. For this reason, careful material selection is of high importance. In the machine design context, stainless steels such as AISI 304 and 316 are widely used. However, the severity of the external conditions can require additional protection for the steel, and coating solutions are sometimes demanded in order to extend the lifespan of these materials. Therefore, the development of effective coatings with high wear resistance is of the utmost technological relevance. In this research, bilayer coatings made of Titanium-Tantalum, Titanium-Niobium, Titanium-Hafnium, and Titanium-Zirconium were developed by magnetron sputtering, a PVD (Physical Vapor Deposition) technique. Their tribological behavior was measured and evaluated under different environmental conditions. Two kinds of steels were used as substrates: AISI 304 and AISI 316. For comparison with these materials, a titanium alloy substrate was also employed. Regarding the characterization, the wear rate and friction coefficient were evaluated with a tribo-tester, using a pin-on-ball configuration with different lubricants, such as tomato sauce, wine, olive oil, wet compost, a mix of sand and concrete with water, and NaCl, to approximate the results to real extreme conditions. In addition, topographical images of the wear tracks were obtained to gain more insight into the wear behavior, and scanning electron microscope (SEM) images were taken to evaluate the adhesion and quality of the coating. 
The characterization was completed with the measurement of nanoindentation hardness and elastic modulus. Concerning the results, the thicknesses of the samples varied from 100 nm (Ti-Zr layer) to 1.4 µm (Ti-Hf layer), and SEM images confirmed that the addition of the Ti layer improved the adhesion of the coatings. Moreover, the results pointed out that these coatings increased the wear resistance in comparison with the original substrates under environments of different severity. Furthermore, nanoindentation results showed an improvement of the elastic strain to failure and a high modulus of elasticity (approximately 200 GPa). In conclusion, Ti-Ta, Ti-Zr, Ti-Nb, and Ti-Hf are very promising and effective coatings in terms of tribological behavior, considerably improving the wear resistance and friction coefficient of typically used machine materials.
Keywords: coating, stainless steel, tribology, wear
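The abstract reports wear rates from pin-on-ball tests without stating how they are expressed; a common convention in tribology (an assumption here, not given in the source) is the specific wear rate k = V / (F · s), i.e., worn volume per unit normal load per unit sliding distance. A minimal sketch under that assumption:

```python
def specific_wear_rate(volume_loss_mm3, load_n, sliding_distance_m):
    """Specific wear rate k = V / (F * s), in mm^3 / (N * m).

    volume_loss_mm3: worn volume measured from the wear track (mm^3).
    load_n: applied normal load (N).
    sliding_distance_m: total sliding distance of the test (m).
    """
    if load_n <= 0 or sliding_distance_m <= 0:
        raise ValueError("load and sliding distance must be positive")
    return volume_loss_mm3 / (load_n * sliding_distance_m)

# Hypothetical example: 0.5 mm^3 lost under 10 N over 100 m of sliding.
k = specific_wear_rate(0.5, 10.0, 100.0)  # 5e-4 mm^3/(N*m)
```

Comparing k for coated and uncoated substrates under the same load and distance gives a load-independent measure of the improvement the coatings provide.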