Search results for: Bulk Viscous Bianchi Type V Cosmological Model
652 Selective Extraction of Lithium from Native Geothermal Brines Using Lithium-ion Sieves
Authors: Misagh Ghobadi, Rich Crane, Karen Hudson-Edwards, Clemens Vinzenz Ullmann
Abstract:
Lithium, often termed 'white gold', is recognized as the critical energy metal of the 21st century, comparable in importance to coal in the 19th century and oil in the 20th. Current global demand for lithium, estimated at 0.95-0.98 million metric tons (Mt) of lithium carbonate equivalent (LCE) annually in 2024, is projected to rise to 1.87 Mt by 2027 and 3.06 Mt by 2030. Despite anticipated short-term stability in supply and demand, meeting the forecasted 2030 demand will require the lithium industry to develop an additional capacity of 1.42 Mt of LCE annually, exceeding current planned and ongoing efforts. Brine resources constitute nearly 65% of global lithium reserves, underscoring the importance of exploring lithium recovery from underutilized sources, especially geothermal brines. However, conventional lithium extraction from brine deposits faces challenges due to its time-intensive process, low efficiency (30-50% lithium recovery), unsuitability for low lithium concentrations (<300 mg/l), and notable environmental impacts. Addressing these challenges, direct lithium extraction (DLE) methods have emerged as promising technologies capable of economically extracting lithium even from low-concentration brines (>50 mg/l) with high recovery rates (75-98%). However, most studies (70%) have focused on synthetic rather than native (natural) brines, with limited application of these approaches in real-world case studies or industrial settings. This study aims to bridge this gap by investigating a geothermal brine sample collected from a real case study site in the UK. A Mn-based lithium-ion sieve (LIS) adsorbent was synthesized and employed to selectively extract lithium from the sample brine. Adsorbents with a Li:Mn molar ratio of 1:1 demonstrated superior lithium selectivity and adsorption capacity. 
Furthermore, the pristine Mn-based adsorbent was modified through transition-metal doping, resulting in enhanced lithium selectivity and adsorption capacity. The modified adsorbent exhibited a higher separation factor for lithium over major co-existing cations such as Ca, Mg, Na, and K, with separation factors exceeding 200. The adsorption behaviour was well described by the Langmuir model, indicating monolayer adsorption, and the kinetics followed a pseudo-second-order mechanism, suggesting chemisorption at the solid surface. Thermodynamically, negative ΔG° values and positive ΔH° and ΔS° values were observed, indicating the spontaneous and endothermic nature of the adsorption process.
Keywords: adsorption, critical minerals, DLE, geothermal brines, geochemistry, lithium, lithium-ion sieves
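The separation factor and Langmuir analysis described in the abstract can be sketched numerically. The snippet below uses the standard distribution-coefficient ratio for the separation factor and a linearised Langmuir fit (c/q plotted against c); all concentrations and capacities are synthetic illustrations, not the study's measurements.

```python
# Sketch: separation factor and linearised Langmuir isotherm fit.
# All numeric values are illustrative, not data from the study.

def distribution_coefficient(q_e, c_e):
    # K_d = amount adsorbed per unit mass / equilibrium concentration (L/g)
    return q_e / c_e

def separation_factor(q_li, c_li, q_m, c_m):
    # alpha(Li/M) = K_d(Li) / K_d(M); values > 200 indicate strong Li selectivity
    return distribution_coefficient(q_li, c_li) / distribution_coefficient(q_m, c_m)

def langmuir_fit(c, q):
    # Linearised Langmuir: c/q = c/q_max + 1/(K*q_max)
    # a least-squares line of y = c/q against x = c gives slope = 1/q_max
    n = len(c)
    x = c
    y = [ci / qi for ci, qi in zip(c, q)]
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    q_max = 1.0 / slope
    K = slope / intercept
    return q_max, K

# Synthetic equilibrium data generated from q_max = 30 mg/g, K = 0.05 L/mg
c_e = [10.0, 50.0, 100.0, 200.0, 400.0]
q_e = [30.0 * 0.05 * c / (1.0 + 0.05 * c) for c in c_e]
q_max, K = langmuir_fit(c_e, q_e)
print(round(q_max, 2), round(K, 4))  # recovers the generating parameters
```

Because the synthetic data are exactly Langmuir-shaped, the linearised fit recovers q_max and K; with real brine data one would compare this fit against Freundlich or other isotherms.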
Procedia PDF Downloads 446
651 Neuroprotection against N-Methyl-D-Aspartate-Induced Optic Nerve and Retinal Degeneration Changes by Philanthotoxin-343 to Alleviate Visual Impairments Involve Reduced Nitrosative Stress
Authors: Izuddin Fahmy Abu, Mohamad Haiqal Nizar Mohamad, Muhammad Fattah Fazel, Renu Agarwal, Igor Iezhitsa, Nor Salmah Bakar, Henrik Franzyk, Ian Mellor
Abstract:
Glaucoma is the global leading cause of irreversible blindness. Currently, the available treatment strategy only involves lowering intraocular pressure (IOP); however, the condition often progresses despite lowered or normal IOP in some patients. N-methyl-D-aspartate receptor (NMDAR) excitotoxicity often occurs in neurodegeneration-related glaucoma; thus it is a relevant target for developing a therapy based on a neuroprotection approach. This study investigated the effects of Philanthotoxin-343 (PhTX-343), an NMDAR antagonist, on neuroprotection in NMDA-induced glaucoma to alleviate visual impairments. Male Sprague-Dawley rats were equally divided: Groups 1 (control) and 2 (glaucoma) were intravitreally injected with phosphate buffer saline (PBS) and NMDA (160 nM), respectively, while group 3 was pre-treated with PhTX-343 (160 nM) 24 hours prior to NMDA injection. Seven days post-treatment, rats were subjected to visual behavior assessments and subsequently euthanized to harvest their retina and optic nerve tissues for histological analysis and determination of nitrosative stress levels using 3-nitrotyrosine ELISA. Visual behavior assessments via open field, object, and color recognition tests demonstrated poor visual performance in glaucoma rats, indicated by high exploratory behavior. PhTX-343 pre-treatment appeared to preserve visual abilities, as all test results were significantly improved (p < 0.05). H&E staining of the retina showed a marked reduction of ganglion cell layer thickness in the glaucoma group; in contrast, PhTX-343 significantly increased it by 1.28-fold (p < 0.05). PhTX-343 also increased the number of cell nuclei/100 μm² within the inner retina by 1.82-fold compared to the glaucoma group (p < 0.05). Toluidine blue staining of optic nerve tissues showed that PhTX-343 reduced the degeneration changes compared to the glaucoma group, which exhibited vacuolation over all sections. 
PhTX-343 also decreased retinal 3-nitrotyrosine concentration by 1.74-fold compared to the glaucoma group (p < 0.05). All results in the PhTX-343 group were comparable to controls (p > 0.05). We conclude that PhTX-343 protects against NMDA-induced changes and visual impairments in the rat model by reducing nitrosative stress levels.
Keywords: excitotoxicity, glaucoma, nitrosative stress, NMDA receptor, N-methyl-D-aspartate, philanthotoxin, visual behaviour
Procedia PDF Downloads 135
650 Considering International/Local Peacebuilding Partnerships: The Stoplights Analysis System
Authors: Charles Davidson
Abstract:
This paper presents the Stoplight Analysis System of Partnering Organizations Readiness, a structured framework for evaluating the feasibility of conflict resolution collaborations, which is especially crucial in conflict areas. The system employs a colour-coded approach with specific assessment points, with implications for more informed decision-making and improved outcomes in peacebuilding initiatives. Derived from a total of 40 years of practical peacebuilding experience from the project's two researchers, as well as interviews with various other peacebuilding actors, the framework is designed to facilitate effective collaboration in international/local peacebuilding partnerships by evaluating the readiness of both the potential partner organisations and the location of the proposed project. The system categorises potential partnerships into three distinct indicators: Red (no-go), Yellow (requires further research), and Green (promising, go ahead). Within each category, specific points are identified for assessment, guiding decision-makers in evaluating the feasibility and potential success of collaboration. The Red category signals significant barriers, prompting an immediate stop to consideration of the partnership. The Yellow category encourages deeper investigation to determine whether potential issues can be mitigated, while the Green category signifies organisations deemed ready for collaboration. This systematic and structured approach empowers decision-makers to make informed choices, enhancing the likelihood of successful and mutually beneficial partnerships. Methodologically, this paper draws on interviews with peacebuilders from around the globe, scholarly research on extant strategies, and a collaborative review of the two authors' own programming from their time in the field. 
This method, as a formalised model, has been employed for the past two years across a wide range of partnership considerations and has been adjusted in light of this field experimentation. This research holds significant importance in the field of conflict resolution as it provides a systematic and structured approach to evaluating peacebuilding partnerships. In conflict-affected regions, where the dynamics are complex and challenging, the Stoplight Analysis System offers decision-makers a practical tool to assess the readiness of partnering organisations. This approach can enhance the efficiency of conflict resolution efforts by ensuring that resources are directed towards partnerships with a higher likelihood of success, ultimately contributing to more effective and sustainable peacebuilding outcomes.
Keywords: collaboration, conflict resolution, partnerships, peacebuilding
Procedia PDF Downloads 63
649 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool to reach energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior – thermal and electricity, indoor environment, inhabitants' comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end-use. These features are then compared with the collected post-occupancy data. 
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. Results of this study provide an analysis of the energy performance gap on an existing residential case study under deep retrofit actions. It highlights the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
Keywords: calibration, building energy modeling, performance gap, sensor network
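The step-by-step calibration loop described above can be illustrated with a toy model: energy-driving inputs are swapped from standardized scenarios to measured field data one at a time, and the simulation-vs-measurement gap is tracked at each step. The `simulate` function, its input names, and the bias factors are hypothetical stand-ins for a real engine such as Pleiades, not the study's model.

```python
# Illustrative sketch of step-by-step calibration: each standardized scenario
# still in place inflates the simulated consumption by a hypothetical bias.
measured_kwh = 1000.0

def simulate(inputs):
    # toy surrogate model: biased standard scenarios inflate the result
    bias = {"setpoint": 1.15, "occupancy": 1.10, "dhw_usage": 1.05}
    result = 1000.0
    for name, source in inputs.items():
        if source == "standard":  # standardized scenario not yet replaced
            result *= bias[name]
    return result

inputs = {"setpoint": "standard", "occupancy": "standard", "dhw_usage": "standard"}
gaps = []
for name in inputs:              # replace features in order of sensitivity
    inputs[name] = "field_data"
    gap = (simulate(inputs) - measured_kwh) / measured_kwh
    gaps.append(round(gap, 4))
print(gaps)  # relative performance gap shrinks as field data replace scenarios
```

The monotonically shrinking gap list mirrors the paper's progressive replacement of standardized scenarios with sensor data.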
Procedia PDF Downloads 158
648 Performance Evaluation of Fingerprint, Auto-Pin and Password-Based Security Systems in Cloud Computing Environment
Authors: Emmanuel Ogala
Abstract:
Cloud computing has been envisioned as the next-generation architecture of the Information Technology (IT) enterprise. In contrast to traditional solutions, where IT services are under physical, logical and personnel controls, cloud computing moves the application software and databases to large data centres, where the management of the data and services may not be fully trustworthy. Because such systems are open to the whole world, they are subject to continual attempts at unauthorized access. This research contributes to the improvement of cloud computing security for better operation. The work is motivated by two problems. First, the observed easy access to cloud computing resources and the complexity of attacks on the vital cloud computing data system NIC require that dynamic security mechanisms evolve to stay capable of preventing illegitimate access. Second, there is a lack of good methodology for the performance testing and evaluation of biometric security algorithms for securing records in a cloud computing environment. The aim of this research was to evaluate the performance of an integrated security system (ISS) for securing exam records in a cloud computing environment. We designed and implemented an ISS combining three security mechanisms (biometric fingerprint, auto-PIN and password) into one stream of access control, and used it to secure examination records at Kogi State University, Anyigba. The system we built has been able to overcome the guessing abilities of attackers who guess passwords or PINs, because the added fingerprint factor requires the presence of the software's user before login access can be granted: the user must place a finger on the fingerprint biometric scanner for capture and verification of authenticity. 
The study adopted a quantitative design with an object-oriented design methodology. In the analysis and design, PHP, HTML5, CSS, Visual Studio, JavaScript, and Web 2.0 technologies were used to implement the ISS model for the cloud computing environment. PHP, HTML5 and CSS were used in conjunction with Visual Studio front-end design tools; MySQL and Access 7.0 were used for the back-end engine; and JavaScript was used for object arrangement and for validation of user input as a security check. Finally, the performance of the developed framework was evaluated by comparison with the two existing security systems (auto-PIN and password) within the school, and the results showed that the developed fingerprint-based approach overcomes the two main weaknesses of the existing systems and should work well if fully implemented.
Keywords: performance evaluation, fingerprint, auto-pin, password-based, security systems, cloud computing environment
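The three-factor access-control stream described above can be sketched as a minimal login check in which password, auto-PIN, and fingerprint must all verify before access is granted. Hashed comparison stands in for real credential storage, and the fingerprint "template" string is a hypothetical placeholder for a biometric scanner SDK; none of this is the paper's actual PHP implementation.

```python
# Minimal sketch of a three-factor (password + auto-PIN + fingerprint) check.
# The stored values and the string-based fingerprint template are illustrative.
import hashlib

def _digest(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()

STORED = {
    "password": _digest("s3cret"),
    "pin": _digest("4821"),
    "fingerprint": _digest("minutiae-template-001"),
}

def login(password, pin, fingerprint_template):
    # all three factors must pass: two knowledge factors plus one biometric
    checks = [
        STORED["password"] == _digest(password),
        STORED["pin"] == _digest(pin),
        STORED["fingerprint"] == _digest(fingerprint_template),
    ]
    return all(checks)

print(login("s3cret", "4821", "minutiae-template-001"))  # True
print(login("s3cret", "4821", "wrong-template"))         # False: guessing fails
```

The second call shows the paper's central claim in miniature: guessing the password and PIN is not enough without the user's biometric.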
Procedia PDF Downloads 139
647 Civic E-Participation in Central and Eastern Europe: A Comparative Analysis
Authors: Izabela Kapsa
Abstract:
Civic participation is an important aspect of democracy. The contemporary model of democracy is based on citizens' participation in political decision-making (deliberative democracy, participatory democracy). This participation takes many forms, such as the display of slogans and symbols, voting, social consultations, political demonstrations, membership in political parties, or organized civil disobedience. The countries of Central and Eastern Europe after 1989 are characterized by great social, economic and political diversity. Civil society is also part of the process of democratization. Civil society, founded on the rule of law, civil rights such as freedom of speech and association, and private ownership, was to play a central role in the development of liberal democracy. Among the many interpretations of the concepts defining contemporary democracy, one can assume that the terms civil society and democracy, although different in meaning, nowadays overlap. In the post-communist countries, the process of shaping and maturing societies took place in the context of a struggle with a state governed by undemocratic power. Under that rule, distrust of and withdrawal from the institutions of the representative state was the only way to manifest and defend one's identity; after the breakthrough, it became one of the main obstacles to the development of civil society. In Central and Eastern Europe, the development of civil society also faces many challenges, for example, eliminating economic poverty, implementing educational campaigns, overcoming consciousness-related obstacles, building social capital and addressing the deficit of social activity. Obviously, civil society does not only entail electoral turnout but broader participation in the decision-making process, which is impossible without direct and participative democratic institutions. 
This article considers such broader forms of civic participation and their characteristics in Central and Eastern Europe. The paper attempts to analyze the functioning of electronic forms of civic participation in Central and Eastern European states, including not only the referendum and the referendum initiative but also other forms of political participation, such as public consultations, participative budgets, and e-government. The paper broadly presents electronic administration tools whose application results both from legal regulations and from increasingly common practice in state and city management. In the comparative analysis, the experiences of post-communist bloc countries are summed up to indicate the challenges and possible goals for the further development of this form of citizen participation in the political process. The author argues that in order to function efficiently and effectively, states need to involve their citizens in the political decision-making process, especially with the use of electronic tools.
Keywords: Central and Eastern Europe, e-participation, e-government, post-communism
Procedia PDF Downloads 193
646 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era is interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. 'The Author as Producer' is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectic between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and directly related to the base. Through it, he discusses what it could mean to see the author as a producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production: he must do so in order to prepare his readers to become writers, and even make this possible for them by engineering an 'improved apparatus', working toward turning consumers into producers and collaborators. In today's world, it has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple and Google to transform readers, spectators, consumers or users into collaborators and co-producers through platforms such as Facebook, YouTube and Amazon's CreateSpace and Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global market monopolies, it has become increasingly difficult to get insight into how one's writing and collaboration is used, captured, and capitalized as a user of Facebook or Google. 
Through the lens of this study, it could be argued that this criticism could very well be considered by digital producers, or even by the mass of collaborators, in contemporary social networking software. How do software and design incorporate users and their collaboration? Are users truly empowered; are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? When using corporate systems like Google and Facebook, iPhone and Kindle, without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the 'authors' and the 'prodUsers' in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 142
645 A Vaccination Program to Control an Outbreak of Acute Hepatitis A among MSM in Taiwan, 2016
Authors: Ying-Jung Hsieh, Angela S. Huang, Chu-Ming Chiu, Yu-Min Chou, Chin-Hui Yang
Abstract:
Background and Objectives: Hepatitis A is primarily acquired by the fecal-oral route through person-to-person contact or ingestion of contaminated food or water. From 2010 to 2014, an average of 83 cases of locally-acquired disease was reported to Taiwan's notifiable disease system per year. Taiwan Centers for Disease Control (TCDC) identified an outbreak of acute hepatitis A which began in June 2015. Of the 126 cases reported in 2015, 103 (82%) were reported during June–December, and 95 (92%) of those were male. The average age of all male cases was 31 years (median, 29 years; range, 15–76 years). Among the 95 male cases, 49 (52%) were also infected with HIV, and all reported having had sex with other men. To control this outbreak, TCDC launched a free hepatitis A vaccination program in January 2016 for close contacts of confirmed hepatitis A cases, including family members, sexual partners, and household contacts. The effect of the vaccination program was evaluated. Methods: All cases of hepatitis A reported to the National Notifiable Disease Surveillance System were included. A case of hepatitis A was defined as locally-acquired disease in a person who had acute clinical symptoms, including fever, malaise, loss of appetite, nausea or abdominal discomfort compatible with hepatitis, and who tested positive for anti-HAV IgM during June 2015 to June 2016 in Taiwan. The rate of case accumulation was calculated using a simple regression model. Results: During January–June 2016, there were 466 cases of hepatitis A reported; of the 243 (52%) who were also infected with HIV, 232 (95%) had a history of having sex with men. Of the 346 cases that were followed up, 259 (75%) provided information on contacts, but only 14 (5%) of them provided the names of their sexual partners. Among the 602 contacts reported, 349 (58%) were family members, 14 (2%) were sexual partners, and 239 (40%) were other household contacts. 
Among the 602 contacts eligible for free hepatitis A vaccination, 440 (73%) received the vaccine. There were 87 (25%) cases that refused to disclose their close contacts. The average case accumulation rate during January–June 2016 was 21.7 cases per month, 6.8 times the average rate of 3.2 cases per month during June–December 2015. Conclusions: Despite the vaccination program providing free hepatitis A vaccine to close contacts of hepatitis A patients, the outbreak continued and even gained momentum. Refusal by hepatitis A patients to provide the names of their close contacts, and refusal of contacts to accept the hepatitis A vaccine, may have contributed to the poor effect of the program. Targeted vaccination of all MSM may be needed to control the outbreak among this population in the short term. In the long term, a universal vaccination program is needed to prevent hepatitis A infection.
Keywords: hepatitis A, HIV, men who have sex with men, vaccination
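The "rate of case accumulation" comparison above rests on fitting a simple regression line to cumulative case counts over time; the slope is cases per month. The sketch below reproduces the idea with a plain least-squares slope; the monthly cumulative counts are illustrative, chosen only to approximate the reported rates, and are not the actual surveillance data.

```python
# Sketch: case accumulation rate as the least-squares slope of cumulative
# monthly case counts. The counts below are illustrative, not the real data.

def slope(xs, ys):
    # ordinary least-squares slope of y on x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# cumulative cases at the end of each month (illustrative)
before = [3, 6, 10, 13, 16, 19, 22]     # June-December 2015
after = [22, 44, 65, 87, 109, 130]      # January-June 2016

rate_before = slope(list(range(len(before))), before)
rate_after = slope(list(range(len(after))), after)
print(round(rate_before, 1), round(rate_after, 1),
      round(rate_after / rate_before, 1))
```

With these illustrative counts the slopes come out near the reported 3.2 and 21.7 cases per month, and their ratio is close to the reported 6.8-fold acceleration.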
Procedia PDF Downloads 255
644 On-Ice Force-Velocity Modeling Technical Considerations
Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming -Chang Tsai, Marc Klimstra
Abstract:
Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward skating performance assessment. While preliminary data have been collected on ice, the distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the model's ability to estimate athlete performance effectively. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice hockey players (body weight = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between trials. For overground sprints, participants completed a similar dynamic warm-up and then three 40 m overground sprint trials. For each trial (on-ice and overground), radar (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modelled with a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine whether on-ice trials achieved a maximum velocity (plateau), the minimum acceleration values of the modeled data at the end of the sprint were compared between on-ice and overground trials using a paired t-test. Significant differences (p < 0.001) between overground and on-ice minimum accelerations were observed. 
It was found that on-ice trials consistently reported higher final acceleration values, indicating that a maintained maximum velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques are needed in which maximal velocity is not required for a complete profile.
Keywords: ice-hockey, sprint, skating, power
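The mono-exponential model underlying HFVP makes the distance argument above easy to see: with v(t) = v_max·(1 − exp(−t/τ)), the instantaneous acceleration a(t) = (v_max/τ)·exp(−t/τ) never reaches zero, so over a short 45 m course the terminal acceleration stays well above the near-zero values seen over longer distances. The parameter values below are plausible assumed skating values, not fitted from the study's athletes.

```python
# Sketch of the mono-exponential sprint model used in HFVP.
# v(t) = v_max * (1 - exp(-t/tau));  a(t) = (v_max/tau) * exp(-t/tau).
# v_max and tau below are assumed illustrative values.
import math

def velocity(t, v_max, tau):
    return v_max * (1.0 - math.exp(-t / tau))

def acceleration(t, v_max, tau):
    return (v_max / tau) * math.exp(-t / tau)

def time_to_distance(d, v_max, tau, dt=0.001):
    # numerically integrate v(t) until the covered distance reaches d
    t, x = 0.0, 0.0
    while x < d:
        t += dt
        x += velocity(t, v_max, tau) * dt
    return t

v_max, tau = 9.0, 1.3                        # m/s and s (assumed)
t45 = time_to_distance(45.0, v_max, tau)     # short on-ice sprint
t200 = time_to_distance(200.0, v_max, tau)   # long sprint where v plateaus
a_end_ice = acceleration(t45, v_max, tau)
a_end_long = acceleration(t200, v_max, tau)
print(a_end_ice, a_end_long)  # terminal acceleration: short course >> long course
```

The orders-of-magnitude gap between the two terminal accelerations mirrors the paper's finding that 45 m on-ice trials end before a velocity plateau is established.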
Procedia PDF Downloads 98
643 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality
Authors: Qian Yi Ooi
Abstract:
At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, with some slight deviations arising from scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried over to a future civil aircraft of quite different size, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study investigates the effect of the geometric similarity of airfoil parameters and of surface mesh quality on CFD calculations, in order to establish how different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, comprising the wing root and wingtip of a conventional civil aircraft and the wing root of a giant blended-wing-body, each parameterized by three methods to compare the calculation differences between airfoil sizes. In this study, the fixed settings include the NACA 0012 profile, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study uses different numbers of edge mesh divisions with the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and mesh division counts should be used to improve the accuracy of the predicted aerodynamic performance of the wing. 
As the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data files to maintain the accuracy of the airfoil's aerodynamic performance, which can exceed available computing capacity. When using the B-spline curve method, the number of control points and mesh divisions should be set appropriately to obtain higher accuracy; however, this balance cannot be defined directly and must be found iteratively by adding and subtracting points. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing; a high degree of accuracy and stability can be obtained even on a lower-performance computer.
Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality
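The CST method's "limited control points" property comes from its formulation: a surface is y(x) = C(x)·S(x), where the class function C(x) = x^0.5·(1 − x)^1.0 fixes the round-nose/sharp-trailing-edge shape, and the shape function S(x) is a Bernstein-polynomial sum over a small set of weights. The sketch below is a minimal CST evaluation; the four weights are illustrative, not a fitted NACA 0012.

```python
# Minimal sketch of the class function / shape function transformation (CST).
# y(x) = C(x) * S(x), C(x) = x^n1 * (1-x)^n2 with n1 = 0.5, n2 = 1.0 for a
# round-nose, sharp-trailing-edge airfoil. Weights below are illustrative.
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0):
    n = len(weights) - 1
    class_fn = (x ** n1) * ((1.0 - x) ** n2)
    # Bernstein-polynomial shape function over the control weights
    shape_fn = sum(w * comb(n, i) * x**i * (1.0 - x)**(n - i)
                   for i, w in enumerate(weights))
    return class_fn * shape_fn

weights = [0.17, 0.16, 0.15, 0.16]   # only four control weights needed
xs = [i / 50 for i in range(51)]     # chordwise stations in [0, 1]
ys = [cst_surface(x, weights) for x in xs]
print(max(ys))                       # maximum half-thickness of this surface
```

Note the built-in geometry: the class function forces y = 0 at both the leading and trailing edge, and doubling all weights exactly doubles the surface, so scale changes do not require more control points, consistent with the abstract's conclusion.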
Procedia PDF Downloads 220
642 Using Fractal Architectures for Enhancing the Thermal-Fluid Transport
Authors: Surupa Shaw, Debjyoti Banerjee
Abstract:
Enhancing heat transfer in compact volumes is a challenge when constrained by cost issues, especially those associated with requirements for minimizing pumping power consumption. This is particularly acute for electronic chip cooling applications. Technological advancements in microelectronics have led to the development of chip architectures that involve increased power consumption. As a consequence, packaging technologies are saddled with the need for higher rates of power dissipation in smaller form factors. The increasing circuit density, higher heat flux values for dissipation, and the significant decrease in the size of electronic devices pose thermal management challenges that need to be addressed with a better design of the cooling system. Maximizing the surface area of heat-exchanging surfaces (e.g., extended surfaces or 'fins') enables dissipation of higher levels of heat flux. Fractal structures have been shown to maximize surface area in compact volumes. Self-replicating structures at multiple length scales are called 'fractals', i.e., objects with fractional dimensions, unlike regular geometric objects such as spheres or cubes, whose volume and surface-area values scale as integer powers of the length scale. Fractal structures are expected to provide an appropriate technology solution to meet these challenges for enhanced heat transfer in microelectronic devices by maximizing the surface area available to heat-exchanging fluids within compact volumes. In this study, the effect of different fractal micro-channel architectures and flow structures on the enhancement of transport phenomena in heat exchangers is explored by parametric variation of the fractal dimension. This study proposes a model intended to enable cost-effective solutions for thermal-fluid transport in energy applications. 
The objective of this study is to ascertain the sensitivity of various parameters (such as heat flux and pressure gradient as well as pumping power) to variation in fractal dimension. The role of the fractal parameters will be instrumental in establishing the most effective design for the optimum cooling of microelectronic devices. This can help establish the requirement of minimal pumping power for enhancement of heat transfer during cooling. Results obtained in this study show that the proposed models for fractal architectures of microchannels significantly enhanced heat transfer due to augmentation of surface area in the branching networks of varying length-scales.
Keywords: fractals, microelectronics, constructal theory, heat transfer enhancement, pumping power enhancement
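To illustrate the surface-area argument above with a back-of-the-envelope sketch (all dimensions, branching counts, and shrink ratios below are illustrative assumptions, not parameters from the study), the wetted area of a self-similar branching network can be compared against a single straight channel:

```python
import math

def fractal_wetted_area(levels, l0=0.01, d0=0.001, branches=2,
                        length_ratio=0.7, diam_ratio=0.7):
    """Wetted (heat-exchanging) surface area of a branching channel
    network: generation k holds branches**k cylindrical channels whose
    length and diameter shrink by fixed ratios each generation."""
    area = 0.0
    for k in range(levels + 1):
        n_channels = branches**k
        length = l0 * length_ratio**k
        diameter = d0 * diam_ratio**k
        area += n_channels * math.pi * diameter * length  # lateral cylinder area
    return area

single_channel = fractal_wetted_area(0)  # no branching
fractal_tree = fractal_wetted_area(4)    # four generations of branching
```

Each added generation contributes extra wetted area within roughly the same envelope; varying the branching count and shrink ratios is the discrete analogue of the parametric variation of fractal dimension described in the abstract.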
Procedia PDF Downloads 318
641 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution, and it also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases. 
Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed, and hence the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors following a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance, where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
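As a companion to the abstract, a sketch of an Azzalini-type skew t model (fat tails plus skewness) fitted by direct numerical maximum likelihood. This illustrates the iterative ML route that the abstract contrasts with closed-form modified ML; it is not the authors' MML method, and all parameter values are illustrative:

```python
import numpy as np
from scipy import optimize, stats

def skew_t_neg_loglik(params, x, nu=5.0):
    """Negative log-likelihood of an Azzalini-type skew t:
    f(x) = (2/omega) t_nu(z) T_{nu+1}(alpha z sqrt((nu+1)/(nu+z^2))),
    z = (x - xi)/omega; scale is optimized on the log scale."""
    xi, log_omega, alpha = params
    omega = np.exp(log_omega)
    z = (x - xi) / omega
    w = alpha * z * np.sqrt((nu + 1.0) / (nu + z**2))
    loglik = (np.log(2.0) - log_omega
              + stats.t.logpdf(z, df=nu)
              + stats.t.logcdf(w, df=nu + 1.0))
    return -loglik.sum()

# simulate skew-t data (xi=0, omega=1, alpha=3, nu=5):
# skew-normal core divided by an independent chi-square mixing draw
rng = np.random.default_rng(42)
nu, alpha_true = 5.0, 3.0
delta = alpha_true / np.sqrt(1.0 + alpha_true**2)
u0, u1 = rng.standard_normal(2000), rng.standard_normal(2000)
sn = delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1
sample = sn / np.sqrt(rng.chisquare(nu, 2000) / nu)

res = optimize.minimize(skew_t_neg_loglik, x0=[0.1, 0.1, 1.0],
                        args=(sample,), method="Nelder-Mead")
xi_hat, omega_hat, alpha_hat = res.x[0], np.exp(res.x[1]), res.x[2]
```

The convergence sensitivity of this iterative optimization is precisely the practical difficulty the modified maximum likelihood approach is designed to avoid.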
Procedia PDF Downloads 396
640 DIF-JACKET: A Thermal Protective Jacket for Firefighters
Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves
Abstract:
Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their lives. Although thermal protective clothing research and development has been searching for solutions to minimize firefighters’ heat load and skin burns, currently commercially available solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium composed of Portuguese entities has joined synergies to develop an innovative protective clothing system by following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that Phase Change Materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, and consequently, their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics and, at the same time, complying with the international standard for protective clothing for firefighters (laboratory test methods and performance requirements for wildland firefighting clothing). The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and consequently increase the time that the firefighter can be exposed to it. According to the project studies and developments, to favor a higher use of the PCM storage capacity and to take advantage of its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest and abdomen) and to be worn over a fire-resistant jacket was envisaged. 
Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. Concerning firefighters’ protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely, thermal and water-vapor resistances. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
Keywords: firefighters, multilayer system, phase change material, thermal protective clothing
Procedia PDF Downloads 162
639 Factors Affecting Air Surface Temperature Variations in the Philippines
Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya
Abstract:
Changes in air surface temperature play an important role in the Philippines’ economy, industry, health, and food production. While the increasing global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and determine its major influencing factor(s) in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperatures of 17 synoptic stations covering the 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequency of temperature variability, revealing 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations that are attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with the smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The influence of the ENSO teleconnection on temperature, wind pattern, cloud cover, and outgoing longwave radiation in different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of a warm southeasterly (cold northeasterly) air mass over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature. 
However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a high positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to look at the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies. In addition, they are representative of regional temperature rather than a substitute for station-observed air temperature.
Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number
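The Mann-Kendall test and the accompanying Sen slope used in the abstract can be sketched in a few lines (normal approximation, ignoring ties and serial correlation; the synthetic series below is illustrative, not station data):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (normal approximation, no tie correction).
    Returns (S, Z, two-sided p); positive Z indicates an upward trend."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()   # concordant minus discordant pairs
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S under H0 (no ties)
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, z, p

def sen_slope(x):
    """Sen's slope estimator: median of all pairwise slopes per time step."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

rng = np.random.default_rng(0)
t = np.arange(120)                                  # e.g. 120 monthly means
series = 0.01 * t + rng.normal(0.0, 0.2, t.size)    # imposed trend: 0.01 per step
s_stat, z_score, p_value = mann_kendall(series)
slope = sen_slope(series)
```

Being rank-based, the test is insensitive to outliers and to the marginal distribution of the temperatures, which is why it is a standard choice for station records.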
Procedia PDF Downloads 321
638 Public Participation for an Effective Flood Risk Management: Building Social Capacities in Ribera Alta Del Ebro, Spain
Authors: Alba Ballester Ciuró, Marc Pares Franzi
Abstract:
While the coming decades are likely to see higher flood risk in Europe and greater socio-economic damages, traditional flood risk management has become inefficient. In response, new approaches such as capacity building and public participation have recently been incorporated into natural hazards mitigation policy (i.e., the Sendai Framework for Action, Intergovernmental Panel on Climate Change reports, and the EU Floods Directive). By integrating capacity building and public participation, we present research concerning the promotion of participatory social capacity building actions for flood risk mitigation at the local level. Social capacities have been defined as the resources and abilities available at the individual and collective level that can be used to anticipate, respond to, cope with, recover from, and adapt to external stressors. Social capacity building is understood as a process of identifying communities’ social capacities and of applying collaborative strategies to improve them. This paper presents a proposal for the systematization of a participatory social capacity building process for flood risk mitigation, and its implementation in an area of the Ebro river basin at high risk of flooding: Ribera Alta del Ebro. To develop this process, we designed and tested a tool that allows measuring and building five types of social capacities: knowledge, motivation, networks, participation, and finance. The tool’s implementation has allowed us to assess social capacities in the area. Based on the results of the assessment, we developed a co-decision process with stakeholders and flood risk management authorities on which participatory activities could be employed to improve social capacities for flood risk mitigation. 
Based on the results of this process, and focusing on the weaker social capacities, we developed a set of participatory actions in the area oriented to the general public and stakeholders: informative sessions on the flood risk management plan and flood insurance, interpretative river descents on flood risk management (with journalists, teachers, and the general public), an interpretative visit to the floodplain, a workshop on agricultural insurance, a deliberative workshop on project funding, and deliberative workshops in schools on flood risk management (playing with a flood risk model). The combination of obtaining data through a mixed-methods approach of qualitative inquiry and quantitative surveys, as well as action research through co-decision processes and pilot participatory activities, shows the significant impact of public participation on social capacity building for flood risk mitigation and contributes to the understanding of which main factors intervene in this process.
Keywords: flood risk management, public participation, risk reduction, social capacities, vulnerability assessment
Procedia PDF Downloads 210
637 Evidence-Triggers for Care of Patients with Cleft Lip and Palate in Srinagarind Hospital: The Tawanchai Center and Out-Patients Surgical Room
Authors: Suteera Pradubwong, Pattama Surit, Sumalee Pongpagatip, Tharinee Pethchara, Bowornsilp Chowchuen
Abstract:
Background: Cleft lip and palate (CLP) is a congenital anomaly of the lip and palate caused by several factors. It is found in approximately one per 500 to 550 live births, depending on nationality and socioeconomic status. The Tawanchai Center and the out-patient surgical room of Srinagarind Hospital are responsible for providing care to patients with CLP (from birth to adolescence) and their caregivers. From observations and interviews, nurses working in these units reported that both patients and their caregivers confronted many problems which affected their physical and mental health. Based on Soukup’s model (2000), the researchers used evidence triggers from clinical practice (practice triggers) and related literature (knowledge triggers) to investigate the problems. Objective: The purpose of this study was to investigate the problems of care for patients with CLP in the Tawanchai Center and out-patient surgical room of Srinagarind Hospital. Material and Method: A descriptive method was used in this study. For practice triggers, the researchers obtained data from the medical records of ten patients with CLP and from interviews with two patients with CLP, eight caregivers, two nurses, and two assistant workers. Instruments for the interviews consisted of a demographic data form and a semi-structured questionnaire. For knowledge triggers, the researchers used a literature search. The data from both practice and knowledge triggers were collected between February and May 2016. The quantitative data were analyzed through frequency and percentage distributions, and the qualitative data were analyzed through content analysis. 
Results: The problems of care gained from practice and knowledge triggers were consistent and were identified as holistic issues, including 1) insufficient feeding; 2) risks of respiratory tract infections and physical disorders; 3) psychological problems, such as anxiety, stress, and distress; 4) socioeconomic problems, such as stigmatization, isolation, and loss of income; 5) spiritual problems, such as low self-esteem and low quality of life; 6) school absence and learning limitations; 7) lack of knowledge about CLP and its treatments; 8) misunderstanding of roles among the multidisciplinary team; 9) unavailable services; and 10) a shortage of healthcare professionals, especially speech-language pathologists (SLPs). Conclusion: From the evidence triggers, the problems of care affect the patients and their caregivers holistically. Integrated long-term care by a multidisciplinary team is needed for children with CLP from birth to adolescence. Nurses should provide effective care to these patients and their caregivers by using a holistic approach and working collaboratively with other healthcare providers in the multidisciplinary team.
Keywords: evidence-triggers, cleft lip, cleft palate, problems of care
Procedia PDF Downloads 217
636 Strategic Entrepreneurship: Model Proposal for Post-Troika Sustainable Cultural Organizations
Authors: Maria Inês Pinho
Abstract:
Recent literature on Cultural Management (also called Strategic Management for cultural organizations) systematically seeks models that allow such organizations to adapt to the constant change that occurs in contemporary societies. In the last decade, the world, and in particular Europe, has experienced a serious financial problem that has triggered defensive mechanisms, both in the direction of promoting the balance of public accounts and in the sense of the anonymous loss of the democratic and cultural values of each nation. In the first case, the Troika emerged and led to strong cuts in funding for Culture, deeply affecting those organizations; in the second case, ordinary citizens are seen fighting against the closure of cultural facilities. Despite this, the cultural manager argues that there is no single formula capable of solving the need to adapt to change. Instead, it is up to this agent to know the existing scientific models and to adapt them in the best way to the reality of the institution he or she coordinates. These actions, as a rule, are concerned with the best performance vis-à-vis external audiences or with the financial sustainability of cultural organizations. They forget, therefore, that all these mechanics cannot function without the internal public, without Human Resources. The employees of the cultural organization must then have an entrepreneurial posture; they must be intrapreneurial. This paper intends to break with this form of action and lead the cultural manager to understand that his or her role should be to create value for society through good organizational performance. This is only possible with a posture of strategic entrepreneurship, in other words, with a link between Cultural Management, Cultural Entrepreneurship, and Cultural Intrapreneurship. 
To test this assumption, a case study methodology was used, with the symbol of the European Capital of Culture (Casa da Música) as the case, together with qualitative and quantitative techniques. The qualitative techniques included in-depth interviews with managers, founders, and patrons, and focus groups with members of the public with and without experience in managing cultural facilities. The quantitative techniques involved the application of a questionnaire to the middle management and employees of Casa da Música. After triangulation of the data, it was shown that contemporary management of cultural organizations must implement, among its practices, the concept of Strategic Entrepreneurship and its variables. The topics which characterize the notion of Cultural Intrapreneurship (job satisfaction, quality in organizational performance, leadership, and employee engagement and autonomy) also emerged. The findings show that, to be sustainable, a cultural organization should meet the concerns of both the external and internal forum. In other words, it should have an attitude of citizenship towards its communities, visible in social responsibility and participatory management, which is only possible with the implementation of the concept of Strategic Entrepreneurship and its variable of Cultural Intrapreneurship.
Keywords: cultural entrepreneurship, cultural intrapreneurship, cultural organizations, strategic management
Procedia PDF Downloads 182
635 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending
Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim
Abstract:
Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength predictions of CFS beam columns and the effects of warping-restrained boundary conditions on their behavior. To this end, the experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS software, which accounts for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm) and cross-sectional dimensions under ten different load eccentricity levels. The results of this parametric study demonstrated that using the DSM led to the most conservative strength predictions for beam-column members by up to 55%, depending on the element’s length and thickness. 
This can be attributed to the errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations for the calculation of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to an average error of 4% to 22%, depending on the element length. This paper highlights the need to provide more reliable design solutions for CFS beam-column elements for practical design purposes.
Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions
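The code-prescribed linear interaction the abstract questions combines pure axial and flexural strengths additively. A minimal sketch of that check, in simplified P/Pn + M/Mn form (amplification and resistance factors omitted; the demand and capacity values are hypothetical):

```python
def linear_interaction_ratio(p, p_n, m, m_n):
    """Simplified linear beam-column interaction check of the
    P/Pn + M/Mn <= 1.0 form; a ratio at or below 1.0 passes."""
    return p / p_n + m / m_n

# hypothetical demands and capacities: 60% axial, 30% flexural utilization
ratio = linear_interaction_ratio(p=60.0, p_n=100.0, m=15.0, m_n=50.0)
passes = ratio <= 1.0
```

Because the true axial-flexural interaction is non-linear, a member's real capacity can differ substantially from what this additive form implies, which is the source of the conservatism quantified in the parametric study.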
Procedia PDF Downloads 103
634 Collaborative Governance to Foster Public Good: The Case of the Etorkizuna Eraikiz Initiative
Authors: Igone Guerra, Xabier Barandiaran
Abstract:
The deep economic, social, and cultural crisis in which Europe, and in particular Gipuzkoa in the Basque Country (Spain), has been immersed since 2008 forces governments to face a necessary transformation. These challenges demand different solutions and answers to meet the needs of the citizens. Adapting to continuous and sometimes abrupt changes in the social and political landscape requires an undeniable will to reinvent the way in which governments practice politics. This reinvention of government should help us build different organizations that, first, develop challenging public services; second, respond effectively to the needs of the citizens; and third, manage scarce resources, ultimately offering a contemporary concept of public value. In this context, the Etorkizuna Eraikiz initiative was designed to face the future challenges of the territory in a collaborative way. The aim of the initiative is to promote an alternative form of governance to generate the common good and greater public value. In Etorkizuna Eraikiz, democratic values such as collaboration, participation, and accountability are prominent. This governance approach is based on several features, such as the creation of relational spaces in which to design and deliberate about public policies, and the promotion of a team-working approach that breaks down silos between and within organizations, as an exercise in defining a shared vision regarding the future of the territory: a future in which citizens become actors in the problem-solving process and in the construction of a culture of participation and collective learning. In this paper, the Etorkizuna Eraikiz initiative will be presented (vision and methodology) as a model of a local approach to public policy innovation, resulting in a way of governance that is more open and collaborative. Based on this case study, this paper explores the way in which collaborative governance leads to better decisions, better leadership, and a better citizenry. 
Finally, the paper also describes some preliminary findings of this local approach, such as the level of knowledge of the citizenry about the projects promoted within Etorkizuna Eraikiz, as well as the link between the challenges of the territory, as identified by the citizenry, and the political agenda promoted by the provincial government. Regarding the former, the Survey on the Socio-Political Situation of Gipuzkoa showed that 27.9% of respondents confirmed that they knew about the projects promoted within the initiative and gave them a mark of 5.71. In connection with the latter, over the last three years, 65 million euros have been allocated to a total of 73 projects covering socio-economic and political challenges such as aging, climate change, mobility, and participation in democratic life. This governance approach of Etorkizuna Eraikiz has allowed the local government to match the needs of citizens to the political agenda, fostering in this way a shared vision of public value.
Keywords: collaborative governance, citizen participation, public good, social listening, public innovation
Procedia PDF Downloads 138
633 Forging A Distinct Understanding of Implicit Bias
Authors: Benjamin D Reese Jr
Abstract:
Implicit bias is understood as unconscious attitudes, stereotypes, or associations that can influence the cognitions, actions, decisions, and interactions of an individual without intentional control. These unconscious attitudes or stereotypes are often targeted toward specific groups of people based on their gender, race, age, perceived sexual orientation, or other social categories. Since the late 1980s, there has been a proliferation of research that hypothesizes that the operation of implicit bias is the result of the brain needing to process millions of bits of information every second. Hence, one’s prior individual learning history provides ‘shortcuts’: as soon as one sees someone of a certain race, one has immediate associations based on past learning, and one might make assumptions about their competence, skill, or dangerousness. These assumptions are outside of conscious awareness. In recent years, an alternative conceptualization has been proposed. The ‘bias of crowds’ theory hypothesizes that a given context or situation influences the degree of accessibility of particular biases. For example, in certain geographic communities in the United States, there is a long-standing and deeply ingrained history of structures, policies, and practices that contribute to racial inequities and bias toward African Americans. Hence, negative biases toward African Americans are more accessible in such contexts or communities. This theory does not focus on individual brain functioning or cognitive ‘shortcuts.’ Therefore, attempts to modify individual perceptions or learning might have a negligible impact on the embedded environmental systems or policies within certain contexts or communities. 
From the ‘bias of crowds’ perspective, high levels of racial bias in a community can be reduced by making fundamental changes in structures, policies, and practices to create a more equitable context or community, rather than by focusing on training or education aimed at reducing an individual’s biases. The current paper acknowledges and supports the foundational role of long-standing structures, policies, and practices that maintain racial inequities, as well as inequities related to other social categories, and highlights the critical need to continue organizational, community, and national efforts to eliminate those inequities. It also makes a case for providing individual leaders with a deep understanding of the dynamics of how implicit biases impact cognitions, actions, decisions, and interactions, so that those leaders might more effectively develop structural changes in the processes and systems under their purview. This approach incorporates both the importance of an individual’s learning history and the important variables within the ‘bias of crowds’ theory. The paper also offers a model for leadership education, as well as examples of structural changes leaders might consider.
Keywords: implicit bias, unconscious bias, bias, inequities
Procedia PDF Downloads 3
632 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles
Authors: Antonina A. Shumakova, Sergey A. Khotimchenko
Abstract:
The relevance of this research is associated, on the one hand, with the ever-increasing volume of production and the expanding scope of application of engineered nanomaterials (ENMs) and, on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO2 NPs, SiO2 NPs, Al2O3 NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this fact is the difference in the adsorption capacity of ENMs, which was further studied in in vitro experiments. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. It should be noted that the obtained data are in good agreement with the results of in vivo experiments:
- with the combined administration of lead and TiO2 NPs, there were no significant changes in the accumulation of lead in rat liver; in the other organs (kidneys, spleen, testes, and brain), the lead content was lower than in animals of the control group;
- when studying the combined effect of lead and Al2O3 NPs, a multiple and significant increase in the accumulation of lead in rat liver was observed with an increase in the dose of Al2O3 NPs; for the other organs, the introduction of various doses of Al2O3 NPs did not significantly affect the bioaccumulation of lead;
- with the combined administration of lead and SiO2 NPs in different doses, there was no increase in lead accumulation in any of the studied organs. 
Based on the data obtained, it can be assumed that there are at least three scenarios of the combined effects of ENMs and chemical contaminants on the body:
- ENMs bind contaminants quite firmly in the gastrointestinal tract, and such a complex becomes inaccessible (or poorly accessible) for absorption; in this case, it can be expected that the toxicity of both the ENMs and the contaminants will decrease;
- the complex formed in the gastrointestinal tract is partially soluble and can penetrate biological membranes and/or the physiological barriers of the body; in this case, ENMs can play the role of a kind of conductor for contaminants, and thus their penetration into the internal environment of the body increases, thereby increasing the toxicity of the contaminants;
- ENMs and contaminants do not interact with each other in any way, so the toxicity of each of them is determined only by its quantity and does not depend on the quantity of the other component.
The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a unique characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism.
Keywords: absorption, cadmium, engineered nanomaterials, lead
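The degree of adsorption in such in vitro experiments is commonly quantified by fitting an isotherm to equilibrium data. A sketch using the standard Langmuir model with entirely hypothetical equilibrium values (not measurements from this study):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k_l):
    """Langmuir isotherm: q = q_max * K_L * C / (1 + K_L * C),
    where q_max is the monolayer capacity and K_L the affinity."""
    return q_max * k_l * c / (1.0 + k_l * c)

# hypothetical equilibrium concentrations (mg/l) and uptakes (mg/g):
# synthetic Langmuir data (q_max = 30, K_L = 0.05) plus small perturbations
c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q_eq = langmuir(c_eq, 30.0, 0.05) + np.array([0.3, -0.2, 0.4, -0.3, 0.2, -0.1])

(q_max_hat, k_l_hat), _ = curve_fit(langmuir, c_eq, q_eq, p0=[20.0, 0.01])
```

Comparing fitted q_max values across ENM types would give the kind of per-material adsorption signature the authors hypothesize.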
Procedia PDF Downloads 866
631 Reframing Physical Activity for Health
Authors: M. Roberts
Abstract:
We Are Undefeatable is a mass marketing behaviour change campaign that aims to support the least active people living with long term health conditions to be more active. This is an important issue to address because people with long term conditions are an historically underserved community for the sport and physical activity sector, and the least active of those with long term conditions have the most to gain in health and wellbeing benefits. The campaign has generated a significant change in the way physical activity is communicated and people with long term conditions are represented in the media and marketing. The goal is to create a social norm around being active. The campaign is led by a unique partnership of organisations: the Richmond Group of Charities (made up of Age UK, Alzheimer’s Society, Asthma + Lung UK, Breast Cancer Now, British Heart Foundation, British Red Cross, Diabetes UK, Macmillan Cancer Support, Rethink Mental Illness, Royal Voluntary Service, Stroke Association, Versus Arthritis) along with Mind, MS Society, Parkinson’s UK and Sport England, with National Lottery Funding. It is underpinned by the COM-B model of behaviour change. It draws on the lived experience of people with multiple long term conditions to shape the look and feel of the campaign and all the resources available. People with long term conditions are the campaign messengers, central to the ethos of the campaign, telling their individual stories of overcoming barriers to be active with their health conditions. The central message is about finding a way to be active that works for the individual. We Are Undefeatable is evaluated through a multi-modal approach, including regular qualitative focus groups and a quantitative evaluation tracker undertaken three times a year. The campaign has highlighted the significant barriers to physical activity for people with long term conditions.
This has changed the way our partnership talks about physical activity but has also had an impact on the wider sport and physical activity sector, prompting an increasing departure from traditional messaging and marketing approaches for this audience of people with long term conditions. The campaign has reached millions of people since its launch in 2019, through multiple marketing and partnership channels including primetime TV advertising and promotion through health professionals and in health settings. Its diverse storytellers make it relatable to its target audience, and the achievable activities highlighted and inclusive messaging inspire our audience to take action as a result of seeing the campaign. The We Are Undefeatable campaign is a blueprint for physical activity campaigns; it not only addresses individual behaviour change but plays a role in addressing systemic barriers to physical activity by sharing the lived experience insight to shape policy and professional practice.
Keywords: behaviour change, long term conditions, partnership, relatable
Procedia PDF Downloads 65
630 Sustainable Mining Fulfilling Constitutional Responsibilities: A Case Study of NMDC Limited Bacheli in India
Authors: Bagam Venkateswarlu
Abstract:
NMDC Limited, an Indian multinational mining company, operates under the administrative control of the Ministry of Steel, Government of India. This study evaluates how the sustainable mining practised by the company fulfils the provisions of the Indian Constitution to secure to its citizens justice, equality of status and opportunity, and social, economic, political, and religious wellbeing. The Constitution of India lays down a road map for how the goal of being a “Welfare State” shall be achieved. The vision of sustainable mining being practised is oriented towards the constitutional responsibilities of Indian citizens and the corporate world. This qualitative study is backed by quantitative studies of NMDC's performance across various domains of sustainable mining and ESG (environment, social, and governance) parameters. For example, the Five Star Rating of mines, a comprehensive evaluation system introduced by the Ministry of Mines, Government of India, is one of the methodologies used. Corporate Social Responsibility is one of the thrust areas for securing social well-being. Green energy initiatives in and around the mines have earned NMDC Limited the title of “Eco-Friendly Miner”. While operating a fully mechanized, large-scale iron ore mine (18.8 million tonnes per annum capacity) in Bacheli, Chhattisgarh, NMDC Limited caters to the mineral security needs of the State of Chhattisgarh and the Indian Union. It preserves the forest, wildlife, and environmental heritage of the richly endowed State of Chhattisgarh. In the remote and far-flung interiors of Chhattisgarh, NMDC empowers the local population by providing world-class educational and medical facilities, a transportation network, drinking water facilities, irrigation and agricultural support, employment opportunities, and religious harmony. All this ultimately results in an empowered, educated, and more aware population.
Thus, the basic tenets of the Constitution of India (secularism, democracy, welfare for all, socialism, humanism, decentralization, liberalism, a mixed economy, and non-violence) are fulfilled. The Constitution declares India a welfare state: for the people, of the people and by the people. The sustainable mining practices of NMDC are in line with this objective; thus, the purpose of the study is fully met. The potential benefit of the study includes replicating this model in existing or new establishments in various parts of the country, especially in the under-privileged interiors and far-flung areas which are yet to see the light of development.
Keywords: ESG values, Indian constitution, NMDC limited, sustainable mining, CSR, green energy
Procedia PDF Downloads 75
629 Evaluation of Suspended Particles Impact on Condensation in Expanding Flow with Aerodynamics Waves
Authors: Piotr Wisniewski, Sławomir Dykas
Abstract:
Condensation has a negative impact on turbomachinery efficiency in many energy processes. In technical applications, it is often impossible to dry the working fluid at the nozzle inlet. One of the most popular working fluids is atmospheric air, which always contains water in the form of steam, liquid, or ice crystals. Moreover, it always contains some amount of suspended particles which influence the phase change process. It is known that the phenomena of evaporation and condensation are connected with the release or absorption of latent heat, which influences the fluid's physical properties and might affect machinery efficiency; therefore, the phase transition has to be taken into account. This research presents an attempt to evaluate the impact of solid and liquid particles suspended in the air on the expansion of moist air at a low expansion rate, i.e., with expansion rate P≈1000 s⁻¹. A numerical study supported by analytical and experimental research is presented in this work. The experimental study was carried out using an in-house experimental test rig, where a nozzle was examined for inlet air relative humidity values in the range of 25 to 51%. The nozzle was tested for supersonic flow as well as for flow with shock waves induced by elevated back pressure. The Schlieren photography technique and measurement of static pressure on the nozzle wall were used for qualitative identification of both condensation and shock waves. A numerical model validated against experimental data available in the literature was used for analysis of the occurring flow phenomena. The analysis of the suspended particles' number, diameter, and character (solid or liquid) revealed their connection with the importance of heterogeneous condensation. If the expansion of a fluid without suspended particles is considered, condensation triggers a so-called condensation wave that appears downstream of the nozzle throat.
If solid particles are considered, condensation is triggered further upstream of the nozzle throat as their number increases, decreasing the condensation wave strength. Due to the release of latent heat during condensation, the fluid temperature and pressure increase, leading to a shift of the normal shock upstream of the flow. Owing to the relatively large diameters of the droplets created during heterogeneous condensation, they evaporate partially at the shock and continue to evaporate downstream of the nozzle. If liquid water particles are considered, due to their larger radius they do not affect the expanding flow significantly; however, they might be of major importance when considering compression phenomena, as they will tend to evaporate at the shock wave. This research demonstrates the need for further study of phase change phenomena in supersonic flow, especially the interaction of droplets with the aerodynamic waves in the flow.
Keywords: aerodynamics, computational fluid dynamics, condensation, moist air, multi-phase flows
Procedia PDF Downloads 116
628 Legal Considerations in Fashion Modeling: Protecting Models' Rights and Ensuring Ethical Practices
Authors: Fatemeh Noori
Abstract:
The fashion industry is a dynamic and ever-evolving realm that continuously shapes societal perceptions of beauty and style. Within this industry, fashion modeling plays a crucial role, acting as the visual representation of brands and designers. However, behind the glamorous façade lies a complex web of legal considerations that govern the rights, responsibilities, and ethical practices within the field. This paper aims to explore the legal landscape surrounding fashion modeling, shedding light on key issues such as contract law, intellectual property, labor rights, and the increasing importance of ethical considerations in the industry. Fashion modeling involves the collaboration of various stakeholders, including models, designers, agencies, and photographers. To ensure a fair and transparent working environment, it is imperative to establish a comprehensive legal framework that addresses the rights and obligations of each party involved. One of the primary legal considerations in fashion modeling is the contractual relationship between models and agencies. Contracts define the terms of engagement, including payment, working conditions, and the scope of services. This section will delve into the essential elements of modeling contracts, the negotiation process, and the importance of clarity to avoid disputes. Models are not just individuals showcasing clothing; they are integral to the creation and dissemination of artistic and commercial content. Intellectual property rights, including image rights and the use of a model's likeness, are critical aspects of the legal landscape. This section will explore the protection of models' image rights, the use of their likeness in advertising, and the potential for unauthorized use. Models, like any other professionals, are entitled to fair and ethical treatment. This section will address issues such as working conditions, hours, and the responsibility of agencies and designers to prioritize the well-being of models. 
Additionally, it will explore the global movement toward inclusivity, diversity, and the promotion of positive body image within the industry. The fashion industry has faced scrutiny for perpetuating harmful standards of beauty and fostering a culture of exploitation. This section will discuss the ethical responsibilities of all stakeholders, including the promotion of diversity, the prevention of exploitation, and the role of models as influencers for positive change. In conclusion, the legal considerations in fashion modeling are multifaceted, requiring a comprehensive approach to protect the rights of models and ensure ethical practices within the industry. By understanding and addressing these legal aspects, the fashion industry can create a more transparent, fair, and inclusive environment for all stakeholders involved in the art of modeling.
Keywords: fashion modeling contracts, image rights in modeling, labor rights for models, ethical practices in fashion, diversity and inclusivity in modeling
Procedia PDF Downloads 74
627 CO₂ Recovery from Biogas and Successful Upgrading to Food-Grade Quality: A Case Study
Authors: Elisa Esposito, Johannes C. Jansen, Loredana Dellamuzia, Ugo Moretti, Lidietta Giorno
Abstract:
The reduction of CO₂ emission into the atmosphere as a result of human activity is one of the most important environmental challenges of the coming decades. Emission of CO₂, related to the use of fossil fuels, is believed to be one of the main causes of global warming and climate change. In this scenario, the production of biomethane from organic waste, as a renewable energy source, is one of the most promising strategies to reduce fossil fuel consumption and greenhouse gas emission. Unfortunately, biogas upgrading still produces the greenhouse gas CO₂ as a waste product. Therefore, this work presents a case study on biogas upgrading, aimed at the simultaneous purification of methane and CO₂ via different steps, including CO₂/methane separation by polymeric membranes. The original objective of the project was biogas upgrading to distribution-grid-quality methane, but the innovative aspect of this case study is the further purification of the captured CO₂, transforming it from a useless by-product into a pure gas of food-grade quality, suitable for commercial application in the food and beverage industry. The study was performed on a pilot plant constructed by Tecno Project Industriale Srl (TPI), Italy. This is a model of one of the largest biogas production and purification plants. The full-scale anaerobic digestion plant (Montello Spa, North Italy) has a digestive capacity of 400,000 tonnes of biomass/year and can treat 6,250 m³/hour of biogas from FORSU (the organic fraction of solid urban waste). The entire upgrading process consists of a number of purification steps: 1. Dehydration of the raw biogas by condensation. 2. Removal of trace impurities such as H₂S via absorption. 3. Separation of CO₂ and methane via a membrane separation process. 4. Removal of trace impurities from the CO₂. The gas separation with polymeric membranes guarantees complete simultaneous removal of microorganisms.
The chemical purity of the different process streams was analysed by a certified laboratory and compared with the guidelines of the European Industrial Gases Association and the International Society of Beverage Technologists (EIGA/ISBT) for CO₂ used in the food industry. The microbiological purity was compared with the limit values defined in the European Collaborative Action. With a purity of 96-99 vol%, the purified methane meets the legal requirements for the household network. At the same time, the CO₂ reaches a purity of >98.1% before, and 99.9% after, the final distillation process. According to the EIGA/ISBT guidelines, the CO₂ proves to be chemically and microbiologically sufficiently pure to be suitable for food-grade applications.
Keywords: biogas, CO₂ separation, CO₂ utilization, CO₂ food grade
Procedia PDF Downloads 211
626 Insights into Child Malnutrition Dynamics with the Lens of Women’s Empowerment in India
Authors: Bharti Singh, Shri K. Singh
Abstract:
Child malnutrition is a multifaceted issue that transcends geographical boundaries. Malnutrition not only stunts physical growth but also leads to a spectrum of morbidities and child mortality. It is one of the leading causes of death (~50%) among children under age five. Despite economic progress and advancements in healthcare, child malnutrition remains a formidable challenge for India. The objective is to investigate the impact of women's empowerment on child nutrition outcomes in India from 2006 to 2021. A composite index of women's empowerment was constructed using Confirmatory Factor Analysis (CFA), a rigorous technique that validates the measurement model by assessing how well observed variables represent latent constructs. This approach ensures the reliability and validity of the empowerment index. Secondly, kernel density plots were utilised to visualise the distribution of key nutritional indicators, such as stunting, wasting, and overweight. These plots offer insights into the shape and spread of data distributions, aiding in understanding the prevalence and severity of malnutrition. Thirdly, linear polynomial graphs were employed to analyse how nutritional parameters evolve with the child's age. This technique enables the visualisation of trends and patterns over time, allowing for a deeper understanding of nutritional dynamics during different stages of childhood. Lastly, multilevel analysis was conducted to identify vulnerable levels, including state-level, PSU-level, and household-level factors impacting undernutrition. This approach accounts for hierarchical data structures and allows for the examination of factors at multiple levels, providing a comprehensive understanding of the determinants of child malnutrition. Overall, the utilisation of these statistical methodologies enhances the transparency and replicability of the study by providing clear and robust analytical frameworks for data analysis and interpretation.
Our study reveals that NFHS-4 and NFHS-5 exhibit an equal density of severely stunted cases. NFHS-5 indicates a limited decline in wasting among children aged five, while the density of severely wasted children remains consistent across NFHS-3, 4, and 5. In 2019-21, women with higher empowerment had a lower risk of their children being undernourished (regression coefficient = -0.10***; confidence interval [-0.18, -0.04]). Gender dynamics also play a significant role, with male children exhibiting a higher susceptibility to undernourishment. Multilevel analysis suggests household-level vulnerability (intra-class correlation = 0.21), highlighting the need to address child undernutrition at the household level.
Keywords: child nutrition, India, NFHS, women’s empowerment
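The kernel density comparison described above can be sketched in a few lines. The z-score distributions, sample sizes, and random seed below are synthetic illustrations, not NFHS data; only the -2 SD stunting cutoff follows the standard definition:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic height-for-age z-scores for two hypothetical survey rounds.
hfa_round4 = rng.normal(loc=-1.4, scale=1.2, size=5000)
hfa_round5 = rng.normal(loc=-1.3, scale=1.2, size=5000)

# Fit a kernel density to each round and evaluate it on a common grid.
grid = np.linspace(-6.0, 4.0, 500)
density4 = gaussian_kde(hfa_round4)(grid)
density5 = gaussian_kde(hfa_round5)(grid)

# Estimate stunting prevalence (z < -2) by integrating each density
# over the region below the cutoff.
step = grid[1] - grid[0]
stunted4 = density4[grid < -2.0].sum() * step
stunted5 = density5[grid < -2.0].sum() * step
```

Plotting `density4` and `density5` over `grid` (e.g. with matplotlib) gives the kernel density plot the abstract refers to, while the integrated tails provide the prevalence comparison across rounds.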
Procedia PDF Downloads 32
625 Documentary Project as an Active Learning Strategy in a Developmental Psychology Course
Authors: Ozge Gurcanli
Abstract:
Recent studies in active learning focus on how student experience varies based on the content (e.g. STEM versus Humanities) and the medium (e.g. in-class exercises versus off-campus activities) of experiential learning. However, little is known about whether variation in classroom time and space within the same active learning context affects student experience. This study manipulated the use of classroom time for the active learning component of a developmental psychology course offered at a four-year university in the South-West Region of the United States. The course uses a blended model: traditional and active learning. In the traditional learning component of the course, students do weekly readings, listen to lectures, and take midterms. In the active learning component, students make a documentary on a developmental topic as a final project. Students used the classroom time and space for the documentary in two ways: regular classroom time slots dedicated to the making of the documentary outside, without the supervision of the professor (Classroom-time Outside), and lectures that offered basic instructions about how to make a documentary (Documentary Lectures). The study used the public teaching evaluations administered by the Office of the Registrar. A total of two hundred and seven student evaluations were available across six semesters. Because the Office of the Registrar presented the data separately, without personal identifiers, a One-Way ANOVA with four groups (Traditional; Experiential-Heavy: 19% Classroom-time Outside, 12% Documentary Lectures; Experiential-Moderate: 5-7% Classroom-time Outside, 16-19% Documentary Lectures; Experiential-Light: 4-7% Classroom-time Outside, 7% Documentary Lectures) was conducted on five key features (Organization, Quality, Assignments Contribution, Intellectual Curiosity, Teaching Effectiveness). Each measure used a five-point reverse-coded scale (1-Outstanding, 5-Poor).
For all experiential conditions, the documentary counted towards 30% of the final grade. Organization (‘The instructor’s preparation for class was’), Quality (’Overall, I would rate the quality of this course as’) and Assignment Contribution (’The contribution of the graded work to the learning experience was’) did not yield any significant differences across the four course types (F(3, 202) = 1.72, p > .05; F(3, 200) = .32, p > .05; F(3, 203) = .43, p > .05, respectively). Intellectual Curiosity (’The instructor’s ability to stimulate intellectual curiosity was’) yielded a marginal effect (F(3, 201) = 2.61, p = .053). Tukey’s HSD (p < .05) indicated that the Experiential-Heavy condition (M = 1.94, SD = .82) was significantly different from the other three conditions (M = 1.57, 1.51, 1.58; SD = .68, .66, .77, respectively), showing that heavily active class-time did not elicit intellectual curiosity as much as the others. Finally, Teaching Effectiveness (’Overall, I feel that the instructor’s effectiveness as a teacher was’) was significant (F(3, 198) = 3.32, p < .05). Tukey’s HSD (p < .05) showed that students found the courses with moderate (M = 1.49, SD = .62) to light (M = 1.52, SD = .70) active class-time more effective than heavily active class-time (M = 1.93, SD = .69). Overall, the findings of this study suggest that within the same active learning context, the time and space dedicated to active learning result in different outcomes in intellectual curiosity and teaching effectiveness.
Keywords: active learning, learning outcomes, student experience, learning context
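A one-way ANOVA with a Tukey HSD post-hoc test of the kind reported above can be sketched with SciPy. The group means, spread, and sample sizes below are synthetic stand-ins loosely echoing the reported condition means, not the study's evaluation data:

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(42)
# Synthetic five-point reverse-coded ratings (1 = Outstanding, 5 = Poor)
# for the four hypothetical course types.
traditional = rng.normal(1.55, 0.7, 200)
heavy = rng.normal(1.93, 0.7, 200)
moderate = rng.normal(1.49, 0.7, 200)
light = rng.normal(1.52, 0.7, 200)

# Omnibus test: do the four condition means differ?
f_stat, p_value = f_oneway(traditional, heavy, moderate, light)

# Post-hoc pairwise comparisons with family-wise error control.
posthoc = tukey_hsd(traditional, heavy, moderate, light)
heavy_vs_moderate_p = posthoc.pvalue[1, 2]
```

`posthoc.pvalue` is a 4x4 matrix of pairwise p-values, so `pvalue[1, 2]` compares the heavy and moderate conditions, mirroring the heavy-versus-others contrast reported in the text.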
Procedia PDF Downloads 190
624 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack
Authors: Varun Agarwal
Abstract:
Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a holistic evaluation of the fully scanned image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and ameliorate breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region-of-interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation, adding convolved outputs in the inception unit to the proceeding input data. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (consisting primarily of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise.
Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer learning Inception-ResNetV2 network, enhanced with the CDAE stack, yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units, synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images
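The spatial pyramid pooling step mentioned in the methodology can be sketched as follows. This is a generic max-pooling variant over a (1, 2, 4) pyramid; the exact pyramid levels are an assumption for illustration, not the paper's implementation:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (H, W, C) feature map over an l x l grid for each
    pyramid level and concatenate, yielding a fixed-length vector
    regardless of the input's spatial size."""
    h, w, c = feature_map.shape
    pooled = []
    for l in levels:
        # Bin edges that cover the map as evenly as possible.
        h_edges = np.linspace(0, h, l + 1).astype(int)
        w_edges = np.linspace(0, w, l + 1).astype(int)
        for i in range(l):
            for j in range(l):
                region = feature_map[h_edges[i]:h_edges[i + 1],
                                     w_edges[j]:w_edges[j + 1], :]
                pooled.append(region.max(axis=(0, 1)))
    return np.concatenate(pooled)

# Two differently sized maps produce vectors of identical length:
# (1 + 4 + 16) bins x 8 channels = 168 features.
v1 = spatial_pyramid_pool(np.random.rand(13, 17, 8))
v2 = spatial_pyramid_pool(np.random.rand(32, 32, 8))
```

This size-invariance is what lets tiles of varying resolution feed a fixed-input classifier while preserving coarse-to-fine spatial structure.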
Procedia PDF Downloads 129
623 Exploring Instructional Designs on the Socio-Scientific Issues-Based Learning Method in Respect to STEM Education for Measuring Reasonable Ethics on Electromagnetic Wave through Science Attitudes toward Physics
Authors: Adisorn Banhan, Toansakul Santiboon, Prasong Saihong
Abstract:
The Socio-Scientific Issues-Based Learning method was compared with blended STEM education instruction using a sample of 84 students in 2 classes at the 11th grade level in Sarakham Pittayakhom School. The two instructional models were organised around five instructional lesson plans in the context of electromagnetic waves. The research procedures assigned each instructional method to one of two groups: an experimental group of 40 students received the STEM education (STEMe) instruction, and a control group of 40 students received the Socio-Scientific Issues-Based Learning (SSIBL) instruction. Associations between students’ learning achievements under each instructional method and the science attitudes predicting their exploring activities toward physics with the STEMe and SSIBL methods were compared. The Measuring Reasonable Ethics Test (MRET) was used to assess students’ reasonable ethics under the STEMe and SSIBL instructional design methods in each group. A pretest-posttest technique was used to monitor and evaluate students’ performances in reasonable ethics on the electromagnetic wave topic in the STEMe and SSIBL classes. Students were observed and gained experience with the phenomena being studied under the Socio-Scientific Issues-Based Learning method model. The STEM approach holds that STEM is not just teaching about Science, Technology, Engineering, and Mathematics; it is a culture that needs to be cultivated to help create a problem-solving, creative, critical-thinking workforce for tomorrow in physics. Students’ attitudes were assessed with the Test Of Physics-Related Attitude (TOPRA), modified from the original Test Of Science-Related Attitude (TOSRA). Comparisons between students’ learning achievements under the different instructional methods (STEMe and SSIBL) were analysed.
Associations between students’ performances under the STEMe and SSIBL instructional design methods, their reasonable ethics, and their science attitudes toward physics were examined. The findings show that the efficiency of the SSIBL and STEMe innovations met the IOC criterion and exceeded the 80/80 standard level. Students’ learning achievements in the control and experimental groups with the SSIBL and STEMe differed significantly at the .05 level. Comparing students’ reasonable ethics under the two methods, students’ responses to the instructional activities were higher with the STEMe than with the SSIBL instructional method. For the associations between students’ later learning achievements with the SSIBL and STEMe, the predictive efficiency values of R² indicate that 67% and 75% of the variance for the SSIBL, and 74% and 81% for the STEMe, were attributable to their developing reasonable ethics and science attitudes toward physics, respectively.
Keywords: socio-scientific issues-based learning method, STEM education, science attitudes, measurement, reasonable ethics, physics classes
Procedia PDF Downloads 291