Search results for: likert scale
Mixing Enhancement with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Micromixer Using Different Mixing Fluids
Authors: Ayalew Yimam Ali
Abstract:
T-shaped microchannels are used to mix miscible or immiscible fluids of different viscosities. However, mixing at the entrance of a T-junction microchannel is difficult for two miscible, high-viscosity water-glycerol fluids because of the laminar flow that prevails at the micro scale. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce rolling motion in the microchannel when a low-frequency acoustic transducer oscillates and induces an acoustic wave in the flow field. The 3D trapezoidal, triangular structure spine newly developed in this study was produced with precision CNC machining, used to cut a microchannel mold carrying the spine along the longitudinal mixing region of the T-junction. The molds for the 3D trapezoidal structure, with sharp-edge tip angles of 30° and a sharp-edge tip depth of 0.3 mm, were machined from PMMA (polymethyl methacrylate) glass, and the channel was fabricated in PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the T-junction microchannel using soft-lithography nanofabrication strategies. Micro-particle image velocimetry (μPIV) was used to visualize the 3D rolling steady acoustic streaming and to study mixing enhancement with the high-viscosity miscible fluids for different spine longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes.
The streaming velocity and vorticity fields show vorticity approximately 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies from the grayscale pixel intensity using MATLAB. In the mixing experiments, a fluorescent green dye solution in de-ionized water was fed into one inlet of the T-channel and a de-ionized water-glycerol mixture into the other; the degree of mixing improved markedly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing of the two miscible, high-viscosity fluids, otherwise governed by laminar transport, was enhanced by the formation of a new, three-dimensional, intense steady streaming rolling motion at high volume flow rates around the junction entrance mixing zone.

Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
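The degree-of-mixing figures quoted above are typically computed from the spread of grayscale pixel intensities across a channel cross-section, with a uniform intensity field counting as fully mixed. A minimal sketch of that idea (the function name, the normalization by the worst-case standard deviation, and the example values are illustrative assumptions, not the authors' MATLAB code):

```python
import numpy as np

def degree_of_mixing(gray, sigma_max=None):
    """Mixing index M = 1 - sigma/sigma_max from grayscale intensities
    of a cross-section image (M = 0 unmixed, M = 1 fully mixed)."""
    gray = np.asarray(gray, dtype=float)
    sigma = gray.std()
    if sigma_max is None:
        # worst case for a binary dyed/undyed field with mean intensity m:
        # half the pixels at 0 and half at 2m gives sigma_max = m
        sigma_max = gray.mean()
    return 1.0 - sigma / sigma_max

# perfectly uniform cross-section -> M = 1
print(degree_of_mixing(np.full((8, 8), 0.5)))
# fully segregated dye/no-dye halves -> M = 0
print(degree_of_mixing(np.concatenate([np.zeros(32), np.ones(32)])))
```

In practice the reference standard deviation is usually taken from an image of the unmixed inlet streams rather than inferred from the mean.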
Procedia PDF Downloads 204

Electrohydrodynamic Patterning for Surface Enhanced Raman Scattering for Point-of-Care Diagnostics
Authors: J. J. Rickard, A. Belli, P. Goldberg Oppenheimer
Abstract:
Medical diagnostics, environmental monitoring, homeland security, and forensics increasingly demand specific and field-deployable analytical technologies for quick point-of-care diagnostics. Although technological advancements have made optical methods well suited for miniaturization, a highly sensitive detection technique for minute sample volumes is required. Raman spectroscopy is a well-known analytical tool but yields very weak signals and is hence unsuitable for trace-level analysis. Enhancement via localized optical fields (surface plasmon resonances) on nanoscale metallic materials generates huge signals in surface-enhanced Raman scattering (SERS), enabling single-molecule detection. This enhancement can be tuned by manipulating the surface roughness and architecture at the sub-micron level. Nevertheless, the development and application of SERS have been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplexable, and addressable SERS substrates with high enhancements is of profound interest for SERS-based sensing devices. While most SERS substrates are manufactured by conventional lithographic methods, a cost-effective approach to creating nanostructured surfaces is a much sought-after goal in the SERS community. Here, a method is established to create controlled, self-organized, hierarchical nanostructures using hierarchical electrohydrodynamic (HEHD) instabilities. The created structures are readily fine-tuned, an important requirement for optimizing SERS to obtain the highest enhancements. HEHD pattern formation enables the fabrication of multiscale 3D structured arrays as SERS-active platforms. Importantly, each of the HEHD-patterned individual structural units yields a considerable SERS enhancement, enabling each single unit to function as an isolated sensor.
Each of the formed structures can be effectively tuned and tailored to provide high SERS enhancement while arising from different HEHD morphologies. The HEHD fabrication of sub-micrometer architectures is straightforward and robust, providing an elegant route for high-throughput biological and chemical sensing. The superior detection properties, and the ability to fabricate SERS substrates at the miniaturized scale, will facilitate the development of advanced and novel opto-fluidic devices, such as portable detection systems, and will offer numerous applications in biomedical diagnostics, forensics, environmental monitoring, and homeland security.

Keywords: hierarchical electrohydrodynamic patterning, medical diagnostics, point-of-care devices, SERS
Procedia PDF Downloads 345

Development and Experimental Evaluation of a Semiactive Friction Damper
Authors: Juan S. Mantilla, Peter Thomson
Abstract:
Seismic events may cause discomfort for building occupants, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing construction costs and design forces. Structural control systems have arisen as an alternative way to reduce these dynamic responses. Commonly used control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage of being optimal only over a range of sliding forces; outside that range their efficiency decreases. This implies that each passive friction damper is designed, built, and commercialized for a specific sliding/clamping force, at which the damper shifts from a locked state to a slip state, where it dissipates energy through friction. The risk of this force-dependent efficiency is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In that case the expected forces in the building can change, considerably reducing the efficiency of a damper designed for a specific sliding force. It is also evident that when a seismic event occurs, the forces on each floor vary in time, meaning the damper's efficiency is not optimal at all times. Semi-active friction devices adapt their sliding force to keep the damper in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost semi-active variable friction damper (SAVFD), in reduced scale, to reduce the vibrations of structures subject to earthquakes.
The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor controlled by (3) an Arduino board, which acquires accelerations or displacements from (4) sensors on the immediately upper and lower floors, and (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed, and the SAVFD and the structure were experimentally characterized. A numerical model of the structure and the SAVFD was developed based on this dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally on a shaking table using earthquake and frequency-chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations compared to the uncontrolled structure.

Keywords: earthquake response, friction damper, semiactive control, shaking table
Procedia PDF Downloads 378

Smart Architecture and Sustainability in the Built Environment for the Hatay Refugee Camp
Authors: Ali Mohammed Ali Lmbash
Abstract:
The global refugee crisis points to the vital need for sustainable and resilient solutions to the many problems faced by displaced persons all over the world. Sustainability concerns are diverse, including energy consumption, waste management, water access, and the resilience of structures. Our research aims to develop distinct ideas for sustainable architecture in response to the exigent problems of disaster-threatened areas, starting with the Hatay refugee camp in Turkey, where the majority of camp dwellers are Syrian refugees. Commencing with community-based participatory research focused on the socio-environmental issues of displaced populations, this study applies two approaches, with a specific focus on the Hatay region. The first experiment uses Richter's predictive model and simulations to forecast earthquake outcomes in refugee camps; the results can inform architectural design tactics that enhance structural reliability and ensure the security and safety of shelters during earthquakes. In the second experiment, given how vital water is to human well-being, a model is generated to predict the quality of existing water sources. This research aims to enable camp administrators to employ forward-looking practices in managing water resources, thus minimizing health risks and building the resilience of the refugees in the Hatay area. The research also assesses other sustainability problems of the Hatay refugee camp. As energy consumption becomes a major issue, housing developers are required to consider energy-efficient designs and the feasible integration of renewable-energy technologies to minimize environmental impact and improve the long-term sustainability of housing projects.
Waste management is given special attention through recycling initiatives and waste-reduction measures intended to slow environmental degradation within the camp's land area. The study also gives insight into the social and economic reality of the camp, investigating the contribution of initiatives such as urban agriculture and vocational training to the enhancement of livelihoods and community empowerment. Combining the latest research with practical experience, this study contributes to the continuing discussion on sustainable architecture in disaster relief, providing recommendations and information that can be adapted at every scale worldwide. Through collaborative efforts and a dedicated approach to sustainability, we can jointly address the root causes and work toward a far more robust and equitable society.

Keywords: smart architecture, Hatay Camp, sustainability, machine learning
Procedia PDF Downloads 54

Implementation of a PDMS Microdevice for the Improved Purification of Circulating MicroRNAs
Authors: G. C. Santini, C. Potrich, L. Lunelli, L. Vanzetti, S. Marasso, M. Cocuzza, C. Pederzolli
Abstract:
The relevance of circulating miRNAs as non-invasive biomarkers for several pathologies is nowadays undoubtedly clear, as they have been found to have both diagnostic and prognostic value, adding fundamental information to a patient's clinical picture. Obtaining these data, however, relies on a time-consuming process spanning sample collection and processing through data analysis. In light of this, strategies to ease this procedure are in high demand, and considerable effort has been made in developing lab-on-a-chip (LOC) devices able to speed up and standardize the bench work. In this context, a very promising polydimethylsiloxane (PDMS)-based microdevice, which integrates the processing of the biological sample (i.e., purification of extracellular miRNAs) with reverse transcription, was previously developed in our lab. In this study, we aimed to improve the miRNA-extraction performance of this microdevice by increasing the ability of its surface to adsorb extracellular miRNAs from biological samples. For this purpose, we focused on modulating two properties of the material: roughness and charge. PDMS surface roughness was modulated by casting against several templates (terminated with silicon oxide coated by a thin anti-adhesion aluminum layer), followed by a panel of curing conditions. Atomic force microscopy (AFM) was employed to estimate changes at the nanometric scale. To modify the surface charge, we functionalized PDMS with different mixes of positively charged 3-aminopropyltrimethoxysilane (APTMS) and neutral poly(ethylene glycol) silane (PEG). The surface chemical composition was characterized by X-ray photoelectron spectroscopy (XPS), and the number of exposed primary amines was quantified with the reagent sulfosuccinimidyl-4-o-(4,4-dimethoxytrityl) butyrate (s-SDTB).
As our final endpoint, the adsorption rate under all these conditions was assessed by fluorescence microscopy after incubation with a synthetic fluorescently labeled miRNA. Our preliminary analysis identified casting on thermally grown silicon oxide, followed by curing at 85 °C for 1 hour, as the most efficient technique to obtain a PDMS surface roughness on the nanometric scale able to trap miRNA. In addition, functionalization with 0.1% APTMS and 0.9% PEG was found to be a necessary step to significantly increase the amount of microRNA adsorbed on the surface, and therefore available for further steps such as on-chip reverse transcription. These findings show a substantial improvement in the extraction efficiency of our PDMS microdevice, ultimately an important step forward in the development of an innovative, easy-to-use, and integrated system for the direct purification of less abundant circulating microRNAs.

Keywords: circulating miRNAs, diagnostics, lab-on-a-chip, polydimethylsiloxane (PDMS)
Procedia PDF Downloads 318

Effects of Prescribed Surface Perturbation on NACA 0012 at Low Reynolds Number
Authors: Diego F. Camacho, Cristian J. Mejia, Carlos Duque-Daza
Abstract:
The recent widespread use of unmanned aerial vehicles (UAVs) has fueled renewed interest in the efficiency and performance of airfoils, particularly for applications at the low and moderate Reynolds numbers typical of such vehicles. Most previous efforts in the aeronautical industry regarding aerodynamic efficiency have focused on the high-Reynolds-number applications typical of commercial airliners and large aircraft. However, to increase efficiency and boost the performance of these UAVs, it is necessary to explore new alternatives in airfoil design and in the application of drag-reduction techniques. The objective of the present work is to analyze and compare the performance of a standard NACA 0012 profile against one featuring a wall protuberance, or surface perturbation. A computational model based on the finite volume method is employed to evaluate the effect of geometrical distortions on the wall. Performance is evaluated in terms of variations of the drag and lift coefficients for the given profile. In particular, the aerodynamic performance of the new design, i.e., the airfoil with a surface perturbation, is examined under incompressible, subsonic, transient flow conditions. The perturbation considered is a shaped protrusion prescribed as a small surface deformation on the top wall of the aerodynamic profile. The ultimate goal of including such a controlled, smooth, artificial roughness is to alter the turbulent boundary layer. It is shown in the present work that this modification has a dramatic impact on the aerodynamic characteristics of the airfoil and, if properly adjusted, a positive one. The computational model was implemented using the unstructured, FVM-based open-source C++ platform OpenFOAM.
A number of numerical experiments were carried out at a Reynolds number of 5×10⁴, based on the chord length and the free-stream velocity, and at angles of attack of 6° and 12°. A large eddy simulation (LES) approach was used, with the dynamic Smagorinsky subgrid-scale (SGS) model accounting for the effect of the small turbulent scales. The impact of the surface perturbation on the performance of the airfoil is judged in terms of changes in the drag and lift coefficients, as well as alterations of the main characteristics of the turbulent boundary layer on the upper wall. A dramatic change in overall performance is observed, including an arguably large increase in the lift-to-drag ratio at both angles and a reduction in the size of the laminar separation bubble (LSB) at the 12° angle of attack.

Keywords: CFD, LES, lift-to-drag ratio, LSB, NACA 0012 airfoil
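For reference, the chord-based Reynolds number and the force coefficients used to judge performance follow the standard definitions Re = Uc/ν and C = F / (½ρU²cb). A small sketch of those relations (the numerical values and the default air viscosity are illustrative assumptions, unrelated to the paper's actual flow conditions):

```python
def reynolds_number(U, chord, nu=1.5e-5):
    """Chord-based Reynolds number; nu defaults to air at ~20 degC (assumption)."""
    return U * chord / nu

def force_coefficients(lift, drag, rho, U, chord, span=1.0):
    """Nondimensionalize forces by dynamic pressure times reference area."""
    q_area = 0.5 * rho * U**2 * chord * span
    return lift / q_area, drag / q_area

# e.g. a 0.1 m chord at 7.5 m/s in air gives Re = 5e4
print(reynolds_number(7.5, 0.1))
cl, cd = force_coefficients(10.0, 1.0, 1.225, 10.0, 1.0)
print(cl / cd)  # lift-to-drag ratio
```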
Procedia PDF Downloads 386

Community Communications and Micro-Level Shifts: The Case of Video Volunteers' IndiaUnheard Program
Authors: Pooja Ichplani, Archna Kumar, Jessica Mayberry
Abstract:
Community video (CV) is a participatory medium with immense potential to strengthen community communications and amplify people's voices for their empowerment. By building the capacities of marginalized community groups in particular, and by providing a platform for them to voice their ideas freely, CV endeavours to shift development toward more participatory, bottom-up processes and greater power in the hands of the people, especially the disadvantaged. In various parts of the world, community video initiatives have become instrumental in facilitating micro-level, yet significant, changes among marginalized communities. Video Volunteers (VV) is an organization that promotes community media and works to provide disadvantaged communities with the journalistic, critical-thinking, and creative skills they need to catalyse change in their communities. Working since 2002, VV has evolved a unique community media model fostering locally owned and managed media production, as well as building people's capacities to articulate and share their perspectives on the issues that matter to them, on both a local and a global scale. Further, by integrating a livelihood aspect within its model, VV has actively involved people from poor, marginalized communities and given them a new tool for serving their communities while keeping their identities intact. This paper, based on qualitative research, seeks to map the range of VV's impacts in communities and to provide an in-depth analysis of the factors contributing to that impact. Study tools included content analysis of a longitudinal sample of impact videos, narratives of community correspondents collected using the Most Significant Change Technique (MSCT), and interviews with key informants. Using a multi-fold analysis, the paper seeks holistic insights.
At the first level, the paper profiles the community correspondents (CCs) spearheading change and maps their personal and social contexts and their perceptions of VV's place in their personal lives. Secondly, at the organizational level, it maps the significance of the impacts brought about in the CCs' communities and the correspondents' association, challenges, and achievements while working with VV. Lastly, at the community level, it analyses the nature of the impacts achieved and the aspects influencing them. Finally, the study critiques the functioning of Video Volunteers as a community media initiative through the lens of the tipping-point theory, emphasizing the power of context constituted by the socio-cultural environment. It concludes that the empowerment of its community correspondents, its multifarious activities before and after video production, and other innovative mechanisms have brought the issues of marginalized communities to centre stage and snowballed processes of change in communities.

Keywords: community media, empowerment, participatory communication, social change
Procedia PDF Downloads 137

Research on Land Use Pattern and Employment-Housing Space of Coastal Industrial Town Based on the Investigation of Liaoning Province, China
Authors: Fei Chen, Wei Lu, Jun Cai
Abstract:
During the Twelfth Five-Year Plan period, China promulgated industrial policies promoting the relocation of energy-intensive industries to coastal areas in order to utilize marine shipping resources. Consequently, some major state-owned steel and gas enterprises relocated, resulting in large-scale coastal development. However, some land may have been over-exploited by seamless coastline projects. To balance employment and housing, new industrial coastal towns were constructed to support this industry-led development. In this paper, we adopt a case-study approach to closely examine the development of several new industrial coastal towns in Liaoning Province, situated in the Bohai Bay area, which is currently undergoing rapid economic growth. Our investigations reveal a common pattern of long-distance commuting and a massive amount of vacant housing. More specifically, large plant relocations created daily commutes of hundreds of kilometers, and enterprises had to provide housing subsidies and education incentives to motivate employees to relocate to coastal areas. Nonetheless, many employees still refuse to relocate because of job stability, the diverse needs of family members, and access to convenient services. These employees average 4 hours of commuting daily, and some who live further away have had to reside in temporary industrial housing units, subject to long-term family separation. As a result, only a small portion of employees purchase new coastal residences, mostly for investment and retirement purposes, leading to massive vacancy and a ghost-town phenomenon. In contrast to the low demand, coastal areas tend to develop large amounts of housing prior to industrial relocation, which may be directly related to local government finances: some local governments have sold residential land to developers to generate revenue to support the subsequent industrial development.
Given buyers' strong preference for ocean views, residential developers tend to select coastline land for new residential towns, which further reduces the access of major industrial enterprises to marine resources. This violates the original intent of developing industrial coastal towns and drastically limits the availability of marine resources. Lastly, we analyze the coexistence of over-exploited residential areas and massive vacancies with reference to the demand and supply of land, as well as the demand for residential housing given the locational choice criteria of enterprise employees.

Keywords: coastal industry town, commuter traffic, employment-housing space, outer suburb industrial area
Procedia PDF Downloads 221

Inhibition of Mild Steel Corrosion in Hydrochloric Acid Medium Using an Aromatic Hydrazide Derivative
Authors: Preethi Kumari P., Shetty Prakasha, Rao Suma A.
Abstract:
Mild steel is widely employed as a construction material for pipework in oil and gas production, such as downhole tubulars, flow lines, and transmission pipelines, and in the chemical and allied industries for handling acids, alkalis, and salt solutions, owing to its excellent mechanical properties and low cost. Acid solutions are widely used for the removal of undesirable scale and rust in many industrial processes. Among the commercially available acids, hydrochloric acid is widely used for pickling, cleaning, de-scaling, and the acidization of oil wells. Mild steel exhibits poor corrosion resistance in hydrochloric acid. Its high reactivity in this medium is due to the soluble nature of the ferrous chloride formed; the cementite phase (Fe3C) normally present in the steel is also readily soluble in hydrochloric acid. Pitting attack is also reported to be a major form of corrosion of mild steel at high acid concentrations, capable of causing complete destruction of the metal: hydrogen from the acid reacts with the metal surface, making it brittle and causing cracks that lead to pitting-type corrosion. The use of chemical inhibitors to minimize the rate of corrosion is considered the first line of defense against corrosion. In spite of the long history of corrosion inhibition, a highly efficient and durable inhibitor that can completely protect mild steel in aggressive environments is yet to be realized. It is clear from the literature that there is ample scope for the development of new organic inhibitors that can be conveniently synthesized from relatively cheap raw materials and that provide good inhibition efficiency with the least risk of environmental pollution.
The aim of the present work is to evaluate the electrochemical parameters of the corrosion-inhibition behavior of an aromatic hydrazide derivative, 4-hydroxy-N'-[(E)-1H-indole-2-ylmethylidene]benzohydrazide (HIBH), on mild steel in 2 M hydrochloric acid, using Tafel polarization and electrochemical impedance spectroscopy (EIS) techniques at 30-60 °C. The results showed that inhibition efficiency increased with inhibitor concentration and decreased marginally with temperature. HIBH showed a maximum inhibition efficiency of 95% at a concentration of 8×10⁻⁴ M at 30 °C. Polarization curves showed that HIBH acts as a mixed-type inhibitor. The adsorption of HIBH on the mild steel surface obeys the Langmuir adsorption isotherm. The adsorption of HIBH at the mild steel/hydrochloric acid interface is mixed, predominantly physisorption at lower temperatures and chemisorption at higher temperatures. Thermodynamic parameters for the adsorption process and kinetic parameters for the metal dissolution reaction were determined.

Keywords: electrochemical parameters, EIS, mild steel, Tafel polarization
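The efficiency and isotherm results above follow the standard relations IE% = (i⁰corr − icorr)/i⁰corr × 100, with corrosion current densities from Tafel extrapolation, and the linearized Langmuir isotherm C/θ = 1/K_ads + C. A minimal sketch of both (function names and example numbers are illustrative, not the authors' analysis code):

```python
import numpy as np

def inhibition_efficiency(i_corr_blank, i_corr_inhibited):
    """IE% from corrosion current densities (Tafel extrapolation)."""
    return (i_corr_blank - i_corr_inhibited) / i_corr_blank * 100.0

def langmuir_fit(C, theta):
    """Fit C/theta = 1/K_ads + C; returns (slope, K_ads).
    A slope near 1 indicates Langmuir behavior."""
    C = np.asarray(C, dtype=float)
    y = C / np.asarray(theta, dtype=float)
    slope, intercept = np.polyfit(C, y, 1)
    return slope, 1.0 / intercept

# e.g. current density dropping from 100 to 5 uA/cm^2 -> IE = 95%
print(inhibition_efficiency(100.0, 5.0))

# synthetic Langmuir data with K_ads = 1000 M^-1
C = np.array([1e-4, 2e-4, 4e-4, 8e-4])
theta = 1000.0 * C / (1.0 + 1000.0 * C)
print(langmuir_fit(C, theta))
```

Surface coverage θ is usually taken as IE/100 at each concentration, and ΔG of adsorption then follows from K_ads.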
Procedia PDF Downloads 336

Development of a Culturally Safe Wellbeing Intervention Tool for and with the Inuit in Quebec
Authors: Liliana Gomez Cardona, Echo Parent-Racine, Joy Outerbridge, Arlene Laliberté, Outi Linnaranta
Abstract:
Suicide rates among the Inuit in Nunavik are six to eleven times the Canadian average. Colonization, religious missions, residential schools, and economic and political marginalization are factors that have challenged the well-being and mental health of these populations. In psychiatry, screening for mental illness is often done using questionnaires in which the patient is expected to report how often he or she has certain symptoms. However, the Indigenous view of mental well-being may not fit well with this approach. Moreover, biomedical treatments do not always meet the needs of Indigenous peoples because they do not account for the culture and traditional healing methods that persist in many communities. Our objectives were to assess whether the symptom questionnaires commonly used in psychiatry are appropriate and culturally safe for the Inuit in Quebec, and to identify the most appropriate tool for assessing and promoting well-being, following the process necessary to improve its cultural sensitivity and safety for the Inuit population. This was a qualitative, collaborative, participatory action research project respecting First Nations and Inuit protocols and the principles of ownership, control, access, and possession (OCAP). Data collection was based on five focus groups with stakeholders working with these populations and with members of Indigenous communities. Thematic analysis of the collected data proceeded through an advisory group, which led a revision of the content, use, and cultural and conceptual relevance of the instruments. The questionnaires measuring psychiatric symptoms face significant limitations in the local Indigenous context, and we present the factors that make these tools inappropriate among the Inuit.
Although the Growth and Empowerment Measure (GEM) was originally developed among Indigenous peoples in Australia, the Inuit in Quebec found that this tool captures critical aspects of their mental health and well-being more respectfully and accurately than questionnaires focused on measuring symptoms. We document the process of cultural adaptation of this tool, supported by community members, to create a culturally safe instrument that fosters resilience and empowerment. The cultural adaptation of the GEM provides valuable information about the factors affecting well-being and contributes to mental health promotion. The process improves mental health services by giving health-care providers useful information about the Inuit population and their clients. We believe that integrating this tool into interventions can help build a bridge between the Indigenous cultural perspective of the patient and the biomedical view of health-care providers. Further work is needed to confirm the clinical utility of this tool in psychological and psychiatric intervention, along with social and community services.

Keywords: cultural adaptation, cultural safety, empowerment, Inuit, mental health, Nunavik, resiliency
Procedia PDF Downloads 118

Mobulid Ray Fishery Characteristics and Trends in East Java to Inform Management Decisions
Authors: Muhammad G. Salim, Betty J.L. Laglbauer, Sila K. Sari, Irianes C. Gozali, Fahmi, Didik Rudianto, Selvia Oktaviyani, Isabel Ender
Abstract:
Muncar, East Java, hosts one of the largest artisanal fisheries in Indonesia. Sharks and rays are caught both as targets and as bycatch, for local meat consumption and with some derived products exported. Of the seven mobulid ray species occurring in Indonesia, five have been recorded as retained bycatch at Muncar fishing port: the spinetail devil ray (Mobula mobular), the bentfin devil ray (Mobula thurstoni), the sicklefin devil ray (Mobula tarapacana), the oceanic manta ray (Mobula birostris), and the reef manta ray (Mobula alfredi). Both manta ray species are listed as Vulnerable by the International Union for Conservation of Nature and are protected in Indonesia, despite still being captured as bycatch, while the three devil ray species are listed as Endangered and do not currently benefit from any protection in Indonesian waters. Mobulid landings in East Java come primarily from small-scale drift gillnets, but they also occur occasionally on longlines and in purse seines operating off the coast of East Java and, occasionally, in fishing grounds as far away as the Makassar and Sumba Straits. Landing trends from 2015-2019 (non-continuous surveys) revealed that the highest abundance of mobulid rays at Muncar fishing port occurs during the upwelling season from June to October; during El Niño or above-average-temperature years, this may extend into November (as in 2015 and 2019). The strong seasonal upwelling along the East Java coast is linked to higher zooplankton abundance (inferred from sea-surface chlorophyll-a concentrations), on which mobulids forage, along with the teleost fishes that constitute the primary target of gillnet fisheries in the Bali Strait. Mobulid ray landings in Muncar were dominated by Mobula mobular, followed by M. thurstoni, M. tarapacana, M. birostris, and M. alfredi; however, the catch varied across years and seasons. A majority of the M. mobular and M. thurstoni individuals recorded were immature, and slight decreases in M. mobular landings, despite no known changes in fishing effort, were observed across the upwelling seasons of 2015-2018. While all mobulids are listed on Appendix II of the Convention on International Trade in Endangered Species, which regulates international trade in the gill plates sought after in the Chinese medicine trade, local and national management measures are required to sustain mobulid populations. The findings presented here provide important baseline data from which potential management approaches can be identified.

Keywords: devil ray, mobulid, manta ray, Indonesia
Procedia PDF Downloads 178469 Consumer Behavior and Attitudes of Green Advertising: A Collaborative Study with Three Companies to Educate Consumers
Authors: Mokhlisur Rahman
Abstract:
Consumers' understanding of products depends on how much information the advertisement contains. Consumers' attitudes vary widely depending on factors such as their level of environmental awareness, their perception of the company's motives, and the perceived effectiveness of the advertising campaign. Considering the growing eco-consciousness among consumers and their concern for the environment, green advertising strategies have become significant for companies seeking to attract new consumers. It is important to understand consumers' purchasing habits, knowledge, and attitudes regarding eco-friendly products in response to promotion, given the vast range of products on the market. Additionally, encouraging consumers to buy sustainable products requires a platform that can communicate that becoming a stakeholder in sustainability is possible if consumers adopt eco-friendly behavior on a larger scale. Social media platforms provide an excellent atmosphere for companies to promote their sustainability efforts and engage with potential consumers. Green advertising strategies use techniques that carry both information and rewards for consumers. This study aims to understand consumer behavior and the effectiveness of green advertising through an experiment conducted in collaboration with three companies promoting their eco-friendly products using green designs. The experiment uses three sustainable personalized offerings: Nike shoes, H&M t-shirts, and Patagonia school bags. The experiment uses a pretest and posttest design. 300 randomly selected participants take part in the experiment and survey through Facebook, Twitter, and Instagram. Nike, H&M, and Patagonia share the post of the experiment on their social media homepages with a video advertisement for the three products.
The consumers participate in a pre-experiment online survey, before making a purchase decision, to assess their attitudes and behavior toward eco-friendly products. An audio narration explains product information, such as the use of recycled materials, manufacturing methods, sustainable packaging, and environmental impact, while the consumer watches the product video during the purchase. After making a purchase, consumers take a post-experiment survey to capture their perception of and behavior toward eco-friendly products. For the data analysis, descriptive statistics (mean, standard deviation, and frequencies) summarize the pre- and post-experiment survey data. The inferential paired-sample t-test measures the difference in consumers' behavior and attitudes between the pre-purchase and post-experiment survey results. This experiment provides consumers ample time to consider many aspects rather than acting on impulse. This research provides valuable insights into how companies can promote sustainable and eco-friendly products. The results set a target for the companies to achieve a sustainable production goal that ultimately supports companies' profit-making and promotes consumers' well-being. This empowers consumers to make informed choices about the products they purchase and support their companies of interest. Keywords: green-advertising, sustainability, consumer-behavior, social media
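The paired-sample t-test mentioned above compares each participant's pre- and post-experiment scores. A minimal sketch, with made-up Likert-scale scores (not the study's data), of how the paired t statistic is computed:

```python
import math

# Hypothetical pre- and post-experiment attitude scores (1-5 Likert scale)
# for the same ten participants; purely illustrative, not the study's data.
pre  = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3, 3.2, 2.7, 3.4, 3.0]
post = [3.8, 3.2, 3.9, 3.6, 3.1, 3.7, 3.9, 3.0, 3.8, 3.5]

# Paired-sample t statistic: t = mean(d) / (sd(d) / sqrt(n)), d = post - pre
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)

print(f"mean difference = {mean_d:.3f}, t({n - 1}) = {t_stat:.2f}")
```

In practice the same statistic is obtained from any standard statistics package; the t value is then compared against the t distribution with n-1 degrees of freedom.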
Procedia PDF Downloads 86468 Impact of pH Control on Peptide Profile and Antigenicity of Whey Hydrolysates
Authors: Natalia Caldeira De Carvalho, Tassia Batista Pessato, Luis Gustavo R. Fernandes, Ricardo L. Zollner, Flavia Maria Netto
Abstract:
Protein hydrolysates are ingredients of enteral diets and hypoallergenic formulas. Enzymatic hydrolysis is the most commonly used method for reducing the antigenicity of milk protein. The antigenicity and physicochemical characteristics of protein hydrolysates depend on the reaction parameters; among them, pH has been pointed out as being of major importance. Hydrolysis reactions at laboratory scale are commonly carried out under controlled pH (pH-stat). From the industrial point of view, however, controlling pH during the hydrolysis reaction may be infeasible. This study evaluated the impact of pH control on the physicochemical properties and antigenicity of whey protein hydrolysates obtained with Alcalase. Whey protein isolate (WPI) solutions containing 3 and 7% protein (w/v) were hydrolyzed with Alcalase at 50 and 100 U g-1 protein at 60°C for 180 min. The reactions were carried out under controlled and uncontrolled pH conditions. Hydrolyses performed under controlled pH (pH-stat) were initially adjusted to, and maintained at, pH 8.5. Hydrolyses carried out without pH control were initially adjusted to pH 8.5. The degree of hydrolysis (DH) was determined by the OPA method, the peptide profile was evaluated by RP-HPLC, and molecular mass distribution by SDS-PAGE/Tricine. The residual α-lactalbumin (α-La) and β-lactoglobulin (β-Lg) concentrations were determined using commercial ELISA kits. The specific IgE and IgG binding capacity of the hydrolysates was evaluated by the ELISA technique, using polyclonal antibodies obtained by immunization of female BALB/c mice with α-La, β-Lg and BSA. In hydrolysis under uncontrolled pH, the pH dropped from 8.5 to 7.0 during the first 15 min, remaining constant thereafter. No significant difference was observed between the DH of the hydrolysates obtained under controlled and uncontrolled pH conditions.
Although all hydrolysates showed a hydrophilic character and low molecular mass peptides, hydrolysates obtained with and without pH control exhibited different chromatographic profiles. Hydrolysis under uncontrolled pH released, predominantly, peptides between 3.5 and 6.5 kDa, while hydrolysis under controlled pH released peptides smaller than 3.5 kDa. Hydrolysis with Alcalase under all conditions studied decreased the α-La and β-Lg concentrations detected by commercial kits by 99.9%. In general, the β-Lg concentrations detected in the hydrolysates obtained under uncontrolled pH were significantly higher (p<0.05) than those detected in hydrolysates produced with pH control. The anti-α-La and anti-β-Lg IgE and IgG responses to all hydrolysates decreased significantly compared to WPI. Levels of specific IgE and IgG to the hydrolysates were below 25 and 12 ng ml-1, respectively. Despite the differences in peptide composition and in α-La and β-Lg concentrations, no significant difference was found between the IgE and IgG binding capacities of hydrolysates obtained with or without pH control. These results highlight the impact of pH on the characteristics of the hydrolysates and their concentrations of antigenic protein. A divergence between antigen detection by commercial ELISA kits and the specific IgE and IgG binding response was found in this study. This result shows that lower protein detection does not imply lower protein antigenicity. Thus, the use of commercial kits for allergen contamination analysis should be approached with caution. Keywords: allergy, enzymatic hydrolysis, milk protein, pH conditions, physicochemical characteristics
Procedia PDF Downloads 302467 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation
Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen
Abstract:
Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the percussive drilling literature, the existence of an optimum drilling state (Sweet Spot) is reported in some laboratory and field experimental studies: an optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the Sweet Spot, but a universal explanation or consensus does not yet exist. The experimental investigation in this work began with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high rate (kHz). Data were collected while two boreholes were being excavated, and an in-depth analysis of the recorded data confirmed that an optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate allowed identification of the bit penetration at each single impact (of the piston on the drill bit) as well as the impact frequency. These measurements provide a direct way to identify when the hammer does not fire and drilling occurs without percussion, with the bit advancing the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with a custom-built apparatus dubbed Woody. Woody allows the drilling of shallow holes, a few centimetres deep, by successive discrete impacts from a piston.
After each individual impact, the bit's angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are reset to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 millimetres outside diameter and weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that, under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the sweet spot is not caused by intrinsic properties of the bit-rock interface. Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling
Procedia PDF Downloads 88466 An Assessment of Suitable Alternative Public Transport System in Mid-Sized City of India
Authors: Sanjeev Sinha, Samir Saurav
Abstract:
The rapid growth of urban areas in India has led to transportation challenges like traffic congestion and an increase in accidents. Despite efforts by state governments and local administrations to improve urban transport, the surge in private vehicles has worsened the situation. Patna, located in Bihar State, is an example of the trend of increasing reliance on private motor vehicles, resulting in vehicular congestion and emissions. The existing transportation infrastructure is inadequate to meet future travel demands, and there has been a notable increase in the share of private vehicles in the city. Additionally, there has been a surge in economic activities in the region, which has increased the demand for improved travel convenience and connectivity. To address these challenges, a study was conducted to assess the most suitable transit mode for the proposed transit corridor outlined in the Comprehensive Mobility Plan (CMP) for Patna. The study covered four stages: developing screening criteria, evaluating parameters for various alternatives, qualitative and quantitative evaluations of alternatives, and implementation options for the most viable alternative. The study suggests that a mass transit system such as a metro rail is necessary to enhance Patna's urban public transport system. The New Metro Policy 2017 outlines specific prerequisites for submitting a Metro Rail Project Proposal to the Ministry of Housing and Urban Affairs (MoHUA), including the preparation of a CMP, the formation of an Urban Metropolitan Transport Authority (UMTA), the creation of an Alternative Analysis Report, the development of a Detailed Project Report, a Multi-Modal Integration Plan, and a Transit-Oriented Development (TOD) Plan. In 2018, the Comprehensive Mobility Plan for Patna was prepared, setting the stage for the subsequent steps in the metro rail project proposal. 
The results indicated that, from the screening and analysis of qualitative parameters for the different alternative modes in Patna, the Metro Rail and Monorail score 82.25 and 70.50, respectively, on a scale of 100. Based on the initial analysis and the quantitative alternative evaluation, the Metro Rail System significantly outperformed the Monorail system: the Metro Rail System has a positive Economic Net Present Value (ENPV) at a 14% discount rate, while the Monorail's is negative. Moreover, the lack of broad-based technical expertise for monorail may result in implementation delays and increased costs. In conclusion, the study recommends choosing metro rail over monorail for the proposed transit corridor in Patna. Keywords: comprehensive mobility plan, alternative analysis, mobility corridors, mass transit system
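The ENPV test above discounts each alternative's yearly economic benefits against its capital cost. A minimal sketch of that calculation, with purely illustrative cash-flow figures (the study's appraisal data are not given here):

```python
# Hypothetical cash-flow sketch of an ENPV comparison: the alternative with
# a positive net present value at the appraisal discount rate is preferred.
# All figures below are illustrative assumptions, not the study's data.

def enpv(cash_flows, rate):
    """Net present value of yearly cash flows at the given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year-0 capital cost followed by 30 years of net economic benefits.
metro_flows    = [-1000] + [160] * 30
monorail_flows = [-800] + [100] * 30

rate = 0.14  # the 14% appraisal rate used in the study's ENPV test
metro_enpv = enpv(metro_flows, rate)
monorail_enpv = enpv(monorail_flows, rate)
print(f"metro ENPV = {metro_enpv:.1f}, monorail ENPV = {monorail_enpv:.1f}")
```

With these assumed figures the metro's ENPV is positive and the monorail's negative, mirroring the qualitative direction of the study's result.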
Procedia PDF Downloads 119465 The Food and Nutritional Effects of Smallholders’ Participation in Milk Value Chain in Ethiopia
Authors: Geday Elias, Montaigne Etienne, Padilla Martine, Tollossa Degefa
Abstract:
Smallholder farmers’ participation in agricultural value chains has been identified as a pathway out of the poverty trap in Ethiopia. Smallholder dairy activities have huge potential for poverty reduction through enhancing income and achieving food and nutritional security in the country. However, much less is known about the effects of smallholders’ participation in the milk value chain on household food security and nutrition. This paper, therefore, aims at evaluating the effects of smallholders’ participation in the milk value chain on household food security, taking into account the four pillars of food security measurement (availability, access, utilization and stability). Using a semi-structured interview, cross-sectional farm household data were collected from a randomly selected sample of 333 households (170 in the Amhara and 163 in the Oromia regions). Binary logit and propensity score matching (PSM) models are employed to examine the mechanisms through which smallholders’ participation in the milk value chain affects household food security, where crop production, per capita calorie intake, diet diversity score, and the food insecurity access scale are used to measure food availability, access, utilization and stability, respectively. Our findings reveal that, of the 333 households, only 34.5% of smallholder farmers participated in the milk value chain. Limited access to inputs and services, limited access to input markets and high transaction costs are the key constraints on smallholders’ access to the milk value chain. To estimate the true average participation effects for participating households, the outcome variables (food security) of farm households who participated in the milk value chain are compared with the outcome variables had those households not participated. The PSM analysis reveals that smallholders’ participation in the milk value chain has a significant positive effect on household income, food security and nutrition.
Smallholder farmers who participated in the milk value chain are better off, by 15 quintals of crop production in food availability and by 73 percent of per capita calorie intake in food access, than smallholder farmers who did not participate in the market. Similarly, participating households have better dietary quality, by 112 percent, than non-participating households. Finally, smallholders who participated in the milk value chain reduced household vulnerability to food insecurity by an average of 130 percent relative to non-participants. The results also show that income earned from milk value chain participation helped relieve the capital constraints of participating households, raising farm income and total household income by 5164 ETB and 14265 ETB, respectively. This study therefore confirms the potential role of smallholders’ participation in food value chains in escaping the poverty trap through improving rural household income, food security and nutrition. The determinants of smallholder participation in the milk value chain and the participation effects on food security identified in the study areas are therefore worth considering by policymakers and development agents seeking to tackle the poverty trap in the study area in particular and in the country in general. Keywords: effects, food security and nutrition, milk, participation, smallholders, value chain
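The PSM step described above estimates each household's probability of participating from observed covariates and then matches participants to comparable non-participants. A minimal self-contained sketch on synthetic data (the covariates, coefficients and effect size are assumptions, not the survey data), assuming a logistic propensity model fitted by gradient ascent and nearest-neighbour matching on the score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration of the PSM step (not the study's survey data):
# covariates X (e.g. land size, herd size), participation T, and outcome Y
# (e.g. per capita calorie intake) with a true participation effect of +300.
n = 600
X = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
T = (rng.random(n) < p_true).astype(int)
Y = 2000 + 150 * X[:, 0] + 100 * X[:, 1] + 300 * T + rng.normal(0, 50, n)

# 1) Estimate propensity scores with a logistic model (gradient ascent).
Xb = np.column_stack([np.ones(n), X])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w += 0.1 * Xb.T @ (T - p) / n
scores = 1 / (1 + np.exp(-Xb @ w))

# 2) Match each participant to the nearest non-participant on the score.
treated, control = np.where(T == 1)[0], np.where(T == 0)[0]
matches = control[np.abs(scores[control][None, :]
                         - scores[treated][:, None]).argmin(axis=1)]

# 3) Average treatment effect on the treated (ATT).
att = float(np.mean(Y[treated] - Y[matches]))
print(f"estimated ATT = {att:.1f} (true simulated effect = 300)")
```

The estimated ATT recovers the simulated participation effect; in the paper's setting the same logic yields the calorie-intake and crop-production gains quoted above.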
Procedia PDF Downloads 339464 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design
Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez
Abstract:
Coffee is the second most consumed commodity worldwide, yet it also generates colossal waste. Proper management of coffee waste is proposed by converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and protect the environment. On this basis, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste produced across the coffee industry. The fact that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture and, especially, have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourages their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, the conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scale up the process and to ensure more environment-friendly production. From this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction by n-hexane using indirect sonication. The coffee waste used in this work was a mix of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated as independent variables using a Central Composite Rotatable Design (CCRD) 2³. The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output.
The validation of the model by analysis of variance (ANOVA) showed good adjustment of the results obtained for a 95% confidence interval, and the graph of predicted vs. experimental values confirmed the satisfactory correlation of the model results. Besides, the identification of the optimum experimental conditions was based on the study of the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 ºC, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of using an ultrasound bath for extracting oil as a more economical, green, and efficient alternative to the Soxhlet method. Keywords: coffee waste, optimization, oil yield, statistical planning
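The CCRD analysis above fits a second-order (response-surface) model relating oil yield to the three factors. A minimal sketch of such a fit on synthetic data, with the yield surface deliberately peaked near the reported optimum (29 ºC, 56.6 min, ratio 16); all coefficients and the noise level are assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic oil-yield data peaked near the reported optimum; illustrative only.
T = rng.uniform(20, 40, 40)      # temperature, C
t = rng.uniform(30, 80, 40)      # sonication time, min
r = rng.uniform(8, 24, 40)       # solvent-to-solid ratio
yield_ = (9.5 - 0.01 * (T - 29) ** 2 - 0.002 * (t - 56.6) ** 2
          - 0.02 * (r - 16) ** 2 + rng.normal(0, 0.05, 40))

# Full quadratic model: intercept, linear, interaction and squared terms,
# fitted by ordinary least squares (what the CCRD analysis estimates).
X = np.column_stack([np.ones(40), T, t, r, T * t, T * r, t * r,
                     T ** 2, t ** 2, r ** 2])
beta, *_ = np.linalg.lstsq(X, yield_, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((yield_ - pred) ** 2) / np.sum((yield_ - yield_.mean()) ** 2)
print(f"R^2 of the fitted response surface = {r2:.3f}")
```

The stationary point of the fitted quadratic is what the 2-D/3-D surface plots visualize; the significance of each term is then checked by ANOVA, as in the study.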
Procedia PDF Downloads 119463 A Survey of Digital Health Companies: Opportunities and Business Model Challenges
Authors: Iris Xiaohong Quan
Abstract:
The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms such as digital health, e-health, mHealth, and telehealth have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications, and solutions that are connected to the Internet to address the health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to remotely deliver clinical health services to patients. According to the FDA, “the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine.” Some researchers believe that digital health is nothing but the cultural transformation healthcare has been undergoing in the 21st century because digital health technologies provide data to both patients and medical professionals. As digital health is burgeoning but research in the area is still inadequate, our paper aims to clear up the definitional confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CBInsights, which are well-recognized information sources for researchers, entrepreneurs, managers, and investors. We searched a few keywords in the Crunchbase database based on companies’ self-descriptions: digital health, e-health, and telehealth. A search for “digital health” returned 941 unique results, “e-health” returned 167 companies, and “telehealth” 427.
We also searched the CBInsights database for similar information. After merging the results, removing duplicates, and cleaning up the database, we arrived at a list of 1464 digital health companies. A qualitative method will be used to complement the quantitative analysis: we will do an in-depth case analysis of three successful unicorn digital health companies to understand how business models evolve and to discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded in the decade since 2010. 75% of digital health companies have fewer than 50 employees, and almost 50% fewer than 10. This shows that digital health companies are relatively young and small in scale. On the business model analysis, while traditional healthcare businesses emphasize the so-called “3P” (patient, physicians, and payer), digital health companies extend this to “5P” by adding patents, a result of technology requirements (such as the development of artificial intelligence models), and platform, an effective value-creation approach that brings the stakeholders together. Our case analysis will detail the 5P framework and contribute to the extant knowledge on business models in the healthcare industry. Keywords: digital health, business models, entrepreneurship opportunities, healthcare
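The merge-and-deduplicate step across the two databases can be sketched as keying company records on a normalized name, so that spelling variants collapse into one entry. A minimal illustration with invented name variants (not the actual database records):

```python
# Sketch of merging two keyword-search result lists and removing duplicates
# by normalized company name. Company names here are illustrative only.
crunchbase = ["Teladoc Health", "Babylon Health", "Ro", "Hims & Hers"]
cbinsights = ["teladoc health", "Babylon Health", "Lyra Health", "Ro Inc."]

def normalize(name):
    """Lower-case, strip punctuation and drop a trailing 'inc' token."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ")
    tokens = [t for t in cleaned.split() if t != "inc"]
    return " ".join(tokens)

merged = {}
for name in crunchbase + cbinsights:
    merged.setdefault(normalize(name), name)  # keep the first spelling seen

companies = sorted(merged.values())
print(f"{len(companies)} unique companies:", companies)
```

The same idea, applied to the 941 + 167 + 427 Crunchbase hits and the CBInsights results, is what reduces the raw search output to the final list of 1464 companies.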
Procedia PDF Downloads 183462 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation would lead to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. A total of 49 slices are obtained from a 150 mm cube, at an interval of approximately 3 mm. The cube can be CT scanned in a non-destructive manner and later compression-tested in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. A digital colour image consists of red, green and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and the mesoscale constituents are identified by thresholding values between 0-255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale reflecting relative strength: zero is assigned to voids, 4-6 to mortar and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given.
Properties of materials, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacement, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids and the variation of ITZ layer thickness on load-carrying capacity, stress-strain response and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete. Keywords: concrete, image processing, plane strain, interfacial transition zone
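The slice-segmentation step described above maps 8-bit grayscale values onto the 0-9 relative-strength scale (0 for voids, 1-3 for the ITZ boundary, 4-6 for mortar, 7-9 for aggregates). A minimal sketch of that mapping; the grayscale thresholds and the tiny example slice are assumptions, not the study's calibration:

```python
import numpy as np

def classify_slice(gray):
    """Return a 0-9 phase map from an 8-bit grayscale CT slice.

    Threshold values are illustrative assumptions; in practice they are
    calibrated to the scanner's intensity histogram.
    """
    bins = [20, 60, 90, 110, 140, 170, 190, 215, 235]
    return np.digitize(gray, bins)  # yields classes 0..9

# A toy 3x3 "slice" standing in for one of the 49 CT slices.
slice_ = np.array([[10, 95, 200],
                   [130, 150, 240],
                   [30, 180, 70]], dtype=np.uint8)
phases = classify_slice(slice_)

voids = (phases == 0)
mortar = (phases >= 4) & (phases <= 6)
aggregates = (phases >= 7)
print(phases)
print(f"voids: {voids.sum()}, mortar: {mortar.sum()}, "
      f"aggregates: {aggregates.sum()}")
```

Each class in the resulting phase map is then assigned its own material properties when the triangular or quadrilateral elements are generated for the finite element model.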
Procedia PDF Downloads 239461 Predicting Susceptibility to Coronary Artery Disease using Single Nucleotide Polymorphisms with a Large-Scale Data Extraction from PubMed and Validation in an Asian Population Subset
Authors: K. H. Reeta, Bhavana Prasher, Mitali Mukerji, Dhwani Dholakia, Sangeeta Khanna, Archana Vats, Shivam Pandey, Sandeep Seth, Subir Kumar Maulik
Abstract:
Introduction: Research has demonstrated a connection between coronary artery disease (CAD) and genetics. We performed deep literature mining, using both bioinformatics and manual efforts, to identify polymorphisms conferring susceptibility to coronary artery disease. Further, the study sought to validate these findings in an Asian population. Methodology: In the first phase, we used an automated pipeline that organizes and presents structured information on SNPs, populations and diseases. The information was obtained by applying Natural Language Processing (NLP) techniques to approximately 28 million PubMed abstracts. To accomplish this, we utilized Python scripts to extract and curate disease-related data, filter out false positives, and categorize them into 24 hierarchical groups using Named Entity Recognition (NER) algorithms. From this extensive search, a total of 466 unique PubMed Identifiers (PMIDs) and 694 Single Nucleotide Polymorphisms (SNPs) related to coronary artery disease (CAD) were identified. To refine the selection, a thorough manual examination of all the studies was carried out. Specifically, SNPs that demonstrated susceptibility to CAD and exhibited a positive Odds Ratio (OR) were selected, and a final pool of 324 SNPs was compiled. The next phase involved validating the identified SNPs in DNA samples from 96 CAD patients and 37 healthy controls from an Indian population using a Global Screening Array. Results: Of the 324 SNPs, only 108 were detected; of these, 4 SNPs showed a significant difference in minor allele frequency between cases and controls. These were rs187238 of the IL-18 gene, rs731236 of the VDR gene, rs11556218 of the IL16 gene and rs5882 of the CETP gene. Prior research has reported associations of these SNPs with various pathways, such as endothelial damage, susceptibility via vitamin D receptor (VDR) polymorphisms, and reduction of HDL-cholesterol levels, ultimately leading to the development of CAD.
Among these, only rs731236 had previously been studied in an Indian population, and then only in diabetes and vitamin D deficiency. For the first time, these SNPs are reported to be associated with CAD in an Indian population. Conclusion: This pool of 324 SNPs is a unique resource that can help uncover risk associations in CAD. Here, we validated it in an Indian population. Further validation in different populations may offer valuable insights and contribute to the development of a screening tool, enabling the implementation of primary prevention strategies targeted at the vulnerable population. Keywords: coronary artery disease, single nucleotide polymorphism, susceptible SNP, bioinformatics
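The case-control comparison above tests whether a SNP's minor allele frequency (MAF) differs between CAD patients and healthy controls. A minimal sketch for one hypothetical SNP, computing MAF, the allelic odds ratio, and a 2x2 chi-square statistic; the allele counts below are illustrative assumptions, not the study's genotyping data:

```python
# Hypothetical minor/major allele counts: 96 cases (192 chromosomes) and
# 37 controls (74 chromosomes). Illustrative only, not the study's data.
case_minor, case_major = 62, 130
ctrl_minor, ctrl_major = 12, 62

def maf(minor, major):
    """Minor allele frequency from allele counts."""
    return minor / (minor + major)

# Allelic odds ratio for the minor allele in cases vs controls.
odds_ratio = (case_minor * ctrl_major) / (case_major * ctrl_minor)

# Pearson chi-square on the 2x2 allele-by-group table.
total = case_minor + case_major + ctrl_minor + ctrl_major
row_minor, row_major = case_minor + ctrl_minor, case_major + ctrl_major
col_case, col_ctrl = case_minor + case_major, ctrl_minor + ctrl_major
observed = [case_minor, case_major, ctrl_minor, ctrl_major]
expected = [row_minor * col_case / total, row_major * col_case / total,
            row_minor * col_ctrl / total, row_major * col_ctrl / total]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(f"case MAF = {maf(case_minor, case_major):.3f}, "
      f"control MAF = {maf(ctrl_minor, ctrl_major):.3f}, "
      f"OR = {odds_ratio:.2f}, chi2 = {chi2:.2f}")
```

A chi-square value above 3.84 (one degree of freedom) corresponds to p < 0.05, the criterion by which the four SNPs above were flagged as significant.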
Procedia PDF Downloads 76460 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software
Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi
Abstract:
Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy overlooking the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate changes, for precipitation and temperature, over the last three climatological standard normals (1961-1990; 1971-2000; 1981-2010) using GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. To select the best geostatistical technique for interpolation, cross-validation results were used. The co-kriging method with altitude as the independent variable produced the best cross-validation results for all time periods among the methods analysed, with 'root mean square error standardized' close to 1, 'mean standardized error' close to 0, and 'average standard error' and 'root mean square error' of similar value. The maps resulting from the analysis were compared by subtraction between rasters, producing three maps of annual variation and three further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1°C between 1961-1990 and 1971-2000 and of 0.6°C between 1961-1990 and 1981-2010. Annual precipitation instead shows the opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences across the area have been highlighted with area graphs and summarized in several tables as descriptive analysis.
For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08 °C (77.04 km² out of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010 instead show a most areally represented frequency of 0.83 °C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. The distribution is therefore more peaked for 1961/1990-1971/2000, and flatter but with stronger growth for 1961/1990-1981/2010. In contrast, precipitation shows a very similar distribution shape, although with different intensities, for both variation periods (1961/1990-1971/2000 and 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62), and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
Keywords: climate change, GIS, interpolation, co-kriging
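The cross-validation diagnostics the abstract reports (RMSE standardized close to 1, mean standardized error close to 0, average standard error similar to RMSE) can be illustrated with a minimal Python sketch. This is not the authors' GIS workflow; the station values and kriging standard errors below are purely hypothetical.

```python
import math

def cross_validation_diagnostics(observed, predicted, kriging_se):
    """Leave-one-out cross-validation diagnostics commonly reported
    for kriging models. Inputs are illustrative, not the study's data."""
    n = len(observed)
    errors = [p - o for o, p in zip(observed, predicted)]
    std_errors = [e / s for e, s in zip(errors, kriging_se)]
    return {
        # root mean square error: should be small
        "rmse": math.sqrt(sum(e * e for e in errors) / n),
        # mean standardized error: should be close to 0
        "mse_std": sum(std_errors) / n,
        # RMSE standardized: close to 1 means the kriging
        # standard errors are well calibrated
        "rmse_std": math.sqrt(sum(z * z for z in std_errors) / n),
        # average standard error: should be similar to RMSE
        "avg_se": sum(kriging_se) / n,
    }

# hypothetical station temperatures (degrees C) and kriging SEs
obs = [12.1, 11.4, 13.0, 10.8]
pred = [12.0, 11.6, 12.8, 11.0]
se = [0.2, 0.2, 0.25, 0.2]
d = cross_validation_diagnostics(obs, pred, se)
```

A model would be preferred when `rmse_std` is closest to 1 and `mse_std` closest to 0, which is how the co-kriging variant was selected above.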
Procedia PDF Downloads 126
459 Ichthyofauna and Population Status at Indus River Downstream, Sindh-Pakistan
Authors: M. K. Sheikh, Y. M. Laghari, P. K. Lashari, N. T. Narejo
Abstract:
The Indus River is one of the world's longest and most important rivers; it flows southward through Pakistan, merges into the Arabian Sea near the port city of Karachi in Sindh Province, and forms the Indus Delta. Fish are an important resource for humans worldwide, especially as food. Fish contains healthy nutrients not found in other meat sources, notably large quantities of omega-3 fatty acids, which are essential for the human body. Ichthyological surveys were conducted to explore the diversity of freshwater fishes and the distribution, abundance, and current status of fishes at different spatial scales of the downstream Indus River. Eight stations were selected: Railo Miyan (RM), Karokho (Kk), Khanpur (Kp), Mullakatiyar (Mk), Wasi Malook Shah (WMS), Branch Morie (BM), Sujawal (Sj), and Jangseer (JS). The study was carried out from January 2016 to December 2019 to identify threats to the river and its biodiversity and to suggest recommendations for conservation. The data were analysed with several population diversity indices. Altogether, 124 species belonging to 12 orders and 43 families were recorded from the downstream Indus River. Among the 124 species, 29% were of high commercial value and 35% were trash fishes; 31% were identified as of marine/estuarine origin (migratory), and 5% were exotic species. Perciformes was the most predominant order, contributing 41% of the families. Among the 43 families, Cyprinidae was the richest across all localities downstream, represented by 24% of the fish species and thus clearly dominant in number of species. A significant difference in species abundance was observed between sites: the maximum was found at the first station, RM, with 115 species, and the minimum at the last station, JS, with 56 species. 
In the recorded ichthyofauna, seven groups were distinguished according to International Union for Conservation of Nature (IUCN) status: the largest share, 94 species, were of Least Concern (LC); 11 were Not Evaluated (NE); 8 were Near Threatened (NT); 1 was Critically Endangered (CR); 11 were Data Deficient (DD); 8 were Vulnerable (VU); and 3 were Endangered (EN). Diversity indices are used extensively in environmental studies to estimate species richness and abundance as indicators of ecosystem wellness; the biodiversity-rich environment at RM had a diversity level of 4.566, while the biodiversity-poor environment at the last station, JS, had a diversity level of 3.931. The status of fish biodiversity and of the river was found to be under serious threat. The lower diversity of fishes makes the situation not only vulnerable for fish but also risky for fishermen. Steps to protect biodiversity through further conservation research in this area are recommended.
Keywords: ichthyofaunal biodiversity, threatened species, diversity index, Indus River downstream
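The site-level diversity values quoted above are typical of a Shannon-type index. As a minimal sketch (the abstract does not state which index was used, and the catch counts below are hypothetical), a Shannon-Wiener index and species richness can be computed as:

```python
import math

def shannon_diversity(counts):
    """Shannon-Wiener index H' = -sum(p_i * ln(p_i)) over species
    proportions; higher values mean richer, more even communities."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

def species_richness(counts):
    """Number of species with at least one individual recorded."""
    return sum(1 for c in counts if c > 0)

# hypothetical catch counts per species at two stations
rm = [30, 25, 20, 15, 10, 8, 5, 2]   # upstream-like: more species, more even
js = [70, 10, 5, 3, 2]               # downstream-like: dominated by one species
```

A station with more species and a more even abundance distribution, like the sketch's RM, scores higher than a dominated one, mirroring the RM-versus-JS contrast reported in the study.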
Procedia PDF Downloads 177
458 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made towards improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are most appropriate in their line of work. At the same time, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function linking the expected response and the predictor, response option formats, and dimensionality. As a result, inferior methods (i.e., Classical Test Theory methods) continue to be employed to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models for test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study covers everything from the most basic binary IRT model, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model. 
Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: (1) simulated data of N = 500,000 subjects who responded to four dichotomous items, and (2) a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. The real-world data were based on responses to items administered as part of a scale-development study (NIDA Grant No. R44 DA023322). The IRT analyses conducted on both the simulated and real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and run the analyses will be available upon request to allow for replication of results.
Keywords: instrument development, item response theory, latent trait theory, psychometrics
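The 1-PL through 4-PL models nest inside a single item response function. The study itself uses SAS PROC IRT; as a language-neutral sketch, the 4-PL probability of endorsing an item, with the simpler models as special cases, is:

```python
import math

def irt_prob(theta, a=1.0, b=0.0, c=0.0, d=1.0):
    """4-PL item response function:
    P(theta) = c + (d - c) / (1 + exp(-a * (theta - b)))
    a = discrimination, b = difficulty, c = lower asymptote
    ("guessing"), d = upper asymptote. Setting c=0, d=1 gives the
    2-PL; additionally fixing a=1 gives the 1-PL (Rasch) model."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))
```

At the item's difficulty (theta = b) the 2-PL probability is exactly 0.5; a nonzero c floors the curve for low-ability respondents, and d < 1 caps it for high-ability ones, which is what the extra parameters of the 3-PL and 4-PL buy.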
Procedia PDF Downloads 356
457 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although the public sector has various responses for decreasing traffic congestion in urban regions, the most effective intervention is congestion charging. Because travel is an economic good, its consumption can be controlled effectively through extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in transport science, but until recently there were not sufficient data to evaluate road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities in which congestion charges have already been introduced (e.g., London, Stockholm, Milan) designated a particular downtown zone for charging, but this protects only the users and inhabitants of the CBD (Central Business District). Using Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a fairer and smarter congestion pricing method for cities. The case study area contains three bordering districts of Budapest linked by one main road. The first district (5th) is the original downtown, which is covered by the city's congestion charge plans. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data were collected with Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely chosen coordinate pairs. 
From the difference between free-flow and congested travel time data, daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; moreover, there are frequently congested areas outside the downtown whose inhabitants also need protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that pushes car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer way of decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
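The "difference between free-flow and congested travel time" step can be sketched as a simple relative-delay index computed offline, once the API has returned the two travel times per segment. This is an assumed post-processing step, not the paper's actual code, and the segment names and times below are illustrative.

```python
def congestion_index(free_flow_s, congested_s):
    """Relative delay: 0 means free flow, 1 means the trip takes
    twice as long as under free-flow conditions."""
    return congested_s / free_flow_s - 1.0

def hot_spots(segments, threshold=0.5):
    """Return the names of segments whose relative delay exceeds
    the threshold (here, >50% longer than free flow)."""
    return [name for name, ff, cong in segments
            if congestion_index(ff, cong) > threshold]

# hypothetical road segments: (name, free-flow seconds, congested seconds)
segments = [
    ("Vaci ut north", 300, 540),   # +80% delay -> hot spot
    ("Vaci ut south", 240, 280),   # +17% delay
    ("Arpad bridge", 180, 400),    # +122% delay -> hot spot
]
flagged = hot_spots(segments)
```

Repeating this for each hour of the day yields the daily congestion pattern per segment, and a place-based charge could then target only the flagged segments rather than a fixed downtown zone.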
Procedia PDF Downloads 195
456 Synthesis and Characterisations of Cordierite Bonded Porous SiC Ceramics by Sol Infiltration Technique
Authors: Sanchita Baitalik, Nijhuma Kayal, Omprakash Chakrabarti
Abstract:
Recently, SiC ceramics have been a focus of interest in the field of porous materials due to their unique combination of properties; they are considered ideal candidates for catalyst supports, thermal insulators, high-temperature structural materials, hot gas particulate separation systems, and other industrial applications. Several processing routes exist for fabricating porous SiC at low temperatures, but all are associated with disadvantages, so low-temperature processing of porous SiC ceramics remains challenging. In this regard, incorporating a secondary bond-phase additive by an infiltration technique should result in a homogeneous distribution of the bond phase in the final ceramic. The present work aims to synthesize cordierite (2MgO.2Al2O3.5SiO2) bonded porous SiC ceramics by incorporating a sol-gel bond-phase precursor into SiC powder compacts and heat treating the infiltrated body at 1400 °C. The primary aim of this paper was to study the effect of infiltrating a cordierite precursor sol into porous SiC powder compacts prepared with pore formers of different particle sizes on the porosity, pore size, microstructure, and mechanical properties of the porous SiC ceramics. The cordierite sol was prepared by mixing a solution of magnesium nitrate hexahydrate and aluminium nitrate nonahydrate (2:4 molar ratio) in ethanol with another solution of tetraethyl orthosilicate and ethanol (1:3 molar ratio), followed by stirring for several hours. Powders of SiC (α-SiC; d50 = 22.5 μm) and 10 wt.% polymer microbeads of two sizes (8 and 50 µm) as the pore former were mixed in a suitable liquid medium, dried, and pressed into bars (50×20×16 mm³) at 23 MPa. The well-dried bars were heat treated at 1100 °C for 4 h, with a hold at 750 °C for 2 h to remove the pore former. 
The bars were evacuated for 2 h in a vacuum chamber down to 0.3 mm Hg and infiltrated with the cordierite precursor sol. The infiltrated samples were dried, and the infiltration process was repeated until the weight gain became constant. Finally, the infiltrated samples were sintered at 1400 °C to prepare cordierite-bonded porous SiC ceramics. Porous ceramics prepared with 8 and 50 µm microbeads exhibited lower oxidation degrees, of 7.8 and 4.8% respectively, than the sample prepared without microbeads (23%). Depending on the size of the pore former, the porosity of the final ceramic varied in the range of 36 to 40 vol.%, with flexural strength varying from 33.7 to 24.6 MPa. XRD analysis identified the major crystalline phases of the ceramics as SiC, SiO2, and cordierite, and detected two forms of cordierite, α- (hexagonal) and µ- (cubic). The SiC particles were observed to be bonded both by cristobalite with a fish-scale morphology and by cordierite with a rod-shaped morphology, thereby forming a porous network. The material and mechanical properties of the cordierite-bonded porous SiC ceramics are adequate for further studies such as thermal shock and corrosion resistance.
Keywords: cordierite, infiltration technique, porous ceramics, sol-gel
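The flexural strengths and porosities quoted above come from standard relations. As a minimal sketch (the abstract does not give the test geometry, so the loads and dimensions below are hypothetical, not the study's), three-point-bend strength and density-based porosity can be computed as:

```python
def flexural_strength_mpa(load_n, span_mm, width_mm, thickness_mm):
    """Three-point bend flexural strength, sigma = 3FL / (2bd^2),
    in MPa (N/mm^2): F = failure load, L = support span,
    b = specimen width, d = specimen thickness."""
    return 3.0 * load_n * span_mm / (2.0 * width_mm * thickness_mm ** 2)

def porosity_percent(bulk_density, true_density):
    """Total porosity estimated from bulk vs theoretical density,
    P = (1 - rho_bulk / rho_true) * 100."""
    return (1.0 - bulk_density / true_density) * 100.0

# hypothetical bar: 200 N failure load, 30 mm span, 4 x 3 mm cross-section
sigma = flexural_strength_mpa(200, 30, 4, 3)
# hypothetical densities in g/cm^3
p = porosity_percent(1.8, 3.0)
```

With these illustrative numbers the bar fails at 250 MPa and the body is 40% porous; the study's bars, with 36-40 vol.% porosity, fall in the 24.6-33.7 MPa range.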
Procedia PDF Downloads 271
455 Building Community through Discussion Forums in an Online Accelerated MLIS Program: Perspectives of Instructors and Students
Authors: Mary H. Moen, Lauren H. Mandel
Abstract:
Creating a sense of community in online learning is important for student engagement and success. The integration of discussion forums within online learning environments presents an opportunity to explore how this computer-mediated communication format can cultivate a sense of community among students in accelerated master's degree programs. This research has two aims: to examine how instructors use this communication technology to create community, and to understand the feelings and experiences of graduate students participating in these forums with regard to its effectiveness in community building. The study takes a two-phase approach encompassing qualitative and quantitative methodologies. Data will be collected in an online accelerated Master of Library and Information Studies program at a public university in the northeastern United States. Phase 1 is a content analysis of the syllabi from all courses taught in the 2023 calendar year, exploring the format and rules governing discussion forum assignments. Four to six individual interviews with department faculty and part-time faculty will also be conducted to illuminate their perceptions of the successes and challenges of their discussion forum activities. Phase 2 will be an online survey administered to students in the program during the 2023 calendar year. Quantitative data will be collected for statistical analysis, and short-answer responses will be analyzed for themes. The survey is adapted from the Classroom Community Scale Short-Form (CCS-SF), which measures students' self-reported feelings of connectedness and learning. The prompts contextualize the items within students' experience of discussion forums during the program. Short-answer responses on the challenges and successes of using discussion forums will be analyzed to gauge student perceptions and experiences of this type of communication technology in education. This research study is in progress. 
The authors anticipate that the findings will provide a comprehensive understanding of the varied approaches instructors take to community building in discussion forums in an accelerated MLIS program. They predict that the more varied, flexible, and consistent the uses of discussion forums are, the greater the sense of community students will report. Additionally, students' and instructors' perceptions and experiences within these forums will shed light on the successes and challenges faced, offering valuable recommendations for enhancing online learning environments. The findings are significant because they can contribute actionable insights for instructors, educational institutions, and curriculum designers aiming to optimize the use of discussion forums in online accelerated graduate programs, ultimately fostering a richer and more engaging learning experience for students.
Keywords: accelerated online learning, discussion forums, LIS programs, sense of community, g
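Scoring a Likert-type instrument like the Classroom Community Scale typically means summing item responses after reverse-scoring negatively worded items. As a minimal sketch only (the actual CCS-SF item count and scoring key are not given in the abstract, so the item layout below is hypothetical):

```python
def score_likert_scale(responses, reverse_items=(), points=5):
    """Sum Likert responses coded 0..points-1, reverse-scoring the
    items whose indices appear in reverse_items. The indices used
    below are illustrative, not the real CCS-SF key."""
    total = 0
    for i, r in enumerate(responses):
        total += (points - 1 - r) if i in reverse_items else r
    return total

# hypothetical 8-item short form; items 1 and 3 are negatively worded
resp = [4, 0, 3, 1, 4, 3, 4, 2]
score = score_likert_scale(resp, reverse_items={1, 3})
```

Subscale scores (e.g., connectedness vs. learning) would be produced the same way over disjoint subsets of items, then compared statistically across student groups.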
Procedia PDF Downloads 83
454 Rural Nurses as a Consistent Resource
Authors: Meirav Eshkol, Miri Blaufeld, Rinat Basal
Abstract:
Aim: The working environment in rural clinics is often isolated and distant from major health centers, and in these circumstances rural health care faces numerous challenges. The hope is that, in the immediate, medium, and long term, rural nursing staff will realize their full professional and personal potential, to their own satisfaction and to the health and welfare of their patients. Background: Rural nurses work mostly alone or with very few colleagues and have the authority to make professional decisions, which often requires them to make critical decisions under pressure. In addition, the expectations set for these nurses are extremely high, requiring them to be highly skilled and to fulfill their professional potential. They are required to provide high-quality, comprehensive care to the individual, the family, and the community, and to maintain close interaction with the community. Work in a rural setting requires the flexibility to perform multiple tasks in an isolated setting, often far removed from major health centers. To maintain the rural nurse's professional satisfaction, expanded guidance and training are required in professional know-how and in the development of new and existing skills, toward the goals of treating a diverse population, obtaining a comprehensive view of the components of a diagnosis for treatment, and developing an understanding appropriate to the reality at hand. Objective: To provide knowledge and to expand and develop professional skills in prevention and health promotion in the care of a diverse patient population; to develop strategies and skills for working alone under pressure and expertise in performing multiple tasks across diverse disciplines; and to reduce feelings of stress and burnout. Methodology: This course is the first of its kind in Clalit, the biggest health organisation in Israel. 
The course was designed by observing and identifying the needs of nurses in the field relating to the development of professional and personal skills, defining goals and objectives, and determining the content of a course intended for rural nurses and kibbutz nurses who are not Clalit employees. Results: 43 nurses participated and 30 answered the feedback questionnaire. They rated their experience 4.33 on a scale of 1-5, with 5 the highest ranking. 92% indicated the importance of meeting with other nurses in order to teach their colleagues; 83% indicated an increased sense of organizational belonging; 60% indicated that the course helped reduce feelings of stress and burnout and made them better rural nurses; and 80% indicated that the course helped them establish intra-organizational professional cooperation and initiate processes. Conclusion: The course is an instrument that helps increase the feeling of organizational belonging, reduce feelings of stress and burnout, and create relationships and cooperation both within and outside the organization, increasing the realization of the rural nurse's potential.
Keywords: rural nurse, alone, burnout, multiple tasks
Procedia PDF Downloads 69
453 Testing Nitrogen and Iron Based Compounds as an Environmentally Safer Alternative to Control Broadleaf Weeds in Turf
Authors: Simran Gill, Samuel Bartels
Abstract:
Turfgrass is an important component of urban and rural lawns and landscapes. However, broadleaf weeds such as dandelions (Taraxacum officinale) and white clover (Trifolium repens) pose major challenges to the health and aesthetics of turfgrass fields. Chemical weed control methods, such as 2,4-D weedicides, have been widely deployed, but their safety and environmental impacts are often debated. Environmentally friendly alternatives have been considered, but experimental tests of their effectiveness have been limited. This study investigates the use and effectiveness of nitrogen and iron compounds as nutrient-management methods of weed control. The experiment had two phases. The first, conducted in plastic containers under controlled greenhouse conditions, used a blend of cool-season turfgrasses: perennial ryegrass (Lolium perenne), Kentucky bluegrass (Poa pratensis), and creeping red fescue (Festuca rubra). It involved the application of nitrogen compounds (urea and ammonium sulphate), iron compounds (chelated iron and iron sulphate), and their combinations (urea × chelated iron, urea × iron sulphate, ammonium sulphate × chelated iron, ammonium sulphate × iron sulphate), contrasted with a chemical 2,4-D weedicide and a control (no application) treatment. Each treatment was replicated three times, giving a total of 30 experimental units. The parameters assessed during weekly data collection included a visual quality rating of weeds (ordinal scale of 0-9), number of leaves, longest leaf span, number of weeds, chlorophyll fluorescence of the grass, a visual quality rating of the grass (0-9), and the weight of dried grass clippings. 
The results of this phase, conducted over 12 weeks with three applications at intervals of 4 weeks each, showed that the combination of ammonium sulphate and iron sulphate was the most effective at halting the growth and establishment of dandelions and clovers while also improving turf health. The second phase, which compared the ammonium sulphate × iron sulphate, weedicide, and control treatments, was conducted outdoors on already established perennial turf with weeds under natural field conditions. After 12 weeks of observation, the treatments were comparable in terms of weed control, but the ammonium sulphate × iron sulphate treatment fared much better on the visual quality of the turf and the other quality ratings. Preliminary results from these experiments thus suggest that nutrient management based on nitrogen and iron compounds could be a useful, environmentally friendly alternative for controlling broadleaf weeds and improving the health and quality of turfgrass.
Keywords: broadleaf weeds, nitrogen, iron, turfgrass
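The treatment structure described above (two nitrogen sources, two iron sources, their four crosses, a weedicide, and a control, each replicated three times) can be enumerated programmatically. This is a sketch of the design logic only, not the authors' analysis code:

```python
from itertools import product

nitrogen = ["urea", "ammonium sulphate"]
iron = ["chelated iron", "iron sulphate"]

# the four nitrogen x iron crosses
combos = [f"{n} x {fe}" for n, fe in product(nitrogen, iron)]

# single compounds, crosses, plus the weedicide and control arms
treatments = nitrogen + iron + combos + ["2,4-D weedicide", "control"]

replicates = 3
# one experimental unit (container) per treatment-replicate pair
units = [(t, r) for t in treatments for r in range(1, replicates + 1)]
```

Ten treatments times three replicates yields the 30 experimental units reported, and randomizing the order of `units` before placement would give a completely randomized design.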
Procedia PDF Downloads 72
452 Inhibition of Food Borne Pathogens by Bacteriocinogenic Enterococcus Strains
Authors: Neha Farid
Abstract:
Due to the abuse of antimicrobial medications in animal feed, the occurrence of multi-drug resistant (MDR) pathogens in foods is a growing public health concern worldwide. MDR pathogens can penetrate the food chain, posing a serious risk to both consumers and animals. Food pathogens are biological agents that tend to cause disease in the host upon ingestion. Major reservoirs of foodborne pathogens include food-producing animals such as cows, pigs, goats, sheep, and deer, whose intestines are heavily colonized by several different types of food pathogens. Bacterial food pathogens are the main cause of foodborne disease in humans; almost 66% of the reported cases of food illness each year are caused by bacterial food pathogens. When ingested, these pathogens reproduce and survive in, or form various kinds of toxins inside, host cells, causing severe infections. The genus Listeria consists of gram-positive, rod-shaped, non-spore-forming bacteria. Listeria monocytogenes causes listeriosis, a gastroenteritis that induces fever, vomiting, and severe diarrhea. Campylobacter jejuni is a gram-negative, curved rod-shaped bacterium causing foodborne illness. Its major sources are livestock and poultry; chicken in particular is highly colonized with Campylobacter jejuni. The widespread growth of antibiotic-resistant bacteria and the slowing discovery of new classes of antibiotics are serious public health concerns. The objective of this study is to assess the antibacterial activity of certain broad-range antibiotics and of our bacteriocins of interest, from specific Enterococcus faecium strains, with a view to blocking microbial contamination pathways and safeguarding food by lowering food deterioration, contamination, and foodborne illness. 
The food pathogens were isolated from various dairy products and meat samples. The isolates were tested for the presence of Listeria and Campylobacter by gram staining and biochemical testing, and were further sub-cultured on selective media enriched with growth supplements for Listeria and Campylobacter. All six strains of Listeria and Campylobacter were tested against ten antibiotics. The Campylobacter strains showed resistance to all the antibiotics, whereas Listeria was resistant only to nalidixic acid and erythromycin. The strains were then tested against the two bacteriocins isolated from Enterococcus faecium, which showed better antimicrobial activity against the food pathogens and can be used as potential antimicrobials for food preservation. The study thus concluded that natural antimicrobials could be used as alternatives to synthetic antimicrobials to overcome the problems of food spoilage and severe foodborne disease.
Keywords: food pathogens, listeria, campylobacter, antibiotics, bacteriocins
Procedia PDF Downloads 71
451 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group on dichotomous outcomes. Its popularity stems primarily from its stability and robustness to model misspecification. The situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimating an adjusted relative risk or risk difference in clinical trials, partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider: the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all combinations of sample size (200, 1000, and 5000), outcome prevalence (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7), representing weak, moderate, or strong relationships. 
Treatment effects of 0, -0.5, and 1 on the log scale will cover the null (H0) and alternative (H1) hypotheses, allowing coverage and power to be evaluated in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks, but it is unclear which method(s) are the most efficient, preserve the type-I error rate, are robust to model misspecification, or are the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcomes do not arise from marginal binary data. It also appears that marginal standardisation and convex optimisation may perform better than the log-binomial GLM with IWLS.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
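The estimands being compared can be made concrete with a stripped-down version of one simulation scenario. This sketch (not the authors' simulation framework, which includes covariates and many estimators) generates a two-arm trial with a true log relative risk and computes the unadjusted relative risk and risk difference:

```python
import math
import random

def simulate_trial(n_per_arm, p_control, log_rr, seed=1):
    """Simulate one two-arm trial with binary outcomes; the
    treatment effect is specified as a log relative risk, matching
    the scenario parameterisation above. Returns observed event
    rates (treatment, control)."""
    rng = random.Random(seed)
    p_treat = min(p_control * math.exp(log_rr), 1.0)
    control = sum(rng.random() < p_control for _ in range(n_per_arm))
    treat = sum(rng.random() < p_treat for _ in range(n_per_arm))
    return treat / n_per_arm, control / n_per_arm

def unadjusted_estimates(r_treat, r_control):
    """Unadjusted relative risk and risk difference."""
    return r_treat / r_control, r_treat - r_control

# one large trial: 10% control event rate, true RR = 1.5
rt, rc = simulate_trial(5000, 0.10, math.log(1.5))
rr, rd = unadjusted_estimates(rt, rc)
```

Repeating this over many seeds and comparing each estimator's output against the true RR is exactly how bias, MSE, and convergence rates are accumulated in a study of this kind; covariate adjustment is where the candidate methods diverge.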
Procedia PDF Downloads 114