Search results for: circuit models
1580 The Role of Semi Open Spaces on Exploitation of Wind-Driven Ventilation
Authors: Paria Saadatjoo
Abstract:
Given that HVAC systems are among the main producers of carbon dioxide, developing ways to reduce dependence on these systems and make use of natural resources is essential to achieving environmentally friendly buildings. A major part of a building's potential for using natural energy resources depends on its physical features, so architectural decisions at the first step of the design process can influence the building's energy efficiency significantly. Implementation of semi-open spaces in solid apartment blocks, inspired by the courtyard concept of ancient buildings, is currently enjoying great popularity as a passive cooling strategy. However, analyzing these features and their effect on wind behavior at the initial design steps is a difficult task for architects. The main objective of this research was to investigate the influence of the semi-open to closed space ratio on airflow patterns in and around midrise buildings and to identify the best ratio in terms of harnessing natural ventilation. The main strategy of this paper was semi-experimental, and the research methodology was descriptive statistics. In the first step, by changing the terrace area, 6 models with various open to closed space ratios were created. These forms were then transferred to CFD software to calculate the primary indicators of natural ventilation potential, such as the wind force coefficient, airflow rate, and age-of-air distribution. The investigations indicated that modifying the terrace area, in other words the open to closed space ratio, influenced the wind force coefficient, airflow rate, and age-of-air distribution.
Keywords: natural ventilation, wind, midrise, open space, energy
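One of the simplest screening indicators behind such CFD results is the air change rate implied by a given airflow rate. A minimal sketch (the flow rate and room volume below are illustrative assumptions, not values from the study):

```python
def air_changes_per_hour(flow_rate_m3_s: float, room_volume_m3: float) -> float:
    """Convert a ventilation flow rate (m^3/s) into air changes per hour (ACH)."""
    return flow_rate_m3_s * 3600.0 / room_volume_m3

# Example: 0.05 m^3/s of wind-driven airflow supplied to a 60 m^3 room
ach = air_changes_per_hour(0.05, 60.0)
```

A higher open to closed space ratio that raises the airflow rate raises this indicator proportionally.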
Procedia PDF Downloads 170
1579 Neuroprotective Effects of Gly-Pro-Glu-Thr-Ala-Phe-Leu-Arg, a Peptide Isolated from Lupinus angustifolius L. Protein Hydrolysate
Authors: Maria Del Carmen Millan-Linares, Ana Lemus Conejo, Rocio Toscano, Alvaro Villanueva, Francisco Millan, Justo Pedroche, Sergio Montserrat-De La Paz
Abstract:
GPETAFLR (Glycine-Proline-Glutamic Acid-Threonine-Alanine-Phenylalanine-Leucine-Arginine) is a peptide isolated from Lupinus angustifolius L. protein hydrolysate (LPH). Herein, the effect of this peptide was investigated in two different models of neuroinflammation: the immortalized murine microglial cell line BV-2 and a high-fat-diet-induced obesity mouse model. Methods and Results: Effects of GPETAFLR on neuroinflammation were evaluated by RT-qPCR, flow cytometry, and ELISA techniques. In BV-2 microglial cells, lipopolysaccharide (LPS) enhanced the release of pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6), whereas GPETAFLR decreased pro-inflammatory cytokine levels and increased the release of the anti-inflammatory cytokine IL-10. Analysis of M1 (CCR7 and iNOS) and M2 (Arg-1 and Ym-1) polarization markers showed that the GPETAFLR octapeptide decreased M1 marker expression and increased M2 marker expression compared to LPS. Animal model results indicate that GPETAFLR has an immunomodulatory capacity, both decreasing the pro-inflammatory cytokine IL-6 and increasing the anti-inflammatory cytokine IL-10 in brain tissue. Polarization markers in brain tissue were also modulated by GPETAFLR, which decreased pro-inflammatory (M1) expression and increased anti-inflammatory (M2) expression. Conclusion: Our results suggest that GPETAFLR isolated from LPH has significant potential for the management of neuroinflammatory conditions and offers benefits derived from the consumption of Lupinus angustifolius L. in the prevention of neuroinflammation-related diseases.
Keywords: GPETAFLR peptide, BV-2 cell line, neuroinflammation, cytokines, high-fat diet
Procedia PDF Downloads 148
1578 Hemodynamics of a Cerebral Aneurysm under Rest and Exercise Conditions
Authors: Shivam Patel, Abdullah Y. Usmani
Abstract:
Physiological flow under rest and exercise conditions in patient-specific cerebral aneurysm models is numerically investigated. A finite-volume based code with BiCGStab as the linear equation solver is used to simulate unsteady three-dimensional flow through the incompressible Navier-Stokes equations. Flow characteristics are first established in a healthy cerebral artery for both physiological conditions. The effect of a saccular aneurysm on cerebral hemodynamics is then explored through a comparative analysis of the velocity distribution, nature of flow patterns, wall pressure, and wall shear stress (WSS) against the reference configuration. The efficacy of coil embolization as a potential strategy of surgical intervention is also examined by modelling the coil as a homogeneous and isotropic porous medium, to which the extended Darcy's law, including the Forchheimer and Brinkman terms, is applicable. The Carreau-Yasuda non-Newtonian blood model is incorporated to capture the shear-thinning behavior of blood. Rest and exercise conditions correspond to normotensive and hypertensive blood pressures, respectively. The results indicate that fluid impingement on the outer wall of the arterial bend leads to abnormality in the distribution of wall pressure and WSS, which is expected to be the primary cause of the localized aneurysm. Exercise correlates with elevated flow velocity, vortex strength, wall pressure, and WSS inside the aneurysm sac. With the insertion of coils in the aneurysm cavity, the flow bypasses the dilatation, leading to a decline in flow velocities and WSS. Particle residence time is observed to be lower under exercise conditions, a factor favorable for arresting plaque deposition and combating atherosclerosis.
Keywords: 3D FVM, cerebral aneurysm, hypertension, coil embolization, non-Newtonian fluid
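The Carreau-Yasuda model named above has a compact closed form for the apparent viscosity. A sketch follows; the default parameter values are commonly cited literature values for blood, assumed here for illustration rather than taken from the paper:

```python
import math

def carreau_yasuda(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568, a=2.0):
    """Carreau-Yasuda apparent viscosity (Pa.s):
    mu = mu_inf + (mu0 - mu_inf) * [1 + (lam*gamma_dot)^a]^((n-1)/a).
    Defaults are commonly cited blood parameters (an assumption)."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)

# Shear thinning: viscosity drops as the shear rate rises
low = carreau_yasuda(1.0)      # low-shear viscosity
high = carreau_yasuda(1000.0)  # high-shear viscosity, approaching mu_inf
```

At very high shear rates the model plateaus at the infinite-shear viscosity mu_inf, which is what lets it capture blood's shear-thinning behavior within a Navier-Stokes solver.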
Procedia PDF Downloads 234
1577 A Study on the Effectiveness of Alternative Commercial Ventilation Inlets That Improve Energy Efficiency of Building Ventilation Systems
Authors: Brian Considine, Aonghus McNabola, John Gallagher, Prashant Kumar
Abstract:
Passive air pollution control devices known as aspiration efficiency reducers (AER) have been developed using aspiration efficiency (AE) concepts. Their purpose is to reduce the concentration of particulate matter (PM) drawn into a building air handling unit (AHU) through alterations to the inlet design, thereby improving energy consumption. This paper examines the effect of installing a deflector system around an AER-AHU inlet in both forward- and rear-facing orientations relative to the wind. The study found that these deflectors are an effective passive control method for reducing AE at various ambient wind speeds over a range of microparticles of varying diameter. For a rear-facing AER-AHU, the deflector system induced a large wake zone at low ambient wind speeds, resulting in significantly lower AE than without deflectors. As the wind speed increased, both configurations contained a wake zone, but concentration gradients were much lower with the deflectors. For the forward-facing models, the deflector system was preferred at low ambient wind speed and higher Stokes numbers, but the difference became negligible as the Stokes number decreased. Similarly, there was no significant difference at higher wind speeds across the Stokes number range tested. The results demonstrate that a deflector system is a viable passive control method for reducing ventilation energy consumption.
Keywords: air handling unit, air pollution, aspiration efficiency, energy efficiency, particulate matter, ventilation
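The Stokes number that organizes the trends above compares particle inertia to the flow timescale around the inlet. A hedged sketch of its standard definition (the particle, wind, and inlet values below are illustrative assumptions, not the study's test matrix):

```python
def stokes_number(particle_density, particle_diameter, wind_speed, char_length,
                  air_viscosity=1.81e-5):
    """Stokes number St = rho_p * d_p^2 * U / (18 * mu * D) for a particle
    of density rho_p (kg/m^3) and diameter d_p (m) approaching an inlet of
    characteristic size D (m) in wind of speed U (m/s)."""
    return (particle_density * particle_diameter ** 2 * wind_speed
            / (18.0 * air_viscosity * char_length))

# 10-micron unit-density particle, 5 m/s wind, 0.5 m inlet:
st = stokes_number(1000.0, 10e-6, 5.0, 0.5)  # small St: particle follows the flow closely
```

Larger St (heavier or faster particles) means more inertia, which is why the deflectors' benefit varies across the Stokes number range.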
Procedia PDF Downloads 118
1576 The Role of Sustainable Financing Models for Smallholder Tree Growers in Ghana
Authors: Raymond Awinbilla
Abstract:
The call for tree planting has long been set in motion by the government of Ghana. The Forestry Commission encourages plantation development through numerous interventions, including formulating policies and enacting legislation. However, forest policies have failed, and that has generated major concern over the vast gap between the intentions of national policies and the realities established. This study addresses three objectives: 1) assessing farmers' response and contribution to the tree planting initiative; 2) identifying socio-economic factors hindering the development of smallholder plantations as a livelihood strategy; and 3) determining the level of support available for smallholder tree growers and the factors influencing it. The fieldwork was done in 12 farming communities in Ghana. The article shows that farmers have responded to the call for tree planting and have planted both exotic and indigenous tree species. Farmers have converted 17.2% (369.48 ha) of their total land into plantations and have no problem with land tenure. Operational and marketing constraints include lack of funds for operations, delays in payment, low prices for wood, manipulation of prices by buyers, documentation demands by buyers, and no ready market for harvested wood products. Environmental institutions encourage tree planting; the only exception is the Lands Commission. Support available to farmers includes capacity building in silvicultural practices, organisation of farmers, and linkage to markets and finance. Efforts by the Government of Ghana to enhance forest resources in the country could rely on the input of local populations.
Keywords: livelihood strategy, marketing constraints, environmental institutions, silvicultural practices
Procedia PDF Downloads 58
1575 Rheological Properties of Red Beet Root Juice Squeezed from Ultrasonicated Red Beet Root Slices
Authors: M. Çevik, S. Sabancı, D. Tezcan, C. Çelebi, F. İçier
Abstract:
Ultrasound is a non-thermal food processing technology that has been used widely in the food industry in recent years. Ultrasound application in the food industry is divided into two groups: low- and high-intensity applications. While low-intensity ultrasound is used to obtain information about the physicochemical properties of foods, high-intensity ultrasound is used to extract bioactive components and to inactivate microorganisms and enzymes. In this study, an ultrasound pre-treatment at constant power (1500 W) and fixed frequency (20 kHz) was applied to red beetroot slices with dimensions of 25×25×50 mm at a constant temperature (25°C) for different application times (0, 5, 10, 15, and 20 min). The ultrasonicated red beetroot slices were squeezed immediately, and the changes in the rheological properties of the juice depending on the ultrasonication duration applied to the slices were investigated. Rheological measurements were conducted using a Brookfield viscometer (LVDV-II Pro, USA). Shear stress-shear rate data were obtained from experimental measurements over the 0-200 rpm range using spindle 18. Rheological properties of the juice were determined by fitting these data to several rheological models (Newtonian, Bingham, Power Law, Herschel-Bulkley). The Power Law model gave the best fit for both untreated red beetroot juice (R²=0.991, χ²=0.0007, RMSE=0.0247) and juice produced from ultrasonicated slices (R²=0.993, χ²=0.0006, RMSE=0.0216 for the 20 min pre-treatment). The k (consistency coefficient) and n (flow behavior index) values of the juices were not affected by the duration of ultrasonication applied to the slices. Ultrasound treatment thus does not change the rheological properties of red beetroot juice, which can be explained by the inability of the applied ultrasound intensity to homogenize the samples.
Keywords: ultrasonication, rheology, red beet root slice, juice
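The Power Law (Ostwald-de Waele) fit described above, tau = k * gamma^n, can be reproduced by simple linear least squares in log-log space, since log(tau) = log(k) + n*log(gamma). A minimal sketch with synthetic shear-thinning data (not the study's measurements):

```python
import math

def fit_power_law(shear_rates, shear_stresses):
    """Fit tau = k * gamma^n by linear least squares on log-transformed data.
    Returns (k, n): consistency coefficient and flow behavior index."""
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(t) for t in shear_stresses]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))   # slope = flow behavior index
    k = math.exp(ybar - n * xbar)              # exp(intercept) = consistency coefficient
    return k, n

# Synthetic shear-thinning data generated from tau = 0.5 * gamma^0.8
gammas = [1.0, 5.0, 10.0, 50.0, 100.0]
taus = [0.5 * g ** 0.8 for g in gammas]
k, n = fit_power_law(gammas, taus)
```

n < 1 indicates shear-thinning behavior; comparing k and n across treatments is how the abstract concludes the ultrasonication duration had no effect.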
Procedia PDF Downloads 406
1574 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs
Authors: H. M. Soroush
Abstract:
The problem of scheduling products and services for on-time delivery is of paramount importance in today’s competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers should frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard to solve; however, we explore three scenarios wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time and introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well, yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single machine models.
Keywords: number of late jobs, scheduling, single server, stochastic
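For the fully deterministic, unit-weight special case, minimizing the number of tardy jobs on a single server is solvable in polynomial time by the classic Moore-Hodgson algorithm. The sketch below illustrates that solvable special case, not the authors' heuristics for the stochastic scenarios:

```python
import heapq

def min_tardy_jobs(jobs):
    """Moore-Hodgson: minimize the number of tardy jobs on one machine.
    jobs: list of (processing_time, due_date). Returns the number of tardy jobs."""
    on_time = []     # max-heap (by negation) of processing times scheduled on time
    completion = 0   # completion time of the current on-time sequence
    tardy = 0
    for p, d in sorted(jobs, key=lambda j: j[1]):  # earliest due date first
        heapq.heappush(on_time, -p)
        completion += p
        if completion > d:
            # The current job would be late: drop the longest job scheduled so far.
            completion += heapq.heappop(on_time)   # pops -p_max, i.e. subtracts p_max
            tardy += 1
    return tardy
```

Sorting by due date and always ejecting the longest job keeps the on-time set maximal, which is why the deterministic case is easy while the stochastic variants require heuristics.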
Procedia PDF Downloads 497
1573 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI
Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil
Abstract:
The paper is devoted to numerically investigating the influence of air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in a room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that supplies the direction and magnitude of air flow into the room is developed. The effect of air distribution on thermal comfort parameters was investigated by changing the air supply diffuser type, angles, and velocity; diffuser locations and numbers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software "Fluent 6.3" is used to solve the differential equations governing the conservation of mass, the three momentum components, and energy in the computation of the air flow distribution. Turbulence effects of the flow are represented by a well-developed two-equation turbulence model; in this work, the so-called standard k-ε model, one of the most widespread turbulence models for industrial applications, was utilized. Air dry-bulb temperature, air velocity, relative humidity, and turbulence parameters are used for numerical predictions of indoor air distribution and thermal comfort. The thermal comfort predictions in this work were based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model, and the PPD (Predicted Percentage Dissatisfied) model; the PMV and PPD were estimated using Fanger’s model.
Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency
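Two of the comfort indices named above have compact forms: Fanger's PPD as a function of PMV (the ISO 7730 relation) and ADPI as the share of measurement points whose effective draft temperature falls in a comfort band. A sketch follows; the band limits and velocity cap use the common ASHRAE values, an assumption here rather than the paper's settings:

```python
import math

def ppd_from_pmv(pmv):
    """Fanger's PPD (%) as a function of PMV (ISO 7730 relation)."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2)

def adpi(points, t_control, low=-1.7, high=1.1, v_max=0.35):
    """ADPI: percentage of points whose effective draft temperature
    theta = (T_x - T_c) - 8*(V_x - 0.15) lies in [low, high] with V_x < v_max.
    points: list of (local_temp_C, local_velocity_m_s); t_control: control temp (C)."""
    ok = 0
    for t, v in points:
        theta = (t - t_control) - 8.0 * (v - 0.15)
        if low <= theta <= high and v < v_max:
            ok += 1
    return 100.0 * ok / len(points)
```

A neutral vote (PMV = 0) still yields PPD = 5%, reflecting that some occupants are always dissatisfied; ADPI near 100% indicates the diffuser delivers uniform, draft-free conditions.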
Procedia PDF Downloads 409
1572 Influence of Magnetized Water on the Split Tensile Strength of Concrete
Authors: Justine Cyril E. Nunag, Nestor B. Sabado Jr., Jienne Chester M. Tolosa
Abstract:
Concrete has high compressive strength but low tensile strength. The small tensile strength of concrete is regarded as its primary weakness, which is why it is typically reinforced with steel, a material that is resistant to tension. Even with steel, however, cracking can occur. In strengthening concrete, only a few researchers have modified the water used in the concrete mix. This study compares the split tensile strength of normal structural concrete to concrete prepared with magnetized water and a quick-setting admixture. In this context, magnetized water is tap water that has undergone a magnetic process. To test the hypothesis that magnetized water leads to higher split tensile strength, twenty concrete specimens were made in four groups of five samples each, differentiated by the number of magnetization cycles (0, 50, 100, and 150). The split tensile strength data from the universal testing machine were then analyzed using various statistical models and tests to determine the significance of the effect of magnetized water. The result showed a moderate (+0.579) but still significant degree of correlation. The researchers also found that using magnetized water for 50 cycles did not produce a significant increase in the concrete's split tensile strength, which influenced the analysis of variance. These results suggest that a concrete mix containing magnetized water and a quick-setting admixture alters the typical split tensile strength of normal concrete: the hardness property of magnetized water influenced the split tensile strength, and a higher number of cycles results in stronger water magnetism. The laboratory test results show that a higher cycle count translates to a higher tensile strength.
Keywords: hardness property, magnetic water, quick-setting admixture, split tensile strength, universal testing machine
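A "degree of correlation" like the reported +0.579 is typically a Pearson product-moment coefficient between magnetization cycles and measured strength. A minimal sketch of its computation (the cycle/strength pairs below are illustrative, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - xbar) ** 2 for x in xs))
    sy = math.sqrt(sum((y - ybar) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative cycle counts vs. mean split tensile strengths (MPa)
r = pearson_r([0.0, 50.0, 100.0, 150.0], [2.1, 2.2, 2.5, 2.6])  # strong positive correlation
```

A positive r indicates strength rising with cycle count; its significance would then be checked with a t-test or ANOVA, as the abstract describes.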
Procedia PDF Downloads 146
1571 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa
Authors: Samy A. Khalil, U. Ali Rahoma
Abstract:
Measurements of solar radiation and satellite data have been routinely utilized to estimate solar energy. However, the temporal coverage of satellite data has some limits. Reanalysis, also known as "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the ERA-5 reanalysis dataset for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the distribution of global solar radiation (GSR) over five chosen areas in North Africa during the ten-year period from 2011 to 2020. To investigate seasonal change in dataset performance, a seasonal statistical analysis was conducted, which showed a considerable difference in errors throughout the year. Altering the temporal resolution of the data used for comparison alters the performance of the dataset: monthly mean values indicate better performance, but data accuracy is degraded. Solar resource assessment and power estimation are discussed using the ERA-5 solar radiation data. The average values of the mean bias error (MBE), root mean square error (RMSE), and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period. The correlation coefficient (R²) varies from 0.93 to 0.99. The objective of this research is to provide a reliable representation of solar radiation to aid the use of solar energy in all sectors.
Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa
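The validation statistics quoted above (MBE, RMSE, MAE, R²) can be computed directly from paired ground and reanalysis values. A sketch using one common definition of R², the coefficient of determination (the paper may use the squared correlation instead; the sample values below are illustrative):

```python
import math

def error_stats(observed, modeled):
    """MBE, RMSE, MAE and R^2 between ground observations and reanalysis values."""
    m = len(observed)
    mbe = sum(p - o for o, p in zip(observed, modeled)) / m
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(observed, modeled)) / m)
    mae = sum(abs(p - o) for o, p in zip(observed, modeled)) / m
    obar = sum(observed) / m
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, modeled))
    ss_tot = sum((o - obar) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot
    return mbe, rmse, mae, r2

# Illustrative daily GSR values (kWh/m^2): ground vs. reanalysis
mbe, rmse, mae, r2 = error_stats([5.2, 6.1, 7.3], [5.0, 6.4, 7.1])
```

MBE captures systematic over- or under-estimation, RMSE penalizes large misses, and R² summarizes how much of the observed variability the reanalysis reproduces.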
Procedia PDF Downloads 97
1570 Magnetized Cellulose Nanofiber Extracted from Natural Resources for the Application of Hexavalent Chromium Removal Using the Adsorption Method
Authors: Kebede Gamo Sebehanie, Olu Emmanuel Femi, Alberto Velázquez Del Rosario, Abubeker Yimam Ali, Gudeta Jafo Muleta
Abstract:
Water pollution is one of the most serious worldwide issues today. Among water pollutants, heavy metals are becoming a concern for the environment and human health due to their non-biodegradability and bioaccumulation. In this study, a magnetite-cellulose nanocomposite derived from renewable resources is employed for hexavalent chromium removal by adsorption. Magnetite nanoparticles (MNPs) were synthesized directly from iron ore using solvent extraction and a co-precipitation technique. Cellulose nanofiber (CNF) was extracted from sugarcane bagasse using alkaline treatment and acid hydrolysis. Before and after the adsorption process, the MNPs-CNF composites were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier-transform infrared spectroscopy (FTIR), vibrating sample magnetometry (VSM), and thermogravimetric analysis (TGA). The impacts of several parameters, such as pH, contact time, initial pollutant concentration, and adsorbent dose, on adsorption efficiency and capacity were examined. The kinetics and isotherms of Cr(VI) adsorption were also studied. The highest removal was obtained at pH 3, and it took 80 minutes to establish adsorption equilibrium. The Langmuir and Freundlich isotherm models were used, and the experimental data fit well with the Langmuir model, which gave a maximum adsorption capacity of 8.27 mg/g. A kinetic study using pseudo-first-order and pseudo-second-order equations revealed that the pseudo-second-order equation was better suited to representing the adsorption kinetic data. Based on the findings, pure MNPs and MNPs-CNF nanocomposites could be used as effective adsorbents for the removal of Cr(VI) from wastewater.
Keywords: magnetite-cellulose nanocomposite, hexavalent chromium, adsorption, sugarcane bagasse
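The Langmuir fit reported above can be reproduced via the common linearized form Ce/qe = Ce/qmax + 1/(qmax·KL), whose slope and intercept yield the maximum capacity and affinity constant. A sketch; the equilibrium data below are synthetic, generated from the reported qmax and an assumed KL purely for illustration:

```python
def fit_langmuir(ce, qe):
    """Fit the Langmuir isotherm q_e = q_max*K_L*C_e / (1 + K_L*C_e)
    via its linearized form C_e/q_e = C_e/q_max + 1/(q_max*K_L).
    ce: equilibrium concentrations (mg/L); qe: adsorbed amounts (mg/g).
    Returns (q_max, K_L)."""
    ys = [c / q for c, q in zip(ce, qe)]
    m = len(ce)
    xbar, ybar = sum(ce) / m, sum(ys) / m
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(ce, ys))
             / sum((x - xbar) ** 2 for x in ce))
    intercept = ybar - slope * xbar
    return 1.0 / slope, slope / intercept  # q_max, K_L

# Synthetic data generated with q_max = 8.27 mg/g and an assumed K_L = 0.5 L/mg
ce = [1.0, 2.0, 5.0, 10.0, 20.0]
qe = [8.27 * 0.5 * c / (1 + 0.5 * c) for c in ce]
q_max, k_l = fit_langmuir(ce, qe)
```

A good linear fit of Ce/qe against Ce is precisely what "the experimental data fit well with the Langmuir model" means in practice.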
Procedia PDF Downloads 129
1569 A Five-Year Follow-up Survey Using Regression Analysis Finds Only Maternal Age to Be a Significant Medical Predictor for Infertility Treatment
Authors: Lea Stein, Sabine Rösner, Alessandra Lo Giudice, Beate Ditzen, Tewes Wischmann
Abstract:
For many couples bearing children is a consistent life goal; however, it cannot always be fulfilled. Undergoing infertility treatment does not guarantee pregnancies and live births; couples have to deal with miscarriages and sometimes discontinue treatment altogether. Significant medical predictors for the outcome of infertility treatment have yet to be fully identified. To further our understanding, a cross-sectional five-year follow-up survey was undertaken in which 95 women and 82 men who had been treated at the Women’s Hospital of Heidelberg University participated. Binary logistic regressions and parametric and non-parametric methods were used to determine the relevance of biological factors (infertility diagnoses, maternal and paternal age) and lifestyle factors (smoking, drinking, overweight and underweight) to the outcome of infertility treatment (clinical pregnancy, live birth, miscarriage, dropout rate). During infertility treatment, 72.6% of couples became pregnant and 69.5% were able to give birth; 27.5% of couples suffered miscarriages, and 20.5% decided to discontinue an unsuccessful fertility treatment. The binary logistic regression models for clinical pregnancies, live births, and dropouts were statistically significant for maternal age, whereas paternal age, maternal and paternal BMI, smoking, and infertility diagnoses and infections showed no significant predictive effect on any of the outcome variables. The results confirm an effect of maternal age on infertility treatment, whereas the relevance of other medical predictors remains unclear. Further investigations should be considered to increase our knowledge of medical predictors.
Keywords: advanced maternal age, assisted reproductive technology, female factor, male factor, medical predictors, infertility treatment, reproductive medicine
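The binary logistic regressions used here model a binary outcome (e.g., live birth) against predictors such as maternal age. A minimal gradient-descent sketch on a single standardized predictor (the data are hypothetical, and real analyses would use a statistics package with significance tests):

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """One-predictor logistic regression fit by batch gradient descent.
    xs: standardized predictor values; ys: binary outcomes (0/1).
    Returns (slope w, intercept b) of log-odds = w*x + b."""
    w, b = 0.0, 0.0
    m = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            grad_w += (p - y) * x
            grad_b += p - y
        w -= lr * grad_w / m
        b -= lr * grad_b / m
    return w, b

# Hypothetical standardized predictor values and binary outcomes
w, b = fit_logistic([-2.0, -1.0, 0.0, 1.0, 2.0], [0, 0, 0, 1, 1])
```

The sign and significance of the fitted slope is what identifies a predictor like maternal age as relevant in the abstract's models.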
Procedia PDF Downloads 109
1568 Studying the Effects of Ruta Graveolens on Spontaneous Motor Activity, Skeletal Muscle Tone and Strychnine Induced Convulsions in Albino Mice and Rats
Authors: Shaban Saad, Syed Ahmed, Suher Aburawi, Isabel Fong
Abstract:
Ruta graveolens is a plant commonly found in North Africa and southern Europe and is reported to be used traditionally for epilepsy and some other illnesses. The acute and sub-acute effects of the alcoholic extract residue were tested for possible anti-epileptic and skeletal muscle relaxant activity. The effect of the extract on rat spontaneous motor activity (SMA) was also investigated using an open field. We previously demonstrated the anticonvulsant activity of the plant against pentylenetetrazol- and electrically induced convulsions; therefore, in this study, strychnine was used to induce convulsions in order to explore the mechanism of the anticonvulsant activity. The skeletal muscle relaxant activity of Ruta graveolens was studied using pull-up and rod-hanging tests in rats. At a concentration of 5% w/v, the extract protected mice against strychnine-induced myoclonic jerks and death. The pull-up and rod-hanging tests pointed to skeletal muscle relaxant activity at higher concentrations. Ruta graveolens extract also significantly decreased the number of squares visited by rats in the open field apparatus at all tested concentrations (3.5-20% w/v), although the significant decrease in the number of rearings was only noticed at concentrations of 15 and 20% w/v. The results indicate that Ruta graveolens contains compound(s) capable of inhibiting convulsions, decreasing SMA, and/or diminishing skeletal muscle tone in animal models. These data, together with the previously generated data, point to a general CNS-depressant effect of Ruta graveolens.
Keywords: Ruta graveolens, open field, skeletal muscle relaxation
Procedia PDF Downloads 418
1567 PitMod: The Lorax Pit Lake Hydrodynamic and Water Quality Model
Authors: Silvano Salvador, Maryam Zarrinderakht, Alan Martin
Abstract:
Open pits left by mining are filled by water over time until the water reaches the elevation of the local water table, generating mine pit lakes. There are several specific regulations about the water quality of pit lakes, and mining operations must keep the quality of groundwater above pre-defined standards. Therefore, an accurate, acceptable numerical model predicting pit lakes' water balance and water quality is needed in advance of mine excavation. We continue analyzing and developing the model introduced by Crusius, Dunbar, et al. (2002) for pit lakes. This model, called "PitMod", simulates the physical and geochemical evolution of pit lakes over time scales ranging from a few months up to a century or more. Here, a lake is approximated as one-dimensional, horizontally averaged vertical layers. PitMod calculates the time-dependent vertical distribution of physical and geochemical pit lake properties, such as temperature, salinity, conductivity, pH, trace metals, and dissolved oxygen, within each model layer. The model considers the effects of pit morphology, climate data, multiple surface and subsurface (groundwater) inflows/outflows, precipitation/evaporation, surface ice formation/melting, vertical mixing due to surface wind stress, convection, and background turbulence, and it handles equilibrium geochemistry by linking to PHREEQC for the geochemical reactions. PitMod, which has been used and validated in over 50 mine projects since 2002, incorporates physical processes like those found in other lake models such as DYRESM (Imerito 2007). However, unlike DYRESM, PitMod also includes geochemical processes, pit wall runoff, and other effects. In addition, PitMod is actively under development and can be customized as required for a particular site.
Keywords: pit lakes, mining, modeling, hydrology
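The water-balance core of such a model can be caricatured in a few lines: step the lake volume forward under net inflow until it reaches the volume corresponding to the local water table. A deliberately simplified sketch with constant rates and no layering or geochemistry (all numbers illustrative, not PitMod's scheme):

```python
def fill_pit_lake(volume_to_table, gw_inflow, precip, evap, dt=1.0, max_steps=100000):
    """Step a simple water balance dV = (groundwater inflow + precipitation
    - evaporation) * dt until the lake volume reaches the water-table volume.
    Rates in m^3 per time step unit; returns (steps_taken, final_volume)."""
    v = 0.0
    for step in range(1, max_steps + 1):
        v += (gw_inflow + precip - evap) * dt
        if v >= volume_to_table:
            return step, v
    return max_steps, v

# 1e6 m^3 of pit below the water table, net gain 500 m^3/day
steps, v = fill_pit_lake(1e6, 600.0, 50.0, 150.0)
```

A full model like PitMod replaces the constant rates with time-varying climate and hydrogeological inputs and resolves the vertical layer structure on top of this balance.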
Procedia PDF Downloads 158
1566 The Impact of Iso 9001 Certification on Brazilian Firms’ Performance: Insights from Multiple Case Studies
Authors: Matheus Borges Carneiro, Fabiane Leticia Lizarelli, José Carlos De Toledo
Abstract:
The evolution of quality management in companies was strongly enabled by, among other things, ISO 9001 certification, which is considered a crucial requirement by several customers. Likewise, performance measurement provides useful insights for companies to identify how their decision-making process is reflected in their improvement. One of the most used performance measurement models is the balanced scorecard (BSC), which uses four perspectives to address a firm's performance: financial, internal process, customer satisfaction, and learning and growth. Studies relating ISO 9001 to business performance have mostly adopted a quantitative approach to identify the standard's causal effect on a firm's performance. However, to verify how this influence occurs, an in-depth analysis within a qualitative approach is required. Therefore, this paper aims to verify the impact of ISO 9001:2015 on Brazilian firms' performance from the balanced scorecard perspective. Nine certified companies located in the Southeast region of Brazil were studied through a multiple case study approach. Within this study, it was possible to identify a positive impact of ISO 9001 on the firms' overall performance, and four critical success factors (CSFs) were identified as relevant to the linkage between ISO 9001 and firms' performance: employee involvement, top management, process management, and customer focus. Due to the COVID-19 pandemic, interviews were limited to the quality management specialist, and the sample was limited since several companies were closed during the period of the study. This study presents an in-depth analysis of how ISO 9001 certification and firms' performance are related in a developing country.
Keywords: balanced scorecard, Brazilian firms’ performance, critical success factors, ISO 9001 certification, performance measurement
Procedia PDF Downloads 198
1565 The Effects of Cost-Sharing Contracts on the Costs and Operations of E-Commerce Supply Chains
Authors: Sahani Rathnasiri, Pritee Ray, Sardar M. N. Isalm, Carlos A. Vega-Mejia
Abstract:
This study develops a cooperative game theory-based cost-sharing contract model for a business-to-consumer (B2C) e-commerce supply chain to minimize the overall supply chain costs and the individual costs within an information asymmetry scenario. The objective of this study is to address the issue of strategic interactions among the key players of the e-commerce supply chain operation, which impede optimal operational outcomes. Game theory has been applied in the field of supply chain management to resolve strategic decision-making issues; however, most studies are limited to two echelons of the supply chain, and multi-echelon supply chain optimizations based on game-theoretic models are less explored in the previous literature. This study adopts a cooperative game model to focus on the common payoff of operations and addresses the issues of information asymmetry and coordination in a three-echelon e-commerce supply chain. The cost-sharing contract model integrates operational features such as production, inventory management, and distribution with the contract-related constraints. The outcomes of the model highlight the importance of all players maintaining lower operational costs to obtain benefits from the cost-sharing contract. Further, the cost-sharing contract ensures true cost revelation and hence eliminates the information asymmetry issues among the players. Comparing the results of the contract model with the decentralized e-commerce supply chain operation further emphasizes that the cost-sharing contract derives Pareto-improved outcomes and minimizes the costs of the overall e-commerce supply chain operation.
Keywords: cooperative game theory, cost-sharing contract, e-commerce supply chain, information asymmetry
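A standard cooperative-game device for splitting a common cost among supply chain players is the Shapley value: each player's average marginal cost over all orders in which the coalition could form. A sketch with a hypothetical symmetric three-player chain (supplier, platform, logistics); the cost figures are invented for illustration and are not from the study, which uses its own contract model:

```python
from itertools import permutations

def shapley_values(players, cost):
    """Shapley cost shares: average marginal cost over all join orders.
    cost: function mapping a frozenset of players to the coalition's cost."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            shares[p] += cost(with_p) - cost(coalition)  # p's marginal cost
            coalition = with_p
    return {p: s / len(orders) for p, s in shares.items()}

def chain_cost(coalition):
    # Hypothetical costs: cooperation is cheaper than operating alone.
    table = {frozenset(): 0.0,
             frozenset({"S"}): 6.0, frozenset({"P"}): 6.0, frozenset({"L"}): 6.0,
             frozenset({"S", "P"}): 10.0, frozenset({"S", "L"}): 10.0,
             frozenset({"P", "L"}): 10.0, frozenset({"S", "P", "L"}): 12.0}
    return table[coalition]

shares = shapley_values(["S", "P", "L"], chain_cost)
```

The shares always sum to the grand-coalition cost (efficiency), which is the property that makes such allocations usable as the payoff side of a cost-sharing contract.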
Procedia PDF Downloads 1281564 An Extensive Review of Drought Indices
Authors: Shamsulhaq Amin
Abstract:
Drought can arise from several hydrometeorological phenomena that result in insufficient precipitation, soil moisture, and surface and groundwater flow, leading to conditions that are considerably drier than the usual water content or availability. Drought is often assessed using indices that are associated with meteorological, agricultural, and hydrological phenomena. In order to handle drought disasters effectively, it is essential to accurately determine the kind, intensity, and extent of the drought through drought characterization. This information is critical for managing the drought before, during, and after the rehabilitation process. Over a hundred drought indices have been proposed in the literature to evaluate drought disasters, encompassing a range of factors and variables. Some models utilise solely hydrometeorological drivers, while others employ remote sensing technology, and some incorporate a combination of both. Comprehending the entire notion of drought and taking into account drought indices along with their calculation processes are crucial for researchers in this discipline. Examining several drought metrics across different studies requires additional time and concentration. Hence, it is crucial to conduct a thorough examination of the approaches used in drought indices in order to identify the most straightforward approach and avoid discrepancies across scientific studies. For practical real-world application, categorizing indices by their use in meteorological, agricultural, and hydrological contexts can help researchers work more efficiently. Users can explore different indices side by side, compare their ease of use, and evaluate the benefits and drawbacks of each. Moreover, certain indices exhibit interdependence, which enhances comprehension of their connections and assists in making informed decisions about their suitability in various scenarios.
This study provides a comprehensive assessment of various drought indices, analysing their types and computation methodologies in a detailed and systematic manner.Keywords: drought classification, drought severity, drought indices, agriculture, hydrological
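To make the index idea concrete: many meteorological indices standardize accumulated precipitation over a rolling window. The sketch below is a simplified z-score stand-in for the SPI (the operational SPI first fits a gamma distribution and maps to normal quantiles, a step omitted here); the monthly values are invented:

```python
import statistics

def simple_standardized_index(precip, window=3):
    """Rolling-sum precipitation standardized to zero mean, unit variance.
    A simplified stand-in for SPI: the operational SPI fits a gamma
    distribution first; the plain z-score keeps the core idea."""
    sums = [sum(precip[i - window + 1 : i + 1]) for i in range(window - 1, len(precip))]
    mu, sd = statistics.mean(sums), statistics.stdev(sums)
    return [(s - mu) / sd for s in sums]

# Values below about -1.0 flag moderate drought, below -2.0 extreme drought.
monthly_mm = [80, 75, 90, 20, 10, 5, 12, 60, 85, 95, 70, 88]
index = simple_standardized_index(monthly_mm)
dry_months = sum(1 for z in index if z < -1.0)
```

Agricultural and hydrological indices follow the same standardize-an-anomaly pattern but swap precipitation for soil moisture or streamflow, which is why the categorization advocated above is natural.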
Procedia PDF Downloads 411563 ADP Approach to Evaluate the Blood Supply Network of Ontario
Authors: Usama Abdulwahab, Mohammed Wahab
Abstract:
This paper presents the application of uncapacitated facility location problems (UFLP) and 1-median problems to support decision making in blood supply chain networks. A plethora of factors make blood supply-chain networks a complex, yet vital problem for the regional blood bank. These factors are rapidly increasing demand; criticality of the product; strict storage and handling requirements; and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs. Clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility. In this model, the costs are the allocation cost, transportation costs, and inventory costs. To address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. The Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. The SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem
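The 1-median step can be sketched in a few lines: evaluate each candidate site and pick the one minimizing total demand-weighted Euclidean distance. The coordinates and demands below are invented for illustration, not the paper's Ontario data:

```python
import math

# Hypothetical demand nodes as (x, y, demand) -- not the actual Ontario data.
nodes = [(0, 0, 10), (4, 0, 20), (4, 3, 30), (0, 3, 15)]

def one_median(nodes):
    """Exhaustively evaluate each node as the single open facility and
    return the site minimizing total demand-weighted Euclidean distance."""
    best_site, best_cost = None, float("inf")
    for (fx, fy, _) in nodes:
        cost = sum(d * math.hypot(fx - x, fy - y) for (x, y, d) in nodes)
        if cost < best_cost:
            best_site, best_cost = (fx, fy), cost
    return best_site, best_cost

site, cost = one_median(nodes)
# The high-demand node (4, 3) wins here: demand weighting pulls the
# median toward large clients, which is why allocation cost alone is
# extended with transportation and inventory terms in the full model.
```

The full model in the abstract adds fixed opening, transportation, and inventory costs to this objective; the exhaustive loop is only tractable for small instances, which is where Lagrangian relaxation and branch-and-bound take over.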
Procedia PDF Downloads 5061562 Smart Technology Work Practices to Minimize Job Pressure
Authors: Babar Rasheed
Abstract:
Organizations are in a continuous effort to increase their yield and to retain their employees. Technology is considered an integral part of attaining apposite work practices, work environment, and employee engagement. Unconsciously, these advanced practices, like work from home and personalized intranets, are disturbing employee work-life balance, which ultimately increases psychological pressure on employees. The smart work practice is to develop business models and organizational practices with enhanced employee engagement, minimum waste of organizational resources, persistent revenue, and a positive contribution to global societies. The need for smart work practices comes from an increasing employee turnover rate, global economic recession, unnecessary job pressure, an increasing contingent workforce, and advancements in technology. Current practices are not elastic enough to tackle a changing global work environment and organizational competition. They are causing many reciprocal problems between employees and organizations. There is a conscious understanding among business sectors that smart work practices will deal with new-century challenges while addressing the relevant concerns. This paper aims to endorse customized smart work practice tools along with a knowledge framework to manage the growing concerns of employee engagement, use of technology, and organizational challenges for the business. This includes a Smart Management Information System to address the necessary concerns of employees, combined with a framework to extract the best possible ways to allocate companies’ resources and re-align only the required efforts to adopt the best possible strategy for controlling potential risks.Keywords: employees engagement, management information system, psychological pressure, current and future HR practices
Procedia PDF Downloads 1841561 Analysis of Organizational Hybrid Agile Methods Environments: Frameworks, Benefits, and Challenges
Authors: Majid Alsubaie, Hamed Sarbazhosseini
Abstract:
Many working environments have experienced increased uncertainty due to the fast-moving and unpredictable world. IT systems development projects, in particular, face several challenges because of their rapidly changing environments and emerging technologies. Information technology organizations within these contexts adapt systems development methodology and new software approaches to address this issue. One of these methodologies is the Agile method, which has gained huge attention in recent years. However, due to failure rates in IT projects, there is an increasing demand for the use of hybrid Agile methods among organizations. The scarce research in the area means that organizations do not have solid evidence-based knowledge for the use of hybrid Agile. This research was designed to provide further insights into the development of hybrid Agile methods within systems development projects, including how frameworks and processes are used and what benefits and challenges are gained and faced as a result of hybrid Agile methods. This paper presents how three organizations (two government and one private) use hybrid Agile methods in their Agile environments. The data was collected through interviews and a review of relevant documents. The results indicate that these organizations do not predominantly use pure Agile. Instead, they are waterfall organizations by virtue of their systems’ nature and complexity, and Agile is used underneath as the delivery model. The PRINCE2 Agile framework, SAFe, Scrum, and Kanban were the identified models and frameworks followed. This study also found that customer satisfaction and the ability to build quickly are the most frequently perceived benefits of using hybrid Agile methods. In addition, team resistance and scope changes are the common challenges identified by research participants in their working environments.
The findings can help in understanding Agile environmental conditions and projects, which can lead to better success rates and customer satisfaction.Keywords: agile, hybrid, IT systems, management, success rate, technology
Procedia PDF Downloads 1081560 The Prospect of Income Contingent Loan in Malaysia Higher Education Financing Using Deterministic and Stochastic Methods in Modelling Income
Authors: Syaza Isma, Timothy Higgins
Abstract:
In Malaysia, increased take-up rates of tertiary student borrowing and reliance on retirement savings to fund children's education show the importance of the public higher education financing scheme (PTPTN). PTPTN has been operating for two decades now; however, critical issues and challenges, including low loan recovery and loan default, suggest that a detailed consideration of student loan/financing scheme alternatives is crucial. In addition, the decline in funding level per student following the introduction of the new PTPTN full and partial loan scheme has raised ongoing concerns over the sustainability of the scheme to provide continuous financial assistance to students in tertiary education. This research seeks to assess these issues in an effort to improve efficiency and ensure equitable access to student funding for current and future generations. We explore the extent of repayment hardship under the current loan arrangements that presumably led to low recovery from the borrowers, particularly low-income graduates. The concept of manageable debt exists in the design of income-contingent repayment schemes, as practiced in Australia, New Zealand, the UK, Hungary, the USA (in limited form), the Netherlands, and South Korea. Can income-contingent loans (ICLs) offer best practice for an education financing scheme and address the issue of repayment hardship? Concurrently, can a properly designed ICL scheme provide a solution to the current issues and challenges facing Malaysian student financing? We examine the different potential ICL models using deterministic and stochastic approaches to simulate the incomes of graduates.Keywords: deterministic, income contingent loan, repayment burden, simulation, stochastic
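The stochastic side of such an analysis can be sketched as a Monte Carlo simulation: draw income paths with random annual shocks and apply the income-contingent rule (a fixed percentage of income above a threshold). All parameter values below are illustrative assumptions, not PTPTN figures:

```python
import random

random.seed(1)

def simulate_icl_repayments(start_income=30000, growth=0.04, sigma=0.08,
                            threshold=25000, rate=0.08, years=20, n=1000):
    """Monte Carlo income paths with lognormal annual shocks; repayment each
    year is `rate` times income above `threshold` (zero below it), the core
    income-contingent design. All parameter values here are illustrative."""
    totals = []
    for _ in range(n):
        income, paid = start_income, 0.0
        for _ in range(years):
            income *= (1 + growth) * random.lognormvariate(0, sigma)
            paid += max(0.0, rate * (income - threshold))
        totals.append(paid)
    return totals

totals = simulate_icl_repayments()
mean_repaid = sum(totals) / len(totals)
# Repayments scale with realized income, so low earners repay little in bad
# years -- the manageable-debt property, in contrast with fixed
# mortgage-style dues that ignore the income path entirely.
```

Comparing the distribution of `totals` against a fixed annual repayment schedule is one way to quantify the repayment-hardship gap the abstract discusses; the deterministic approach is the special case with the random shock removed.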
Procedia PDF Downloads 2291559 Responding to the Mental Health Service Needs of Rural-to-Urban Migrant Workers in China: Current Situation and Future Directions
Authors: Yujun Liu, Maosheng Ran
Abstract:
Background: Chinese rural-to-urban migrant workers’ mental health problems have attracted attention from different social sectors. However, the current state of mental health services provided to this population has not been documented. This study attempts to describe the current mental health service situation, identify the gaps, and give future directions based on quantitative data. Methods: Questionnaire surveys were conducted among 2017 rural-to-urban migrant workers in 13 cities and 100 social work service organizations in 5 cities in 2014. Data were collected through face-to-face structured interviews by trained interviewers. Findings: Migrant workers’ mental health status was poor. Compared to the severity of mental distress, mental health services for this population were lacking and insufficient, accounting for only 14.4% of all services in our sample. Group work and casework were the most frequently used methods. By estimating a series of regression models, we revealed that life experiences and working conditions were significantly associated with migrant workers’ mental health status. Therefore, macro social work practices aimed at this whole group are advocated to promote their mental wellbeing. That is, practitioners should not only focus on improving migrant workers’ emotion management capacity but also pay attention to raising awareness and improving their living and working conditions; not only concentrate on solving individuals’ dilemmas but also promote gradual reform of the present labor regime and hukou system in China.Keywords: Chinese rural-to-urban migrant workers, macro social work practice, mental health service needs, mental health status
Procedia PDF Downloads 2811558 Methylene Blue Removal Using NiO nanoparticles-Sand Adsorption Packed Bed
Authors: Nedal N. Marei, Nashaat Nassar
Abstract:
Many treatment techniques have been used to remove soluble pollutants from wastewater, such as dyes and metal ions, which are found in abundance in the used water of the textile and tannery industries. The effluents from these industries are complex, containing a wide variety of dyes and other contaminants, such as dispersants, acids, bases, salts, detergents, humectants, oxidants, and others. These techniques can be divided into physical, chemical, and biological methods. Adsorption has been developed as an efficient method for the removal of heavy metals from contaminated water and soil. It is now recognized as an effective method for the removal of both organic and inorganic pollutants from wastewaters. Nanosize materials are new functional materials, which offer a high surface area and have emerged as effective adsorbents. Nano alumina is one of the most important ceramic materials widely used as an electrical insulator, presenting exceptionally high resistance to chemical agents, as well as giving excellent performance as a catalyst for many chemical reactions, in microelectronics, membrane applications, and water and wastewater treatment. In this study, methylene blue (MB) dye has been used as a model dye of textile wastewater in order to prepare a synthetic MB wastewater. NiO nanoparticles were added in small percentages to the sand packed-bed adsorption columns to remove the MB from the synthetic textile wastewater. Moreover, different parameters have been evaluated: flow rate of the synthetic wastewater, pH, height of the bed, and the percentage of NiO to sand in the packed material. Different mathematical models were employed to find the model that best describes the experimental data and helps to analyze the mechanism of MB adsorption. This study will provide a good understanding of dye adsorption using metal oxide nanoparticles in the classical sand bed.Keywords: adsorption, column, nanoparticles, methylene
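The abstract does not name the mathematical models used; one model commonly fitted to packed-bed breakthrough data is the Thomas model, sketched below with invented parameter values (not fitted to this study's data):

```python
import math

def thomas_breakthrough(t, k_th, q0, m, c0, flow):
    """Thomas model effluent ratio Ct/C0 for a packed-bed column.
    t: time (min), k_th: rate constant (L/mg/min), q0: capacity (mg/g),
    m: adsorbent mass (g), c0: inlet conc. (mg/L), flow: flow rate (L/min)."""
    return 1.0 / (1.0 + math.exp(k_th * q0 * m / flow - k_th * c0 * t))

# Illustrative parameter values only (not fitted to the paper's data):
params = dict(k_th=0.001, q0=50.0, m=10.0, c0=20.0, flow=0.01)
curve = [(t, thomas_breakthrough(t, **params)) for t in range(0, 5001, 250)]
# Ct/C0 rises from ~0 toward 1 as the bed saturates; the 50% breakthrough
# time is where the exponent's argument crosses zero:
t_half = params["q0"] * params["m"] / (params["flow"] * params["c0"])
```

Fitting `k_th` and `q0` to measured Ct/C0 data at each flow rate, pH, bed height, and NiO percentage is how such a model separates the effects of the operating parameters the study varies.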
Procedia PDF Downloads 2691557 Structural Model on Organizational Climate, Leadership Behavior and Organizational Commitment: Work Engagement of Private Secondary School Teachers in Davao City
Authors: Genevaive Melendres
Abstract:
School administrators face the reality of teachers losing their engagement, or schools losing the teachers. This study was thus conducted to identify a structural model that best predicts the work engagement of private secondary teachers in Davao City. Ninety-three teachers from four sectarian schools and 56 teachers from four non-sectarian schools were involved in the completion of four survey instruments, namely the Organizational Climate Questionnaire, Leader Behavior Descriptive Questionnaire, Organizational Commitment Scales, and Utrecht Work Engagement Scales. Data were analyzed using frequency distribution, mean, standard deviation, independent-samples t-test, Pearson's r, stepwise multiple regression analysis, and structural equation modeling. Results show that schools have a high level of organizational climate dimensions; leaders oftentimes show work-oriented and people-oriented behavior; teachers have high normative commitment, and they are very often engaged at their work. Teachers from non-sectarian schools have higher organizational commitment than those from sectarian schools. Organizational climate and leadership behavior are positively related to and predict work engagement, whereas commitment showed no relationship. This study underscores the relative effects of three variables on the work engagement of teachers. After testing a network of relationships and evaluating several models, a best-fitting model was found between leadership behavior and work engagement. The noteworthy findings suggest that principals pay attention to and consistently evaluate their behavior, for this best predicts the work engagement of the teachers. The study provides value to administrators who make decisions and create conditions in which teachers derive fulfillment.Keywords: leadership behavior, organizational climate, organizational commitment, private secondary school teachers, structural model on work engagement
Procedia PDF Downloads 2721556 High Resolution Sandstone Connectivity Modelling: Implications for Outcrop Geological and Its Analog Studies
Authors: Numair Ahmed Siddiqui, Abdul Hadi bin Abd Rahman, Chow Weng Sum, Wan Ismail Wan Yousif, Asif Zameer, Joel Ben-Awal
Abstract:
Advances in data capturing from outcrop studies have made possible the acquisition of high-resolution digital data, offering improved and economical reservoir modelling methods. Terrestrial laser scanning utilizing LiDAR (light detection and ranging) provides a new method to build outcrop-based reservoir models, which provide a crucial piece of information to understand heterogeneities in sandstone facies with high-resolution images and data sets. This study presents the detailed application of an outcrop-based sandstone facies connectivity model by acquiring information gathered from traditional fieldwork and processing detailed digital point-cloud data from LiDAR to develop an intermediate small-scale reservoir sandstone facies model of the Miocene Sandakan Formation, Sabah, East Malaysia. The software RiSCAN PRO (v1.8.0) was used in digital data collection and post-processing with an accuracy of 0.01 m and a point acquisition rate of up to 10,000 points per second. We provide an accurate and descriptive workflow to triangulate point-clouds of different sets of sandstone facies with well-marked top and bottom boundaries in conjunction with field sedimentology. This will provide a highly accurate qualitative sandstone facies connectivity model, which is a challenge to obtain from subsurface datasets (i.e., seismic and well data). Finally, by applying this workflow, we can build an outcrop-based static connectivity model, which can serve as an analogue for subsurface reservoir studies.Keywords: LiDAR, outcrop, high resolution, sandstone facies, connectivity model
Procedia PDF Downloads 2261555 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models
Authors: Ethan James
Abstract:
Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model to analyze these retinal images that provides a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture utilizing residual neural networks with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment, which results in improved post-treatment outcomes.Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina
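The defining computation in the residual networks the abstract names is the skip connection: y = x + F(x). A dependency-free numeric sketch with toy dimensions and untrained random weights (not the paper's architecture, and the 4-dim vector merely stands in for pooled OCT-scan features):

```python
import random

random.seed(0)

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def residual_block(x, W1, W2):
    """y = x + W2 . relu(W1 . x): the skip connection lets signal (and, in
    training, gradients) bypass the transform, which is what makes very
    deep image classifiers trainable."""
    return [a + b for a, b in zip(x, matvec(W2, relu(matvec(W1, x))))]

dim = 4
W1 = [[random.uniform(-0.5, 0.5) for _ in range(dim)] for _ in range(dim)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(dim)] for _ in range(dim)]
x = [1.0, -2.0, 0.5, 3.0]
y = residual_block(x, W1, W2)

# With zero weights the block is exactly the identity -- the "residual" idea:
zeros = [[0.0] * dim for _ in range(dim)]
assert residual_block(x, zeros, zeros) == x
```

Stacking many such blocks (with convolutions in place of the dense matrices, and pooling between stages) yields the kind of classifier the project trains on OCT images.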
Procedia PDF Downloads 1811554 Biomechanical Performance of the Synovial Capsule of the Glenohumeral Joint with a BANKART Lesion through Finite Element Analysis
Authors: Duvert A. Puentes T., Javier A. Maldonado E., Ivan Quintero., Diego F. Villegas
Abstract:
Computational mechanics is a great tool to study the performance of complex models. An example of it is the study of the human body structure. This paper took advantage of different types of software to make a 3D model of the glenohumeral joint and apply a finite element analysis. The main objective was to study the change in the biomechanical properties of the joint when it presents an injury, specifically a BANKART lesion, which consists of the detachment of the anteroinferior labrum from the glenoid. The stress and strain distribution of the soft tissues was the focus of this study. First, a 3D model was made of a joint without any pathology, as a control sample, using segmentation software for the bones with the support of medical imagery and a cadaveric model to represent the soft tissue. The joint was built to simulate a compression and external rotation test, using CAD to prepare the model in the adequate position. When the healthy model was finished, it was submitted to a finite element analysis, and the results were validated with experimental model data. A mesh sensitivity analysis was then performed on the validated model to obtain the best mesh size. Finally, the geometry of the 3D model was changed to imitate a BANKART lesion: the contact zone of the glenoid with the labrum was slightly separated, simulating a tissue detachment. With this new geometry, the finite element analysis was applied again, and the results were compared with the control sample created initially. With the data gathered, this study can be used to improve the understanding of labrum tears. Nevertheless, it is important to remember that computational analyses are approximations and that the initial data were taken from an in vitro assay.Keywords: biomechanics, computational model, finite elements, glenohumeral joint, bankart lesion, labrum
Procedia PDF Downloads 1611553 Remodeling of Gut Microbiome of Pakistani Expats in China After Intermittent Fasting/Ramadan Fasting
Authors: Hafiz Arbab Sakandar
Abstract:
Time-restricted intermittent fasting (TRIF) impacts the host’s physiology and health. Plenty of health benefits have been reported for TRIF in animal models. However, limited studies have been conducted on humans, especially in underdeveloped economies. Here, we designed a study to investigate the impact of TRIF/Ramadan fasting (16:8) on the modulation of gut-microbiome structure, metabolic pathways, and predicted metabolites, and explored the correlation among them at different time points (during and after the month of Ramadan) in Pakistani expats living in China. We observed different trends of the Shannon-Wiener index in different subjects; however, all subjects showed substantial change in bacterial diversity with the progression of TRIF. Moreover, the changes in gut microbial structure by the end of TRIF were greater than at the beginning, and a significant difference was observed among individuals. Additionally, metabolic pathway analysis revealed that amino acid, carbohydrate and energy metabolism, glycan biosynthesis, and metabolism of cofactors and vitamins were significantly affected by TRIF. Pyridoxamine, glutamate, citrulline, arachidonic acid, and short-chain fatty acids showed substantial differences at different time points based on the predicted metabolites. In conclusion, these results contribute to furthering our understanding of the key relationship among dietary intervention (TRIF), gut microbiome structure, and function. The preliminary results from this study demonstrate significant potential for elucidating the mechanisms underlying gut microbiome stability and enhancing the effectiveness of microbiome-tailored interventions among the Pakistani populace. Nonetheless, extensive and rigorous large-scale research on the Pakistani population is necessary to expound on the association between diet, gut microbiome, and overall health.Keywords: gut microbiome, health, fasting, functionality
Procedia PDF Downloads 741552 Electronic Commerce in Georgia: Problems and Development Perspectives
Authors: Nika GorgoShadze, Anri Shainidze, Bachuki Katamadze
Abstract:
In parallel with the development of the digital economy in the world, electronic commerce is also developing widely. The internet and ICT (information and communication technology) have created new business models as well as promoted market consolidation, sustainability of the business environment, creation of the digital economy, facilitation of business and trade, business dynamism, higher competitiveness, etc. Electronic commerce relies on internet technology, with goods and services sold via the internet. Nowadays, electronic commerce is a field of business which is used by leading world brands very effectively. After researching the internet market in Georgia, it was found that internet quality is high in Tbilisi and low in the regions. The internet market of Tbilisi can be characterized as a high-speed, competitive, and cost-effective internet market. The development of electronic commerce in Georgia faces organizational and methodological as well as legal problems. First of all, a legal framework should be developed which will regulate the responsibilities of organizations. The Ministry of Economy and Sustainable Development will play a crucial role in creating this legal framework. The Ministry of Justice will also be involved in this process, as well as the agency for data exchange. Measures should be taken in order to make electronic commerce in Georgia easier. Business companies may be offered a model to obtain low-cost and comprehensive service. A service centre should be created which will provide all kinds of online shopping. This will be a rather interesting innovation which will facilitate online shopping in Georgia. The development of electronic business in Georgia requires a modernized telecommunications infrastructure (especially in the regions) as well as the solution of institutional and socio-economic problems.
Issues concerning internet availability and computer skills are also important.Keywords: electronic commerce, internet market, electronic business, information technology, information society, electronic systems
Procedia PDF Downloads 3841551 An Integrated Label Propagation Network for Structural Condition Assessment
Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong
Abstract:
Deep-learning-driven approaches based on vibration responses have attracted growing attention in rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is costly and often inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which is able to diffuse labels from continuously generated measurements of the intact structure to damage scenarios with missing labels. The integrated network embeds damage-sensitive feature extraction by a deep autoencoder and pseudo-label propagation by optimized fuzzy clustering; the architecture and mechanism of both are elaborated. With a sophisticated network design and specified strategies for improving performance, the present network extends the strengths of self-supervised representation learning, unsupervised fuzzy clustering, and supervised classification algorithms into an integrated framework for assessing damage conditions. Both numerical simulations and full-scale laboratory shaking table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in numerical validations and 0.86 on average in laboratory case studies. It should be noted that the whole training procedure of all models involved in the network does not rely upon any labeled data from damage scenarios but only on several samples of the intact structure, which indicates a significant superiority in model adaptability and feasible applicability in practice.Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation
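The core idea, diffusing labels from a few labeled samples to many unlabeled ones over a similarity structure, can be illustrated with plain graph label propagation; the paper's version adds autoencoder features and fuzzy clustering, which this sketch omits. The graph, seeds, and class names below are all invented:

```python
def propagate_labels(adj, labels, alpha=0.8, iters=100):
    """Iterative label propagation on a graph: each node's class scores mix
    its neighbors' average scores with its own seed label (clamped for
    labeled nodes). `labels[i]` is a class index for seeds, None otherwise."""
    n = len(adj)
    k = max(l for l in labels if l is not None) + 1
    F = [[1.0 if labels[i] == c else 0.0 for c in range(k)] for i in range(n)]
    for _ in range(iters):
        new = []
        for i in range(n):
            nbrs = adj[i]
            avg = [sum(F[j][c] for j in nbrs) / len(nbrs) for c in range(k)]
            if labels[i] is None:
                new.append([alpha * a for a in avg])
            else:
                seed = [1.0 if labels[i] == c else 0.0 for c in range(k)]
                new.append([alpha * a + (1 - alpha) * s for a, s in zip(avg, seed)])
        F = new
    return [max(range(k), key=lambda c: F[i][c]) for i in range(n)]

# Toy graph: two clusters bridged by one edge, one seed label per cluster
# (node 0 plays the "intact" class 0, node 5 the "damaged" class 1).
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
labels = [0, None, None, None, None, 1]
pred = propagate_labels(adj, labels)
```

The unlabeled nodes inherit the label of the nearer seed, which is the behavior the integrated network exploits when only intact-structure measurements carry labels.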
Procedia PDF Downloads 95