Search results for: wind power density
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10042


772 The Effect of Empathy Training Given to Midwives on Mothers’ Satisfaction with Midwives and Their Birth Perception

Authors: Songul Aktas, Turkan Pasinlioglu, Kiymet Yesilcicek Calik

Abstract:

Introduction: An empathic approach during labor increases both the quality of care and mothers’ satisfaction with birth. Moreover, mothers’ statements of satisfaction with the midwives who assist labor contribute to a positive birth perception and to the wish to give birth vaginally again. Aim: The study investigated the effect of empathy training given to midwives on mothers’ satisfaction with midwives and on their birth perception. Material/Method: This experimental study was undertaken between February 2013 and January 2014 at a public hospital in Trabzon Province. The population of the study comprised mothers who gave birth vaginally, and the sample comprised 222 mothers determined by power analysis. Ethical approval and written informed consent were obtained. Mothers who were assisted by midwives during the 1st, 2nd and 3rd phases of delivery and the first two postpartum hours were included. The empathy training given to midwives included didactic narration, creative drama and psychodrama techniques and lasted 32 hours. The data were collected before the empathy training (BET), right after the empathy training (RAET) and 8 weeks after birth (8WLAB). Mothers were homogeneous in terms of socio-demographic and obstetric characteristics. Data were collected with a questionnaire and analyzed with chi-square tests. Findings: The rate of mothers’ satisfaction with midwives was 36.5% in BET, 81.1% in RAET and 75.7% in 8WLAB. Key statements of mothers’ satisfaction with midwives were as follows: 27.6% of mothers said that midwives were “smiling-kind” in BET, 39.6% in RAET and 33.7% in 8WLAB; 31% said that midwives were “understanding” in BET, 38.2% in RAET and 33.7% in 8WLAB; 15.7% said that midwives were “reassuring” in BET, 44.9% in RAET and 39.3% in 8WLAB; 19.5% said that midwives were “encouraging and motivating” in BET and 39.8% in RAET; and 19.8% said that midwives were “informative” in BET, 45.6% in RAET and 35.1% in 8WLAB (p<0.05). Key statements of mothers’ dissatisfaction with midwives were as follows: 55.3% of mothers said that midwives were “poorly informed” in BET, 17% in RAET and 27.7% in 8WLAB; 56.9% said that midwives “listened poorly” in BET, 17.6% in RAET and 25.5% in 8WLAB; 53.2% said that midwives were “judgmental-embarrassing” in BET, 17% in RAET and 29.8% in 8WLAB; and 56.2% said that midwives had “fierce facial expressions” in BET, 15.6% in RAET and 28.1% in 8WLAB. The rates of mothers perceiving labor as “easy” were 8.1% in BET, 21.6% in RAET and 13.5% in 8WLAB, and the rates of mothers perceiving labor as “very difficult and tiring” were 41.9% in BET, 5.4% in RAET and 13.5% in 8WLAB (p<0.05). Conclusion: Empathy training given to midwives had a positive effect on the statements describing mothers’ satisfaction with midwives and on their birth perception. Note: This study was financially funded by TUBİTAK project number 113S672.

Keywords: empathy training, labor perception, mother’s satisfaction with midwife, vaginal delivery

Procedia PDF Downloads 373
771 Advancements in Mathematical Modeling and Optimization for Control, Signal Processing, and Energy Systems

Authors: Zahid Ullah, Atlas Khan

Abstract:

This abstract focuses on advancements in mathematical modeling and optimization techniques that enhance the efficiency, reliability, and performance of control, signal processing, and energy systems. In this era of rapidly evolving technology, mathematical modeling and optimization offer powerful tools for tackling the complex challenges these systems face. The abstract presents the latest research and developments in mathematical methodologies, encompassing control theory, system identification, signal processing algorithms, and energy optimization, and highlights the interdisciplinary nature of these techniques, showcasing their applications in a wide range of domains, including power systems, communication networks, industrial automation, and renewable energy. It explores key mathematical techniques, such as linear and nonlinear programming, convex optimization, stochastic modeling, and numerical algorithms, that enable the design, analysis, and optimization of complex control and signal processing systems. Furthermore, it emphasizes addressing real-world challenges through innovative mathematical approaches, discussing the integration of mathematical models with data-driven approaches, machine learning, and artificial intelligence to enhance system performance, adaptability, and decision-making capabilities. The abstract also underscores the significance of bridging the gap between theoretical advancements and practical applications, recognizing the need for practical implementation of mathematical models and optimization algorithms in real-world systems with attention to scalability, computational efficiency, and robustness. In summary, this abstract showcases advancements in mathematical modeling and optimization for control, signal processing, and energy systems, their applications across various domains, and their potential to address real-world challenges, emphasizing practical implementation and integration with emerging technologies to drive innovation and improve system performance.
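Among the techniques named above, convex optimization is the most self-contained to illustrate. The sketch below is not from the abstract; the problem data and step size are arbitrary. It minimizes a least-squares objective by gradient descent and checks the result against the closed-form optimum:

```python
import numpy as np

# Toy convex optimization: minimize ||Ax - b||^2 by gradient descent.
# A and b are synthetic; the step size comes from the largest eigenvalue of A^T A.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)

L = np.linalg.eigvalsh(A.T @ A).max()      # Lipschitz constant scale of the gradient
x = np.zeros(3)
for _ in range(500):
    grad = 2 * A.T @ (A @ x - b)           # gradient of the quadratic objective
    x -= grad / (2 * L)                    # guaranteed-descent step size

x_star = np.linalg.lstsq(A, b, rcond=None)[0]  # closed-form least-squares optimum
print(np.allclose(x, x_star, atol=1e-6))
```

The step size 1/(2L) guarantees descent for this quadratic objective; real control or energy-system problems add constraints and would typically use a projected-gradient or interior-point method instead.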

Keywords: mathematical modeling, optimization, control systems, signal processing, energy systems, interdisciplinary applications, system identification, numerical algorithms

Procedia PDF Downloads 113
770 Effect of Phytohormones on the Development and Nutraceutical Characteristics of the Fruit Capsicum annuum

Authors: Rossy G. Olan Villegas, Gerardo Acosta Garcia, Aurea Bernardino Nicanor, Leopoldo Gonzalez Cruz, Humberto Ramirez Medina

Abstract:

Capsicum annuum is a crop of agricultural and economic importance in Mexico and other countries. The fruit (pepper) contains bioactive components such as carotenoids, phenolic compounds and capsaicinoids that improve health. However, pepper cultivation is affected by biotic and abiotic factors that decrease yield. Some phytohormones, such as gibberellins and auxins, induce the formation and development of fruit in several plants. In this study, we evaluated the effect of exogenous application of the phytohormones gibberellic acid (GA3) and indolebutyric acid on fruit development of jalapeno pepper plants, the protein profile of plant tissues, and the accumulation of bioactive compounds and antioxidant activity in the pericarp and seeds. For this, plants were sprayed with these phytohormones. The fruit yield for the control, indolebutyric acid and gibberellic acid treatments was 7 peppers per plant; however, for the treatment combining indolebutyric acid and gibberellic acid, the fruits collected had the shortest length (1.52 ± 1.00 cm) and lowest weight (0.41 ± 1.0 g) compared to fruits of plants grown under the other treatments. The length (4.179 ± 0.130 cm) and weight (8.949 ± 0.583 g) of the fruit increased in plants treated with indolebutyric acid, but these characteristics decreased with the application of GA3 (length of 3.349 ± 0.127 cm and weight of 4.429 ± 0.144 g). The content of carotenes and phenolic compounds increased in plants treated with GA3 (1.733 ± 0.092 and 1.449 ± 0.009 mg/g, respectively) or indolebutyric acid (1.164 ± 0.042 and 0.970 ± 0.003 mg/g). However, this effect was not observed in plants treated with both phytohormones (0.238 ± 0.021 and 0.218 ± 0.004 mg/g). Capsaicin content was higher in all treatments, but the increase was most noticeable in plants treated with both phytohormones, reaching 0.913 ± 0.001 mg/g (a threefold increase). Antioxidant activity was measured by three different assays, 2,2-diphenyl-1-picrylhydrazyl (DPPH), ferric reducing antioxidant power (FRAP) and 2,2'-azinobis-3-ethylbenzothiazoline-6-sulfonic acid (ABTS), to determine the half-maximal inhibitory and effective concentrations (IC50 and EC50). Significant differences were observed upon application of the phytohormones, with fruits treated with gibberellins showing the greatest accumulation of bioactive compounds. Our results suggest that the application of phytohormones modifies fruit development and the fruit's content of bioactive compounds.
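The IC50 values mentioned above are typically read off a dose-inhibition curve. A minimal way to estimate one is linear interpolation between the two concentrations that bracket 50% inhibition; the data points below are hypothetical, not from this study:

```python
# Estimate IC50 by linear interpolation between the two concentrations
# bracketing 50% inhibition.  Data points are hypothetical.
def ic50(concs, inhibitions):
    pairs = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical DPPH assay: % inhibition at increasing extract concentrations (mg/mL)
print(ic50([0.1, 0.2, 0.4, 0.8], [18.0, 35.0, 62.0, 85.0]))  # ≈ 0.311 mg/mL
```

In practice IC50 is usually fitted with a four-parameter logistic curve rather than piecewise interpolation, but the bracketing idea is the same.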

Keywords: auxins, capsaicinoids, carotenoids, gibberellins

Procedia PDF Downloads 116
769 Inverse Saturable Absorption in Non-linear Amplifying Loop Mirror Mode-Locked Fiber Laser

Authors: Haobin Zheng, Xiang Zhang, Yong Shen, Hongxin Zou

Abstract:

The research focuses on mode-locked fiber lasers with a non-linear amplifying loop mirror (NALM). Although these lasers have shown potential, they are still limited to low repetition rates. The self-starting of mode-locking in NALM is influenced by the cross-phase modulation (XPM) effect, which has not been thoroughly studied. The aim of this study is twofold: first, to overcome the difficulties associated with increasing the repetition rate in mode-locked fiber lasers with NALM; second, to analyze the influence of XPM on the self-starting of mode-locking. The power distributions of the two counterpropagating beams in the NALM and the accumulated differential non-linear phase shifts (NPS) are calculated, and the analysis is conducted from the perspective of NPS accumulation. The differential NPSs for continuous-wave (CW) light and for pulses in the fiber loop are compared to understand the inverse saturable absorption (ISA) mechanism during pulse formation in NALM. The study reveals a difference between the differential NPSs of CW light and of pulses in the fiber loop. This difference leads to an ISA mechanism that has not been extensively studied in artificial saturable absorbers. The ISA in NALM explains experimentally observed phenomena such as initiating mode-locking by tapping the fiber or fine-tuning the light polarization. These findings have important implications for optimizing the design of NALM and reducing the self-starting threshold of high-repetition-rate mode-locked fiber lasers. This study contributes to the theoretical understanding of NALM mode-locked fiber lasers by exploring the ISA mechanism and its impact on the self-starting of mode-locking, filling a gap in the existing knowledge about the XPM effect in NALM and its role in pulse formation. The findings contribute to the optimization of NALM design and the reduction of the self-starting threshold, which are essential for achieving high-repetition-rate operation in fiber lasers. Further research in this area can lead to advancements in the field of mode-locked fiber lasers with NALM.
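The differential NPS argument can be sketched numerically. Assuming the usual first-order picture in which one beam traverses the loop at the amplified power G·P and the other at P, the differential NPS is Δφ = γ·L·P·(G − 1); all parameter values below are illustrative, not from the paper:

```python
# Sketch of the differential nonlinear phase shift (NPS) in a NALM.
# One beam is amplified at the loop entrance (power G*P over length L),
# the other at the exit (power P over length L), so the differential NPS
# is dphi = gamma * L * P * (G - 1).  Values are illustrative only.
gamma = 3e-3     # fiber nonlinear coefficient, 1/(W*m)
L = 10.0         # loop length, m
G = 4.0          # amplifier gain (linear)

def diff_nps(power_w):
    """Differential nonlinear phase shift (rad) at instantaneous power P."""
    return gamma * L * power_w * (G - 1)

p_avg = 0.01                 # 10 mW average power, i.e. the CW case
p_peak = p_avg * 1000        # pulse peak power at a 0.1% duty cycle
print(diff_nps(p_avg), diff_nps(p_peak))
```

A pulse accumulates a far larger differential NPS than CW light at the same average power, which is the usual artificial-saturable-absorber action; the study's point is that the detailed power distribution in the loop can modify this behavior into inverse saturable absorption.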

Keywords: inverse saturable absorption, NALM, mode-locking, non-linear phase shift

Procedia PDF Downloads 101
768 Development of a PJWF Cleaning Method for Wet Electrostatic Precipitators

Authors: Hsueh-Hsing Lu, Thi-Cuc Le, Tung-Sheng Tsai, Chuen-Jinn Tsai

Abstract:

This study designed and tested a novel wet electrostatic precipitator (WEP) system featuring a pulse-air-jet-assisted water flow (PJWF) to shorten water cleaning time, reduce water usage, and maintain high particle removal efficiency. The PJWF injects cleaning water tangentially at the cylinder wall, rapidly enhancing the momentum of the water flow for efficient dust cake removal. Each PJWF cycle uses approximately 4.8 liters of cleaning water in 18 seconds. Comprehensive laboratory tests were conducted using a single-tube WEP prototype within a flow rate range of 3.0 to 6.0 cubic meters per minute (CMM), operating voltages between -35 and -55 kV, and a high-frequency power supply. The prototype, consisting of 72 sets of double-spike rigid discharge electrodes, demonstrated that with the PJWF, -35 kV, and 3.0 CMM, the PM2.5 collection efficiency remained as high as the initial value of 88.02 ± 0.92% after loading with Al2O3 particles at 35.75 ± 2.54 mg/Nm3 for 20 hours of continuous operation. In contrast, without the PJWF, the PM2.5 collection efficiency dropped drastically from 87.4% to 53.5%. Theoretical modeling closely matched experimental results, confirming the robustness of the system's design and its scalability for larger industrial applications. Future research will focus on optimizing the PJWF system, exploring its performance with various particulate matter, and ensuring long-term operational stability and reliability under diverse environmental conditions. Recently, this WEP was combined with a preceding cooling tower (CT) and a honeycomb wet scrubber (HWS) and pilot-tested (40 CMM) to remove SO2 and PM2.5 emissions in a sintering plant of an integrated steel mill. Pilot-test results showed removal efficiencies for SO2 and PM2.5 as high as 99.7% and 99.3%, respectively, with ultralow emitted concentrations of 0.3 ppm and 0.07 mg/m3, respectively, while white smoke was eliminated at the same time. These technologies are already in industrial use, and their application is expected to expand to other fields to substantially reduce air pollutant emissions for better ambient air quality.
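For context, a standard first-order estimate of electrostatic precipitator collection efficiency (a textbook model, not the authors' model) is the Deutsch-Anderson equation; the migration velocity and collection area below are illustrative values, not measurements from the prototype:

```python
import math

# Deutsch-Anderson estimate of ESP collection efficiency:
#   eta = 1 - exp(-w * A / Q)
# w: effective migration velocity (m/s), A: collection area (m^2),
# Q: gas flow rate (m^3/s).  Values are illustrative only.
def deutsch_efficiency(w, area, flow):
    return 1.0 - math.exp(-w * area / flow)

Q = 3.0 / 60.0                              # 3 CMM converted to m^3/s
eta = deutsch_efficiency(w=0.07, area=1.5, flow=Q)
print(f"{eta:.3%}")
```

The exponential form shows why a degrading collection surface (e.g., a growing dust cake that lowers the effective migration velocity) causes the steep efficiency drop seen without cleaning.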

Keywords: wet electrostatic precipitator, pulse-air-jet-assisted water flow, particle removal efficiency, air pollution control

Procedia PDF Downloads 25
767 Evaluating the Performance of Passive Direct Methanol Fuel Cell under Varying Operating and Structural Conditions

Authors: Rahul Saraswat

Abstract:

More recently, focus has been given to replacing machined stainless steel metal flow fields with inexpensive wire mesh current collectors. The flow fields are based on simple woven wire mesh screens of various stainless steels, which are sandwiched between thin metal plates of the same material to create a bipolar plate/flow field configuration for use in a stack. Major advantages of using stainless steel wire screens include the elimination of expensive raw materials as well as machining and other special fabrication costs. The objective of the project is to improve the performance of the passive direct methanol fuel cell (DMFC) without increasing the cost of the cell and to make it as compact and light as possible. The literature survey showed that very little has been done in this direction, so the following methodology was used: (1) the passive DMFC can be made more compact, lighter, and less costly by changing the material used in its construction; (2) controlling the fuel diffusion rate through the cell improves its performance. A passive liquid-feed DMFC was fabricated using a given membrane electrode assembly (MEA) and tested with different current collector structures. Mesh current collectors of different mesh densities along with different support structures were used, and the performance was found to be better. Methanol concentration was also varied, and the mesh size, support structure, and fuel concentration were optimized. A cost analysis was also performed. From the performance analysis of the DMFC, we conclude the following: (1) the area-specific resistance (ASR) of wire mesh current collectors is lower than that of stainless steel current collectors, and the power produced by wire mesh current collectors is always higher than that produced by stainless steel current collectors; (2) low or moderate methanol concentrations should be used for better and more stable DMFC performance; (3) wire mesh is a good substitute for stainless steel in the current collector plates of a passive DMFC because of its lower cost (by about 27%), flexibility, and light weight.
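The link between ASR and output power can be illustrated with a simple linear polarization model, V(j) = OCV − j·ASR, which ignores activation and mass-transport losses; the OCV and ASR values below are hypothetical, not measured in this work:

```python
# Simple linear polarization model for comparing current collectors:
#   V(j) = OCV - j * ASR,  P(j) = V(j) * j
# Activation and mass-transport losses are ignored; numbers are hypothetical.
def peak_power_density(ocv, asr):
    # P(j) = ocv*j - asr*j^2 peaks at j = ocv / (2*asr), giving ocv^2 / (4*asr)
    j = ocv / (2 * asr)
    return (ocv - j * asr) * j

ocv = 0.6                                    # open-circuit voltage, V
p_mesh = peak_power_density(ocv, asr=2.0)    # assumed wire-mesh ASR, ohm*cm^2
p_ss = peak_power_density(ocv, asr=3.0)      # assumed stainless-plate ASR, ohm*cm^2
print(p_mesh, p_ss)                          # peak power densities, W/cm^2
```

In this model peak power density scales as 1/ASR, consistent with the qualitative finding that the lower-ASR wire mesh collectors always produce more power.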

Keywords: direct methanol fuel cell, membrane electrode assembly, mesh, mesh size, methanol concentration, support structure

Procedia PDF Downloads 80
766 Assessing Circularity Potentials and Customer Education to Drive Ecologically and Economically Effective Materials Design for Circular Economy - A Case Study

Authors: Mateusz Wielopolski, Asia Guerreschi

Abstract:

Circular economy, as the counterpoint to the ‘make-take-dispose’ linear model, is an approach that includes a variety of schools of thought looking at environmental, economic, and social sustainability. This, in turn, leads to a variety of strategies and often to confusion when it comes to choosing the right one to make a circular transition as effective as possible. Because circular product design, business model, and social responsibility are closely interlinked, companies often struggle to develop strategies that comply with all three triple-bottom-line criteria. Hence, to transition to circularity effectively, product design approaches must become more inclusive. In a case study conducted with the University of Bayreuth and ISPO, we correlated aspects of material choice in product design, labeling, and technological innovation with customer preferences and with education about specific material and technology features. The study revealed which attributes of consumers’ environmental awareness directly translate into an increased willingness to purchase, primarily connected with individual preferences regarding sports activity and technical knowledge. Based on this outcome, we formulated a product development approach that incorporates consumers’ individual preferences for sustainable product features as well as their awareness of materials and technology. It allows targeted customer education campaigns to be deployed to raise the willingness to pay for sustainability. Next, we implemented the customer preference and education analysis in a circularity assessment tool that takes into account inherent company assets as well as subjective parameters such as customer awareness. The outcome is a detailed but not cumbersome scoring system, which provides guidance on material and technology choices for circular product design while considering the business model and the communication strategy toward attentive customers. By including customer knowledge and complying with the corresponding labels, companies can develop more effective circular design strategies while simultaneously increasing customers’ trust and loyalty.
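A scoring system of the kind described could be sketched as a weighted average of normalized criteria. The criteria names and weights below are invented for illustration; the paper's actual scoring system is not reproduced here:

```python
# Hedged sketch of a circularity scoring tool: a weighted average of
# criteria normalized to [0, 1].  Criteria names and weights are invented.
def circularity_score(scores, weights):
    assert scores.keys() == weights.keys()
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

scores = {
    "material_recyclability": 0.8,
    "labeling_compliance":    0.6,
    "customer_awareness":     0.4,   # subjective parameter, e.g. from surveys
    "business_model_fit":     0.7,
}
weights = {
    "material_recyclability": 3,
    "labeling_compliance":    1,
    "customer_awareness":     2,
    "business_model_fit":     2,
}
print(round(circularity_score(scores, weights), 3))
```

Keeping the weights explicit is what makes such a system "detailed but not cumbersome": a company can re-weight criteria to match its own assets and communication strategy without changing the scoring logic.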

Keywords: circularity, sustainability, product design, material choice, education, awareness, willingness to pay

Procedia PDF Downloads 201
765 Experimental Analysis on Heat Transfer Enhancement in Double Pipe Heat Exchanger Using Al2O3/Water Nanofluid and Baffled Twisted Tape Inserts

Authors: Ratheesh Radhakrishnan, P. C. Sreekumar, K. Krishnamoorthy

Abstract:

Heat transfer augmentation techniques ultimately result in the reduction of thermal resistance in a conventional heat exchanger by generating a higher convective heat transfer coefficient. They also result in reduced size, increased heat duty, a decreased approach temperature difference, and reduced pumping power requirements for heat exchangers. The present study deals with a compound augmentation technique, which is not widely used: the combined use of an alumina (Al2O3)/water nanofluid and baffled twisted tape inserts in a double pipe heat exchanger. Experiments were conducted to evaluate the heat transfer coefficient and friction factor for flow through the inner tube of the heat exchanger in the turbulent flow range (Re > 8000).

Keywords: enhancement, heat transfer coefficient, friction factor, twisted tape, nanofluid

Procedia PDF Downloads 350
764 Treating Complex Pain and Addictions with Bioelectrode Therapy: An Acupuncture Point Stimulus Method for Relieving Human Suffering

Authors: Les Moncrieff

Abstract:

In a world awash with potent opioids fueling an international crisis, the need to explore safe alternatives has never been more urgent. Bioelectrode therapy is a novel adjunctive treatment method for relieving acute opioid withdrawal symptoms and many types of complex acute and chronic pain (often the underlying cause of opioid dependence). By combining the science of developmental bioelectricity with Traditional Chinese Medicine’s theory of meridians, rapid relief from pain is routinely achieved in the clinical setting. Human body functions depend on electrical factors, and acupuncture points on the body are known to have higher electrical conductivity than the surrounding skin tissue. When tiny gold- and silver-plated electrodes are secured to the skin at specific acupuncture points using established Chinese medicine principles and protocols, an enhanced microcurrent and electrical field are created between the electrodes, influencing the entire meridian and connecting meridians. No external power source or electrical devices are required. Endogenous DC electric fields are an essential component of development, regeneration, and wound healing. Disruptions of the normal ion charge in the meridians and circulation of blood manifest as pain and the development of disease. With the application of these simple electrodes (gold acting as cathode and silver as anode) according to the protocols, the resulting microcurrent is directed along the selected meridians to target injured or diseased organs and tissues. When injured or diseased cells are stimulated by the microcurrent and electrical fields, the permeability of the cell membrane is affected, resulting in immediate relief of pain, a rapid balancing of positive and negative ions (sodium, potassium, etc.) in the cells, restoration of intracellular fluid levels, replenishment of electrolyte levels, pH balance, removal of toxins, and re-establishment of homeostasis.

Keywords: bioelectricity, electrodes, electrical fields, acupuncture meridians, complex pain, opioid withdrawal management

Procedia PDF Downloads 82
763 Polysaccharide Polyelectrolyte Complexation: An Engineering Strategy for the Development of Commercially Viable Sustainable Materials

Authors: Jeffrey M. Catchmark, Parisa Nazema, Caini Chen, Wei-Shu Lin

Abstract:

Sustainable and environmentally compatible materials are needed for a wide variety of high-volume commercial applications. Current synthetic materials such as plastics, fluorochemicals (such as PFAS), adhesives, and resins, in the form of sheets, laminates, coatings, foams, fibers, molded parts, and composites, are used for countless products in packaging, food handling, textiles, biomedical, construction, automotive, and general consumer devices. Synthetic materials offer distinct performance advantages, including stability, durability, and low cost. These attributes are associated with physical and chemical properties that make the formed materials resistant to water, oils, solvents, harsh chemicals, salt, temperature, impact, wear, and microbial degradation. These advantages become disadvantages at the end of life of these products, which generate significant land and water pollution when disposed of; few are recycled. Agriculturally and biologically derived polymers offer the potential of remediating these environmental and life-cycle difficulties but face numerous challenges, including feedstock supply, scalability, performance, and cost. Such polymers include microbial biopolymers like polyhydroxyalkanoates and polyhydroxybutyrate; polymers produced by biomonomer chemical synthesis, like polylactic acid; proteins like soy, collagen, and casein; lipids like waxes; and polysaccharides like cellulose and starch. Although these materials, and combinations thereof, exhibit the potential to meet some of the performance needs of various commercial applications, only cellulose and starch have both the production feedstock volume and the cost to compete with petroleum-derived materials. Over 430 million tons of plastic are produced each year, and plastics like low-density polyethylene cost ~$1500 to $1800 per ton. Over 400 million tons of cellulose and over 100 million tons of starch are produced each year at a volume cost as low as ~$500 to $1000 per ton, with the capability of increased production. Celluloses and starches, however, are hygroscopic materials that do not exhibit the needed performance in most applications. They can be chemically modified to carry positive and negative surface charges, and such modified versions are used in papermaking, foods, and cosmetics. Although these modified polysaccharides exhibit the same performance limitations, recent research has shown that composite materials comprised of cationic and anionic polysaccharides in polyelectrolyte complexation exhibit significantly improved performance, including stability in diverse environments. Moreover, starches with added plasticizers can exhibit thermoplasticity, presenting the possibility of improved thermoplastic starches comprised of starches in polyelectrolyte complexation. In this work, the potential for numerous volume commercial products based on polysaccharide polyelectrolyte complexes (PPCs) will be discussed, including the engineering design strategy used to develop them. Research results will be detailed, including the development and demonstration of starch PPC compositions for paper coatings to replace PFAS; adhesives; foams for packaging, insulation, and biomedical applications; and thermoplastic starches. In addition, efforts to demonstrate the potential for volume manufacturing with industrial partners will be discussed.

Keywords: biomaterials engineering, commercial materials, polysaccharides, sustainable materials

Procedia PDF Downloads 18
762 Experience Marketing and Behavioral Intentions: An Exploratory Study Applied to Middle-Aged and Senior Pickleball Participants in Taiwan

Authors: Yi Yau, Chia-Huei Hsiao

Abstract:

An aging society is already a global problem, and Taiwan will enter a super-aged society in 2025. How to improve the health of the elderly and reduce the government's social burden is therefore an important current issue. Exercise is the best medicine and a healthy part of daily life. Facing the future super-aged society, it is necessary to attract older people to participate in sports voluntarily through sports promotion so that they can live healthy and independent lives and continue to participate in society, enhancing the well-being of the elderly. Experiential marketing and sports participation are closely related; in the past, experiential marketing mainly addressed consumer behavior at the commercial level, and few studies have focused on participant behavior or on middle-aged and elderly people. Therefore, this study takes the newly emerged sport of pickleball, which has been loved by silver-haired people in recent years, as the research sport. Using a questionnaire survey with purposive sampling, it aims to understand middle-aged and elderly people's experience of pickleball and their behavior patterns, to explore the relationship between experiential marketing and participants' behavioral intentions, and to predict which aspects of experiential marketing affect those intentions. The findings showed that experiential marketing is highly positively correlated with behavioral intentions and has positive predictive power for them; among its dimensions, "ACT" and "SENSE" are variables that effectively predict behavioral intentions. This study demonstrates the feasibility of pickleball as a sport for middle-aged and senior participants. It is recommended that future curriculum planning simplify the exercise steps, increase the chances of contact with the ball, and enhance the sensory experience, so as to build a sense of success during exercise, generate exercise motivation, and ultimately change exercise habits and promote health.

Keywords: newly emerged sports, middle age and elderly, health promotion, ACT, SENSE

Procedia PDF Downloads 157
761 A Low-Cost Long-Range 60 GHz Backhaul Wireless Communication System

Authors: Atabak Rashidian

Abstract:

In duplex backhaul wireless communication systems, two separate transmit and receive high-gain antennas are required if an antenna switch is not implemented. Although the switch loss, which is considerable and on the order of 1.5 dB at 60 GHz, is avoided, the large separate antenna systems make the design bulky and not cost-effective. To avoid two large reflectors in such a system, transmit and receive antenna feeds with a common phase center are required. The phase center should coincide with the focal point of the reflector to maximize efficiency and gain. In this work, we present an ultra-compact design in which stacked patch antennas are used as the feeds for a 12-inch reflector. The transmit antenna is a 1 × 2 array, and the receive antenna is a single element located in the middle of the transmit antenna elements. The antenna elements are designed as stacked patches to provide the required impedance bandwidth for four standard channels of WiGig applications. The design includes three metallic layers and three dielectric layers, in which the top dielectric layer is a 100 µm-thick protective layer. The top two metallic layers carry the main and parasitic patches. The bottom layer is a ground plane with two circular openings (0.7 mm in diameter), each with a central through-via that connects the antennas to a single input/output SiGe BiCMOS transceiver chip. The reflection coefficient of the stacked patch antenna is fully investigated; the -10 dB impedance bandwidth is about 11%. Although the gap between the transmit and receive antennas is very small (g = 0.525 mm), the mutual coupling is less than -12 dB over the desired frequency band. The three-dimensional radiation patterns of the transmit and receive reflector antennas at 60 GHz are investigated over the impedance bandwidth, and about 39 dBi of realized gain is achieved. Considering over 15 dBm of output power from the silicon chip on the transmit side, the EIRP should be over 54 dBm, which is good enough for multi-Gbps data communication over more than one kilometer. Across the bandwidth, the peak gain of the reflector antenna is 39 dBi with the 2-element feed and 40 dBi with the single-element feed. This type of system design is cost-effective and efficient.
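The EIRP figure follows directly from adding powers and gains in the dB domain, and a Friis-style budget shows why one kilometer is feasible. The oxygen-absorption allowance below is an assumed illustrative value (60 GHz sits in the O2 absorption band), not a number from the abstract:

```python
import math

# dB-domain link budget for the 60 GHz link.  EIRP = Pt + Gt uses the
# abstract's figures (15 dBm + 39 dBi); distance and O2 loss are assumptions.
def fspl_db(freq_ghz, dist_km):
    # Free-space path loss: 92.45 + 20*log10(f_GHz) + 20*log10(d_km)
    return 92.45 + 20 * math.log10(freq_ghz) + 20 * math.log10(dist_km)

pt_dbm, gt_dbi, gr_dbi = 15.0, 39.0, 39.0
eirp = pt_dbm + gt_dbi                  # 54 dBm, matching the abstract
loss = fspl_db(60.0, 1.0)               # ~128 dB free-space loss over 1 km
o2_db = 15.0                            # assumed O2 absorption allowance, dB
pr = eirp + gr_dbi - loss - o2_db       # received power at the far reflector
print(f"EIRP = {eirp} dBm, Pr = {pr:.1f} dBm")
```

Even with the absorption allowance, the receive-side 39 dBi reflector leaves a received power around -50 dBm, a comfortable level for a multi-Gbps millimeter-wave receiver under these assumptions.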

Keywords: antenna, integrated circuit, millimeter-wave, phase center

Procedia PDF Downloads 123
760 Bifurcations of the Rotations in the Thermocapillary Flows

Authors: V. Batishchev, V. Getman

Abstract:

We study self-similar fluid flows in Marangoni layers with axial symmetry. Such flows are induced by radial temperature gradients whose distributions along the free boundary obey a power law. The self-similar solutions describe thermocapillary flows both in thin layers and in the case of infinite thickness. We consider both positive and negative temperature gradients. In the former case, the cooling of the free boundary near the axis of symmetry gives rise to rotation of the fluid. The rotating flow concentrates itself inside the Marangoni layer, while outside of it the fluid does not revolve. In the latter case, we observe no rotating flows at all. In layers of infinite thickness, the separation of the rotating flow creates two zones where the flows are directed oppositely. Both the longitudinal velocity and the temperature have exactly one critical point inside the boundary layer. It is worth noting that the profiles are monotonic in the case of non-swirling flows. We describe the flow outside the boundary layer using a self-similar solution of the Euler equations. This flow is slow and non-swirling. Introducing an outer flow gives rise to the branching of swirling flows from the non-swirling ones. There is a critical velocity of the outer flow such that a non-swirling flow exists for supercritical velocities and cannot be extended to subcritical velocities. For positive temperature gradients there are two non-swirling flows; for negative temperature gradients the non-swirling flow is unique. We determine the critical velocity of the outer flow at which the branching of the swirling flows happens. In the case of a thin layer confined within free boundaries, we show that the cooling of the free boundaries near the axis of symmetry separates the layer into two sub-layers with opposite rotations. This is in sharp contrast with the case of infinite thickness. 
We show that such rotation arises provided the thickness of the layer exceeds some critical value. In the case of a thin layer confined between free and rigid boundaries, we construct the branching equation and the asymptotic approximation for the secondary swirling flows near the bifurcation point. It turns out that the bifurcation gives rise to one pair of secondary swirling flows with different directions of swirl.
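
The boundary conditions referred to above can be written schematically as follows; the symbols and the form of the ansatz are illustrative assumptions, not the authors' exact formulation:

```latex
% Illustrative sketch: power-law surface temperature, the thermocapillary
% (Marangoni) stress balance at the free boundary z = 0, and a generic
% self-similar ansatz for the radial velocity.
T\big|_{z=0} = T_\infty + A\,r^{k}, \qquad
\mu\,\left.\frac{\partial u_r}{\partial z}\right|_{z=0}
  = \frac{d\sigma}{dT}\,\frac{\partial T}{\partial r}
  = \sigma_T\,A\,k\,r^{k-1}, \qquad
u_r = U(r)\,F(\eta), \quad \eta = \frac{z}{\delta(r)}.
```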

Keywords: free surface, rotation, fluid flow, bifurcation, boundary layer, Marangoni layer

Procedia PDF Downloads 345
759 Comfort Sensor Using Fuzzy Logic and Arduino

Authors: Samuel John, S. Sharanya

Abstract:

Automation has become an important part of our lives. It has been used to control home entertainment systems, change the ambience of rooms for different events, etc. One of the main parameters to control in a smart home is atmospheric comfort, which mainly includes temperature and relative humidity. In homes, the desired temperature of different rooms varies from 20 °C to 25 °C and the desired relative humidity is around 50%; however, both vary widely in practice. Hence, automated measurement of these parameters to ensure comfort assumes significance. To achieve this, a fuzzy logic controller for the Arduino was developed using MATLAB. Arduino is an open-source hardware platform built around the 28-pin ATmega328 microcontroller, with 14 digital input/output pins and an inbuilt ADC. It runs on 5 V and 3.3 V supplies supported by an on-board voltage regulator. Some of the digital pins on the Arduino provide PWM (pulse-width modulation) signals, which can be used in different applications. The Arduino platform provides an integrated development environment with support for programming in C and C++ (the IDE itself is written in Java). In the present work, a soft sensor was introduced into the system to indirectly estimate temperature and humidity and to process these measurements to ensure comfort. The Sugeno method, whose output variables are functions or singletons/constants and which is therefore more suitable for implementation on microcontrollers, was used for the soft sensor in MATLAB and then interfaced to the Arduino, which in turn is interfaced to the temperature-humidity sensor DHT11, the sensing element of this system. Further, a capacitive humidity sensor and a thermistor were also used to support the measurement of the temperature and relative humidity of the surroundings and to provide a digital signal on the data pin. The comfort sensor developed was able to measure temperature and relative humidity correctly. 
The comfort percentage was calculated, and the room temperature was controlled accordingly. The system was placed in different rooms of the house to verify that it adjusts the comfort values depending on the temperature and relative humidity of the environment. Compared to existing comfort control sensors, this system was found to provide an accurate comfort percentage. Depending on the comfort percentage, the air conditioners and coolers in the room were controlled. The main highlight of the project is its cost efficiency.
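
The Sugeno scheme described above, with constant rule outputs combined by a weighted average, can be sketched in a few lines; the membership shapes, rule constants, and comfort band below are illustrative assumptions, not the deployed controller:

```python
# Minimal zero-order Sugeno sketch of the "comfort percentage" idea.
# Membership shapes, rule constants, and the comfort band (about 20-25 C,
# ~50% RH) are illustrative assumptions, not the deployed controller.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comfort_percentage(temp_c, rh):
    # Fuzzify the two inputs.
    t_ok = tri(temp_c, 18.0, 22.5, 27.0)   # comfortable temperature
    h_ok = tri(rh, 30.0, 50.0, 70.0)       # comfortable humidity
    t_off, h_off = 1.0 - t_ok, 1.0 - h_ok  # out-of-band degrees

    # Zero-order Sugeno rules: (firing strength, constant output in %).
    rules = [
        (min(t_ok, h_ok), 100.0),   # both comfortable
        (min(t_ok, h_off), 60.0),   # humidity off
        (min(t_off, h_ok), 50.0),   # temperature off
        (min(t_off, h_off), 0.0),   # both off
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0.0 else 0.0
```

The weighted-average defuzzification is what makes the Sugeno method cheap enough for an ATmega-class microcontroller: there are no output membership functions to integrate.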

Keywords: arduino, DHT11, soft sensor, sugeno

Procedia PDF Downloads 314
758 Miniature Fast Steering Mirrors for Space Optical Communication on NanoSats and CubeSats

Authors: Sylvain Chardon, Timotéo Payre, Hugo Grardel, Yann Quentel, Mathieu Thomachot, Gérald Aigouy, Frank Claeyssen

Abstract:

With the increasing digitalization of society, access to data has become vital and strategic for individuals and nations. In this context, the number of satellite constellation projects is growing drastically worldwide and represents a next-generation challenge for the New Space industry. So far, existing satellite constellations have used radio frequencies (RF) for satellite-to-ground communications, inter-satellite communications, and feeder-link communication. However, RF has several limitations, such as limited bandwidth and a low protection level. To address these limitations, space optical communication will be the new trend, addressing both very high-speed and secure encrypted communication. Fast Steering Mirrors (FSM) are key components used in optical communication as well as in space imagery, serving a wide range of functions such as Point Ahead Mechanisms (PAM), raster scanning, Beam Steering Mirrors (BSM), Fine Pointing Mechanisms (FPM), and Line of Sight (LOS) stabilization. The main challenge of space FSM development for optical communication is to propose both a technology and a supply chain suited to the high-quantity New Space approach, which targets secure connectivity for high-speed internet, Earth observation and monitoring, and mobility applications. CTEC proposes a mini-FSM technology offering a stroke of +/-6 mrad and a resonant frequency of 1700 Hz, with a mass of 50 g. This FSM mechanism is a good candidate for giant constellations and all applications on board NanoSats and CubeSats, featuring a very high level of miniaturization and optimized for the cost efficiency required by New Space production quantities. The use of piezo actuators offers a high resonance frequency for optimal control, almost zero power consumption in step-and-stay pointing, and very high reliability figures (> 0.995) demonstrated over years of recurrent manufacturing for optronics applications at CTEC.

Keywords: fast steering mirror, feeder link, line of sight stabilization, optical communication, pointing ahead mechanism, raster scan

Procedia PDF Downloads 80
757 Sustainable Development Change within Our Environs

Authors: Akinwale Adeyinka

Abstract:

Critical natural resources such as clean groundwater, fertile topsoil, and biodiversity are diminishing at an exponential rate, orders of magnitude above that at which they can be regenerated. With over 6 billion people on Earth and almost a quarter of a million added each day, the scale of human activity and environmental impact is unprecedented. Soaring human population growth over the past century has created a visible challenge to Earth's life-support systems. In addition, the world faces an onslaught of other environmental threats, including global climate change, global warming, intensified acid rain, stratospheric ozone depletion, and health-threatening pollution. Overpopulation and the use of deleterious technologies combine to raise the scale of human activities to a level that underlies all of these problems. These intensifying trends cannot continue indefinitely; hopefully, through increased understanding and valuation of ecosystems and their services, Earth's basic life-support system will be protected for the future. In fact, human civilization is now the dominant cause of change in the global environment. Now that our relationship to the Earth has changed so utterly, we have to see that change and understand its implications. There are two aspects to this challenge. The first is to realize that our power to harm the Earth can indeed have global and even permanent effects. The second is to realize that the only way to understand our new role as a co-architect of nature is to see ourselves as part of a complex system that does not operate according to the simple rules of cause and effect we are used to. Understanding the physical and biological dimensions of the Earth system is therefore an important precondition for making sensible policy to protect our environment. 
Sustainable development is a matter of reconciling respect for the environment, social equity, and economic profitability. We also strongly believe that environmental protection is naturally about reducing air and water pollution, but it also includes improving the environmental performance of existing processes. That is why we should always keep at the heart of our business the idea that the environmental problem is not so much our effect on the environment as our relationship with the environment. We should always aim to be environmentally friendly in our operations.

Keywords: stratospheric ozone depletion, climate change, global warming, social equity, economic profitability

Procedia PDF Downloads 337
756 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures

Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.

Abstract:

Previously, the phrase "hydrogen safety" was mostly used in the context of NPP safety. Due to the rise of interest in "green" and, particularly, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, industrial production of hydrogen is planned to be implemented by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden combustible gas outburst into open space with subsequent ignition is less dangerous by itself than ignition of the combustible mixture in the presence of numerous pipelines, reactor vessels, and fitting frames. Even the ignition of 2100 cubic meters of hydrogen-air mixture in open space produces velocities and pressures that are much lower than those of the Chapman-Jouguet condition, not exceeding 80 m/s and 6 kPa, respectively. However, space blockage, significant changes of channel diameter along the path of flame propagation, and the presence of gas suspensions lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to different facilities can only be conventional and qualitative in character. Yet conducting deflagration and detonation experiments for each specific industrial facility project, in order to determine safe placement of infrastructure units, does not seem feasible due to the high cost and hazard involved, while conducting numerical experiments is significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry seems promising. 
The basis of this method is a modification of the Kuropatenko method for calculating shock waves, recently developed by the authors, which allows its use in Eulerian coordinates. The current work contains the results of this development process. In addition, numerical simulation results are compared against experimental series of flame propagation in shock tubes with orifice plates.
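
For context on the shock calculations such a scheme must handle, the classical normal-shock jump relations for a perfect gas can be written as a short function; this is textbook gas dynamics, not the modified Kuropatenko method itself:

```python
# Classical normal-shock (Rankine-Hugoniot) jump relations for a perfect
# gas: the kind of discontinuity any shock-capturing scheme (such as the
# modified Kuropatenko method) must reproduce. Textbook gas dynamics, not
# the authors' numerical method.

def shock_jumps(mach, gamma=1.4):
    """Pressure, density and temperature ratios across a normal shock."""
    m2 = mach * mach
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (m2 - 1.0)
    rho_ratio = (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)
    t_ratio = p_ratio / rho_ratio
    return p_ratio, rho_ratio, t_ratio
```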

Keywords: CFD, reacting flow, DDT, gas explosion

Procedia PDF Downloads 90
755 Signal Processing Techniques for Adaptive Beamforming with Robustness

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

Adaptive beamforming using an array of antenna sensors is useful for adaptively detecting and preserving the presence of the desired signal while suppressing interference and background noise. Conventional adaptive array beamforming requires prior information about either the impinging direction or the waveform of the desired signal in order to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to make a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. In the literature, it is well known that the performance of an adaptive beamformer deteriorates under any steering angle error, which is encountered in many practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem of steering angle error in array beamforming systems has become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer that is robust against steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vectors of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information, together with the received array data, is utilized to iteratively estimate the actual direction vector of the desired signal. 
The estimated direction vector of the desired signal is then used to appropriately determine the quiescent weight vector. The other projection matrix serves as the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided to evaluate the proposed technique and compare it with existing robust techniques.
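
As background for the steered-beam constraint described above, a minimal narrowband MVDR beamformer for a uniform linear array can be sketched as follows; the geometry, snapshot model, and diagonal loading are illustrative assumptions, and the paper's projection-based direction estimation is considerably more elaborate:

```python
# Minimal narrowband MVDR (Capon) beamformer for a uniform linear array.
# Geometry, snapshot model, and diagonal loading are illustrative
# assumptions; the paper's projection-based direction estimation goes
# beyond this sketch.
import numpy as np

def steering_vector(theta_deg, n_sensors, spacing=0.5):
    """ULA steering vector; element spacing in wavelengths."""
    k = np.arange(n_sensors)
    return np.exp(2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg)))

def mvdr_weights(snapshots, steer, loading=1e-3):
    """w = R^-1 a / (a^H R^-1 a), with diagonal loading for robustness."""
    n, t = snapshots.shape
    r = snapshots @ snapshots.conj().T / t       # sample covariance
    r = r + loading * (np.trace(r).real / n) * np.eye(n)
    ri_a = np.linalg.solve(r, steer)
    return ri_a / (steer.conj() @ ri_a)

rng = np.random.default_rng(0)
n, t = 8, 2000
a_sig = steering_vector(10.0, n)    # desired signal at 10 degrees
a_int = steering_vector(-40.0, n)   # strong interferer at -40 degrees
x = (np.outer(a_sig, rng.standard_normal(t))
     + 5.0 * np.outer(a_int, rng.standard_normal(t))
     + 0.1 * (rng.standard_normal((n, t)) + 1j * rng.standard_normal((n, t))))
w = mvdr_weights(x, a_sig)
```

The distortionless constraint forces a unit response toward the presumed direction, so steering error directly degrades the output, which is exactly the problem the paper's iterative direction estimate addresses.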

Keywords: adaptive beamforming, robustness, signal blocking, steering angle error

Procedia PDF Downloads 125
754 Evaluation of a Remanufacturing for Lithium Ion Batteries from Electric Cars

Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic

Abstract:

Electric cars, with their fast innovation cycles and disruptive character, offer a high degree of freedom for remanufacturing-oriented design. Remanufacturing increases not only resource efficiency but also economic efficiency through a prolonged product lifetime. The reduced powertrain wear of electric cars, combined with the high manufacturing costs of batteries, enables new business models and even second-life applications. Battery packs designed to be modular and interchangeable enable the replacement of defective or outdated battery cells, allowing additional cost savings and a further prolongation of lifetime. This paper discusses opportunities for future remanufacturing value chains for electric cars and their battery components, and how to address their potential through elaborate design. Based on a brief overview of remanufacturing structures implemented in different industries, opportunities for transferability are evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric-car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy of lithium-ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material costs but also the capital costs of equipment and factory facilities needed to support the remanufacturing process, the cost-benefit analysis indicates that a remanufactured battery can be produced more cost-efficiently. The ecological benefits were calculated on a broad database from different research projects focusing on the recycling, second use, and assembly of lithium-ion batteries. The results of these calculations show a significant improvement through remanufacturing in all relevant factors, especially in resource consumption and global warming potential. 
Suitable design guidelines for future remanufacturable lithium-ion batteries, which consider modularity, interfaces, and disassembly, are used as examples to illustrate the findings. For one guideline, potential cost improvements were calculated and upcoming challenges are pointed out.
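
The cost-benefit comparison can be illustrated with a toy calculation; all figures below are hypothetical placeholders, and the paper's two models account for far more detail (equipment capital, factory facilities, etc.):

```python
# Toy cost-benefit comparison of a new vs. a remanufactured battery pack.
# All figures are hypothetical placeholders; the paper's two models cover
# far more detail (capital cost of equipment, factory facilities, etc.).

def unit_cost(material, labor, capital):
    """Simplified per-pack cost: material + labor + amortized capital."""
    return material + labor + capital

new_pack = unit_cost(material=6000.0, labor=800.0, capital=400.0)
reman_pack = unit_cost(material=1500.0,   # only worn cells/modules replaced
                       labor=1200.0,      # disassembly and testing are labor-heavy
                       capital=600.0)     # extra remanufacturing equipment
saving = 1.0 - reman_pack / new_pack      # relative saving per pack
```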

Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing

Procedia PDF Downloads 360
753 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition

Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan

Abstract:

Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral position is an important component in representing and characterizing mixed traffic. Field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used in homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. Field data make clear that vehicle following is not as strict as in homogeneous, lane-disciplined conditions, and that vehicles ahead of a given vehicle as well as those adjacent to it can influence the subject vehicle's choice of position, speed, and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and the approach needs to be modified appropriately. To address these issues, this paper analyzes the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre type, and lateral placement characteristics on the time gap distribution is quantified. The developed model is used for evaluating various performance measures such as link speed, midblock delay, and intersection delay, which further helps to characterize vehicular fuel consumption and emissions on urban roads in India. Identifying and analyzing the exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling. 
In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type, and lateral position characteristics, for heterogeneous non-lane-based traffic. Once the modelling scheme is developed, it can be used for estimating the vehicle kilometres travelled for the entire traffic system, which helps to determine vehicular fuel consumption and emissions. The approach to this objective involves: data collection; statistical modelling and parameter estimation; simulation using the calibrated time-gap distributions and its validation; empirical analysis of the simulation results and associated traffic flow parameters; and application to the analysis of illustrative traffic policies. In particular, videographic methods are used for data extraction from urban midblock sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed, and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures such as density, speed, flow, capacity, and area occupancy under various traffic conditions and policies. The implications of the quantified distributions and the simulation model for estimating the PCU (Passenger Car Unit) values, capacity, and level of service of the system are also discussed.
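
One way to sketch the distribution-fitting step is shown below, using a lognormal form fitted separately by lead-lag pair; both the synthetic gaps and the lognormal choice are illustrative assumptions, not the study's calibrated distributions:

```python
# Sketch of the distribution-fitting step: lognormal time-gap models fitted
# separately by lead-lag vehicle pair. The synthetic gaps and the lognormal
# form are illustrative assumptions, not the study's calibrated results.
import numpy as np

def fit_lognormal(gaps):
    """Lognormal MLE: mean and std of the log of the gaps."""
    logs = np.log(gaps)
    return logs.mean(), logs.std()

rng = np.random.default_rng(42)
# Hypothetical time gaps (seconds) for two lead-lag pairs in mixed traffic.
gaps_car_car = rng.lognormal(mean=0.8, sigma=0.5, size=5000)
gaps_car_2wh = rng.lognormal(mean=0.3, sigma=0.6, size=5000)  # two-wheelers accept smaller gaps

mu_cc, s_cc = fit_lognormal(gaps_car_car)
mu_c2, s_c2 = fit_lognormal(gaps_car_2wh)
```

A calibrated family of such per-pair distributions is what the simulation then samples from when generating vehicles.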

Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models

Procedia PDF Downloads 342
752 Seismic Assessment of Non-Structural Component Using Floor Design Spectrum

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Experience from past earthquakes has clearly demonstrated the necessity of seismic design and assessment of Non-Structural Components (NSCs), particularly in post-disaster structures such as hospitals and power plants, which have to remain permanently functional and operational. Meeting this objective is contingent upon proper seismic performance of both structural and non-structural components. Proper seismic design, analysis, and assessment of NSCs can be attained through the generation of a Floor Design Spectrum (FDS), in a similar fashion to the target spectrum for structural components. This paper presents a methodology developed to generate the FDS directly from the corresponding Uniform Hazard Spectrum (UHS), i.e., the design spectrum for structural components. The methodology is based on experimental and numerical analysis of a database of 27 real reinforced concrete (RC) buildings located in Montreal, Canada. The buildings were tested by Ambient Vibration Measurements (AVM), and their dynamic properties were extracted and used as part of the approach. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, mostly designated as post-disaster/emergency shelters by the city of Montreal. The buildings are subjected to 20 seismic records compatible with the UHS of Montreal, and Floor Response Spectra (FRS) are developed for every floor in the two horizontal directions, considering four different damping ratios of NSCs (2, 5, 10, and 20% viscous damping). The generated FRS (approximately 132,000 curves) are studied statistically, and the methodology is proposed to generate the FDS directly from the corresponding UHS. The approach is capable of generating the FDS for any selection of floor level and NSC damping ratio. It captures the effects of dynamic interaction between the primary (structural) and secondary (NSC) systems, as well as the higher and torsional modes of the primary structure. 
These are important improvements of this approach compared with conventional methods and code recommendations. The application of the proposed approach is illustrated through two real case-study buildings: one low-rise and one medium-rise. The proposed approach can be used as a practical and robust tool for the seismic assessment and design of NSCs, especially in existing post-disaster structures.
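
The core FRS computation, the peak acceleration of a damped SDOF oscillator (representing the NSC) driven by a floor acceleration history, can be sketched as follows; the Newmark integration and the sinusoidal excitation are illustrative stand-ins for the study's recorded floor motions:

```python
# Sketch of an FRS ordinate: peak absolute acceleration of a damped SDOF
# oscillator (the NSC) riding on a floor with acceleration history ag.
# Newmark average-acceleration integration, starting from rest. The
# sinusoidal floor motion is a synthetic stand-in for recorded motions.
import numpy as np

def frs_ordinate(ag, dt, period, zeta=0.05):
    """Peak absolute acceleration of an SDOF with given period and damping."""
    wn = 2.0 * np.pi / period
    m, c, k = 1.0, 2.0 * zeta * wn, wn * wn
    kh = k + 2.0 * c / dt + 4.0 * m / dt ** 2
    u = v = a = 0.0                      # relative disp., vel., accel.
    peak = 0.0
    for g in ag:
        p = -m * g + m * (4.0 * u / dt ** 2 + 4.0 * v / dt + a) + c * (2.0 * u / dt + v)
        u_new = p / kh
        v_new = 2.0 * (u_new - u) / dt - v
        a_new = 4.0 * (u_new - u) / dt ** 2 - 4.0 * v / dt - a
        u, v, a = u_new, v_new, a_new
        peak = max(peak, abs(a + g))     # absolute = relative + ground
    return peak

dt = 0.005
t = np.arange(0.0, 20.0, dt)
floor_acc = np.sin(2.0 * np.pi * t)      # 1 Hz floor motion, unit amplitude
peak_resonant = frs_ordinate(floor_acc, dt, period=1.0, zeta=0.05)
peak_detuned = frs_ordinate(floor_acc, dt, period=0.3, zeta=0.05)
```

A full FRS curve is simply `frs_ordinate` swept over a grid of NSC periods; at 5% damping the resonant ordinate approaches the classical 1/(2*zeta) = 10 amplification.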

Keywords: earthquake engineering, operational and functional components, operational modal analysis, seismic assessment and design

Procedia PDF Downloads 213
751 Scale-Up Study of Gas-Liquid Two Phase Flow in Downcomer

Authors: Jayanth Abishek Subramanian, Ramin Dabirian, Ilias Gavrielatos, Ram Mohan, Ovadia Shoham

Abstract:

Downcomers are important conduits for multiphase flow transfer from offshore platforms to the seabed. Uncertainty in the prediction of the pressure drop of multiphase flow between platforms is often dominated by the uncertainty associated with predicting holdup and pressure drop in the downcomer. The objective of this study is to conduct an experimental and theoretical scale-up study of the downcomer. A 4-in. diameter vertical test section was designed and constructed to study two-phase flow in a downcomer. The facility is equipped with baffles for flow area restriction, enabling interchangeable annular slot openings between 30% and 61.7%. Also, state-of-the-art instrumentation, the capacitance Wire-Mesh Sensor (WMS), was utilized to acquire the experimental data. A total of 76 experimental data points were acquired, including falling film under 30% and 61.7% annular slot openings for air-water and air-Conosol C200 oil cases, as well as gas carry-under for the 30% and 61.7% openings utilizing air-Conosol C200 oil. For all experiments, parameters such as the falling film thickness and velocity, the entrained liquid holdup in the core, the gas void fraction profiles over the cross-sectional area of the liquid column, the void fraction, and the gas carry-under were measured. The experimental results indicated that the film thickness and film velocity increase as the flow area is reduced. Also, the increase in film velocity enhances the gas entrainment process. Furthermore, the results confirmed that increased gas entrainment at the same liquid flow rate leads to an increase in gas carry-under. A power comparison method was developed to enable evaluation of the Lopez (2011) model, which was created for full-bore downcomers, against the novel scale-up experimental data acquired from the downcomer with the restricted flow area. 
Comparison between the experimental data and the model predictions shows a maximum absolute average discrepancy of 22.9% and 21.8% for the falling film thickness and velocity, respectively, and a maximum absolute average discrepancy of 22.2% for the fraction of gas carried with the liquid (oil).

Keywords: two phase flow, falling film, downcomer, wire-mesh sensor

Procedia PDF Downloads 167
750 The Use of Artificial Intelligence in Diagnosis of Mastitis in Cows

Authors: Djeddi Khaled, Houssou Hind, Miloudi Abdellatif, Rabah Siham

Abstract:

In the field of veterinary medicine, there is a growing application of artificial intelligence (AI) for diagnosing bovine mastitis, a prevalent inflammatory disease in dairy cattle. AI technologies, such as automated milking systems, have streamlined the assessment of key metrics crucial for managing cow health during milking and identifying prevalent diseases, including mastitis. These automated milking systems empower farmers to implement automatic mastitis detection by analyzing indicators like milk yield, electrical conductivity, fat, protein, lactose, blood content in the milk, and milk flow rate. Furthermore, reports highlight the integration of somatic cell count (SCC), thermal infrared thermography, and diverse systems utilizing statistical models and machine learning techniques, including artificial neural networks, to enhance the overall efficiency and accuracy of mastitis detection. According to a review of 15 publications, machine learning technology can predict the risk and detect mastitis in cattle with an accuracy ranging from 87.62% to 98.10% and sensitivity and specificity ranging from 84.62% to 99.4% and 81.25% to 98.8%, respectively. Additionally, machine learning algorithms and microarray meta-analysis are utilized to identify mastitis genes in dairy cattle, providing insights into the underlying functional modules of mastitis disease. Moreover, AI applications can assist in developing predictive models that anticipate the likelihood of mastitis outbreaks based on factors such as environmental conditions, herd management practices, and animal health history. This proactive approach supports farmers in implementing preventive measures and optimizing herd health. By harnessing the power of artificial intelligence, the diagnosis of bovine mastitis can be significantly improved, enabling more effective management strategies and ultimately enhancing the health and productivity of dairy cattle. 
The integration of artificial intelligence presents valuable opportunities for the precise and early detection of mastitis, providing substantial benefits to the dairy industry.
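
The kind of indicator-based classifier surveyed above can be sketched on synthetic data; the two features and the logistic model below are simplifying assumptions, not any of the reviewed systems:

```python
# Toy indicator-based mastitis classifier on synthetic data. The two
# features (milk electrical conductivity, log10 somatic cell count) and
# the logistic model are simplifying assumptions, not a reviewed system.
import numpy as np

rng = np.random.default_rng(7)
n = 400
healthy = np.column_stack([rng.normal(5.0, 0.4, n),   # conductivity, mS/cm
                           rng.normal(4.7, 0.3, n)])  # log10 SCC
mastitic = np.column_stack([rng.normal(6.5, 0.5, n),
                            rng.normal(5.9, 0.4, n)])
x = np.vstack([healthy, mastitic])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Standardize features, then fit logistic regression by gradient descent.
x = (x - x.mean(axis=0)) / x.std(axis=0)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad = p - y
    w -= 0.1 * x.T @ grad / len(y)
    b -= 0.1 * grad.mean()
accuracy = ((p > 0.5) == y).mean()
```

Both learned weights come out positive, matching the physiology: elevated conductivity and somatic cell count both raise the predicted mastitis risk.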

Keywords: artificial intelligence, automatic milking system, cattle, machine learning, mastitis

Procedia PDF Downloads 66
749 Influence of Atmospheric Pollutants on Child Respiratory Disease in Cartagena De Indias, Colombia

Authors: Jose A. Alvarez Aldegunde, Adrian Fernandez Sanchez, Matthew D. Menden, Bernardo Vila Rodriguez

Abstract:

Up to five statistical pre-processing steps were carried out on the pollutant records of the monitoring stations in Cartagena de Indias, Colombia, also taking into account the childhood asthma incidence surveys conducted in the city's hospitals by the Health Ministry of Colombia. This pre-processing consisted of different techniques: assessment of the quality of data collection, assessment of the quality of the monitoring network, identification and debugging of errors in data collection, completion of missing data in the cleaned records, and improvement of the time scale of the records. The quality of the data was characterized by means of a density analysis of the pollutant monitoring stations using ArcGIS software and through mass balance techniques, making it possible to determine inconsistencies in the records by relating the data between stations through linear regression. The results of this process highlighted the good quality of the pollutant records. Subsequent error debugging allowed us to identify certain data as statistically non-significant in the incidence and contamination series. These data, together with certain missing records in the series recorded by the measuring stations, were completed with statistical imputation equations. Following the application of these preliminary processes, the resulting series of respiratory disease incidence data and pollutant records allowed the characterization of the influence of pollutants on respiratory diseases such as childhood asthma. This characterization was carried out using statistical correlation methods, including visual correlation, simple linear regression, and spectral analysis with PAST software, which identifies maximum and minimum periodicity cycles using the Lomb periodogram formula. 
Among the results obtained, up to eleven contemporaneous maxima and minima between the incidence records and the particulate records were identified by visual comparison. The spectral analyses performed on the incidence and PM2.5 series returned a set of similar dominant periods in both records, with maxima at a period of one year and another every 25 days (0.9 and 0.07 years). The bivariate analysis ranked the variable "Daily Vehicular Flow" ninth in importance out of a total of 55 variables. However, the statistical correlation did not yield a favorable result, with a low value of the R² coefficient. The series of analyses conducted demonstrates the importance of the influence of pollutants such as PM2.5 on the development of childhood asthma in Cartagena. The quantification of the influence of the variables determined that there is a 56% probability of dependence between PM2.5 and childhood respiratory asthma in Cartagena. On this basis, the study could be completed through the application of the BenMap software, yielding spatial results of interpolated pollutant concentration values exceeding the established legal limits (represented by homogeneous units down to the neighborhood level) and results of the impact on the exacerbation of pediatric asthma. As a final result, an economic estimate (in Colombian pesos) was produced of the monthly and individual savings derived from a percentage reduction in the influence of pollutants, in terms of visits to the hospital emergency room due to asthma exacerbation in pediatric patients.
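
The spectral step can be sketched with SciPy's Lomb periodogram (the same idea as the Lomb periodogram in the PAST software used in the study); the irregularly sampled series with a one-year cycle is a synthetic stand-in for the incidence data:

```python
# Sketch of the spectral step: a Lomb periodogram on an irregularly sampled
# series, as SciPy implements it (same idea as the Lomb periodogram in the
# PAST software used in the study). The synthetic series with a one-year
# cycle stands in for the asthma-incidence data.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 6.0, 500))      # 6 years of uneven sampling
y = 3.0 * np.sin(2.0 * np.pi * t) + rng.normal(0.0, 1.0, 500)

periods = np.linspace(0.05, 2.0, 2000)       # candidate periods, in years
power = lombscargle(t, y - y.mean(), 2.0 * np.pi / periods)  # angular freqs
best_period = periods[np.argmax(power)]      # dominant cycle
```

Unlike an FFT, the Lomb formulation needs no interpolation onto a regular grid, which is why it suits incidence series with gaps and uneven reporting dates.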

Keywords: asthma incidence, BenMap, PM2.5, statistical analysis

Procedia PDF Downloads 117
748 A Statistical-Algorithmic Approach for the Design and Evaluation of a Fresnel Solar Concentrator-Receiver System

Authors: Hassan Qandil

Abstract:

Using a statistical algorithm implemented in MATLAB, four types of non-imaging Fresnel lenses are designed: spot-flat, linear-flat, dome-shaped, and semi-cylindrical. The optimization employs a statistical ray-tracing methodology for the incident light, mainly considering the effects of chromatic aberration, varying focal lengths, solar inclination and azimuth angles, lens and receiver apertures, and the optimum number of prism grooves. While adopting an equal-groove-width assumption for the poly(methyl methacrylate) (PMMA) prisms, the main target is to maximize the ray intensity on the receiver's aperture and thereby achieve higher values of heat flux. The algorithm outputs prism angles and 2D sketches. 3D drawings are then generated in AutoCAD and linked to the COMSOL Multiphysics software to simulate the lenses under solar ray conditions, which provides optical and thermal analysis at both the lens and receiver apertures, with conditions set per the Dallas, TX weather data. Once the lens characterization is finalized, receivers are designed based on the optimized aperture size. Several cavity shapes, including triangular, arc-shaped, and trapezoidal, are tested, coupled with a variety of receiver materials, working fluids, heat transfer mechanisms, and enclosure designs. A vacuum-reflective enclosure is also simulated for enhanced thermal absorption efficiency. Each receiver type is simulated in COMSOL coupled with the optimized lens. A lab-scale prototype of the optimum lens-receiver configuration is then fabricated for experimental evaluation. Application-based testing is also performed for the selected configuration, including a photovoltaic-thermal cogeneration system and a solar furnace system. Finally, future research directions are pointed out, including the coupling of the collector-receiver system with an end-user power generator and the use of a multi-layered genetic algorithm for comparative studies.
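
The groove-angle calculation at the heart of such a flat Fresnel design can be sketched as follows; this considers a single wavelength at normal incidence only, whereas the paper's ray tracing additionally samples wavelengths (chromatic aberration) and sun angles, and the PMMA index used here is a typical assumed value:

```python
# Groove-angle calculation for a flat Fresnel lens: a ray entering normal
# to the flat face must be deviated by delta = atan(r / f) at radius r;
# Snell's law at the inclined exit facet then fixes the prism angle via
# tan(alpha) = sin(delta) / (n - cos(delta)). Single wavelength, normal
# incidence; n = 1.49 is a typical PMMA value (an assumption).
import math

def groove_angle(r, focal_length, n=1.49):
    """Prism (facet) angle, in radians, for a groove at radius r."""
    delta = math.atan2(r, focal_length)          # required ray deviation
    return math.atan2(math.sin(delta), n - math.cos(delta))

# Equal-groove-width profile: 20 grooves of 5 mm for a spot lens, f = 300 mm.
radii = [5.0 * (i + 0.5) for i in range(20)]     # groove centers, mm
angles_deg = [math.degrees(groove_angle(r, 300.0)) for r in radii]
```

Under the equal-groove-width assumption the facet angle grows monotonically toward the rim, which is exactly the profile the statistical ray tracer then perturbs for chromatic and angular effects.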

Keywords: COMSOL, concentrator, energy, Fresnel, optics, renewable, solar

Procedia PDF Downloads 155
747 Teacher's Professional Burnout and Its Relationship with the Power of Self-Efficacy and Perceived Stress

Authors: Vilma Zydziunaite, Ausra Rutkiene

Abstract:

In modern society, problems related to the teacher's personality, mental and physical health, emotions, and competencies are becoming more and more relevant. In Lithuania, compared to other European countries, teachers experience specific difficulties at work: they must work under conditions of constant reforms and changes and face growing competition due to the decreasing numbers of students and schools. Professional burnout, teacher self-efficacy, and perceived stress are interrelated at the personal and/or organisational level, yet the relationship between teachers' professional burnout, self-efficacy, and perceived stress in the school environment remains a relatively underresearched area in Lithuania. The research aim was to reveal and characterize teacher burnout, self-efficacy, and perceived stress in the Lithuanian school context. A quantitative research design with a questionnaire survey was chosen for the study. The sample consisted of 427 Lithuanian teachers. The results revealed the highest scores for exhaustion and the lowest for cynicism; when teachers experience professional burnout, cynicism is the weakest characteristic. No significant differences were found according to educational level or work experience, while significant differences by age were identified for exhaustion and overall burnout. Most teachers in the Lithuanian sample perceive a moderate stress level in the school environment, and overall burnout correlates significantly with self-efficacy and stress. This study has empirical and practical implications: it is relevant to study the problems of teacher burnout, stress, and self-efficacy in connection with contextual qualitative variables and to specify the interrelationships between variables in order to identify specific problems and provide empirical evidence for solving them in practice.
From a practical point of view, the results show that the socio-emotional state of teachers should not be dismissed as insignificant. The school administration must therefore make efforts to develop a positive school climate that supports teachers' socio-emotional state, and at the same time pay close attention to developing teachers' socio-emotional competencies rather than ignoring their importance in the teacher's professional life.

Keywords: Lithuania, perceived stress, professional burnout, self-efficacy, teacher

Procedia PDF Downloads 53
746 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked to a host of neurological disorders such as Alzheimer's, Parkinson's, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping the magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Step I involves solving an ill-posed inverse problem through regularization, i.e., the injection of prior belief. The end result of Step II depends strongly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm voxels), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly drawn from probability distribution functions derived from a thorough literature review.
In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. The process is repeated thousands of times for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models that solve for iron concentrations from raw MRI measurements. Performance was then tested on both synthetic data not used in training and real in vivo data. The results showed that the model trained on synthetic MRI measurements learns iron concentrations in areas of interest directly and more effectively than existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of substantial value in clinical studies aiming to understand the role of iron in neurological disease.
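The tissue-property sampling step described above can be sketched as follows. This is a minimal illustration assuming Gaussian priors per segmented region; the region labels, numeric means and standard deviations, and function names are hypothetical placeholders, not values from the study's literature review, and the toy 16³ grid stands in for the 640³ head model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-region (mean, std) priors; real values would come from
# a literature review, as in the abstract.
TISSUE_PRIORS = {
    "white_matter":    {"R2s": (21.0, 2.0), "iron": (0.04, 0.01)},
    "putamen":         {"R2s": (30.0, 4.0), "iron": (0.13, 0.03)},
    "globus_pallidus": {"R2s": (40.0, 5.0), "iron": (0.21, 0.05)},
}

def sample_head_realization(segmentation, priors=TISSUE_PRIORS):
    """Draw one random tissue-property assignment for each labeled region
    of a segmented head volume, producing one synthetic training phantom."""
    r2s = np.zeros(segmentation.shape, dtype=np.float32)
    iron = np.zeros(segmentation.shape, dtype=np.float32)
    for label_id, name in enumerate(priors, start=1):
        mask = segmentation == label_id
        mu, sd = priors[name]["R2s"]
        r2s[mask] = rng.normal(mu, sd)
        mu, sd = priors[name]["iron"]
        iron[mask] = max(rng.normal(mu, sd), 0.0)  # clamp: no negative iron
    return r2s, iron

seg = rng.integers(0, 4, size=(16, 16, 16))  # toy stand-in segmentation
r2s_map, iron_map = sample_head_realization(seg)
```

Repeating this draw thousands of times (together with geometry morphing and MRI signal simulation, which are omitted here) would yield the kind of training set the abstract describes.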

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 138
745 Selection of Strategic Suppliers for Partnership: A Model with Two Stages Approach

Authors: Safak Isik, Ozalp Vayvay

Abstract:

Strategic partnerships with suppliers play a vital role in a long-term, value-based supply chain. Such strategic collaboration remains one of the top priorities of many business organizations seeking to create additional value, mainly by benefiting from the supplier's specialization, capacity, and innovative power, securing supply, and better managing cost and quality. However, many organizations encounter difficulties in initiating, developing, and managing these partnerships, and many attempts end in failure. One reason for such failure is the incompatibility of the partners, in other words, wrong supplier selection, which emphasizes the significance of the selection process as the beginning stage. An effective process for selecting strategic suppliers is critical to the success of the partnership. Although there are several studies on supplier selection in the literature, only a few are related to strategic supplier selection for long-term partnership. The purpose of this study is to propose a conceptual model for the selection of strategic partnership suppliers. The proposed model uses a two-stage approach: segmentation first, then selection. In the first stage, considering that not all suppliers are strategically equal, Kraljic's purchasing portfolio matrix can be used for segmentation instead of a long list of potential suppliers. This supplier segmentation is the process of categorizing suppliers based on a defined set of criteria in order to identify supplier types and determine potential suppliers for strategic partnership. In the second stage, a comprehensive evaluation and selection can be performed on the pool of potential suppliers defined in the first stage to finally identify strategic suppliers, considering various tangible and intangible criteria.
Since a long-term relationship with strategic suppliers is anticipated, the criteria should consider both the current and future status of the supplier. Based on an extensive literature review, strategic, operational, and organizational criteria have been determined and elaborated. The result of the selection can also be used to identify suppliers who are not yet ready for a partnership but could be developed toward one. Since the model is based on multiple criteria at both stages, it provides a framework for further utilization of Multi-Criteria Decision Making (MCDM) techniques. The model may also be applied to a wide range of industries and incorporate managerial features of business organizations.
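The first-stage segmentation can be sketched as a simple quadrant classification on Kraljic's two dimensions, profit impact and supply risk. The normalized scores, the 0.5 cut-off, and the toy supplier data below are illustrative assumptions; the paper's actual criteria set is richer than two scalar scores.

```python
def kraljic_quadrant(profit_impact, supply_risk, threshold=0.5):
    """Classify a purchase/supplier into a Kraljic portfolio quadrant.
    Inputs are normalized scores in [0, 1]; the 0.5 cut-off is an
    illustrative choice, not a fixed rule of the matrix."""
    high_impact = profit_impact >= threshold
    high_risk = supply_risk >= threshold
    if high_impact and high_risk:
        return "strategic"      # candidates for long-term partnership
    if high_impact:
        return "leverage"
    if high_risk:
        return "bottleneck"
    return "non-critical"

# Toy scores: (profit impact, supply risk) per supplier.
suppliers = {"A": (0.8, 0.7), "B": (0.9, 0.2), "C": (0.3, 0.8), "D": (0.1, 0.1)}
candidates = [name for name, (pi, sr) in suppliers.items()
              if kraljic_quadrant(pi, sr) == "strategic"]
print(candidates)  # → ['A']
```

Only the suppliers falling in the strategic quadrant (here, 'A') would proceed to the second-stage MCDM evaluation.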

Keywords: Kraljic’s matrix, purchasing portfolio, strategic supplier selection, supplier collaboration, supplier partnership, supplier segmentation

Procedia PDF Downloads 239
744 Development of Peaceful Wellbeing in Executive Practitioners through Mindfulness-Based Practices

Authors: Narumon Jiwattanasuk, Phrakrupalad Pannavoravat, Pataraporn Sirikanchana

Abstract:

Mindfulness has become a widely adopted perspective on positive wellbeing. The aims of this paper are to analyze the problems of executive meditation practitioners at the Buddhamahametta Foundation in Thailand and to provide recommendations on a process for developing peaceful wellbeing in executive meditation practitioners by applying the principles of the four foundations of mindfulness. The study focuses on executives because little research addresses the wellbeing development of executives, and the researcher recognizes that executives can serve as examples within their organizations, significantly influencing their employees and families to take an interest in practicing mindfulness. This improvement could then spread from the individual to the surrounding community, such as family, workplace, society, and the nation, leading to happiness at the national level, which is the expectation of this research. The paper highlights mindfulness practices that can be performed on a daily basis. This is qualitative research with 10 key participants who are executives from various sectors such as hospitality, healthcare, retail, and power energy. Three mindfulness-based courses were conducted over a period of 8 months, and in-depth interviews were held before the first course as well as at the end of every course; in total, four in-depth interviews were conducted. The information collected from the interviews was analyzed in order to create the process for developing peaceful wellbeing. Focus group discussions with mindfulness specialists were also conducted to help develop the mindfulness program. The research found that the executives faced the following problems: stress, negative thinking loops, losing their temper, seeking acceptance, worry about uncontrollable external factors, inability to control their words, and weight gain.
Cultivating the four foundations of mindfulness can develop peaceful wellbeing. The results showed that after the key-informant executives attended the mindfulness courses and practiced mindfulness regularly, they developed peaceful wellbeing in all aspects (physical, psychological, behavioral, and intellectual) by applying 12 mindfulness-based activities. The development of wellbeing described in this study also includes various tools to support continued practice: a handout of guided mindfulness practice, video clips about mindfulness practice, an online dhamma channel, and mobile applications that support regular mindfulness-based practice.

Keywords: executive, mindfulness activities, stress, wellbeing

Procedia PDF Downloads 121
743 Detecting Impact of Allowance Trading Behaviors on Distribution of NOx Emission Reductions under the Clean Air Interstate Rule

Authors: Yuanxiaoyue Yang

Abstract:

Emissions trading, or ‘cap-and-trade’, has long been promoted by economists as a more cost-effective pollution control approach than traditional performance standards. While there is a large body of empirical evidence for the overall effectiveness of emissions trading, relatively little attention has been paid to its unintended consequences. One important consequence is that cap-and-trade introduces the risk of creating high emission concentrations in areas where emitting facilities purchase a large number of emission allowances, which may cause an unequal distribution of environmental benefits. This study contributes to the environmental policy literature by linking trading activity with environmental injustice concerns and, for the first time, empirically analyzing the causal relationship between trading activity and emissions reduction under a cap-and-trade program. To investigate the potential environmental injustice concern in cap-and-trade, this paper uses a difference-in-differences (DID) design with an instrumental variable (IV) to identify the causal effect of allowance trading behaviors on emission reduction levels under the Clean Air Interstate Rule (CAIR), a cap-and-trade program targeting the power sector in the eastern US. The major data source is facility-year level emissions and allowance transaction data collected from the US EPA air market databases. While polluting facilities under CAIR form the treatment group in our DID identification, we use non-CAIR facilities from the Acid Rain Program, another NOx control program without a trading scheme, as the control group. To isolate the causal effects of trading behaviors on emissions reduction, we use eligibility for CAIR participation as the instrumental variable.
The DID results indicate that the CAIR program reduced NOx emissions at affected facilities by about 10% more than at facilities that did not participate in the program, so CAIR achieved excellent overall performance in emissions reduction. The IV regression results also indicate that, compared with non-CAIR facilities, purchasing emission permits still significantly decreases a CAIR-participating facility's emissions level. This implies that even buyers under the cap-and-trade program achieved substantial emissions reductions. We therefore find little evidence of environmental injustice from the CAIR program.
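The core of the DID identification above is a simple contrast of before/after changes across the two groups. A minimal sketch of the point estimate from four group means follows; the facility-level emission figures are hypothetical toy inputs, not the study's data, and the full analysis would use regression with the IV stage, controls, and standard errors.

```python
def mean(xs):
    """Arithmetic mean of a non-empty sequence."""
    return sum(xs) / len(xs)

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences point estimate:
    (treated post-pre change) minus (control post-pre change)."""
    return ((mean(treated_post) - mean(treated_pre))
            - (mean(control_post) - mean(control_pre)))

# Hypothetical facility-level NOx emissions (tons), before/after the policy.
cair_pre, cair_post = [100, 120, 80], [70, 85, 55]   # CAIR (treatment)
arp_pre, arp_post = [100, 110, 90], [95, 105, 85]    # ARP-only (control)
print(did_estimate(cair_pre, cair_post, arp_pre, arp_post))  # → -25.0
```

Here CAIR facilities drop 30 tons on average while the control group drops 5, so the DID estimate attributes a 25-ton reduction to the program; the common-trends assumption is what licenses that causal reading.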

Keywords: air pollution, cap-and-trade, emissions trading, environmental justice

Procedia PDF Downloads 152