Search results for: single machine scheduling

518 Electret: A Solution of Partial Discharge in High Voltage Applications

Authors: Farhina Haque, Chanyeop Park

Abstract:

The high efficiency, high field, and high power density provided by wide bandgap (WBG) semiconductors and advanced power electronic converter (PEC) topologies enabled the dynamic control of power in medium to high voltage systems. Although WBG semiconductors outperform conventional silicon-based devices in terms of voltage rating, switching speed, and efficiency, the increased voltage handling properties, high dv/dt, and compact device packaging increase local electric fields, which are the main causes of partial discharge (PD) in advanced medium and high voltage applications. PD, which occurs actively in voids, triple points, and airgaps, is an inevitable dielectric challenge that causes insulation and device aging. The aging process accelerates over time and eventually leads to the complete failure of the applications. Hence, it is critical to mitigate PD. Sharp edges, airgaps, triple points, and bubbles are common defects that exist in any medium to high voltage device. The defects are created during the manufacturing processes of the devices and are prone to high-electric-field-induced PD due to the low permittivity and low breakdown strength of the gaseous medium filling the defects. A contemporary approach of mitigating PD by neutralizing electric fields in high power density applications is introduced in this study. To neutralize the locally enhanced electric fields that occur around the triple points, airgaps, sharp edges, and bubbles, electrets are developed and incorporated into high voltage applications. Electrets are electric-field-emitting dielectric materials that are embedded with electrical charges on the surface and in the bulk. In this study, electrets are fabricated by electrically charging polyvinylidene difluoride (PVDF) films based on the widely used triode corona discharge method. To investigate the PD mitigation performance of the fabricated electret films, a series of PD experiments are conducted on both the charged and uncharged PVDF films under square voltage stimuli that represent PWM waveforms. In addition to the use of single-layer electrets, multiple layers of electrets are also experimented with to mitigate PD caused by higher system voltages. The electret-based approach shows great promise in mitigating PD by neutralizing the local electric field. The results of the PD measurements suggest that the development of an ultimate solution to the decades-long dielectric challenge would be possible with further developments in the fabrication process of electrets.

Keywords: electrets, high power density, partial discharge, triode corona discharge

Procedia PDF Downloads 180
517 Comparison of the Effect of Heart Rate Variability Biofeedback and Slow Breathing Training on Promoting Autonomic Nervous Function Related Performance

Authors: Yi Jen Wang, Yu Ju Chen

Abstract:

Background: Heart rate variability (HRV) biofeedback can promote autonomic nervous function, improve sleep quality and reduce psychological stress. In HRV biofeedback training, the patient is guided by machine video or audio to breathe slowly in accordance with his or her own heart rate changes so that the heart and lungs achieve resonance, thereby promoting autonomic nervous function; it has also been pointed out that slow breathing at 6 breaths per minute can guide the patient to achieve cardiopulmonary resonance. However, no research has compared the effectiveness of achieving cardiopulmonary resonance through video- or audio-guided HRV biofeedback training versus metronome-guided slow breathing. Purpose: To compare the promotion of autonomic nervous function between HRV biofeedback and metronome-guided slow breathing. Method: This is an experimental design with convenience sampling; cases were randomly divided into an HRV biofeedback training group and a slow breathing training group. The HRV biofeedback group conducted four weeks of laboratory HRV biofeedback training and used a home training device for autonomous practice, while the slow breathing group conducted four weeks of laboratory slow breathing training guided by a mobile phone breathing-metronome app and used the app for autonomous practice at home. Autonomic nervous function-related performance was measured at enrolment and again four weeks after the intervention. The results were analyzed using the chi-square test, Student's t-test and other statistical methods, with p < 0.05 taken as the threshold for statistical significance. Results: A total of 27 subjects were included in the analysis. After four weeks of training, the HRV biofeedback group showed significant improvement in the HRV indexes (SDNN, RMSSD, HF, TP) and in sleep quality; although the stress index also decreased, the change did not reach statistical significance. In the slow breathing group, only sleep quality improved significantly after four weeks of training; the HRV indexes (SDNN, RMSSD, TP) all increased, and although HF and the stress index decreased, these changes were not statistically significant. Comparing the two groups after training, the HF index improved significantly in the HRV biofeedback group and reached statistical significance. Although sleep quality improved in both groups, the between-group difference was not statistically significant. Conclusion: HRV biofeedback training is more effective than slow breathing training in promoting autonomic nervous function, but its effects on stress reduction and sleep quality should be explored further with a larger sample. The results of this study can serve as a reference for clinical or community health promotion. In the future, HRV biofeedback training could also be integrated into AI-enabled wearable devices, making it more convenient for people to train independently and receive timely, effective feedback.
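For reference, the time-domain HRV indexes cited above (SDNN and RMSSD) have standard definitions; the abstract does not state which implementation the training devices use, so the following is only the conventional formulation, with NN_i denoting the i-th normal-to-normal inter-beat interval:

$$\mathrm{SDNN}=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(NN_i-\overline{NN}\right)^2},\qquad \mathrm{RMSSD}=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N-1}\left(NN_{i+1}-NN_i\right)^2}$$

HF and TP denote the high-frequency and total spectral power of the same NN-interval series.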

Keywords: autonomic nervous function, HRV biofeedback, heart rate variability, slow breathing

Procedia PDF Downloads 147
516 Antineoplastic Effect of Tridham and Penta Galloyl Glucose in Experimental Mammary Carcinoma Bearing Rats

Authors: Karthick Dharmalingam, Stalin Ramakrishnan, Haseena Banu Hedayathullah Khan, Sachidanandanam Thiruvaiyaru Panchanadham, Shanthi Palanivelu

Abstract:

Background: Breast cancer is emerging as the most dreaded cancer affecting women worldwide. Hence, there is a need to search for and test new drugs. Herbal formulations used in Siddha preparations have proved to be effective against various types of cancer. They also offer an advantage through synergistic amplification and diminished adverse effects. Tridham (TD) is a herbal formulation prepared in our laboratory consisting of Terminalia chebula, Elaeocarpus ganitrus and Prosopis cineraria in a definite ratio and has been used for the treatment of mammary carcinoma. Objective: To study the restorative effect of Tridham and penta galloyl glucose (a component of TD) on DMBA-induced mammary carcinoma in female Sprague Dawley rats. Materials and Methods: Rats were divided into seven groups of six animals each. Group I (control) received corn oil. In Group II, mammary carcinoma was induced by a single oral dose of DMBA dissolved in corn oil. Group III and Group IV were induced with DMBA and subsequently treated with Tridham and penta galloyl glucose, respectively, for 48 days. Group V was treated with DMBA and subsequently with a standard drug, cyclophosphamide. Group VI and Group VII were given Tridham and penta galloyl glucose alone, respectively, for 48 days. After the experimental period, the animals were sacrificed by cervical decapitation. The mammary gland tissue was excised, and the levels of antioxidants were determined by biochemical assay. p53 and PCNA expression were assessed using immunohistochemistry. Nrf-2, Cox-2 and caspase-3 protein expression were studied by Western blotting analysis. p21, Bcl-2, Bax, Bad and caspase-8 gene expression were studied by RT-PCR. Results: Histopathological studies confirmed the induction of mammary carcinoma in DMBA-induced rats, and treatment with TD and PGG resulted in regression of the tumour. The levels of enzymic and non-enzymic antioxidants were decreased in DMBA-induced rats when compared to control rats. The levels of cell cycle inhibitory markers and apoptotic markers were also decreased in DMBA-induced rats when compared to control rats. These parameters were restored to near-normal levels on treatment with Tridham and PGG. Conclusion: The results of the present study indicate that the antineoplastic effects of Tridham and PGG are exerted through modulation of antioxidant status and the expression of cell cycle regulatory markers as well as apoptotic markers. Acknowledgment: Financial assistance provided in the form of an ICMR-SRF by the Indian Council of Medical Research (ICMR), India, is gratefully acknowledged.

Keywords: antioxidants, mammary carcinoma, penta galloyl glucose, Tridham

Procedia PDF Downloads 254
515 The Effect of Lead(II) Lone Electron Pair and Non-Covalent Interactions on the Supramolecular Assembly and Fluorescence Properties of Pb(II)-Pyrrole-2-Carboxylato Polymer

Authors: M. Kowalik, J. Masternak, K. Kazimierczuk, O. V. Khavryuchenko, B. Kupcewicz, B. Barszcz

Abstract:

Recently, the growing interest of chemists in metal-organic coordination polymers (MOCPs) is primarily derived from their intriguing structures and potential applications in catalysis, gas storage, molecular sensing, ion exchanges, nonlinear optics, luminescence, etc. Currently, we are devoting considerable effort to finding the proper method of synthesizing new coordination polymers containing S- or N-heteroaromatic carboxylates as linkers and characterizing the obtained Pb(II) compounds according to their structural diversity, luminescence, and thermal properties. The choice of Pb(II) as the central ion of MOCPs was motivated by several reasons mentioned in the literature: i) a large ionic radius allowing for a wide range of coordination numbers, ii) the stereoactivity of the 6s2 lone electron pair leading to a hemidirected or holodirected geometry, iii) a flexible coordination environment, and iv) the possibility to form secondary bonds and unusual non-covalent interactions, such as classic hydrogen bonds and π···π stacking interactions, as well as nonconventional hydrogen bonds and rarely reported tetrel bonds, Pb(lone pair)···π interactions, C–H···Pb agostic-type interactions or hydrogen bonds, and chelate ring stacking interactions. Moreover, the construction of coordination polymers requires the selection of proper ligands acting as linkers, because we are looking for materials exhibiting different network topologies and fluorescence properties, which point to potential applications. The reaction of Pb(NO₃)₂ with 1H-pyrrole-2-carboxylic acid (2prCOOH) leads to the formation of a new four-nuclear Pb(II) polymer, [Pb4(2prCOO)₈(H₂O)]ₙ, which has been characterized by CHN, FT-IR, TG, PL and single-crystal X-ray diffraction methods. In view of the primary Pb–O bonds, Pb1 and Pb2 show hemidirected pentagonal pyramidal geometries, while Pb2 and Pb4 display hemidirected octahedral geometries. The topology of the strongest Pb–O bonds was determined as the (4·8²) fes topology. Taking the secondary Pb–O bonds into account, the coordination number of Pb centres increased, Pb1 exhibited a hemidirected monocapped pentagonal pyramidal geometry, Pb2 and Pb4 exhibited a holodirected tricapped trigonal prismatic geometry, and Pb3 exhibited a holodirected bicapped trigonal prismatic geometry. Moreover, the Pb(II) lone pair stereoactivity was confirmed by DFT calculations. The 2D structure was expanded into 3D by the existence of non-covalent O/C–H···π and Pb···π interactions, which was confirmed by the Hirshfeld surface analysis. The above mentioned interactions improve the rigidity of the structure and facilitate the charge and energy transfer between metal centres, making the polymer a promising luminescent compound.

Keywords: coordination polymers, fluorescence properties, lead(II), lone electron pair stereoactivity, non-covalent interactions

Procedia PDF Downloads 122
514 Development of High-Efficiency Down-Conversion Fluoride Phosphors to Increase the Efficiency of Solar Panels

Authors: S. V. Kuznetsov, M. N. Mayakova, V. Yu. Proydakova, V. V. Pavlov, A. S. Nizamutdinov, O. A. Morozov, V. V. Voronov, P. P. Fedorov

Abstract:

Increasing the share of electricity obtained by converting solar energy reduces the environmental impact of hydrocarbon energy sources. One way to increase this share is to improve the efficiency of solar energy conversion in silicon-based solar panels. Such an efficiency increase can be achieved by transferring energy from spectral regions where silicon solar panels are insensitive to the region of their photosensitivity. To achieve this goal, a transition to new luminescent materials with a high quantum yield of luminescence is necessary. The quantum yield can be improved by quantum cutting, which allows a down-conversion quantum yield of more than 150% to be obtained by splitting high-energy photons of the UV spectral range into lower-energy photons of the visible and near-infrared spectral ranges. The goal of the present work is to test an excitation approach based on sensitization of the 4f-4f fluorescence of Yb³⁺ by various rare-earth ions absorbing in the UV and visible spectral ranges. Fluorides are among the promising materials for quantum-cutting luminophores. In our investigation, we have developed the synthesis of nano- and submicron powders of calcium and strontium fluorides doped with rare-earth elements (Yb:Ce, Yb:Pr, Yb:Eu) with controlled dimensions and shape by a co-precipitation from aqueous solution technique. We used Ca(NO₃)₂·4H₂O, Sr(NO₃)₂, HF and NH₄F as precursors. After the initial nitrate solutions were prepared, they were mixed with the fluorine-containing solution in a dropwise manner. According to XRD data, the synthesis resulted in single-phase samples with the fluorite structure. By means of SEM measurements, we confirmed the spherical morphology and determined the particle sizes (50-100 nm after synthesis and 150-300 nm after calcination). The calcination temperature was 600°C. We have investigated the spectral-kinetic characteristics of the above-mentioned compounds. The diffuse reflection and laser-induced fluorescence spectra of Yb³⁺ ions excited at around the 4f-4f and 4f-5d transitions of Pr³⁺, Eu³⁺ and Ce³⁺ ions in the synthesized powders are reported. The investigation of the down-conversion luminescence capability of the synthesized compounds included measurements of fluorescence decays and of the quantum yield of the ²F₅/₂-²F₇/₂ fluorescence of Yb³⁺ ions as a function of Yb³⁺ and sensitizer contents. An optimal chemical composition of CaF₂-YbF₃-LnF₃ (Ln = Ce, Eu, Pr) and SrF₂-YbF₃-LnF₃ (Ln = Ce, Eu, Pr) micro- and nano-powders according to the criterion of maximal IR fluorescence yield is proposed. We consider the investigated materials promising for solar panel improvement applications. The work was supported by Russian Science Foundation grant #17-73-20352.

Keywords: solar cell, fluorides, down-conversion luminescence, maximum quantum yield

Procedia PDF Downloads 245
513 The Effect of Rheological Properties and Spun/Meltblown Fiber Characteristics on “Hotmelt Bleed through” Behavior in High Speed Textile Backsheet Lamination Process

Authors: Kinyas Aydin, Fatih Erguney, Tolga Ceper, Serap Ozay, Ipar N. Uzun, Sebnem Kemaloglu Dogan, Deniz Tunc

Abstract:

In order to meet high growth rates in the baby diaper industry worldwide, high-speed textile backsheet (TBS) lamination lines have recently been introduced to the market for nonwoven/film lamination applications. TBS lamination is a process in which two substrates are bonded to each other via a hotmelt adhesive (HMA). The nonwoven (NW) lamination system basically consists of four components: polypropylene (PP) nonwoven, polyethylene (PE) film, HMA and the applicator system. Each component has a substantial effect on the process efficiency of the continuous line and on the final product properties. However, for a precise coverage of the subject, we will address only the main challenges and possible solutions in this paper. The NW is often produced by the spunbond method (SSS or SMS configuration) and has a 10-12 gsm (g/m²) basis weight. The NW rolls can have a width and length of up to 2,060 mm and 30,000 linear meters, respectively. The PE film is the second component in TBS lamination and is usually a 12-14 gsm blown or cast breathable film. HMA is a thermoplastic glue (mostly rubber based) that can be applied over a wide range of viscosities. The main HMA application technology in TBS lamination is slot die application, in which the HMA is spread in the melt form on top of the NW along the whole width at high temperature. The NW is then passed over chiller rolls, with a certain open time depending on the line speed. HMAs are applied at defined add-on levels in order to provide proper delamination strength in the cross and machine directions to the entire structure. Current TBS lamination line speed and width can be as high as 800 m/min and 2100 mm, respectively, and the lines also feature an automated web tension control system for winders and unwinders. In order to run a continuous, trouble-free mass production campaign on these fast industrial TBS lines, the rheological properties of HMAs and the micro-properties of NWs must be managed, since both can adversely affect line efficiency and continuity. NW fiber orientation and fineness, as well as the micro-level properties of the spun/melt blown fabric composition, are significant factors affecting the degree of 'HMA bleed through.' As a result of this problem, frequent line stops are needed to clean the glue that accumulates on the chiller rolls, which significantly reduces line efficiency. HMA rheology is also important, and to eliminate any bleed-through problem, one should have a good understanding of rheology-driven potential complications. The applied viscosity/temperature should therefore be optimized in accordance with the line speed, line width, NW characteristics and the required open time for a given HMA formulation. In this study, we show practical aspects of potential preventative actions to minimize the HMA bleed-through problem, which may stem from both HMA rheological properties and NW spun/melt blown fiber characteristics.

Keywords: breathable, hotmelt, nonwoven, textile backsheet lamination, spun/melt blown

Procedia PDF Downloads 331
512 Validating the Micro-Dynamic Rule in Opinion Dynamics Models

Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validate the rule. A few studies started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data from experimental questions, without testing if differences existed between them. Indeed, it is possible that different topics could show different dynamics. For example, people may be more prone to accepting someone's else opinion regarding less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') and the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed to the participant someone else's opinion on the same topic and, after a distraction task, we repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree=1 and disagree=-1) with the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar between all topics. This suggested that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants that change opinion tend to maintain similar levels of certainty. This is in contrast with typical micro-dynamics rules, where agents move to an average point instead of directly jumping to the opposite continuous opinion. As expected, in the data, we also observed the effect of social influence. This means that exposing someone with 'agree' or 'disagree' influenced participants to respectively higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than the social influence’s one. We even observed cases of people that changed from 'agree' to 'disagree,' even if they were exposed to 'agree.' This phenomenon is surprising, as, in the standard literature, the strength of the noise is usually smaller than the strength of social influence. Finally, we also built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule. This also allows us to build models which are directly grounded on experimental results.
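A minimal sketch of the opinion encoding described above, together with two illustrative update rules: a conventional averaging rule and a sign-flip rule closer to the behaviour reported in the data (participants who switch from 'agree' to 'disagree' keep a similar certainty). The function names and the parameters mu and p_flip are our own illustrative choices, not the authors' fitted model.

```python
import random

def encode(label: str, certainty: int) -> int:
    """Map ('agree' | 'disagree', certainty 1-10) to a continuous opinion in [-10, 10]."""
    return certainty if label == "agree" else -certainty

def averaging_update(x_i: float, x_j: float, mu: float = 0.3) -> float:
    """Typical micro-dynamic rule in the literature: move part-way toward the partner."""
    return x_i + mu * (x_j - x_i)

def sign_flip_update(x_i: float, x_j: float, p_flip: float = 0.2) -> float:
    """Rule closer to the reported observations: occasionally flip the label (sign)
    while keeping a similar level of certainty (magnitude)."""
    if x_i * x_j < 0 and random.random() < p_flip:
        return -x_i
    return x_i

x = encode("disagree", 7)                        # -7
print(averaging_update(x, encode("agree", 5)))   # drifts toward the partner's opinion
print(sign_flip_update(x, encode("agree", 5)))   # either -7 (no change) or +7 (flip)
```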

Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule

Procedia PDF Downloads 133
511 Distribution of Micro Silica Powder at a Ready Mixed Concrete

Authors: Kyong-Ku Yun, Dae-Ae Kim, Kyeo-Re Lee, Kyong Namkung, Seung-Yeon Han

Abstract:

Micro silica is collected as a by-product of silicon and ferrosilicon alloy production in electric arc furnaces using highly pure quartz, wood chips, coke and the like. It consists of about 85% silicon dioxide and has spherical particles with an average particle size of 150 μm. The bulk density of micro silica varies from 150 to 700 kg/m³, and its fineness ranges from 150,000 to 300,000 cm²/g. The amorphous structure and high silicon oxide content of micro silica, together with its large surface area (about 20 m²/g), induce an active reaction with the calcium hydroxide (Ca(OH)₂) generated by cement hydration, forming calcium silicate hydrate (C-S-H). Micro silica tends to act as a filler because of its fine particles and spherical shape. These particles do not get covered by water, and they fit well into the spaces between the relatively rough cement grains, which do not allow the concrete to fluidize freely. On the contrary, the water demand increases, since micro silica particles tend to absorb water because of their large surface area. The overall effect of micro silica depends on the amount added, together with other parameters such as the water-to-(cement + micro silica) ratio and the availability of superplasticizer. This research studied cellular sprayed concrete. The method involves the direct re-production of ready mixed concrete into a high-performance concrete at the job site. It can reduce the cost of construction by adding cellular foam and micro silica into a ready mixed concrete truck in the field. In addition, micro silica, which is difficult to mix in the field because of its high fineness, can be added and dispersed in the concrete by increasing the fluidity of the ready mixed concrete through the surface activity of the cellular foam. The increased air content converges to a certain level after spraying, and the remixing of the powders during the spraying process also produces high-performance concrete. As no field mixing equipment is used, the cost of construction decreases, and the work can be carried out after installing a special spray machine on a commercial pump car. Therefore, the use of special equipment is minimized, providing economic feasibility through the utilization of existing equipment. This study was carried out to evaluate a highly reliable method of confirming dispersion in high-performance cellular sprayed concrete. A mixture of 25 mm coarse aggregate and river sand was applied to the concrete. By applying silica fume and foam, silica fume dispersion was examined as a function of foam dosage: the mean and standard deviation were obtained, and the coefficient of variation was then calculated to evaluate the dispersion. Comparison and analysis before and after spraying were conducted on the experimental variables of 21 L and 35 L of foam for 7% and 14% silica fume, respectively. Taking foam and silica fume as the variables, the experiment proceeded; a specimen was cast for each variable, and a five-day sample was taken from each specimen for EDS testing. In this study, the experimental materials, mix design plan, test methods, and equipment used to evaluate dispersion as a function of micro silica and foam content are described.
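As a rough illustration of the dispersion metric described above, the coefficient of variation is simply the standard deviation divided by the mean of the measured silica fume fractions; a minimal sketch in Python, where the EDS readings are hypothetical placeholder values, not data from the study:

```python
import statistics

def coefficient_of_variation(readings):
    """Coefficient of variation (CV = standard deviation / mean); a lower CV
    indicates more uniform silica fume dispersion across sampling points."""
    return statistics.stdev(readings) / statistics.mean(readings)

# Hypothetical EDS silica fume fractions (%) from five points of one specimen
eds_readings = [6.8, 7.3, 6.9, 7.6, 7.1]
print(f"CV = {coefficient_of_variation(eds_readings):.3f}")
```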

Keywords: micro silica, distribution, ready mixed concrete, foam

Procedia PDF Downloads 186
510 Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength

Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph

Abstract:

Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft and binder yarns or tows woven together into a preform. The mechanical performance of a composite material is largely controlled by the properties of the preform. As a result, the bulk of recent textile research has been focused on the design of high-strength preform architectures, while studies looking at optimisation of the weaving process have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters involved in causing this yarn damage are not fully understood. Recent studies indicate that poor yarn tension control may be an influencing factor. As tension is increased, the yarn-to-yarn and yarn-to-weaving-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and weaving damage severity has never been adequately researched or quantified. A novel study is therefore needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study has looked to quantify the variation of yarn tension throughout weaving and sought to link the impact of tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving using a hand-held tension meter. Sections of the same yarns were subsequently cut from the loom and tensile tested. A comparison study was made between the tensile strength of pristine and tensioned yarns to determine the induced weaving damage. Yarns from bobbins at the rear of the creel were under the least amount of tension (0.5-2.0 N) compared to yarns positioned at the front of the creel (1.5-3.5 N). This increase in tension has been linked to the sharp turn in the yarn path between bobbins at the front of the creel and the creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the more highly tensioned yarns. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a lower tension (3.0-4.5 N) than the lower shed yarns (4.0-5.5 N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the more highly tensioned yarns. Interestingly, the most severely damaged yarn was exposed to both the largest creel and the largest shedding tensions. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to a variance of mechanical properties across the woven preform and ultimately the final composite part. The outcome of this study highlights the need for optimised yarn tension control during preform manufacture to minimize yarn-induced weaving damage.

Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage

Procedia PDF Downloads 205
509 Nanostructured Pt/MnO2 Catalysts and Their Performance for Oxygen Reduction Reaction in Air Cathode Microbial Fuel Cell

Authors: Maksudur Rahman Khan, Kar Min Chan, Huei Ruey Ong, Chin Kui Cheng, Wasikur Rahman

Abstract:

Microbial fuel cells (MFCs) represent a promising technology for simultaneous bioelectricity generation and wastewater treatment. Catalysts are significant portions of the cost of microbial fuel cell cathodes. Many materials have been tested as aqueous cathodes, but air-cathodes are needed to avoid energy demands for water aeration. The sluggish oxygen reduction reaction (ORR) rate at air cathode necessitates efficient electrocatalyst such as carbon supported platinum catalyst (Pt/C) which is very costly. Manganese oxide (MnO2) was a representative metal oxide which has been studied as a promising alternative electrocatalyst for ORR and has been tested in air-cathode MFCs. However, the single MnO2 has poor electric conductivity and low stability. In the present work, the MnO2 catalyst has been modified by doping Pt nanoparticle. The goal of the work was to improve the performance of the MFC with minimum Pt loading. MnO2 and Pt nanoparticles were prepared by hydrothermal and sol-gel methods, respectively. Wet impregnation method was used to synthesize Pt/MnO2 catalyst. The catalysts were further used as cathode catalysts in air-cathode cubic MFCs, in which anaerobic sludge was inoculated as biocatalysts and palm oil mill effluent (POME) was used as the substrate in the anode chamber. The as-prepared Pt/MnO2 was characterized comprehensively through field emission scanning electron microscope (FESEM), X-Ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and cyclic voltammetry (CV) where its surface morphology, crystallinity, oxidation state and electrochemical activity were examined, respectively. XPS revealed Mn (IV) oxidation state and Pt (0) nanoparticle metal, indicating the presence of MnO2 and Pt. Morphology of Pt/MnO2 observed from FESEM shows that the doping of Pt did not cause change in needle-like shape of MnO2 which provides large contacting surface area. The electrochemical active area of the Pt/MnO2 catalysts has been increased from 276 to 617 m2/g with the increase in Pt loading from 0.2 to 0.8 wt%. The CV results in O2 saturated neutral Na2SO4 solution showed that MnO2 and Pt/MnO2 catalysts could catalyze ORR with different catalytic activities. MFC with Pt/MnO2 (0.4 wt% Pt) as air cathode catalyst generates a maximum power density of 165 mW/m3, which is higher than that of MFC with MnO2 catalyst (95 mW/m3). The open circuit voltage (OCV) of the MFC operated with MnO2 cathode gradually decreased during 14 days of operation, whereas the MFC with Pt/MnO2 cathode remained almost constant throughout the operation suggesting the higher stability of the Pt/MnO2 catalyst. Therefore, Pt/MnO2 with 0.4 wt% Pt successfully demonstrated as an efficient and low cost electrocatalyst for ORR in air cathode MFC with higher electrochemical activity, stability and hence enhanced performance.

Keywords: microbial fuel cell, oxygen reduction reaction, Pt/MnO2, palm oil mill effluent, polarization curve

Procedia PDF Downloads 531
508 Industrial Waste Multi-Metal Ion Exchange

Authors: Thomas S. Abia II

Abstract:

Intel Chandler Site has internally developed its first-of-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Spanning a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (average baseline ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was consequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Data generated from lab-scale studies were transferred to system operating modifications following multiple trial-and-error experiments. Despite the DMW treatment system failing to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg·L⁻¹ at pre-pilot to 1.1 ± 1.2 mg·L⁻¹ post-pilot (83% baseline reduction). This milestone was achieved even though the average manganese influent to the DMW increased from 1.0 ± 13.7 mg·L⁻¹ at pre-pilot to 2.1 ± 0.2 mg·L⁻¹ post-pilot (110% baseline uptick). Likewise, the pre-trial and post-trial average copper influent values to the DMW were 22.4 ± 10.2 mg·L⁻¹ and 32.1 ± 39.1 mg·L⁻¹, respectively (43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg·L⁻¹ and 0.4 ± 1.2 mg·L⁻¹, respectively (300% baseline uptick). Conclusively, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the largest single point of influence for optimizing manganese uptake during multi-metal ion exchange. However, the high variability of the influent copper-to-manganese ratio was observed to adversely impact system functionality. This paper discusses the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as influent copper-to-manganese ratio variations, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. The take-away from this work is intended to inform an analysis of the overall feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real estate restrictions, aggressive schedules, or budgetary constraints.
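As a quick check, the "baseline reduction/uptick" percentages quoted above follow directly from the reported pre- and post-pilot averages; a minimal sketch (values copied from the abstract):

```python
def percent_change(pre: float, post: float) -> float:
    """Percent change relative to the pre-pilot baseline average."""
    return (post - pre) / pre * 100.0

print(percent_change(6.5, 1.1))    # ≈ -83%  (manganese output: baseline reduction)
print(percent_change(1.0, 2.1))    # ≈ +110% (manganese influent: baseline uptick)
print(percent_change(22.4, 32.1))  # ≈ +43%  (copper influent: baseline increase)
print(percent_change(0.1, 0.4))    # ≈ +300% (copper output: baseline uptick)
```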

Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese

Procedia PDF Downloads 118
507 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desirable to develop an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements in the presence of various interfering ions. The measured absorbance data are input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, with each LED providing a narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows the absorbance of the sample to be measured at the three wavelengths. To train the regression model, the absorbances of nitrite ions and their combinations with various interfering ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. Then, the spectrophotometric data are input to different regression algorithm models for training and evaluation of high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables rapid nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative errors to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
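The abstract does not specify which regression algorithm performed best, so the following is only a minimal sketch of the general idea: fit a multivariate regression that maps absorbances at the three LED wavelengths to nitrite concentration. The training values below are invented placeholders, not data from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical calibration set: rows are standards of known nitrite concentration,
# columns are absorbances at 295 nm, 310 nm and 357 nm.
A_train = np.array([
    [0.02, 0.03, 0.01],
    [0.10, 0.14, 0.06],
    [0.21, 0.30, 0.12],
    [0.43, 0.61, 0.25],
    [0.88, 1.22, 0.51],
])
c_train = np.array([0.1, 1.0, 2.0, 4.0, 8.0])  # ppm

model = LinearRegression().fit(A_train, c_train)

# Absorbances of an unknown sample measured with the three LEDs
A_sample = np.array([[0.32, 0.45, 0.19]])
print(f"Predicted nitrite concentration: {model.predict(A_sample)[0]:.2f} ppm")
```

In practice the calibration set would also include mixtures with interfering ions, which is what lets the three-wavelength measurement suppress cross-sensitivity.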

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 46
506 Slope Stabilisation of Highly Fractured Geological Strata Consisting of Mica Schist Layers While Construction of Tunnel Shaft

Authors: Saurabh Sharma

Abstract:

Introduction: The case study deals with the ground stabilisation of Nabi Karim Metro Station in Delhi, India, wherein an extremely complex geology was encountered while excavating the tunnelling shaft for launching Tunnel Boring Machine. The borelog investigation and the Seismic Refraction Technique (SRT) indicated towards the presence of an extremely hard rocky mass from a depth of 3-4 m itself, and accordingly, the Geotechnical Interpretation Report (GIR) concluded the presence of Grade-IV rock from 3m onwards and presence of Grade-III and better rock from 5-6m onwards. Accordingly, it was planned to retain the ground by providing secant piles all around the launching shaft and then excavating the shaft vertically after leaving a berm of 1.5m to prevent secant piles from getting exposed. To retain the side slopes, rock bolting with shotcreting and wire meshing were proposed, which is a normal practice in such strata. However, with the increase in depth of excavation, the rock quality kept on decreasing at an unexpected and surprising pace, with the Grade-III rock mass at 5-6 m converting to conglomerate formation at the depth of 15m. This worsening of geology from high grade rock to slushy conglomerate formation can never be predicted and came as a surprise to even the best geotechnical engineers. Since the excavation had already been cut down vertically to manage the shaft size, the execution was continued with enhanced cautions to stabilise the side slopes. But, when the shaft work was about to finish, a collapse was encountered on one side of the excavation shaft. This collapse was unexpected and surprising since all measures to stabilise the side slopes had been taken after face mapping, and the grid size, diameter, and depth of the rockbolts had already been readjusted to accommodate rock fractures. The above scenario was baffling even to the best geologists and geotechnical engineers, and it was decided that any further slope stabilisation scheme shall have to be designed in such a way to ensure safe completion of works. Accordingly, following revisions to excavation scheme were made: The excavation would be carried while maintaining a slope based on type of soil/rock. The rock bolt type was changed from SN rockbolts to Self Drilling type anchor. The grid size of the bolts changed on real time assessment. the excavation carried out by implementing a ‘Bench Release Approach’. Aggressive Real Time Instrumentation Scheme. Discussion: The above case Study again asserts vitality of correct interpretation of the geological strata and the need of real time revisions of the construction schemes based on the actual site data. The excavation is successfully being done with the above revised scheme, and further details of the Revised Slope Stabilisation Scheme, Instrumentation Schemes, Monitoring results, along with the actual site photographs, shall form the part of the final Paper.

Keywords: unconfined compressive strength (ucs), rock mass rating (rmr), rock bolts, self drilling anchors, face mapping of rock, secant pile, shotcrete

Procedia PDF Downloads 48
505 High-Resolution Facial Electromyography in Freely Behaving Humans

Authors: Lilah Inzelberg, David Rand, Stanislav Steinberg, Moshe David Pur, Yael Hanein

Abstract:

Human facial expressions carry important psychological and neurological information. Facial expressions involve the co-activation of diverse muscles. They depend strongly on personal affective interpretation and on social context and vary between spontaneous and voluntary activations. Smiling, as a special case, is among the most complex facial emotional expressions, involving no fewer than 7 different unilateral muscles. Despite their ubiquitous nature, smiles remain an elusive and debated topic. Smiles are associated with happiness and greeting on one hand and anger or disgust-masking on the other. Accordingly, while high-resolution recording of muscle activation patterns, in a non-interfering setting, offers exciting opportunities, it remains an unmet challenge, as contemporary surface facial electromyography (EMG) methodologies are cumbersome, restricted to the laboratory settings, and are limited in time and resolution. Here we present a wearable and non-invasive method for objective mapping of facial muscle activation and demonstrate its application in a natural setting. The technology is based on a recently developed dry and soft electrode array, specially designed for surface facial EMG technique. Eighteen healthy volunteers (31.58 ± 3.41 years, 13 females), participated in the study. Surface EMG arrays were adhered to participant left and right cheeks. Participants were instructed to imitate three facial expressions: closing the eyes, wrinkling the nose and smiling voluntary and to watch a funny video while their EMG signal is recorded. We focused on muscles associated with 'enjoyment', 'social' and 'masked' smiles; three categories with distinct social meanings. We developed a customized independent component analysis algorithm to construct the desired facial musculature mapping. First, identification of the Orbicularis oculi and the Levator labii superioris muscles was demonstrated from voluntary expressions. Second, recordings of voluntary and spontaneous smiles were used to locate the Zygomaticus major muscle activated in Duchenne and non-Duchenne smiles. Finally, recording with a wireless device in an unmodified natural work setting revealed expressions of neutral, positive and negative emotions in face-to-face interaction. The algorithm outlined here identifies the activation sources in a subject-specific manner, insensitive to electrode placement and anatomical diversity. Our high-resolution and cross-talk free mapping performances, along with excellent user convenience, open new opportunities for affective processing and objective evaluation of facial expressivity, objective psychological and neurological assessment as well as gaming, virtual reality, bio-feedback and brain-machine interface applications.
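The abstract describes a customized independent component analysis algorithm; as a generic illustration of the same idea, the sketch below applies off-the-shelf FastICA to a multichannel surface-EMG array to separate source activations from their spatial mixing patterns. The array size, sampling length, and random data are placeholders, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder recording: 10 s at 1 kHz from a hypothetical 16-electrode cheek array.
rng = np.random.default_rng(0)
emg = rng.standard_normal((10_000, 16))   # replace with real surface-EMG data

ica = FastICA(n_components=6, random_state=0)
sources = ica.fit_transform(emg)   # (n_samples, n_components) activation time courses
mixing = ica.mixing_               # (n_electrodes, n_components) spatial patterns

# Each column of `mixing` maps one putative muscle source onto the electrode array,
# which is what allows muscle identification independent of exact electrode placement.
print(sources.shape, mixing.shape)
```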

Keywords: affective expressions, affective processing, facial EMG, high-resolution electromyography, independent component analysis, wireless electrodes

Procedia PDF Downloads 220
504 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients

Authors: Abhijit Trailokya

Abstract:

Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America. The guidelines have advocated progressively lower LDL-C targets and more aggressive use of statin therapy. In Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate. There is a lack of information on existing treatment patterns, the profile of patients being treated, and the factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine the lipid control status of high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India and enrolled male and female patients with high-risk dyslipidemia aged 18 to 65 years who had visited their respective physician at a hospital or healthcare center for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (< 70 mg/dL) on lipid-lowering therapy and the associations of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs were analysed. Results: 3089 patients were enrolled in the study, of which 64% were males. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which may be due to an inability to follow therapeutic plans, poor compliance, or inadequate counselling by the physician. Physicians' lack of awareness of recent treatment guidelines might also contribute to patients' poor adherence, for example by not adequately explaining the benefits and risks of a medication and not giving consideration to the patient's lifestyle and the cost of medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A high proportion of the dyslipidemic patients had the comorbid conditions of CVD and diabetes mellitus. Conclusion: As per the European Society of Cardiology guidelines, the ideal LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on a suboptimal dosage of statin, so more aggressive, higher-dose statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients.

Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins

Procedia PDF Downloads 178
503 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web

Authors: Aayushi Somani, Siba P. Samal

Abstract:

Three-dimensional (3D) meshes are data structures which store the geometric information of an object or scene, generally in the form of vertices and edges. Current technology in laser scanning and other geometric data acquisition techniques provides high-resolution sampling, which leads to high-resolution meshes. While high-resolution meshes give better quality rendering and hence are used often, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency. There is also an inherent delay in loading WebGL pages due to large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices, owing to their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach deals with a two-step approach for random accessible progressive compression and its parallel implementation. The first step is the partition of the original mesh into multiple sub-meshes; we then invoke data parallelism on these sub-meshes for their compression. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to completely revolutionize the way e-commerce and virtual reality technology work on consumer electronic devices. The objects can be compressed on the server and transmitted over the network, and the progressive decompression can be performed on the client device and rendered. The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a better and smoother user experience. The approach can also be used in WebVR for widely used activities such as virtual reality shopping, watching movies, and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (compressed size is ~10-15% of the original mesh), processing time (20-22% increase over the serial implementation), and quality of user experience in the web browser.
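The server-side step described above (partition, then data-parallel compression of the sub-meshes) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the partitioner simply slices raw buffers rather than producing spatially coherent patches, and zlib stands in for the actual progressive geometry codec.

```python
import zlib
from concurrent.futures import ProcessPoolExecutor
from typing import List, Tuple

Mesh = Tuple[bytes, bytes]  # (packed vertex buffer, packed face buffer) -- simplified stand-in

def partition(mesh: Mesh, n_parts: int) -> List[Mesh]:
    """Split a mesh into n_parts sub-meshes by slicing its buffers."""
    verts, faces = mesh
    vstep, fstep = len(verts) // n_parts, len(faces) // n_parts
    return [(verts[i * vstep:(i + 1) * vstep], faces[i * fstep:(i + 1) * fstep])
            for i in range(n_parts)]

def compress_submesh(sub: Mesh) -> bytes:
    """Placeholder for a progressive mesh encoder applied to one sub-mesh."""
    verts, faces = sub
    return zlib.compress(verts + faces, 9)

def compress_parallel(mesh: Mesh, n_parts: int = 8) -> List[bytes]:
    """Data parallelism over the sub-meshes: each worker compresses one chunk."""
    subs = partition(mesh, n_parts)
    with ProcessPoolExecutor() as pool:
        return list(pool.map(compress_submesh, subs))

if __name__ == "__main__":
    dummy_mesh = (bytes(8_000_000), bytes(4_000_000))  # placeholder buffers
    chunks = compress_parallel(dummy_mesh)
    print(len(chunks), sum(len(c) for c in chunks))
```

Each compressed chunk can then be streamed and decompressed independently on the client, which is what enables random access before the full model has arrived.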

Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR

Procedia PDF Downloads 141
502 Integration of ICF Walls as Diurnal Solar Thermal Storage with Microchannel Solar Assisted Heat Pump for Space Heating and Domestic Hot Water Production

Authors: Mohammad Emamjome Kashan, Alan S. Fung

Abstract:

In Canada, more than 32% of the total energy demand is related to the building sector. Therefore, there is a great opportunity for greenhouse gas (GHG) reduction by integrating solar collectors to provide the building heating load and domestic hot water (DHW). Despite the cold winter weather, Canada has a good number of sunny and clear days that can be exploited for diurnal solar thermal energy storage. Due to the energy mismatch between the building heating load and solar irradiation availability, relatively big storage tanks are usually needed to store solar thermal energy during the daytime and then use it at night. On the other hand, water tanks occupy a large amount of space, and especially in big cities, space is relatively expensive. This project investigates the possibility of using a specific building construction material (ICF – Insulated Concrete Form) as diurnal solar thermal energy storage that is integrated with a heat pump and a microchannel solar thermal (MCST) collector. Not much literature has studied the application of pre-existing building walls as active solar thermal energy storage as a feasible and industrialized solution for the solar thermal mismatch. By using ICF walls that are integrated into the building envelope instead of big storage tanks, excess solar energy can be stored in the concrete of the ICF wall, which has EPS insulation layers on both sides to retain the thermal energy. In this study, two solar-based systems are designed and simulated in the Transient Systems Simulation Program (TRNSYS) to compare the benefits of ICF wall thermal storage against the system without ICF walls. The heating load and DHW of a Canadian single-family house located in London, Ontario, are provided by the solar-based systems. The proposed system integrates the MCST collector, a water-to-water heat pump, a preheat tank, the main tank, fan coils (to deliver the building heating load), and the ICF walls. During the day, excess solar energy is stored in the ICF walls (charging cycle). Thermal energy can be recovered from the ICF walls when the preheat tank temperature drops below that of the ICF wall (discharging process), increasing the COP of the heat pump. The evaporator of the heat pump is coupled with the preheat tank. The warm water provided by the heat pump is stored in the second tank. Fan coil units are in contact with this tank to provide the building heating load, and DHW is provided from the main tank. It is found that the system with ICF walls, with an average solar fraction of 82-88%, can cover the whole heating demand plus DHW for nine months and has a 10-15% higher average solar fraction than the system without ICF walls. A sensitivity analysis of the different parameters influencing the solar fraction is discussed in detail.
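For context, the solar fraction quoted above is conventionally defined as the share of the total thermal load (space heating plus DHW) met without auxiliary energy; the abstract does not spell out the exact accounting used in the TRNSYS model, so the following is only the standard definition, with the Q symbols being our notation:

$$SF = \frac{Q_{\mathrm{solar}}}{Q_{\mathrm{load}}} = 1 - \frac{Q_{\mathrm{aux}}}{Q_{\mathrm{load}}}$$

where $Q_{\mathrm{load}}$ is the combined space-heating and DHW demand and $Q_{\mathrm{aux}}$ is the auxiliary (non-solar) energy supplied.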

Keywords: net-zero building, renewable energy, solar thermal storage, microchannel solar thermal collector

Procedia PDF Downloads 90
501 Frequency of Tube Feeding in Aboriginal and Non-aboriginal Head and Neck Cancer Patients and the Impact on Relapse and Survival Outcomes

Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern

Abstract:

Introduction: Head and neck cancer and treatments are known for their profound effect on nutrition and tube feeding is a common requirement to maintain nutrition. Aim: We aimed to evaluate the frequency of tube feeding in Aboriginal and non-Aboriginal patients, and to examine the relapse and survival outcomes in patients who require enteral tube feeding. Methods: We performed a retrospective cohort analysis of 320 head and neck cancer patients from a single centre in Western Australia, identifying 80 Aboriginal patients and 240 non-Aboriginal patients matched on a 1:3 ratio by site, histology, rurality, and age. Data collected included patient demographics, tumour features, treatment details, and cancer and survival outcomes. Results: Aboriginal and non-Aboriginal patients required feeding tubes at similar rates (42.5% vs 46.2% respectively), however Aboriginal patients were far more likely to fail to return to oral nutrition, with 26.3% requiring long-term tube feeding versus only 15% of non-Aboriginal patients. In the overall study population, 27.5% required short-term tube feeding, 17.8% required long-term enteral tube nutrition, and 45.3% of patients did not have a feeding tube at any point. Relapse was more common in patients who required tube feeding, with relapses in 42.1% of the patients requiring long-term tube feeding, 31.8% in those requiring a short-term tube, versus 18.9% in the ‘no tube’ group. Survival outcomes for patients who required a long-term tube were also significantly poorer when compared to patients who only required a short-term tube, or not at all. Long-term tube-requiring patients were half as likely to survive (29.8%) compared to patients requiring a short-term tube (62.5%) or no tube at all (63.5%). Patients requiring a long-term tube were twice as likely to die with active disease (59.6%) as patients with no tube (28%), or a short term tube (33%). This may suggest an increased relapse risk in patients who require long-term feeding, due to consequences of malnutrition on cancer and treatment outcomes, although may simply reflect that patients with recurrent disease were more likely to have longer-term swallowing dysfunction due to recurrent disease and salvage treatments. Interestingly long-term tube patients were also more likely to die with no active disease (10.5%) (compared with short-term tube requiring patients (4.6%), or patients with no tube (8%)), which is likely reflective of the increased mortality associated with long-term aspiration and malnutrition issues. Conclusions: Requirement for tube feeding was associated with a higher rate of cancer relapse, and in particular, long-term tube feeding was associated with a higher likelihood of dying from head and neck cancer, but also a higher risk of dying from other causes without cancer relapse. This data reflects the complex effect of head and neck cancer and its treatments on swallowing and nutrition, and ultimately, the effects of malnutrition, swallowing dysfunction, and aspiration on overall cancer and survival outcomes. Tube feeding was seen at similar rates in Aboriginal and non-Aboriginal patient, however failure to return to oral intake with a requirement for a long-term feeding tube was seen far more commonly in the Aboriginal population.

Keywords: head and neck cancer, enteral tube feeding, malnutrition, survival, relapse, aboriginal patients

Procedia PDF Downloads 69
500 Productivity of Grain Sorghum-Cowpea Intercropping System: Climate-Smart Approach

Authors: Mogale T. E., Ayisi K. K., Munjonji L., Kifle Y. G.

Abstract:

Grain sorghum and cowpea are important staple crops in many areas of South Africa, particularly the Limpopo Province. The two crops are produced under a wide range of unsustainable conventional methods, which reduces productivity in the long run. Climate-smart traditional methods such as intercropping can be adopted to ensure sustainable production of these two important crops in the province. A no-tillage field experiment was laid out in a randomised complete block design (RCBD) with four replications over two seasons in two distinct agro-ecological zones of the province, Syferkuil and Ofcolaco, to assess the productivity of a sorghum-cowpea intercrop under two cowpea densities. An LCi Ultra compact photosynthesis system was used to collect photosynthetic rate data biweekly between 11h00 and 13h00 until physiological maturity. Biomass and grain yield of the component crops in binary and sole cultures were determined at harvest maturity from the middle rows of a 2.7 m² area. The biomass was oven-dried in the laboratory at 65 °C to constant weight. To obtain grain yield, harvested sorghum heads and cowpea pods were threshed, cleaned, and weighed. The harvest index (HI) and land equivalent ratio (LER) of the two crops were calculated to assess intercrop productivity relative to sole cultures. Data were analysed using the Statistical Analysis System (SAS) version 9.4, followed by mean separation using the least significant difference method. The photosynthetic rate of the sorghum-cowpea intercrop was influenced by cowpea density and sorghum cultivar. The photosynthetic rate under low density was higher compared to high density, but this was dependent on the growing conditions. Dry biomass accumulation, grain yield, and harvest index differed among the sorghum cultivars and cowpea in both binary and sole cultures at the two test locations during the 2018/19 and 2020/21 growing seasons. Cowpea grain and dry biomass yields were in excess of 60% greater under high density compared to low density in both binary and sole cultures. The results revealed that grain yield accumulation of the sorghum cultivars was influenced by the density of the companion cowpea crop as well as the production season. For instance, at Syferkuil, Enforcer and Ns5511 accumulated higher yields under low density, whereas at Ofcolaco, the higher yield was recorded under high density. Generally, under low cowpea density, the cultivar Enforcer produced a relatively higher grain yield, whereas under higher density, Titan's yield was superior. The partial and total LER varied with growing season and the treatments studied. The total LERs exceeded 1.0 at the two locations across seasons, ranging from 1.3 to 1.8. From the results, it can be concluded that resources were used more efficiently in the sorghum-cowpea intercrop at both Syferkuil and Ofcolaco. Furthermore, the intercropping system improved the photosynthetic rate, grain yield, and dry matter accumulation of sorghum and cowpea depending on growing conditions and cowpea density. Hence, the sorghum-cowpea intercropping system can be adopted as a climate-smart practice for sustainable production in the Limpopo Province.
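
The land equivalent ratio mentioned above is a simple arithmetic index; the following Python sketch (not the authors' code, using purely hypothetical yields) illustrates how the partial and total LER are computed, where a total LER above 1.0 indicates a land-use advantage for the intercrop.

```python
# Illustrative sketch with hypothetical yields (t/ha): partial and total land
# equivalent ratios (LER) for a sorghum-cowpea intercrop versus sole cultures.

def partial_ler(intercrop_yield: float, sole_yield: float) -> float:
    """Partial LER = yield of a crop in the intercrop / yield of the same crop grown alone."""
    return intercrop_yield / sole_yield

# Hypothetical yields, chosen only to illustrate the calculation
sorghum_inter, sorghum_sole = 3.1, 3.6
cowpea_inter, cowpea_sole = 1.0, 1.4

ler_sorghum = partial_ler(sorghum_inter, sorghum_sole)
ler_cowpea = partial_ler(cowpea_inter, cowpea_sole)
total_ler = ler_sorghum + ler_cowpea

print(f"partial LER (sorghum) = {ler_sorghum:.2f}")
print(f"partial LER (cowpea)  = {ler_cowpea:.2f}")
print(f"total LER             = {total_ler:.2f}")  # > 1.0 means the intercrop uses land more efficiently
```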

Keywords: cowpea, climate-smart, grain sorghum, intercropping

Procedia PDF Downloads 178
499 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are among the highest in diagnostic radiology practice, it is of great significance to be aware of the patient's CT radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate the CTDIvol values for a large number of patients during the most frequent CT examinations, to compare CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and scanner models. Output CT dose index measurements were carried out on single- and multislice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. A total of 100 CT scanners were involved in this study. Data on 15,000 examinations in patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals. Of the 15,000 examinations, 5,000 were head CT examinations, 5,000 were chest CT examinations, and 5,000 were abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the selected head, chest, and abdomen protocols mentioned above, respectively. Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies being performed currently in South India. This reflects the improved capacity of CT scanners to scan longer scan lengths and at finer resolutions, as permitted by helical and multislice technology. Also, some of the CT scanners used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This leads to an increase in the patient radiation dose as well as the measured CTDIvol, so it is suggested that such CT scanners should select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If these routine scan parameters for head, chest, and abdomen procedures were optimized, the dose indices would be optimal and lead to lower CT doses. In the South Indian region, all the CT machines were routinely tested for QA once a year as per AERB requirements.
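
For readers unfamiliar with the dose indices discussed above, the short Python sketch below (hypothetical phantom readings, not the study's measurements) shows how a weighted CTDI, the volume CTDI, and the deviation of the console-displayed value from the measured one are typically calculated.

```python
# Illustrative sketch with hypothetical values: weighted CTDI, volume CTDI, and
# the percentage deviation of the displayed CTDIvol from the measured value.

def ctdi_w(ctdi_center: float, ctdi_periphery: float) -> float:
    """Weighted CTDI (mGy): one third central plus two thirds peripheral CTDI100."""
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

def ctdi_vol(ctdi_weighted: float, pitch: float) -> float:
    """Volume CTDI (mGy) for helical scanning: CTDIw divided by pitch."""
    return ctdi_weighted / pitch

def percent_deviation(displayed: float, measured: float) -> float:
    """Relative difference of the displayed value from the measured one, in percent."""
    return 100.0 * (displayed - measured) / measured

# Hypothetical head-protocol example
measured = ctdi_vol(ctdi_w(30.0, 32.0), pitch=1.0)   # mGy
displayed = 33.2                                     # value shown on the scanner console
print(f"measured CTDIvol  = {measured:.1f} mGy")
print(f"displayed CTDIvol = {displayed:.1f} mGy")
print(f"deviation         = {percent_deviation(displayed, measured):+.1f} %")
```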

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 227
498 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser

Authors: Junze Li, M. Li

Abstract:

Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and optical clocks based on Ca at 397 nm, Rb at 420.2 nm, and Yb at 398.9 nm combined with 556 nm. Most of these applications, such as communication systems and laser cooling, require single-longitudinal-mode lasers with very narrow linewidth and compact size. In this case, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, as DFB lasers with gratings are known to operate with narrow spectra as well as high power and efficiency. Given the wavelength range, the period of the first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult due to the narrow linewidth required and the need for high-quality nitride overgrowth on the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoid overgrowth. However, the coupling strength is generally lower than that obtained with a Bragg grating embedded into the waveguide within the GaN laser structure by two-step epitaxy. Therefore, overgrowth on the grating needs to be studied and optimized. Here we propose to fabricate the fine step-shaped structure of a first-order grating by nanoimprint lithography combined with inductively coupled plasma (ICP) dry etching, and then to overgrow a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths, and duty ratios are designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observe the nucleation and growth process step by step to study the growth mode for nitride overgrowth on the grating, under the condition that the grating period is larger than the metal migration length on the surface. The AFM images demonstrate that a smooth surface of the AlGaN film is achieved, with an average roughness of 0.20 nm over 3 × 3 μm². The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content of the film is 8% according to the XRD mapping measurement, which is in accordance with the design values. By observing samples with growth times of 200 s, 400 s, and 600 s, the growth mode is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is formed by lateral growth. This work contributes to realizing GaN DFB lasers by fabricating gratings and performing overgrowth on nano-grating-patterned substrates at wafer scale; moreover, the growth dynamics have been analyzed.
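
The sub-100 nm grating period quoted above follows from the first-order Bragg condition, Λ = mλ/(2n_eff); the Python sketch below (the effective index is an assumption, not a value reported by the authors) illustrates the estimate for the wavelengths mentioned in the abstract.

```python
# Illustrative sketch: first-order Bragg grating period, Lambda = m * wavelength / (2 * n_eff).
# The effective index of ~2.45 for a GaN/AlGaN waveguide near 400 nm is an assumption.

def bragg_period_nm(wavelength_nm: float, n_eff: float, order: int = 1) -> float:
    """Grating period (nm) satisfying the Bragg condition for the given diffraction order."""
    return order * wavelength_nm / (2.0 * n_eff)

for wavelength in (397.0, 398.9, 420.2):
    period = bragg_period_nm(wavelength, n_eff=2.45, order=1)
    print(f"lambda = {wavelength:5.1f} nm -> first-order period ~ {period:4.1f} nm")  # all below 100 nm
```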

Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride

Procedia PDF Downloads 155
497 A Paradigm Shift in Patent Protection-Protecting Methods of Doing Business: Implications for Economic Development in Africa

Authors: Odirachukwu S. Mwim, Tana Pistorius

Abstract:

Since the early 1990s, political and economic pressure has been mounted on policy- and lawmakers to increase patent protection by raising protection standards. The perception of the relation between patent protection and development, particularly economic development, has evolved significantly in the past few years. Debate on patent protection in the international arena has been significantly influenced by the perception that there is a strong link between patent protection and economic development: the level of patent protection determines the extent of development that can be achieved. Recently, there has been a paradigm shift, with considerable emphasis on extending patent protection to methods of doing business, generally referred to as Business Method Patenting (BMP). The general perception among international organizations and the private sector also indicates that there is a strong correlation between BMP protection and economic growth. There are two diametrically opposing views regarding the relation between Intellectual Property (IP) protection, development, and innovation. One school of thought promotes the view that IP protection improves economic development through the stimulation of innovation and creativity. The other school advances the view that IP protection is unnecessary for the stimulation of innovation and creativity and is, in fact, a hindrance to open access to the resources and information required for innovative and creative modalities. Therefore, different theories and policies attach different levels of protection to BMP, which have specific implications for economic growth. This study examines the impact of BMP protection on development by focusing on the challenges confronting economic growth in African communities as a result of the new paradigm in patent law. (Africa is treated as a single unit in this study, but this should not be construed as implying African homogeneity; rather, the views advanced here address the common challenges facing many communities in Africa.) The study reviews, from the points of view of legal philosophers, policy makers, and the decisions of competent courts, the relevant literature, patent legislation, particularly international treaties, policies, and legal judgments. Findings from this study suggest that, over and above the various criticisms levelled against the extremely liberal approach to the recognition of business methods as patentable subject matter, there are other specific implications associated with such an approach. The most critical implication of extending patent protection to business methods is the locking up of knowledge, which may hamper human development in general and economic development in particular. Locking up knowledge necessary for economic advancement and competitiveness may have a negative effect on economic growth by promoting economic exclusion, particularly in African communities. This study suggests that knowledge of BMP within the African context, and the extent of protection linked to it, is crucial to achieving sustainable economic growth in Africa. It also suggests that a balance be struck between the two diametrically opposing views.

Keywords: Africa, business method patenting, economic growth, intellectual property, patent protection

Procedia PDF Downloads 96
496 Additive Manufacturing with Ceramic Filler

Authors: Irsa Wolfram, Boruch Lorenz

Abstract:

Innovative solutions with additive manufacturing applying material extrusion for functional parts necessitate innovative filaments of consistent quality. Uniform homogeneity and a consistent dispersion of the particles embedded in filaments generally require multiple cycles of extrusion or primal matter well prepared by injection molding, kneader machines, or mixing equipment. These technologies require dedicated equipment that is rarely at the disposal of production laboratories unfamiliar with research in polymer materials. This stands in contrast to laboratories that investigate complex material topics and technology science to leverage the potential of 3-D printing. Consequently, scientific studies in labs are often constrained to the filler compositions and concentrations offered on the market. Therefore, we introduce a prototypal laboratory methodology, scalable to tailored primal matter, for extruding ceramic composite filaments with fused filament fabrication (FFF) technology. A desktop single-screw extruder serves as the core device for the experiments. Custom-made filaments encapsulate the ceramic fillers in polylactide (PLA), a thermoplastic polyester, which serves as the primal matter and is processed in the melting zone of the extruder, preserving the defined concentration of the fillers. Validated results demonstrate that this approach enables continuously produced and uniform composite filaments with consistent homogeneity that are 3-D printable with controllable dimensions, which is a prerequisite for any scalable application. Additionally, digital microscopy confirms the steady dispersion of the ceramic particles in the composite filament, which permits a 2D reconstruction of the planar distribution of the embedded ceramic particles in the PLA matrix. The innovation of the introduced method lies in the smart simplicity of preparing the composite primal matter. It circumvents the inconvenience of numerous extrusion operations and expensive laboratory equipment. Nevertheless, it delivers consistent filaments of controlled, predictable, and reproducible filler concentration, which is the prerequisite for any industrial application. The introduced prototypal laboratory methodology appears applicable to other polymer matrices and suitable for further functional particle types beyond ceramic fillers. This opens a roadmap for further laboratory development of specialised composite filaments, providing value for industries and societies. This low-threshold entry into the sophisticated preparation of composite filaments, enabling businesses to create their own dedicated filaments, will support the mutual effort to extend 3D printing to new functional devices.
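
As a rough guide to what preserving a defined filler concentration entails when compounding, the following Python sketch (assumed densities, not values from the paper) converts a target filler mass fraction into the corresponding volume fraction in a PLA matrix, assuming ideal mixing and no porosity.

```python
# Illustrative sketch: filler mass fraction -> volume fraction in a PLA-based composite,
# assuming ideal mixing and no porosity. The densities below are assumptions, not paper data.

def volume_fraction(mass_fraction_filler: float,
                    density_filler: float,
                    density_matrix: float) -> float:
    """Volume fraction of filler from its mass fraction and the component densities."""
    v_filler = mass_fraction_filler / density_filler
    v_matrix = (1.0 - mass_fraction_filler) / density_matrix
    return v_filler / (v_filler + v_matrix)

# Assumed densities: PLA ~1.24 g/cm^3, alumina filler ~3.95 g/cm^3
for w in (0.10, 0.20, 0.30):
    phi = volume_fraction(w, density_filler=3.95, density_matrix=1.24)
    print(f"{w:.0%} filler by mass -> {phi:.1%} by volume")
```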

Keywords: additive manufacturing, ceramic composites, complex filament, industrial application

Procedia PDF Downloads 79
495 Synergistic Effect of Chondroinductive Growth Factors and Synovium-Derived Mesenchymal Stem Cells on Regeneration of Cartilage Defects in Rabbits

Authors: M. Karzhauov, A. Mukhambetova, M. Sarsenova, E. Raimagambetov, V. Ogay

Abstract:

Regeneration of injured articular cartilage remains one of the most difficult and unsolved problems in traumatology and orthopedics. Currently, cartilage defects are treated with surgical techniques that stimulate cartilage regeneration in damaged joints, such as multiple microperforation, mosaic chondroplasty, abrasion, and microfracture. However, as clinical practice has shown, these cannot provide a full and sustainable recovery of articular hyaline cartilage. In this regard, current high hopes for the regeneration of cartilage defects are reasonably associated with the use of tissue engineering approaches to restore the structural and functional characteristics of damaged joints using stem cells, growth factors, and biopolymers or scaffolds. The purpose of the present study was to investigate the effects of chondroinductive growth factors and synovium-derived mesenchymal stem cells (SD-MSCs) on the regeneration of cartilage defects in rabbits. SD-MSCs were isolated from the synovial membrane of Flemish giant rabbits and expanded in complete culture medium α-MEM. Rabbit SD-MSCs were characterized by CFU assay and by their ability to differentiate into osteoblasts, chondrocytes, and adipocytes. The effects of growth factors (TGF-β1, BMP-2, BMP-4 and IGF-I) on MSC chondrogenesis were examined in micromass pellet cultures using histological and biochemical analysis. An articular cartilage defect (4 mm in diameter) in the intercondylar groove of the patellofemoral joint was created with a mosaic chondroplasty kit. The defect was made down to the subchondral bone plate. Delivery of SD-MSCs and growth factors was conducted in combination with hyaluronic acid (HA). SD-MSC, growth factor, and control groups were compared macroscopically and histologically at 10, 30, 60 and 90 days after intra-articular injection. Our in vitro comparative study revealed that TGF-β1 and BMP-4 are key chondroinductive factors for both the growth and chondrogenesis of SD-MSCs. The highest effect on MSC chondrogenesis was observed with the synergistic interaction of TGF-β1 and BMP-4. In addition, biochemical analysis of the chondrogenic micromass pellets also revealed that the levels of glycosaminoglycans and DNA after combined treatment with TGF-β1 and BMP-4 were significantly higher in comparison to the individual application of these factors. The in vivo study showed that complete regeneration of cartilage defects after intra-articular injection of SD-MSCs with HA takes 90 days. However, a single injection of SD-MSCs in combination with TGF-β1, BMP-4 and HA significantly increased the regeneration rate of the cartilage defects in rabbits; in this case, complete regeneration of cartilage defects was observed 30 days after intra-articular injection. Thus, our in vitro and in vivo studies demonstrated that combined application of rabbit SD-MSCs with chondroinductive growth factors and HA results in a strong synergistic effect on chondrogenesis, significantly enhancing regeneration of the damaged cartilage.

Keywords: mesenchymal stem cells, synovium, chondroinductive factors, TGF-β1, BMP-2, BMP-4, IGF-I

Procedia PDF Downloads 277
494 A Culture-Contrastive Analysis of the Communication between Discourse Participants in European Editorials

Authors: Melanie Kerschner

Abstract:

Language is our main means of social interaction. News journalism, especially opinion discourse, holds a powerful position in this context. Editorials can be regarded as encounters of different, partially contradictory relationships between discourse participants constructed through the editorial voice. Their primary goal is to shape public opinion by commenting on events already addressed by other journalistic genres in the given newspaper. In doing so, the author tries to establish a consensus with the reader over the negotiated matter (i.e. the news event). At the same time, he/she claims authority over the “correct” description and evaluation of an event. Yet how can the relationship and the interaction between the discourse participants, i.e. the journalist, the reader and the news actors represented in the editorial, best be visualized and studied from a cross-cultural perspective? The present research project attempts to give insights into the role of (media) culture in British, Italian and German editorials. For this purpose, the presenter proposes a basic framework: the so-called “pyramid of discourse participants”, comprising the author, the reader, two types of news actors and the semantic macro-structure (as a meta-level of analysis). Based on this framework, the following questions will be addressed: • Which strategies does the author employ to persuade the reader and to prompt him to give his opinion (in the comment section)? • In which ways (and with which linguistic tools) is editorial opinion expressed? • Does the author use adjectives, adverbials and modal verbs to evaluate news actors, their actions and the current state of affairs, or does he/she prefer nominal labels? • Which influence do language choice and the related media culture have on the representation of news events in editorials? • To what extent does the social context of a given media culture influence the amount of criticism and the way it is mediated so that it remains culturally acceptable? The culture-contrastive study will examine 45 editorials (i.e. 15 per media culture) from six national quality papers that are similar in distribution, importance and the kind of envisaged readership, in order to draw valuable conclusions about culturally motivated similarities and differences in the coverage and assessment of news events. The thematic orientation of the editorials will be the NSA scandal and the reactions of various countries, as this topic was and still is relevant to each of the three media cultures. Starting out from the “pyramid of discourse participants” as the underlying framework, eight different criteria will be assigned to the individual discourse participants in the micro-analysis of the editorials. For the purpose of illustration, a single criterion, referring to the salience of authorial opinion, will be selected to demonstrate how the pyramid of discourse participants can be applied as a basis for empirical analysis. Extracts from the corpus will further enhance understanding.

Keywords: micro-analysis of editorials, culture-contrastive research, media culture, interaction between discourse participants, evaluation

Procedia PDF Downloads 482
493 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnic, religious, linguistic, caste, communal, and tribal played a part in the development of constitutions and of the legal system of victim and criminal justice. Acute issues with extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups have recently plagued South Asian nations. Every day, a massive number of crimes are committed, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a bone of contention that can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia or Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism; for example, the Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this internship are to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that were aggregated daily to produce a univariate dataset. Moreover, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days. The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and its generalizability. The studies demonstrated that, in comparison to other models, Gated Recurrent Units (GRU) produced better predictions. The crime records for 2005-2019 were collected from the Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime. Hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
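
As an illustration of the kind of model the abstract refers to, the following Python sketch (not the project's code, which used R; the series here is synthetic) shows a minimal GRU forecaster that maps a 28-day window of daily counts to a 7-day forecast, mirroring the seven-day forecast period described above.

```python
# Illustrative sketch: a minimal GRU forecaster for a univariate daily count series.
# Assumes TensorFlow/Keras is available; the data below are synthetic stand-ins.

import numpy as np
import tensorflow as tf

WINDOW, HORIZON = 28, 7

def make_windows(series: np.ndarray, window: int, horizon: int):
    """Slice a 1-D series into (window, 1) inputs and horizon-length targets."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window:i + window + horizon])
    return np.array(X)[..., np.newaxis], np.array(y)

# Synthetic stand-in for daily aggregated crime counts
rng = np.random.default_rng(0)
daily_counts = (50 + 10 * np.sin(np.arange(2000) * 2 * np.pi / 365)
                + rng.normal(0, 3, 2000)).astype("float32")

X, y = make_windows(daily_counts, WINDOW, HORIZON)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(HORIZON),   # one output per day of the 7-day forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)

forecast = model.predict(daily_counts[-WINDOW:].reshape(1, WINDOW, 1), verbose=0)
print("next 7 days:", np.round(forecast[0], 1))
```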

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 135
492 Pregnancy Outcome in Women with HIV Infection from a Tertiary Care Centre of India

Authors: Kavita Khoiwal, Vatsla Dadhwal, K. Aparna Sharma, Dipika Deka, Plabani Sarkar

Abstract:

Introduction: About 2.4 million (1.93-3.04 million) people are living with HIV/AIDS in India. Of all HIV infections, 39% (930,000) are among women, and 5.4% of infections result from mother-to-child transmission (MTCT); 25,000 infected children are born every year. Besides the risk of mother-to-child transmission of HIV, these women are at higher risk of adverse pregnancy outcomes. The objectives of the study were to compare the obstetric and neonatal outcomes of HIV-positive women with those of low-risk HIV-negative women and to assess the effect of antiretroviral drugs on preterm birth and IUGR. Materials and Methods: This is a retrospective case record analysis of 212 HIV-positive women delivering between 2002 and 2015 in a tertiary health care centre, compared with 238 HIV-negative controls. Women who underwent medical termination of pregnancy and abortion were excluded from the study. The obstetric outcomes analysed were pregnancy-induced hypertension, intrauterine growth restriction, preterm birth, anemia, gestational diabetes, and intrahepatic cholestasis of pregnancy. The neonatal outcomes analysed were birth weight, Apgar score, NICU admission, and perinatal transmission. Out of 212 women, 204 received antiretroviral therapy (ART) to prevent MTCT: 27 women received single-dose nevirapine (sdNVP) or sdNVP tailed with 7 days of zidovudine and lamivudine (ZDV + 3TC), 15 received ZDV, 82 women received duovir, and 80 women received triple drug therapy, depending on the time period of presentation. Results: The mean age of the 212 HIV-positive women was 25.72 ± 3.6 years; 101 women (47.6%) were primigravida. HIV-positive status was diagnosed during pregnancy in 200 women, while 12 women were diagnosed prior to conception. Among the 212 HIV-positive women, 20 (9.4%) had preterm delivery (< 37 weeks), 194 (91.5%) delivered by cesarean section, and 18 (8.5%) delivered vaginally. 178 neonates (83.9%) received exclusive top feeding and 34 neonates (16.0%) received exclusive breast feeding. When compared to low-risk HIV-negative women (n=238), HIV-positive women were more likely to deliver preterm (OR 1.27), have anemia (OR 1.39), and have intrauterine growth restriction (OR 2.07). The incidence of pregnancy-induced hypertension, diabetes mellitus, and ICP was not increased. Mean birth weight was significantly lower in HIV-positive women (2593.6 ± 499 g) when compared to HIV-negative women (2919 ± 459 g). Complete follow-up is available for 148 neonates to date; the rest are under evaluation. Of these, 7 neonates were found to be HIV positive. The risk of preterm birth (P = 0.039) and IUGR (P = 0.739) was higher in HIV-positive women who did not receive any ART during pregnancy than in women who received ART. Conclusion: HIV-positive pregnant women are at increased risk of adverse pregnancy outcomes. A multidisciplinary team approach and the use of highly active antiretroviral therapy can optimize maternal and perinatal outcomes.
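
The odds ratios quoted above compare event odds between the HIV-positive cohort and the controls; the short Python sketch below (the control counts are hypothetical, chosen only for illustration) shows how such an odds ratio and a Wald 95% confidence interval are computed from a 2×2 table.

```python
# Illustrative sketch with partly hypothetical counts: odds ratio and Wald 95% CI
# from a 2x2 table (exposed = HIV-positive women, control = HIV-negative women).

import math

def odds_ratio(exp_events: int, exp_nonevents: int,
               ctl_events: int, ctl_nonevents: int):
    """Odds ratio and Wald 95% confidence interval from a 2x2 table."""
    or_ = (exp_events * ctl_nonevents) / (exp_nonevents * ctl_events)
    se_log_or = math.sqrt(1 / exp_events + 1 / exp_nonevents
                          + 1 / ctl_events + 1 / ctl_nonevents)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

# Preterm births: 20 of 212 HIV-positive women (reported); 18 of 238 controls (hypothetical)
or_value, ci = odds_ratio(20, 212 - 20, 18, 238 - 18)
print(f"OR = {or_value:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```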

Keywords: antiretroviral therapy, HIV infection, IUGR, preterm birth

Procedia PDF Downloads 240
491 Life-Cycle Assessment of Residential Buildings: Addressing the Influence of Commuting

Authors: J. Bastos, P. Marques, S. Batterman, F. Freire

Abstract:

Due to the demands of a growing urban population, it is crucial to manage urban development and its associated environmental impacts. While most environmental analyses have addressed buildings and transportation separately, both the design and the location of a building affect environmental performance, and focusing on one or the other can shift impacts and overlook improvement opportunities for more sustainable urban development. Recently, several life-cycle (LC) studies of residential buildings have integrated user transportation, focusing exclusively on primary energy demand and/or greenhouse gas emissions. Additionally, most papers considered only private transportation (mainly the car). Although it is likely to have the largest share both in terms of use and associated impacts, exploring the variability associated with mode choice is relevant for comprehensive assessments and, eventually, for supporting decision-makers. This paper presents a life-cycle assessment (LCA) of a residential building in Lisbon (Portugal), addressing building construction, use, and user transportation (commuting with private and public transportation). Five environmental indicators or categories are considered: (i) non-renewable primary energy (NRE), (ii) greenhouse gas intensity (GHG), (iii) eutrophication (EUT), (iv) acidification (ACID), and (v) ozone layer depletion (OLD). In a first stage, the analysis addresses the overall life-cycle considering the statistical modal mix for commuting at the residence location. Then, a comparative analysis of the different available transportation modes addresses the influence that mode-choice variability has on the results. The results highlight the large contribution of transportation to the overall LC results in all categories. NRE and GHG show a strong correlation, as the three LC phases contribute similar shares to both of them: building construction accounts for 6-9%, building use for 44-45%, and user transportation for 48% of the overall results. However, for the other impact categories there is a large variation in the relative contribution of each phase. Transport is the most significant phase in OLD (60%); however, in EUT and ACID, building use has the largest contribution to the overall LC (55% and 64%, respectively). In these categories, transportation accounts for 31-38%. A comparative analysis was also performed for four alternative transport modes for household commuting: car, bus, motorcycle, and company/school collective transport. The car has the largest results in all impact categories. When compared to the overall LC with commuting by car, mode choice accounts for a variability of about 35% in NRE, GHG and OLD (the categories in which transportation accounted for the largest share of the LC), 24% in EUT, and 16% in ACID. NRE and GHG show a strong correlation because all modes have internal combustion engines. The second largest results for NRE, GHG and OLD are associated with commuting by motorcycle; however, for ACID and EUT this mode performs better than bus and company/school transport. No single transportation mode performed best in all impact categories. Integrated assessments of buildings are needed to avoid shifting impacts between life-cycle phases and environmental categories, and ultimately to support decision-makers.
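
The phase shares reported above are simple normalisations of the characterised results per impact category; the Python sketch below (invented numbers, not the study's inventory) illustrates the bookkeeping for the construction / building-use / user-transportation split.

```python
# Illustrative sketch with invented numbers: expressing each life-cycle phase as a
# percentage of the total characterised result in every impact category.

phases = ["construction", "building use", "user transportation"]
results = {           # hypothetical characterised results (arbitrary units)
    "NRE":  [7.0, 45.0, 48.0],
    "GHG":  [8.0, 44.0, 48.0],
    "EUT":  [10.0, 55.0, 35.0],
    "ACID": [5.0, 64.0, 31.0],
    "OLD":  [8.0, 32.0, 60.0],
}

for category, values in results.items():
    total = sum(values)
    shares = ", ".join(f"{p}: {100 * v / total:.0f}%" for p, v in zip(phases, values))
    print(f"{category:>4} -> {shares}")
```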

Keywords: environmental impacts, LCA, Lisbon, transport

Procedia PDF Downloads 333
490 The Effectiveness of Congressional Redistricting Commissions: A Comparative Approach Investigating the Ability of Commissions to Reduce Gerrymandering with the Wilcoxon Signed-Rank Test

Authors: Arvind Salem

Abstract:

Voters across the country are transferring the power of redistricting from state legislatures to commissions to secure "fairer" districts by curbing the influence of gerrymandering on redistricting. Gerrymandering, intentionally drawing distorted districts to achieve political advantage, has become extremely prevalent, generating widespread voter dissatisfaction and resulting in states adopting commissions for redistricting. However, the efficacy of these commissions is disputed, with some arguing that they constitute a panacea for gerrymandering, while others contend that commissions have relatively little effect on gerrymandering. A result showing that commissions are effective would allay these doubts, supplying ammunition for activists across the country to advocate for commissions in their state and reducing the influence of gerrymandering across the nation. However, a result against commissions may reaffirm those doubts and pressure lawmakers to improve commissions or even abandon the commission system entirely. Additionally, these commissions are publicly funded, so voters have a financial interest in, and responsibility for, knowing whether these commissions are effective. Currently, nine states place commissions in charge of redistricting: Arizona, California, Colorado, Michigan, Idaho, Montana, Washington, and New Jersey (Hawaii also has a commission but is excluded for reasons mentioned later). This study compares the degree of gerrymandering in the 2022 election ("after") to the election in which voters decided to adopt commissions ("before"). The "before" election provides a valuable benchmark for assessing the efficacy of commissions, since voters in those elections clearly found the districts to be unfair; therefore, comparing the current election to that one is a good way to determine whether commissions have improved the situation. At the time Hawaii adopted its commission, the state was merely a single at-large district, so its "before" metrics could not be calculated, and it was excluded. This study uses three metrics to quantify the degree of gerrymandering: the efficiency gap, the difference between the percentage of seats and the percentage of votes, and the mean-median difference. Each of these metrics has unique advantages and disadvantages, but together, they form a balanced approach to quantifying gerrymandering. The study uses a Wilcoxon signed-rank test with the null hypothesis that the metric values after the election are greater than or equal to those before, and the alternative hypothesis that the values before the election are greater than those after, using a 0.05 significance level and an expected difference of 0. Accepting the alternative hypothesis would constitute evidence that commissions reduce gerrymandering to a statistically significant degree. However, this study could not conclude that commissions are effective. The p values obtained for all three metrics (p=0.42 for the efficiency gap, p=0.94 for the seats-votes percentage difference, and p=0.47 for the mean-median difference) were extremely high and far from the threshold needed to conclude that commissions are effective. These results temper optimism about commissions and should spur serious discussion about their effectiveness and about ways to change them moving forward so that they can accomplish their goal of producing fairer districts.
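
The paired before/after design described above maps directly onto a one-sided Wilcoxon signed-rank test; the Python sketch below (made-up metric values, not the study's data) shows how such a test would be run on the efficiency-gap metric with scipy.

```python
# Illustrative sketch with made-up efficiency-gap magnitudes for the eight analysed
# states: one-sided Wilcoxon signed-rank test of "before > after".

from scipy.stats import wilcoxon

before = [0.12, 0.08, 0.15, 0.05, 0.10, 0.07, 0.09, 0.11]
after  = [0.10, 0.09, 0.12, 0.06, 0.08, 0.05, 0.10, 0.09]

# alternative="greater" tests H1: the paired differences (before - after) are shifted
# above zero, i.e. gerrymandering by this metric decreased after commissions took over.
stat, p_value = wilcoxon(before, after, alternative="greater")
print(f"W = {stat}, one-sided p = {p_value:.3f}")
```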

Keywords: commissions, elections, gerrymandering, redistricting

Procedia PDF Downloads 43
489 Arthroscopic Superior Capsular Reconstruction Using the Long Head of the Biceps Tendon (LHBT)

Authors: Ho Sy Nam, Tang Ha Nam Anh

Abstract:

Background: Rotator cuff tears are a common problem in the aging population. The prevalence of massive rotator cuff tears varies in some studies from 10% to 40%. Of irreparable rotator cuff tears (IRCTs), which are mostly associated with massive tear size, 79% are estimated to have recurrent tears after surgical repair. Recent studies have shown that superior capsule reconstruction (SCR) in massive rotator cuff tears can be an efficient technique, with promising clinical scores and preservation of glenohumeral stability. Superior capsule reconstruction techniques most commonly use either fascia lata autograft or dermal allograft, both of which have their own benefits and drawbacks (such as the potential for donor site issues, allergic reactions, and high cost). We propose a simple technique for superior capsule reconstruction that uses the long head of the biceps tendon as a local autograft; therefore, the comorbidities related to graft harvesting are eliminated. The proximal portion of the long head of the biceps tendon is relocated to the footprint and secured as the SCR, serving both to stabilize the glenohumeral joint and to maintain a vascular supply to aid healing. Objective: The purpose of this study is to assess the clinical outcomes of patients with large to massive RCTs treated by SCR using the LHBT. Materials and methods: A study was performed of consecutive patients with large to massive RCTs who were treated by SCR using the LHBT between January 2022 and December 2022. We use one double-loaded suture anchor to secure the long head of the biceps to the middle of the footprint. Two more anchors are used to repair the rotator cuff with a single-row technique; they are placed anteriorly and posteriorly on the lateral side of the previously transposed LHBT. Results: The 3 men and 5 women had an average age of 61.25 years (range 48 to 76 years) at the time of surgery. The average follow-up was 8.2 months (6 to 10 months) after surgery. The average preoperative ASES score was 45.8, and the average postoperative ASES score was 85.83. The average postoperative UCLA score was 29.12. The VAS score improved from 5.9 to 1.12. The mean preoperative ROM of forward flexion and external rotation of the shoulder was 72° ± 16° and 28° ± 8°, respectively. The mean postoperative ROM of forward flexion and external rotation was 131° ± 22° and 63° ± 6°, respectively. There were no cases of progression of osteoarthritis or rotator cuff muscle atrophy. Conclusion: SCR using the LHBT is considered a treatment option for patients with large or massive RC tears. It can restore superior glenohumeral stability and function of the shoulder joint and can be an effective procedure for selected patients, helping to avoid progression to cuff tear arthropathy.

Keywords: superior capsule reconstruction, large or massive rotator cuff tears, the long head of the biceps, stabilize the glenohumeral joint

Procedia PDF Downloads 56