Search results for: flight control systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18623


773 Addressing Supply Chain Data Risk with Data Security Assurance

Authors: Anna Fowler

Abstract:

When considering assets that may need protection, the mind turns to homes, cars, and investment funds. In most cases, those assets can be protected through security systems and insurance. Data is rarely the first asset that comes to mind, even though data is at the core of most supply chain operations. It includes trade secrets, personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is a critical element of supply chain success and should be one of the most heavily protected. In the supply chain industry, there are two major misconceptions about protecting data: (i) "We do not manage or store confidential/personally identifiable information (PII)." (ii) Reliance on third-party vendor security. These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception, "We do not manage or store confidential/personally identifiable information (PII)," is dangerous because it implies the organization lacks proper data literacy. Enterprise employees zero in on PII while neglecting trade secret theft and the complete breakdown of information sharing. To sidestep the first misconception, the second forges an ideology that reliance on third-party vendor security will absolve the company of security risk. Instead, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that protecting data requires a holistic approach, not simply purchasing a Data Loss Prevention (DLP) tool. A tool is not a solution. 
To protect supply chain data, start by providing data literacy training to all employees and by negotiating the security component of vendor contracts to require data literacy training for the individuals and teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification. Ensure processes effectively incorporate data security principles. Evaluate and select DLP solutions to address specific concerns and use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration with third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third/fourth-party risk.

Keywords: security by design, data security architecture, cybersecurity framework, data security assurance

Procedia PDF Downloads 86
772 Fabrication of High-Aspect Ratio Vertical Silicon Nanowire Electrode Arrays for Brain-Machine Interfaces

Authors: Su Yin Chiam, Zhipeng Ding, Guang Yang, Danny Jian Hang Tng, Peiyi Song, Geok Ing Ng, Ken-Tye Yong, Qing Xin Zhang

Abstract:

Brain-machine interfaces (BMI) are a field rich in exploration opportunities, in which manipulation of neural activity is used to interconnect with a myriad of external devices. Research and development have expanded into areas ranging from medicine to the gaming and entertainment industries and to safety and security. The technology has been extended to therapy for neurological disorders such as obsessive-compulsive disorder and Parkinson's disease by introducing current pulses to specific regions of the brain. Nonetheless, developing a brain-machine interface system that observes, records, and alters neural signals in real time will require a significant amount of effort to overcome the obstacles to improving the system without delays in response. To date, the feature size of interface devices and the density of the electrode population remain limitations to seamless BMI performance. Currently, BMI devices range from 10 to 100 microns in electrode diameter. Hence, to accommodate precise monitoring at the single-cell level, smaller and denser nanoscale nanowire electrode arrays are vital. In this paper, we showcase the fabrication of high-aspect-ratio vertical silicon nanowire electrode arrays using a microelectromechanical system (MEMS) method. Nanofabrication of the nanowire electrodes involves deep reactive ion etching, thermal oxide thinning, electron-beam lithography patterning, sputtering of metal targets, and bottom anti-reflection coating (BARC) etching. Metallization of the nanowire electrode tip is a key process for optimizing the nanowire's electrical conductivity, and this step remains a challenge during fabrication. Metal electrodes were lithographically defined, yet these metal contacts have a size scale larger than nanometer-scale building blocks, further limiting potential advantages. 
Therefore, we present an integrated contact solution that overcomes this size constraint through a self-aligned nickel silicidation process on the tips of the vertical silicon nanowire electrodes. A 4 x 4 array of vertical silicon nanowire electrodes with diameters of 290 nm and heights of 3 µm has been successfully fabricated.

Keywords: brain-machine interfaces, microelectromechanical systems (MEMS), nanowire, nickel silicide

Procedia PDF Downloads 432
771 Effect of Antimony on Microorganisms in Aerobic and Anaerobic Environments

Authors: Barrera C. Monserrat, Sierra-Alvarez Reyes, Pat-Espadas Aurora, Moreno Andrade Ivan

Abstract:

Antimony is a toxic and carcinogenic metalloid considered a pollutant of priority interest by the United States Environmental Protection Agency. It is present in the environment in two oxidation states: antimonite (Sb(III)) and antimonate (Sb(V)). Sb(III) is toxic to several aquatic organisms, but the potential inhibitory effect of Sb species on microorganisms has not been extensively evaluated. The fate and possible toxic impact of antimony on aerobic and anaerobic wastewater treatment systems are unknown. For this reason, the objective of this study was to evaluate the microbial toxicity of Sb(V) and Sb(III) in aerobic and anaerobic environments. Sb(V) and Sb(III) were supplied as potassium hexahydroxoantimonate(V) and potassium antimony tartrate, respectively (Sigma-Aldrich). The toxic effect of both Sb species in anaerobic environments was evaluated through the methanogenic activity and the inhibition of hydrogen production of microorganisms from a wastewater treatment bioreactor. For the methanogenic activity, batch experiments were carried out in 160 mL serological bottles; each bottle contained basal mineral medium (100 mL), inoculum (1.5 g VSS/L), acetate (2.56 g/L) as substrate, and variable concentrations of Sb(V) or Sb(III). Duplicate bioassays were incubated at 30 ± 2°C on an orbital shaker (105 rpm) in the dark. Methane production was monitored by gas chromatography. The hydrogen production inhibition tests were carried out in glass bottles with a working volume of 0.36 L, with glucose (50 g/L) as substrate, pretreated inoculum (5 g VSS/L), mineral medium, and varying concentrations of the two antimony species. The bottles were kept under stirring at 35°C in an AMPTS II device that recorded hydrogen production. The toxicity of Sb to aerobic microorganisms (from the activated sludge of a wastewater treatment plant) was tested with a standardized Microtox toxicity test and by respirometry. 
Results showed that Sb(III) is more toxic than Sb(V) to methanogenic microorganisms. Sb(V) caused a 50% decrease in methanogenic activity at 250 mg/L. In contrast, exposure to Sb(III) resulted in 50% inhibition at a concentration of only 11 mg/L, and almost complete inhibition (95%) at 25 mg/L. For hydrogen-producing microorganisms, Sb(III) and Sb(V) caused 50% inhibition of hydrogen production at 12.6 mg/L and 87.7 mg/L, respectively. The results for aerobic environments showed that 500 mg/L of Sb(V) inhibited neither the activity of Aliivibrio fischeri (Microtox) nor the specific oxygen uptake rate of activated sludge. Sb(III), in contrast, caused a 50% loss of microbial respiration at concentrations below 40 mg/L. The results obtained indicate that the toxicity of antimony depends on the speciation of this metalloid and that Sb(III) has a significantly higher inhibitory potential than Sb(V). It was also shown that anaerobic microorganisms can reduce Sb(V) to Sb(III). Acknowledgments: This work was funded in part by grants from the UA-CONACYT Binational Consortium for the Regional Scientific Development and Innovation (CAZMEX), the National Institute of Health (NIH ES-04940), and PAPIIT-DGAPA-UNAM (IN105220).
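The 50% inhibition concentrations reported above are typically read off a dose-response curve. As an illustrative sketch only (the data points below are hypothetical placeholders, not the study's measurements), a minimal log-linear interpolation of the 50%-inhibition concentration might look like:

```python
import math

def ic50_log_interpolation(concs, inhibitions, target=50.0):
    """Estimate the concentration giving `target` % inhibition by
    log-linear interpolation between the two bracketing doses."""
    pairs = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= target <= i2:
            frac = (target - i1) / (i2 - i1)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("target inhibition not bracketed by the data")

# Hypothetical Sb(III) methanogenic inhibition data: doses in mg/L, % inhibition
concs = [1, 5, 11, 25, 50]
inhib = [5, 30, 50, 95, 99]
print(round(ic50_log_interpolation(concs, inhib), 1))  # 11.0 by construction
```

The log scale on concentration matches the usual presentation of microbial dose-response data; a four-parameter logistic fit would be the more rigorous choice when enough doses are available.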

Keywords: aerobic inhibition, antimony reduction, hydrogen inhibition, methanogenic toxicity

Procedia PDF Downloads 161
770 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

Evolutionary processes are not linear. Long periods of quiet, slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector: the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory solution for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: (a) the Biosphere is a single living organism, all parts of which are interconnected, and (b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but the accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as natural life origin and development, is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and of the biosphere's intelligence. The intelligent matter that conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. Information acts like software stored in and controlled by the biosphere. 
Random mutations trigger this software, as stipulated by Darwinian evolutionary theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms to store and handle it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: (a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and (b) the biosphere's computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life's creation and evolution. It logically resolves many puzzling problems with the current state of evolutionary theory: speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, occurring when the two conditions (a) and (b) above are met; the Cambrian explosion; and mass extinctions, occurring when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 162
769 The Impact of Housing Design on the Health and Well-Being of Populations: A Case-Study of Middle-Class Families in the Metropolitan Region of Port-Au-Prince, Haiti

Authors: A. L. Verret, N. Prince, Y. Jerome, A. Bras

Abstract:

The effects of housing design on the health and well-being of populations are quite intangible. In fact, healthy-housing parameters are generally difficult to establish scientifically, and the direction of the cause-and-effect relationship between health variables and housing is often unclear. However, the lack of clear and definite measurements does not entail the absence of a relationship between housing, health, and well-being. Research has thus been conducted, mostly targeting the physical rather than the psychological or social well-being of a population, given the difficulty of establishing cause-effect relationships due to the subjectivity of psychological symptoms and the challenge of determining the influence of other factors. That said, a strong relationship has been exposed between light and physiology. Both the nervous and endocrine systems, amongst others, are affected by the different wavelengths of natural light within a building. Daylight in the workplace is indeed associated with decreased absenteeism, errors and product defects, fatigue, and eyestrain, and with increased productivity and a positive attitude. Similar associations can also be made for residential housing: lower levels of sunlight within the home have been shown to result in impaired cognition in depressed participants of a cross-sectional case study. Moreover, minimum space (area and volume) has been linked to healthy housing and quality of life, resulting in norms and regulations for such parameters in home construction. As a matter of fact, it is estimated that people spend two-thirds of their lives within the home and its immediate environment. Therefore, it is possible to deduce that the health and well-being of occupants are potentially at risk in an unhealthy housing situation. While the impact of architecture on health and well-being is acknowledged and considered somewhat crucial in various countries of the north and the south, this issue is barely raised in Haiti. 
In fact, little importance is given to architecture, for many reasons (lack of information, lack of means, societal reflex, poverty…). However, the middle class is known for its residential strategies and trajectories in search of better-quality homes and environments. For this reason, it is pertinent to use this group and its strategies and trajectories to isolate the impact of housing design on overall health and well-being. This research aims to analyze the impact of housing architecture on the health and well-being of middle-class families in the metropolitan region of Port-au-Prince. It is a case study that uses semi-structured interviews and observations as research methods. Although at an early stage, this research anticipates that homes affect their occupants both psychologically and physiologically, and that, consequently, public policies and the population should take architectural design into account in the planning and construction of housing and, furthermore, of cities.

Keywords: architectural design, health and well-being, middle-class housing, Port-au-Prince, Haiti

Procedia PDF Downloads 136
768 Atypical Retinoid ST1926 Nanoparticle Formulation Development and Therapeutic Potential in Colorectal Cancer

Authors: Sara Assi, Berthe Hayar, Claudio Pisano, Nadine Darwiche, Walid Saad

Abstract:

Nanomedicine, the application of nanotechnology to medicine, is an emerging discipline that has gained significant attention in recent years. Current breakthroughs in nanomedicine have paved the way for effective drug delivery systems that can be used to target cancer. The use of nanotechnology provides effective drug delivery with enhanced stability, bioavailability, and permeability, thereby minimizing drug dosage and toxicity. As such, nanoparticle (NP) formulations have been applied to drug delivery in various cancer models and have been shown to improve the ability of drugs to reach specific targeted sites in a controlled manner. Cancer is one of the major causes of death worldwide; in particular, colorectal cancer (CRC) is the third most commonly diagnosed cancer among men and women and the second leading cause of cancer-related deaths, highlighting the need for novel therapies. Retinoids, consisting of natural and synthetic derivatives, are a class of chemical compounds that have shown promise in preclinical and clinical cancer settings. However, retinoids are limited by their toxicity and by resistance to treatment. To overcome this resistance, various synthetic retinoids have been developed, including the adamantyl retinoid ST1926, a potent anti-cancer agent. Due to its limited bioavailability, however, the development of ST1926 has not progressed beyond phase I clinical trials. We have previously investigated the preclinical efficacy of ST1926 in CRC models. ST1926 displayed potent inhibitory and apoptotic effects in CRC cell lines by inducing early DNA damage and apoptosis, and it significantly reduced the tumor doubling time and tumor burden in a xenograft CRC model. We therefore developed ST1926-NPs and assessed their efficacy in CRC models. ST1926-NPs were produced using Flash NanoPrecipitation with the amphiphilic diblock copolymer polystyrene-b-ethylene oxide and cholesterol as a co-stabilizer. 
ST1926 was formulated into NPs with a drug-to-polymer mass ratio of 1:2, providing a formulation that was stable for one week. The ST1926-NP diameter was 100 nm, with a polydispersity index of 0.245. In the MTT cell viability assay, ST1926-NPs exhibited anti-growth activity in HCT116 cells as potent as that of naked ST1926, at pharmacologically achievable concentrations. Future studies will examine the anti-tumor activity and mechanism of action of ST1926-NPs in a xenograft mouse model and detect the compound and its glucuroconjugated form in the plasma of mice. Ultimately, our studies will support the use of ST1926-NP formulations to enhance the stability and bioavailability of ST1926 in CRC.

Keywords: nanoparticles, drug delivery, colorectal cancer, retinoids

Procedia PDF Downloads 99
767 Ultrafiltration Process Intensification for Municipal Wastewater Reuse: Water Quality, Optimization of Operating Conditions and Fouling Management

Authors: J. Yang, M. Monnot, T. Eljaddi, L. Simonian, L. Ercolei, P. Moulin

Abstract:

The application of membrane technology to wastewater treatment has expanded rapidly under increasingly stringent legislation and environmental protection requirements. At the same time, water resources are becoming precious, and water reuse has gained popularity. Ultrafiltration (UF) in particular is a very promising technology for water reuse, as it can retain organic matter, suspended solids, colloids, and microorganisms. Nevertheless, few studies dealing with the operating optimization of UF as a tertiary treatment for water reuse at semi-industrial scale appear in the literature. This study therefore aims to characterize the permeate water quality and to optimize the operating parameters (maximizing productivity and minimizing irreversible fouling) through the operation of a UF pilot plant under real conditions. A fully automatic semi-industrial UF pilot plant with periodic classic backwashes (CB) and air backwashes (AB) was set up to filter the secondary effluent of an urban wastewater treatment plant (WWTP) in France. In this plant, the secondary treatment consists of a conventional activated sludge process followed by a sedimentation tank; the UF process was thus defined as a tertiary treatment and was operated under constant flux. A combination of CB and chlorinated AB was used for better fouling management. A 200 kDa hollow-fiber membrane was used in the UF module, with an initial permeability (for WWTP outlet water) of 600 L·m⁻²·h⁻¹·bar⁻¹ and a total filtration surface of 9 m². Fifteen filtration conditions with different fluxes, filtration times, and air backwash frequencies were each operated for more than 40 hours to observe their hydraulic filtration performance. By comparison, the most sustainable condition was a flux of 60 L·h⁻¹·m⁻², a filtration time of 60 min, and a backwash frequency of 1 AB every 3 CBs. 
The optimized condition stands out from the others with a water recovery rate above 92%, better irreversible fouling control, stable permeability variation, efficient backwash reversibility (80% for CB and 150% for AB), and no chemical washing required over 40 h of filtration. For all tested conditions, the permeate quality met the water reuse guidelines of the World Health Organization (WHO), French standards, and the regulation of the European Parliament adopted in May 2020 setting minimum requirements for water reuse in agriculture. In the permeate, total suspended solids, biochemical oxygen demand, and turbidity were reduced to < 2 mg·L⁻¹, ≤ 10 mg·L⁻¹, and < 0.5 NTU, respectively; Escherichia coli and Enterococci showed > 5-log removal, and the other required microorganism analyses were below detection limits. Additionally, because of the COVID-19 pandemic, coronavirus SARS-CoV-2 was measured in the raw wastewater of the WWTP, the UF feed, and the UF permeate in November 2020. The raw wastewater tested positive, above the detection limit but below the quantification limit. Interestingly, the UF feed and UF permeate tested negative for SARS-CoV-2 by these PCR assays. In summary, this work confirms the great interest of UF as an intensified tertiary treatment for water reuse and gives operational indications for future industrial-scale production of reclaimed water.
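The "> 5 log removal" and "> 92% water recovery" figures above follow from two simple ratios. A minimal sketch (the feed/permeate counts and volumes below are hypothetical illustrations, not the pilot plant's raw data):

```python
import math

def log_removal(feed_count, permeate_count):
    """Log removal value (LRV) = log10(feed concentration / permeate concentration)."""
    return math.log10(feed_count / permeate_count)

def recovery_rate(permeate_volume, feed_volume):
    """Net water recovery: permeate produced over feed treated, as a percentage."""
    return 100.0 * permeate_volume / feed_volume

# Hypothetical E. coli counts (CFU/100 mL) before and after UF
print(log_removal(1e6, 10))              # 5.0 -> a 6-log feed down to 10 CFU is 5-log removal
# Hypothetical volumes (m³) over one filtration cycle
print(round(recovery_rate(93, 100), 1))  # 93.0
```

In practice the recovery rate is computed net of the backwash water consumed, which is why optimizing the CB/AB frequency directly affects the recovery figure.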

Keywords: semi-industrial UF pilot plant, water reuse, fouling management, coronavirus

Procedia PDF Downloads 112
766 Measuring Oxygen Transfer Coefficients in Multiphase Bioprocesses: The Challenges and the Solution

Authors: Peter G. Hollis, Kim G. Clarke

Abstract:

The overall volumetric oxygen transfer coefficient (KLa) is ubiquitously measured in bioprocesses by analysing the response of dissolved oxygen (DO) to a step change in the oxygen partial pressure of the sparge gas using a DO probe. Typically, the response lag (τ) of the probe has been ignored in the calculation of KLa when τ is less than the reciprocal of KLa; failing that, a constant τ has invariably been assumed. These conventions have now been reassessed in the context of multiphase bioprocesses, such as hydrocarbon-based systems, where significant variation of τ in response to changes in process conditions has been documented. Experiments were conducted in a 5 L baffled stirred-tank bioreactor (New Brunswick) in a simulated hydrocarbon-based bioprocess comprising a C14-20 alkane-aqueous dispersion with suspended non-viable Saccharomyces cerevisiae solids. DO was measured with a polarographic DO probe fitted with a Teflon membrane (Mettler Toledo). The DO concentration response to a step change in the sparge gas oxygen partial pressure was recorded, from which KLa was calculated using a first-order model (without incorporation of τ) and a second-order model (incorporating τ). τ was determined as the time taken to reach 63.2% of the saturation DO after the probe was transferred from a nitrogen-saturated vessel to an oxygen-saturated bioreactor, and is represented as the inverse of the probe constant (KP). The relative effects of the process parameters on KP were quantified using a central composite design with factor levels typical of hydrocarbon bioprocesses, namely 1-10 g/L yeast, 2-20 vol% alkane, and 450-1000 rpm. A response surface was fitted to the empirical data, and ANOVA was used to determine the significance of the effects with a 95% confidence interval. KP varied with changes in the system parameters, with the impact of solids loading statistically significant at the 95% confidence level. 
Increased solids loading consistently reduced KP, an effect which was magnified at high alkane concentrations, with a minimum KP of 0.024 s⁻¹ observed at the highest solids loading of 10 g/L. This KP was 2.8-fold lower than the maximum of 0.0661 s⁻¹ recorded at 1 g/L solids, demonstrating a substantial increase in τ from 15.1 s to 41.6 s as a result of the differing process conditions. Importantly, exclusion of KP from the calculation of KLa was shown to under-predict KLa for all process conditions, with errors of up to 50% at the highest KLa values. Accurate quantification of KLa, and therefore of KP, has far-reaching impact on industrial bioprocesses, ensuring these systems are not transport-limited during scale-up and operation. This study has shown the incorporation of τ to be essential for KLa measurement accuracy in multiphase bioprocesses. Moreover, since τ has been conclusively shown to vary significantly with process conditions, it must be determined individually for each set of process conditions.
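The under-prediction described above can be illustrated with the standard two-compartment step-response model: the liquid DO follows dC/dt = KLa(C* − C), and the probe reading adds its own first-order lag dCp/dt = KP(C − Cp). The sketch below uses KP values in the range reported here, but replaces the authors' regression with a simplified 63.2%-time estimate of the apparent KLa, so it is illustrative only:

```python
import math

def probe_response(t, kla, kp, c_sat=1.0):
    """Probe DO reading after a step change in sparge-gas O2: analytical
    solution of the cascaded first-order liquid (KLa) and probe (KP) lags."""
    return c_sat * (1 - (kp * math.exp(-kla * t) - kla * math.exp(-kp * t)) / (kp - kla))

def apparent_kla(kla, kp, dt=0.01, c_sat=1.0):
    """Naive first-order estimate, 1 / (time to 63.2% saturation):
    what one obtains if the probe lag tau = 1/KP is ignored."""
    t = 0.0
    while probe_response(t, kla, kp, c_sat) < 0.632 * c_sat:
        t += dt
    return 1.0 / t

kla_true = 0.05   # s⁻¹, assumed true transfer coefficient for illustration
kp = 0.024        # s⁻¹, the slowest probe constant reported (tau ≈ 41.6 s)
print(apparent_kla(kla_true, kp) < kla_true)  # ignoring tau under-predicts KLa
```

Because the probe lag delays the measured trace, the naive estimate is always below the true KLa, and the gap widens as KLa approaches KP, which matches the reported errors of up to 50% at the highest KLa values.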

Keywords: effect of process conditions, measuring oxygen transfer coefficients, multiphase bioprocesses, oxygen probe response lag

Procedia PDF Downloads 265
765 Possibility of Membrane Filtration to Treatment of Effluent from Digestate

Authors: Marcin Debowski, Marcin Zielinski, Magdalena Zielinska, Paulina Rusanowska

Abstract:

Digestate management is one of the most important factors influencing the development and operation of a biogas plant. Turbidity and bacterial contamination negatively affect the growth of algae, which can limit the use of the effluent for large-scale production of algal biomass. These problems can be overcome by cultivating algae species resistant to environmental factors, such as Chlorella sp. or Scenedesmus sp., or by reducing the load of organic compounds to prevent bacterial contamination. The effluent requires dilution and/or purification. One method of effluent treatment is membrane technology such as microfiltration (MF), ultrafiltration (UF), nanofiltration (NF), and reverse osmosis (RO), depending on the membrane pore size and cut-off point. Membranes are a physical barrier to solids and particles larger than the size of the pores. MF membranes have the largest pores and are used to remove turbidity, suspensions, bacteria, and some viruses. UF membranes also remove color, odor, and organic compounds of high molecular weight. In the treatment of wastewater or other waste streams, MF and UF can provide a sufficient degree of purification. NF membranes are used to remove natural organic matter from waters, water disinfection by-products, and sulfates. RO membranes are applied to remove monovalent ions such as Na⁺ or K⁺. The effluent after UF was used as a medium for cultivating two microalgae: Chlorella sp. and Phaeodactylum tricornutum. Growth rates of Chlorella sp. and P. tricornutum were similar: 0.216 d⁻¹ and 0.200 d⁻¹ (Chlorella sp.) and 0.128 d⁻¹ and 0.126 d⁻¹ (P. tricornutum) on synthetic medium and on permeate from UF, respectively. The final biomass composition was also similar, regardless of the medium. Nitrogen removal was 92% and 71% by Chlorella sp. and P. tricornutum, respectively. The fermentation effluents, after UF and dilution, were also used for cultivation of the alga Scenedesmus sp., 
which is resistant to environmental conditions. The authors recommend the development of a biorefinery based on the production of algae for biogas production. There are examples of using a multi-stage membrane system to purify the liquid fraction of digestate. After initial UF, RO is used to remove ammonium nitrogen and COD. To obtain a permeate with an ammonium nitrogen concentration low enough to discharge into the environment, it was necessary to apply three-stage RO. The composition of the permeate after two-stage RO was: COD 50–60 mg/dm³, dry solids 0 mg/dm³, ammonium nitrogen 300–320 mg/dm³, total nitrogen 320–340 mg/dm³, total phosphorus 53 mg/dm³. The composition after three-stage RO, however, was: COD < 5 mg/dm³, dry solids 0 mg/dm³, ammonium nitrogen 0 mg/dm³, total nitrogen 3.5 mg/dm³, total phosphorus < 0.05 mg/dm³. The last RO stage might be replaced by an ion exchange process. A negative aspect of membrane filtration systems is that the permeate is about 50% of the introduced volume; the remainder is retentate. Management of the retentate might involve recirculation to the biogas plant.
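For context, the specific growth rates quoted above translate directly into culture doubling times via td = ln(2)/μ, which makes the similarity between the synthetic medium and the UF permeate easy to read. A short sketch using the reported rates:

```python
import math

def doubling_time(mu):
    """Doubling time (days) from a specific growth rate mu (d⁻¹): ln(2)/mu."""
    return math.log(2) / mu

# Specific growth rates reported in the abstract (d⁻¹)
rates = {
    ("Chlorella sp.", "synthetic medium"): 0.216,
    ("Chlorella sp.", "UF permeate"): 0.200,
    ("P. tricornutum", "synthetic medium"): 0.128,
    ("P. tricornutum", "UF permeate"): 0.126,
}
for (species, medium), mu in rates.items():
    print(f"{species} on {medium}: doubling time {doubling_time(mu):.1f} d")
```

The doubling times on UF permeate differ from those on synthetic medium by well under a day for both species, supporting the conclusion that the permeate is an adequate cultivation medium.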

Keywords: digestate, membrane filtration, microalgae cultivation, Chlorella sp.

Procedia PDF Downloads 350
764 Becoming Vegan: The Theory of Planned Behavior and the Moderating Effect of Gender

Authors: Estela Díaz

Abstract:

This article aims to make three contributions. First, to build on the ethical decision-making literature by exploring factors that influence the intention to adopt veganism. Second, to study the superiority of extended models of the Theory of Planned Behavior (TPB) for understanding the process of forming the intention to adopt veganism. Third, to analyze the moderating effect of gender on the TPB, given that attitudes and behavior towards animals are gender-sensitive. No study, to our knowledge, has examined these questions. Veganism is not a diet but a political and moral stand that excludes, for moral reasons, the use of animals. Although there is growing interest in studying veganism, it continues to be overlooked in empirical research, especially within social psychology. The TPB has been widely used to study a broad range of human behaviors, including moral issues. Nonetheless, it has rarely been applied to ethical decisions about animals and, even less, to veganism. Hence, the validity of the TPB in predicting the intention to adopt veganism remains an open question. A total of 476 non-vegan Spanish university students (55.6% female; mean age 23.26 years, SD = 6.1) responded to an online or pencil-and-paper self-report questionnaire based on previous studies. The extended TPB models incorporated two background factors: 'general attitudes towards humanlike attributes ascribed to animals' (AHA) (capacity for reason/emotions/suffering, moral consideration, and affect towards animals) and 'general attitudes towards 11 uses of animals' (AUA). SPSS 22 and SmartPLS 3.0 were used for statistical analyses. The study constructed a second-order reflective-formative model and took a multi-group analysis (MGA) approach to study gender effects. Six models of the TPB (the standard model and five competing models) were tested. No a priori hypotheses were formulated. The results gave partial support to the TPB. 
Attitudes (ATTV) (β = .207, p < .001), subjective norms (SNV) (β = .323, p < .001), and perceived behavioral control (PBC) (β = .149, p < .001) had significant direct effects on intentions (INTV). This model accounted for 27.9% of the variance in intention (R²Adj = .275) and had small predictive relevance (Q² = .261). However, the findings reveal that, contrary to what the TPB generally proposes, the effect of the background factors on intentions was not fully mediated by the proximal constructs of intentions. For instance, in the final model (Model #6), both factors had significant multiple indirect effects on INTV (β = .074, 95% CI = .030, .126 [AHA:INTV]; β = .101, 95% CI = .055, .155 [AUA:INTV]) and significant direct effects on INTV (β = .175, p < .001 [AHA:INTV]; β = .100, p = .003 [AUA:INTV]). Furthermore, adding direct paths from the background factors to intentions improved the explained variance in intention (R² = .324; R²Adj = .317) and the predictive relevance (Q² = .300) over the base model. This supports the existing literature on the superiority of enhanced TPB models for predicting ethical issues, which suggests that moral behavior may add complexity to decision-making. Regarding the gender effect, the MGA showed that gender moderated only the influence of AHA on ATTV (e.g., βWomen − βMen = .296, p < .001 [Model #6]). However, other observed gender differences (e.g., the explained variance of the model for intentions was always higher for men than for women; for instance, R²Women = .298 and R²Men = .394 [Model #6]) deserve further consideration, especially for developing more effective communication strategies.

Keywords: veganism, Theory of Planned Behavior, background factors, gender moderation

Procedia PDF Downloads 344
763 Phase Synchronization of Skin Blood Flow Oscillations under Deep Controlled Breathing in Human

Authors: Arina V. Tankanag, Gennady V. Krasnikov, Nikolai K. Chemeris

Abstract:

The development of respiration-dependent oscillations in the peripheral blood flow may occur by at least two mechanisms. The first mechanism is related to the change of venous pressure due to the mechanical activity of the lungs. This phenomenon is known as the ‘respiratory pump’ and is one of the mechanisms of venous return of blood from the peripheral vessels to the heart. The second mechanism is related to vasomotor reflexes controlled by the respiratory modulation of the activity of the centers of the vegetative nervous system. High phase synchronization of respiration-dependent blood flow oscillations in the skin of the left and right forearms of healthy volunteers at rest has been shown previously. The aim of this work was to study the effect of deep controlled breathing on the phase synchronization of skin blood flow oscillations. 29 normotensive, non-smoking young women (18-25 years old) of normal constitution, without diagnosed pathologies of the skin, cardiovascular or respiratory systems, participated in the study. For each participant, six recording sessions were carried out: the first at the spontaneous breathing rate, and the next five in regimes of controlled breathing with fixed breathing depth and different enforced breathing rates. The following rates of the controlled breathing regime were used: 0.25, 0.16, 0.10, 0.07 and 0.05 Hz. The breathing depth amounted to 40% of the maximal chest excursion. Blood perfusion was registered by a LAKK-02 laser flowmeter (LAZMA, Russia) with two identical channels (wavelength 0.63 µm; emission power 0.5 mW). The first probe was fastened to the palmar surface of the distal phalanx of the left forefinger; the second probe was attached to the external surface of the left forearm near the wrist joint. These skin zones were chosen as zones with different dominant mechanisms of vascular tonus regulation. The degree of phase synchronization of the registered signals was estimated from the value of the wavelet phase coherence.
The duration of each recording was 5 min. The sampling frequency of the signals was 16 Hz. An increase in the synchronization of the respiration-dependent skin blood flow oscillations was observed for all controlled breathing regimes. Since the formation of respiration-dependent oscillations in the peripheral blood flow is mainly caused by the respiratory modulation of systemic blood pressure, the observed effects most likely depend on the breathing depth. It should be noted that during spontaneous breathing the depth does not exceed 15% of the maximal chest excursion, while in the present study the breathing depth was 40%. We therefore suggest that the significant increase in the phase synchronization of blood flow oscillations observed under our conditions is primarily due to the increase in breathing depth, which enhances both potential mechanisms of respiratory oscillation generation: venous pressure and sympathetic modulation of vascular tone.
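The phase-synchronization measure used above can be sketched in a few lines. As a simplification, this example uses the analytic-signal (Hilbert) phase in place of the wavelet phase the study computes, and both signals are synthetic stand-ins for the two skin sites; only the sampling rate, recording duration, and one breathing rate are taken from the abstract.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16.0                        # sampling frequency used in the study, Hz
t = np.arange(0, 300, 1 / fs)    # 5-minute recording
f_breath = 0.10                  # one of the enforced breathing rates, Hz
rng = np.random.default_rng(1)

# Synthetic stand-ins for the two skin sites: a shared respiration-locked
# component with a fixed phase lag, plus independent measurement noise.
sig1 = np.sin(2 * np.pi * f_breath * t) + 0.2 * rng.normal(size=t.size)
sig2 = np.sin(2 * np.pi * f_breath * t - 0.8) + 0.2 * rng.normal(size=t.size)

def phase_coherence(a, b):
    """|<exp(i*dphi)>| over time: 1 = phase-locked, near 0 = unsynchronized."""
    dphi = np.angle(hilbert(a)) - np.angle(hilbert(b))
    return np.abs(np.mean(np.exp(1j * dphi)))

locked = phase_coherence(sig1, sig2)                     # high: common driver
unsynced = phase_coherence(rng.normal(size=t.size),
                           rng.normal(size=t.size))      # low: independent noise
```

A constant phase lag between the two sites does not reduce the measure; only variability of the phase difference over time does, which is why coherence is a natural index of respiration-driven synchronization.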

Keywords: deep controlled breathing, peripheral blood flow oscillations, phase synchronization, wavelet phase coherence

Procedia PDF Downloads 206
762 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model

Authors: S. I. Mukhin, S. Seidov, A. Mukherjee

Abstract:

The Dicke model is a key tool for the description of correlated states of quantum atomic systems, excited by resonant photon absorption and subsequently emitting spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used for the description of the dynamics of a Josephson junction (JJ) array in a resonant cavity under an applied current. In this work, we have investigated a generalized model, described by the DH with a frustrating interaction term. This frustrating term is an infinitely coordinated interaction between all the spin-1/2 entities in the system. We consider an array of N superconducting islands, each divided into two sub-islands by a Josephson junction and taken in the charge qubit / Cooper pair box (CPB) regime. The array is placed inside the resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved in the system, such as the condensed electric field and the dipole moment. It is important to understand how these quantities behave in time in order to define the quantum phase of the system. The Dicke model without the frustrating term is solved to find dynamical solutions for the physical observables in analytic form. Using Heisenberg’s equations of motion for the operators and applying a newly developed rotating Holstein-Primakoff (HP) transformation to the DH, we arrive at four coupled nonlinear dynamical differential equations for the momentum and spin-component operators. The system can be solved analytically using two time scales. The analytical solutions are expressed in terms of Jacobi elliptic functions for the metastable ‘bound luminosity’ dynamic state, with periodic coherent beating of the dipoles connecting the two doubly degenerate dipolar-ordered phases discovered previously. In this work, we have extended the analysis to the DH with a frustrating interaction term.
Inclusion of the frustrating term complicates the system of differential equations, which becomes difficult to solve analytically. We have therefore solved the semi-classical dynamic equations using perturbation theory for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found when this symmetry is broken. Introducing a spontaneous symmetry-breaking term into the DH, we have derived solutions that show the occurrence of a finite condensate, indicating a quantum phase transition. Our results match the existing results in this scientific field.

Keywords: Dicke Model, nonlinear dynamics, perturbation theory, superconductivity

Procedia PDF Downloads 130
761 Modeling of Hot Casting Technology of Beryllium Oxide Ceramics with Ultrasonic Activation

Authors: Zamira Sattinova, Tassybek Bekenov

Abstract:

The article is devoted to modeling the technology of hot casting of beryllium oxide ceramics. The stages of ultrasonic activation of the beryllium oxide slurry in the plant vessel to improve its rheological properties, and of hot casting in the moulding cavity with cooling and solidification of the casting, are described. The thermoplastic slurry (hereinafter referred to as slurry) shows the rheology of a non-Newtonian fluid with a yield stress and plastic viscosity. Cooling and solidification of the slurry in the forming cavity proceed from the liquid state, through crystallization, to the solid state. This work presents a method for calculating the hot casting of the slurry based on the effective molecular viscosity of a viscoplastic fluid. It is shown that the slurry near the cooled wall is in a state of crystallization and plasticity, while the rest may still be in the liquid phase. A nonuniform distribution of temperature, density and concentration of kinetically free binder takes place along the cavity section. This leads to compensation of shrinkage by the influx of slurry from the liquid zone into the crystallization and plasticity zones of the casting. In the plasticity zone, the shrinkage, determined by the concentration of kinetically free binder, is compensated under the action of the pressure gradient. The solidification mechanism, the mechanical behavior of the casting mass during casting, and the rheological and thermophysical properties of the thermoplastic BeO slurry under ultrasound exposure have not been well studied. Nevertheless, experimental data allow us to conclude that the effect of ultrasonic vibrations on the slurry mass leads to a change in its structure, an improvement in its technological properties, a decrease in heterogeneity, and a change in its rheological properties.
In the course of the experiments, the effect of ultrasonic treatment and its duration on the viscosity and ultimate shear stress of the slurry was studied as a function of temperature (55-75℃) and the mass fraction of the binder (10-11.7%). Changes in these properties before and after ultrasound exposure were analyzed, as well as the nature of the flow in the system under study. Operating experience with the ultrasonic unit has shown that the casting capacity of the slurry increases by an average of 15%, while the viscosity decreases by more than half. Experimental study of the physicochemical properties and phase change, with simultaneous consideration of all factors affecting product quality during continuous casting, is labor-intensive. Therefore, an effective way to control the physical processes occurring in the formation of articles with predetermined properties and shapes is to simulate the process and determine its basic characteristics. The results of the calculations cover the entire hot-casting process of the beryllium oxide slurry, taking into account the change in its state of aggregation. Ultrasonic treatment improves the rheological properties and increases the fluidity of the slurry in the forming cavity. The calculations show the influence of velocity, temperature and the geometry of the cavity on the cooling-solidification process of the casting. In the calculations, conditions for molding with shrinkage of the slurry by hot casting have been found, which make it possible to obtain a solidified product with a uniform beryllium oxide structure at the outlet of the cavity.
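The effective-viscosity treatment of a viscoplastic (Bingham-type) slurry mentioned above is commonly implemented with a regularized constitutive law. A minimal sketch, with illustrative parameter values rather than measured BeO-slurry properties; the ultrasonic effect is represented simply as a halving of the plastic viscosity and yield stress, consistent with the reported drop in viscosity:

```python
import numpy as np

def effective_viscosity(shear_rate, mu_p, tau_y, m=100.0):
    """Papanastasiou-regularized Bingham viscosity, Pa*s (shear_rate > 0).

    mu_p  : plastic viscosity, Pa*s
    tau_y : yield stress, Pa
    m     : regularization time, s; large m approaches ideal Bingham behavior
    """
    g = np.asarray(shear_rate, dtype=float)
    # mu_eff = mu_p + tau_y * (1 - exp(-m*g)) / g
    return mu_p + tau_y * (-np.expm1(-m * g)) / g

# Illustrative values only (not measured BeO-slurry data)
gdot = np.logspace(-3, 2, 6)                           # shear rates, 1/s
before = effective_viscosity(gdot, mu_p=4.0, tau_y=30.0)   # untreated slurry
after = effective_viscosity(gdot, mu_p=2.0, tau_y=15.0)    # after ultrasound
```

The effective viscosity is very large at low shear rates, mimicking the unyielded (plastic) zones near the cooled wall, and approaches the plastic viscosity at high shear rates, mimicking the freely flowing liquid core.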

Keywords: hot casting, thermoplastic slurry molding, shrinkage, beryllium oxide

Procedia PDF Downloads 11
760 A Case Study of Wildlife Crime in Bangladesh

Authors: M. Golam Rabbi

Abstract:

The theme of wildlife crime is unique in Bangladesh. In the early 2010s, wildlife crime was not yet designated as a crime, unlike other offenses; the Forest Department and other enforcement agencies were not fully active in uncovering organized crime at that time and recorded few cases along with forest crime. However, after the establishment of the Wildlife Crime Control Unit in 2012, a total of 374 offenses were detected with 566 offenders, and 37,039 wildlife specimens and trophies were seized up to November 2016. Most offenses appear to be committed outside the forests, where the presence of forest staff is minimal. The total detection percentage of offenses is not known, but offenders are not identified in 60% of detected cases (UDOR). Only 20% of cases have been decided by the courts even after eight years; the conviction rate among disposed cases is 70.65%. Six months’ imprisonment and a BDT 5,000 fine appear to be the modal penalty. The monetary value of wildlife crime in the country is approximately $0.72M per year, with the maximum value, around $0.45M, accounted for by reptiles, especially high-level trafficking of geckos and turtles. The most common seizures are of birds (mynas, munias, parakeets, lorikeets, water birds, etc.), which are in domestic demand as pets. Other wildlife such as turtles, lizards and small mammals are also on the list. Venison and migratory waterbirds are often seized, for which there is large demand for consumption at an aristocratic level. Owing to the porous border and weak enforcement in the border region, poachers use this route for trafficking geckos, turtles and tortoises, snakes, venom, tigers and tiger parts, spotted deer skins, pangolins, etc., which are in very high demand in East Asian countries for so-called medicinal purposes.
The recent survey also demonstrates new routes for illegal trade and trafficking: for instance, tigers and deer poached from the Sundarbans, the largest mangrove tract on the planet, are moved to Thailand through the Bay of Bengal, while shark fins and ray fish pass through Chittagong seaport and directly by sea routes to Myanmar and Thailand. A good number of recorded offenses also demonstrate a transit route from India to South and South East Asian countries. Star tortoises and Hamilton’s turtles are smuggled in from India in very large numbers for onward transit to East Asian countries, and are mostly seized at the Benapole border in Jessore and at Hazrat Shahjalal International Airport in Dhaka. Most wildlife trade routes lead to China, Thailand, Malaysia, and Myanmar. Most surprisingly, African ivory was recently seized in Bangladesh, which was meant to be trafficked to South-East Asia. The Forest Department is working to fight wildlife poaching, illegal trade and trafficking in collaboration with other law enforcement agencies. The department needs a clear mandate and must build the technical capability to identify, seize and hold specimens. It also needs to step out of the forests and develop the capacity to surveil and patrol all sensitive locations across the country.

Keywords: Bangladesh forest department, Sundarban, tiger, wildlife crime, wildlife trafficking

Procedia PDF Downloads 302
759 Fluctuations in Radical Approaches to State Ownership of the Means of Production Over the Twentieth Century

Authors: Tom Turner

Abstract:

The recent financial crisis in 2008 and the growing inequality in developed industrial societies would appear to present significant challenges to capitalism and the free market. Yet there have been few substantial mainstream political or economic challenges to the dominant capitalist and market paradigm to date. There is no dearth of critical and theoretical (academic) analyses of the prevailing system’s failures. Yet despite the growing inequality in developed industrial societies and the 2008 financial crisis, few commentators, to our knowledge, have advocated the comprehensive socialization or state ownership of the means of production, a core principle of radical Marxism in the 19th and early 20th centuries. Undoubtedly, the experience of the Soviet Union and its satellite countries in the 20th century has cast a dark shadow over the notion of centrally controlled economies and state ownership of the means of production. In this paper, we explore the history of the doctrine advocating socialization or state ownership of the means of production that was central to Marxism and to socialism generally. Indeed, this doctrine provoked an intense and often acrimonious debate, especially for left-wing parties, throughout the 20th century. The debate within the political economy tradition has historically divided into a radical and a revisionist approach to changing or reforming capitalism. The radical perspective views the conflict of interest between capital and labor as a persistent and insoluble feature of a capitalist society and advocates the public or state ownership of the means of production. Alternatively, the revisionist perspective focuses on issues of distribution rather than production and emphasizes the possibility of compromise between capital and labor in capitalist societies. Over the 20th century, the radical perspective has faded, and even the social democratic revisionist tradition has declined in recent years.
We conclude with the major challenges that confront both the radical and revisionist perspectives in developing viable policy agendas in mature, developed democratic societies. Additionally, we consider whether state ownership of the means of production still has relevance in the 21st century, and to what extent it is off the agenda as a political issue in the mainstream politics of developed industrial societies. A central argument of the paper is that state ownership of the means of production is unlikely to feature as either a practical or theoretical solution to the problems of capitalism after the financial crisis among mainstream political parties of the left. Although the focus here is solely on the shifting views of the radical and revisionist socialist perspectives in the western European tradition, the analysis has relevance for the wider socialist movement.

Keywords: state ownership, ownership of the means of production, radicals, revisionists

Procedia PDF Downloads 115
758 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model

Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero

Abstract:

Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. This model provides the user with theoretical support for designing lithium-ion battery parameters, such as the material particle size or the direction in which to adjust the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs) describing Fick’s law of diffusion, the MacInnes and Ohm’s equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. Several numerical methods available in the literature can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests.
This last remark is particularly important, as this discretization technique allows the user to implement parameter estimation and optimization techniques, such as system identification or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the non-eligibility of the simple Euler method for long-term tests is presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods are compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include long-term simulations for aging tests, as well as short- and mid-term battery charge/discharge cycles, typically relevant in battery applications such as grid primary frequency and inertia control and electric vehicle braking and acceleration.
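The stability contrast described above can be illustrated on a single diffusion equation of the kind that appears in the electrolyte sub-model. This sketch uses illustrative parameters, not DFN battery values, and the helper names are hypothetical; both schemes advance the same initial profile at a time step four times above the explicit stability limit.

```python
import numpy as np

# 1D diffusion u_t = D*u_xx with u = 0 at both walls, a stand-in for the
# electrolyte diffusion PDE; parameters are illustrative, not DFN data.
D, L, N = 1.0, 1.0, 50
dx = L / (N + 1)
x = np.linspace(dx, L - dx, N)                 # interior grid nodes
u0 = np.sin(np.pi * x)                         # smooth initial profile

# Second-difference matrix for the Dirichlet problem
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2

def step_explicit(u, dt):
    """Explicit Euler: stable only if D*dt/dx**2 <= 1/2."""
    return u + dt * D * (A @ u)

def step_crank_nicolson(u, dt):
    """Crank-Nicolson: unconditionally stable, second order in time."""
    I = np.eye(N)
    return np.linalg.solve(I - 0.5 * dt * D * A, (I + 0.5 * dt * D * A) @ u)

dt = 2.0 * dx**2 / D                           # 4x over the explicit limit
ue = u0 + 1e-8 * np.cos(np.pi * np.arange(N))  # tiny high-frequency seed
uc = u0.copy()
for _ in range(200):
    ue = step_explicit(ue, dt)                 # grid-scale mode blows up
    uc = step_crank_nicolson(uc, dt)           # decays smoothly
```

The Crank-Nicolson update costs a linear solve per step, but the freedom to take large, stable time steps is precisely what makes it attractive for the long-term aging simulations mentioned above.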

Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods

Procedia PDF Downloads 14
757 Energy Atlas: Geographic Information Systems-Based Energy Analysis and Planning Tool

Authors: Katarina Pogacnik, Ursa Zakrajsek, Nejc Sirk, Ziga Lampret

Abstract:

Due to an increase in living standards along with global population growth and a trend of urbanization, municipalities and regions are faced with an ever-rising energy demand. A challenge has arisen for cities around the world to modify the energy supply chain in order to reduce consumption and CO₂ emissions. The aim of our work is the development of a computational-analytical platform for dynamic decision support and the determination of economic and technical indicators of energy efficiency in a smart city, named Energy Atlas. Similar products in this field take a narrower approach, whereas this platform encompasses a wider spectrum of information beneficial and important for energy planning on a local or regional scale. GIS-based interactive maps provide an extensive database on the potential, use and supply of energy and renewable energy sources, along with climate, transport and spatial data for the selected municipality. Beneficiaries of Energy Atlas are local communities, companies, investors and contractors, as well as residents. The Energy Atlas platform consists of three modules, named E-Planning, E-Indicators and E-Cooperation. The E-Planning module is a comprehensive data service that supports optimal decision-making and offers a set of solutions, together with the feasibility of measures and their effects, in the area of efficient energy use and renewable energy sources. The E-Indicators module identifies, collects and develops optimal data and key performance indicators and provides an analytical application service for dynamic support in managing a smart city with regard to energy use and a sustainable environment. To support cooperation and the direct involvement of citizens of the smart city, the E-Cooperation module integrates the interdisciplinary and sociological aspects of energy end-users.
The interaction of all the above-described modules contributes to regional development because it enables a precise assessment of the current situation, strategic planning, detection of potential future difficulties, and also public involvement in decision-making. From the implementation of the technology in the Slovenian municipalities of Ljubljana, Piran, and Novo mesto, there is evidence to suggest that the set goals are being achieved to a great extent. Such a thorough urban energy planning tool is viewed as an important piece of the puzzle towards achieving a low-carbon society, a circular economy and, therefore, a sustainable society.

Keywords: circular economy, energy atlas, energy management, energy planning, low-carbon society

Procedia PDF Downloads 302
756 Knowledge Management Processes as a Driver of Knowledge-Worker Performance in Public Health Sector of Pakistan

Authors: Shahid Razzaq

Abstract:

Governments around the globe have started taking knowledge management dynamics into consideration, with or without conscious realization, while formulating, implementing, and evaluating strategies for different public sector organizations and public policy developments. The Health Department of Punjab province in Pakistan is striving to deliver quality healthcare services to the community through an efficient and effective service delivery system. Despite this effort, some employee performance issues still exist and pose a challenge to the government. To overcome these issues, the department has taken several steps, including HR strategies, the use of technologies, and a focus on hard issues. Consequently, this study attempts to highlight the importance of a soft issue, knowledge management in its true essence, in tackling these performance issues. Knowledge management in the public sector is quite an ignored area within knowledge management, a growing multidisciplinary research discipline. The knowledge-based view of the firm asserts that knowledge is the most strategic resource and can result in competitive advantage for an organization over competing organizations. In the context of our study, this means that to improve employee performance, organizations have to increase their heterogeneous knowledge bases. The study uses a cross-sectional, quantitative research design. The data were collected from knowledge workers of the Health Department of Punjab, the biggest province of Pakistan. A total sample size of 341 was achieved. SmartPLS 3 (Version 2.6) was used for analyzing the data. The data examination revealed that knowledge management processes have a strong impact on knowledge worker performance. All hypotheses were accepted according to the results. Therefore, it can be summed up that knowledge management activities should be implemented to increase employee performance.
The Health Department of Punjab has introduced knowledge management infrastructure and systems to make knowledge effectively available to service staff. This infrastructure has led to an increase in knowledge management processes in remote hospitals, basic health units and care centers, which has resulted in greater service provision to the public. This study has both theoretical and practical significance. In terms of theoretical contribution, it establishes, for the first time, the relationship between knowledge management and performance in this context. In terms of practical contribution, it gives public sector organizations and the government insight into the role of knowledge management in employee performance. Therefore, public policymakers are strongly advised to implement knowledge management activities to enhance the performance of knowledge workers. The current research validated the substantial role of knowledge management in shaping employee attitudes and behavioral intentions. To the best of the authors’ knowledge, the originality of this study lies in its contribution on the impact of knowledge management on employee performance.

Keywords: employee performance, knowledge management, public sector, soft issues

Procedia PDF Downloads 137
755 The Effects of the GAA15 (Gaelic Athletic Association 15) on Lower Extremity Injury Incidence and Neuromuscular Functional Outcomes in Collegiate Gaelic Games: A 2 Year Prospective Study

Authors: Brenagh E. Schlingermann, Clare Lodge, Paula Rankin

Abstract:

Background: Gaelic football, hurling and camogie are highly popular field games in Ireland. Research into the epidemiology of injury in Gaelic games has revealed that approximately three quarters of injuries occur in the lower extremity. These injuries can have player, team and institutional impacts due to multiple factors, including financial burden and time lost from competition. Research has shown it is possible to record injury data consistently within the GAA through a closed online recording system known as the GAA injury surveillance database, and it is established that determining the incidence of injury is the first step of injury prevention. The goals of this study were to create a dynamic GAA15 injury prevention programme addressing five key components/goals: avoiding positions associated with a high risk of injury, enhancing flexibility, enhancing strength, optimizing plyometrics, and addressing sport-specific agilities. These key components are internationally recognized through the Prevent Injury, Enhance Performance (PEP) programme, which has demonstrated a 74% reduction in ACL injuries. In national Gaelic games, the programme devised from the principles of the PEP is known as the GAA15. No such injury prevention strategies have been published for this cohort in Gaelic games to date. This study investigates the effects of the GAA15 on injury incidence and neuromuscular function in Gaelic games. Methods: A total of 154 players (mean age 20.32 ± 2.84) were recruited from the GAA teams within the Institute of Technology Carlow (ITC). Preseason and post-season testing involved two objective screening tests: the Y Balance Test and the Three Hop Test. Practical workshops, with ongoing liaison, were provided to the coaches on the implementation of the GAA15.
The programme was performed before every training session and game, and the existing GAA injury surveillance database was accessed by the college sports rehabilitation athletic therapist to monitor players’ injuries. Retrospective analysis of the ITC clinic records was performed in conjunction with the database analysis as a means of tracking injuries that may have been missed. The effects of the programme were analysed by comparing the intervention group’s Y Balance and Three Hop Test scores to those of an age/gender-matched control group. Results: Year 1 results revealed significant increases in neuromuscular function as a result of the GAA15. Y Balance Test scores for the intervention group increased in both the posterolateral (p = .005 and p = .001) and posteromedial reach directions (p = .001 and p = .001). A decrease in performance was found for the Three Hop Test (p = .039). Overall, twenty-five injuries were reported during the season, giving an injury rate of 3.00 injuries/1000 hrs of participation: 1.25 injuries/1000 hrs of training and 4.25 injuries/1000 hrs of match play. Non-contact injuries accounted for 40% of the injuries sustained. Year 2 results are pending and expected in April 2016. Conclusion: It is envisaged that implementation of the GAA15 will continue to reduce the risk of injury and improve neuromuscular function in collegiate Gaelic games athletes.
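The incidence figures reported above follow the standard exposure-based definition: injuries per 1000 hours of athletic exposure. A minimal sketch; the exposure hours below are back-calculated assumptions for illustration, since the study's actual exposure log is not reported here.

```python
def injury_rate(n_injuries, exposure_hours):
    """Injuries per 1000 hours of athletic exposure."""
    return 1000.0 * n_injuries / exposure_hours

# 25 injuries at 3.00 injuries/1000 hrs implies roughly 25 / 3.00 * 1000
# ≈ 8333 hrs of total exposure (a back-calculated, illustrative figure).
total_hours = 25 / 3.00 * 1000
overall = injury_rate(25, total_hours)   # recovers the overall reported rate
```

Normalizing by exposure is what allows the training rate (1.25/1000 hrs) and the match-play rate (4.25/1000 hrs) to be compared fairly despite very different hours spent in each activity.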

Keywords: GAA15, Gaelic games, injury prevention, neuromuscular training

Procedia PDF Downloads 332
754 Correlation of Hyperlipidemia with Platelet Parameters in Blood Donors

Authors: S. Nishat Fatima Rizvi, Tulika Chandra, Abbas Ali Mahdi, Devisha Agarwal

Abstract:

Introduction: Blood components are an unexplored area prone to numerous discoveries that influence patient care. Experiments at different levels will further change the present concept of blood banking. Hyperlipidemia is a condition of elevated plasma levels of low-density lipoprotein (LDL) together with decreased plasma levels of high-density lipoprotein (HDL). Studies show that platelets play a vital role in the progression of atherosclerosis and thrombosis, a major cause of death worldwide. They are activated by many triggers, such as elevated LDL in the blood, resulting in aggregation and the formation of plaques. Hyperlipidemic platelets are frequently transfused to patients with various disorders. Screening random donor platelets for hyperlipidemia, and correlating the condition with other donor criteria such as a lipid-rich diet, oral contraceptive pill intake, weight, alcohol intake, smoking, a sedentary lifestyle, and a family history of heart disease, will help in further defining the exclusion criteria for donor selection. This will help make transfusion safer for patients and the donor deferral criteria more stringent, improving the quality of the blood supply. Technical evaluation and assessment will enable blood bankers to supply safe blood and improve blood safety guidelines. We therefore studied the correlation of hyperlipidemic platelets with platelet parameters, weight, and specific donor history. Methodology: This case-control study included 100 blood samples from blood donors; of these, 30 samples were found to be hyperlipidemic and were included as cases, while the rest were taken as controls. Lipid profiles were measured on a fully automated analyzer (Cobas C311, Roche Diagnostics): triglycerides (TRIGL), LDL-cholesterol plus 2nd generation (LDL-C), cholesterol Gen 2 (CHOL2), and HDL-cholesterol plus 3rd generation (HDLC3). Platelet parameters were analyzed with the Sysmex KX-21 automated hematology analyzer.
Results: A significant correlation was found with hyperlipidemic levels in first-time donors. Among them, 80% had a history of heart disease, 66.66% had a sedentary lifestyle, 83.3% were smokers, 50% were alcohol consumers, and 63.33% had taken a lipid-rich diet. Active physical activity was reported by 40% of donors. Donor samples were divided into two groups based on body weight. In group 1 (hyperlipidemic samples), platelet parameters were normal in 75% and abnormal in 25% of donors weighing >70 kg, while in the 50-70 kg group, 90% were normal and 10% abnormal. In group 2 (non-hyperlipidemic samples), platelet parameters were normal in 95% and abnormal in 5% of donors weighing >70 kg, while in the 50-70 kg group, 66.66% were normal and 33.33% abnormal. Conclusion: The findings indicate that the hyperlipidemic status of donors may affect platelet parameters and can be anticipated from donor history: weight, smoking, alcohol intake, sedentary lifestyle, physical activity, lipid-rich diet, oral contraceptive pill intake, and family history of heart disease. However, further studies on a larger sample size are needed to affirm this finding.
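As a rough illustration of the kind of group comparison behind these percentages, the contrast of abnormal versus normal platelet parameters between hyperlipidemic and non-hyperlipidemic donors can be tested with Fisher's exact test on a 2x2 table. The counts below are hypothetical stand-ins, not the study's data; the sketch only shows the shape of the analysis.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (NOT the study's counts):
# rows: hyperlipidemic vs non-hyperlipidemic donors
# cols: abnormal vs normal platelet parameters
table = [[8, 22],   # hyperlipidemic: 8 abnormal, 22 normal
         [7, 63]]   # non-hyperlipidemic: 7 abnormal, 63 normal

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

Fisher's exact test is preferred over chi-square here because the expected cell counts in a 30-case sample can be small.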

Keywords: blood donors, hyperlipidemia, platelet, weight

Procedia PDF Downloads 308
753 Recent Advances in the Valorization of Goat Milk: Nutritional Properties and Production Sustainability

Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana

Abstract:

Goat dairy products are gaining popularity worldwide. In developing countries, but also in many marginal regions of the Mediterranean area, goats represent a great part of the economy and ensure food security. In fact, these small ruminants are able to efficiently convert poor weedy plants and small trees into traditional products of high nutritional quality, showing great resilience to different climatic and environmental conditions. In developed countries, goat milk is appreciated for the presence of health-promoting bioactive compounds such as conjugated linoleic acids, oligosaccharides, sphingolipids, and polyamines. This paper focuses on recent advances in the literature on the nutritional properties of goat milk and on innovative techniques to improve its quality so that it can become a promising functional food. The environmental sustainability of different production methodologies has also been examined. Goat milk is valued today as a food of high nutritional value and functional properties, as well as for its small environmental footprint. It is widely consumed in many countries due to its high nutritional value, lower allergenic potential, and better digestibility when compared to bovine milk, which makes this product suitable for infants, the elderly, or sensitive patients. The main compositional differences between cow and goat milk lie in the fat globules, which are smaller in goat milk, and in the fatty acids, which have shorter chain lengths, while protein, fat, and lactose concentrations are comparable. The nutritional properties of milk have been demonstrated to be strongly influenced by animal diet, genotype, and welfare, but also by season and production systems. Furthermore, there is a growing interest in the dairy industry in goat milk for its relatively high concentration of prebiotics and a good amount of probiotics, which have recently gained importance for their therapeutic potential.
Therefore, goat milk is studied as a promising matrix to develop innovative functional foods. In addition to its economic and nutritional value, goat milk is considered a sustainable product for its small environmental footprint, as goats require relatively little water and land and fewer medical treatments compared to cows; these characteristics make its production naturally suited to organic farming. Organic goat milk production has become more and more interesting both for farmers and consumers, as it can answer several concerns such as environmental protection, animal welfare, and the economic sustainability of rural populations living in marginal lands. This evidence makes goat milk an ancient food with novel properties and advantages to be valorized and exploited.

Keywords: goat milk, nutritional quality, bioactive compounds, sustainable production, animal welfare

Procedia PDF Downloads 145
752 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme

Authors: Eleanor Nel

Abstract:

Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. As a result, academic institutions face significant pressure from governing bodies to provide evidence of the return on research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often surpasses the development of robust, widely accepted tools to additionally measure research impact at institutions. A publication writing programme for enhancing research output was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. Although publication does not occur immediately, demonstrating a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period.
To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve the impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results more effectively from publication writing programmes to institutional strategic objectives to improve research performance and quality, as well as what should be included in a comprehensive evaluation framework.

Keywords: evaluation, framework, impact, research output

Procedia PDF Downloads 74
751 Index and Mechanical Geotechnical Properties and Their Control on the Strength and Durability of the Cainozoic Calcarenites in KwaZulu-Natal, South Africa

Authors: Luvuno N. Jele, Warwick W. Hastie, Andrew Green

Abstract:

Calcarenite is a clastic sedimentary beach rock composed of more than 50% sand-sized (0.0625-2 mm) carbonate grains. In South Africa, these rocks occur as a narrow belt along most of the coast of KwaZulu-Natal and sporadically along the coast of the Eastern Cape. Calcarenites contain a high percentage of calcium carbonate and, due to a number of physical and structural features such as porosity, cementing material, sedimentary structures, grain shape, and grain size, they are more prone to chemical and mechanical weathering. The objective of the research is to study the strength and compressibility characteristics of the calcarenites along the coast of KwaZulu-Natal in order to better understand the geotechnical behaviour of these rocks, which may help to predict areas along the coast that are potentially susceptible to failure or differential settling resulting in damage to property. A total of 148 cores were prepared and analyzed, both perpendicular and parallel to bedding. Tests were carried out in accordance with the relevant codes and recommendations of the International Society for Rock Mechanics, the American Society for Testing and Materials, and the Committee of Land Transport Officials Standard Specifications for Road and Bridge Works for State Road Authorities. Tests carried out included: X-ray diffraction, petrography, shape preferred orientation (SPO), 3-D tomography, rock porosity, rock permeability, ethylene glycol, slake durability, rock water absorption, Duncan swelling index, triaxial compressive strength, Brazilian tensile strength, and uniaxial compression with elastic modulus. The beach-rocks have a uniaxial compressive strength (UCS) ranging from 17.84 MPa to 287.35 MPa and exhibit three types of failure: (1) single sliding shear failure, (2) complete cone development, and (3) splitting failure. Brazilian tensile strength of the rocks ranges from 2.56 MPa to 12.40 MPa, with cores tested perpendicular to bedding showing lower tensile strength.
Triaxial compressive tests indicate the calcarenites have strengths ranging from 86.10 MPa to 371.85 MPa. The common failure mode in the triaxial test is single sliding shear failure. Porosity of the rocks varies from 1.25% to 26.52%. The rock tests indicate that the direction of loading, whether parallel or perpendicular to bedding, plays no significant role in the strength and durability of the calcarenites; porosity, cement type, and grain texture play the major roles. UCS results indicate that saturated cores are weaker than dry samples. Thus, water or moisture content plays a significant role in the strength and durability of the beach-rock. Loosely packed, highly porous, low magnesian-calcite-bearing calcarenites show a decrease in strength compared to densely packed, low-porosity, high magnesian-calcite-bearing calcarenites.
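The dry-versus-saturated strength contrast reported above is commonly summarized as a softening coefficient, the ratio of saturated to dry UCS for the same core. The values below are hypothetical, purely to illustrate the arithmetic; they are not the study's measurements.

```python
# Softening coefficient: saturated UCS / dry UCS for matched cores.
# Values near 1 mean moisture has little effect; lower values mean
# strong moisture-induced weakening. All values here are hypothetical (MPa).
dry_ucs = [120.5, 95.2, 210.0, 88.4]   # dry cores
sat_ucs = [84.3, 61.9, 168.0, 57.5]    # same cores after saturation

softening = [s / d for s, d in zip(sat_ucs, dry_ucs)]
mean_softening = sum(softening) / len(softening)
print(f"mean softening coefficient = {mean_softening:.2f}")
```

A coefficient well below 1, as in this sketch, is consistent with the paper's observation that saturated calcarenite cores are weaker than dry ones.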

Keywords: beach-rock, calcarenite, cement, compressive, failure, porosity, strength, tensile, grains

Procedia PDF Downloads 91
750 Case-Based Options Counseling Panel To Supplement An Indiana Medical School’s Pre-Clinical Family Planning and Abortion Education Curriculum

Authors: Alexandra McKinzie, Lucy Brown, Sarah Komanapalli, Sarah Swiezy, Caitlin Bernard

Abstract:

Background: While 25% of US women will seek an abortion before age 45, targeted laws have led to a decline in abortion clinics, subsequently leaving 96% of Indiana counties and the 70% of Hoosier women residing in these counties without access to services they desperately need.1,2 Despite the need for a physician workforce that is educated and able to provide full-spectrum reproductive health care, few medical institutions have a standardized family planning and abortion pre-clinical curriculum. Methods: A Qualtrics survey was disseminated to students from Indiana University School of Medicine (IUSM) to evaluate (1) student interest in curriculum reform, (2) self-assessed preparedness to counsel on contraceptive and pregnancy options, and (3) preferred modality of instruction for family planning and abortion topics. Based on the pre-panel survey feedback, a case-based pregnancy options counseling panel will be implemented in the students’ pre-clinical, didactic course Endocrine, Reproductive, Musculoskeletal, Dermatologic Systems (ERMD) in February 2022. A Qualtrics post-panel survey will be disseminated to evaluate students’ perceived efficacy and quality of the panel, as well as their self-assessed preparedness to counsel on pregnancy options. Results: Participants in the pre-panel survey (n=303) were primarily female (61.72%) and White (74.43%). Across all class levels, many (60.80%) students expected to learn about family planning and abortion in their pre-clinical education. While most (84-88%) participants felt prepared to counsel about common, non-controversial pharmacotherapies (e.g. beta-blockers and diuretics), only 20% of students felt prepared to counsel on abortion options. Overall, 85.67% of students believed that IUSM should enhance its reproductive health coverage in pre-clinical, didactic courses. Traditional lectures, panels, and direct clinical exposure were the most popular instructional modalities. 
Expected Results: The authors predict that following the panel, students will indicate improved confidence in providing pregnancy options counseling. Additionally, students will provide constructive feedback on the structure and content of the panel for incorporation into future years’ curriculum. Conclusions: IUSM students overwhelmingly expressed interest in expanding their pre-clinical curriculum’s coverage of family planning and abortion topics. To specifically improve students’ self-assessed preparedness to provide pregnancy options counseling and address students’ self-cited learning gaps, a case-based provider panel session will be implemented in response to students’ preferred modality feedback.

Keywords: options counseling, family planning, abortion, curriculum reform, case-based panel

Procedia PDF Downloads 139
749 Diminishing Constitutional Hyper-Rigidity by Means of Digital Technologies: A Case Study on E-Consultations in Canada

Authors: Amy Buckley

Abstract:

The purpose of this article is to assess the problem of constitutional hyper-rigidity to consider how it and the associated tensions with democratic constitutionalism can be diminished by means of using digital democratic technologies. In other words, this article examines how digital technologies can assist us in ensuring fidelity to the will of the constituent power without paying the price of hyper-rigidity. In doing so, it is impossible to ignore that digital strategies can also harm democracy through, for example, manipulation, hacking, ‘fake news,’ and the like. This article considers the tension between constitutional hyper-rigidity and democratic constitutionalism and the relevant strengths and weaknesses of digital democratic strategies before undertaking a case study on Canadian e-consultations and drawing its conclusions. This article observes democratic constitutionalism through the lens of the theory of deliberative democracy to suggest that the application of digital strategies can, notwithstanding their pitfalls, improve a constituency’s amendment culture and, thus, diminish constitutional hyper-rigidity. Constitutional hyper-rigidity is not a new or underexplored concept. At a high level, a constitution can be said to be ‘hyper-rigid’ when its formal amendment procedure is so difficult to enact that it does not take place or is limited in its application. This article claims that hyper-rigidity is one problem with ordinary constitutionalism that fails to satisfy the principled requirements of democratic constitutionalism. Given the rise and development of technology that has taken place since the Digital Revolution, there has been a significant expansion in the possibility for digital democratic strategies to overcome the democratic constitutionalism failures resulting from constitutional hyper-rigidity. 
Typically, these strategies have included, inter alia, e-consultations, e-voting systems, and online polling forums, all of which significantly improve the ability of politicians and judges to directly obtain the opinion of constituents on any number of matters. This article expands on the application of these strategies through its Canadian e-consultation case study and presents them as a solution to poor amendment culture and, consequently, constitutional hyper-rigidity. Hyper-rigidity is a common descriptor of many written and unwritten constitutions, including the United States, Australian, and Canadian constitutions as just some examples. This article undertakes a case study on Canada, in particular, as it is a jurisdiction less commonly cited in academic literature generally concerned with hyper-rigidity and because Canada has, to some extent, championed the use of e-consultations. In Part I of this article, I identify the problem, being that the consequence of constitutional hyper-rigidity is in tension with the principles of democratic constitutionalism. In Part II, I identify and explore a potential solution, the implementation of digital democratic strategies as a means of reducing constitutional hyper-rigidity. In Part III, I explore Canada’s e-consultations as a case study for assessing whether digital democratic strategies do, in fact, improve a constituency’s amendment culture, thus reducing constitutional hyper-rigidity and the associated tension with the principles of democratic constitutionalism. The idea is to run a case study and then assess whether I can generalise the conclusions.

Keywords: constitutional hyper-rigidity, digital democracy, deliberative democracy, democratic constitutionalism

Procedia PDF Downloads 72
748 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing

Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl

Abstract:

This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz-resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for an immediate application in the AEC-Industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additive prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. 
Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm, which can modify the geometry with the goal of increasing absorption at resonance and widening the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This iterative process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a 0.25 m × 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
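The parameter-tuning loop described above can be sketched numerically for a single, uncoupled Helmholtz resonator, whose resonance frequency is f0 = (c/2π)·sqrt(S/(V·L_eff)), with S the neck cross-section, V the cavity volume, and L_eff the neck length plus an end correction. The sketch below tunes one geometric parameter (neck radius) toward a target frequency by bisection; it is a minimal stand-in for the paper's COMSOL optimization, and all dimensions are hypothetical.

```python
import math

C = 343.0  # speed of sound in air, m/s (at ~20 °C)

def helmholtz_f0(neck_radius, neck_length, cavity_volume):
    """Resonance frequency (Hz) of one Helmholtz resonator.
    Uses a flanged-neck end correction of ~0.85*r per side."""
    area = math.pi * neck_radius ** 2
    l_eff = neck_length + 1.7 * neck_radius
    return (C / (2 * math.pi)) * math.sqrt(area / (cavity_volume * l_eff))

def tune_neck_radius(target_hz, neck_length, cavity_volume,
                     lo=0.002, hi=0.03):
    """Bisect for the neck radius whose resonance hits target_hz.
    f0 grows monotonically with radius, so bisection converges."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if helmholtz_f0(mid, neck_length, cavity_volume) < target_hz:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical geometry: 20 mm neck, 0.2 L cavity, tuned to 300 Hz.
r = tune_neck_radius(300.0, neck_length=0.02, cavity_volume=2e-4)
print(f"tuned neck radius = {r * 1000:.1f} mm")
```

A full workflow would replace the closed-form f0 with the simulated absorption spectrum, as the paper does, but the optimization structure is the same: evaluate, compare to target, adjust geometry, repeat.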

Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization

Procedia PDF Downloads 155
747 CD97 and Its Role in Glioblastoma Stem Cell Self-Renewal

Authors: Niklas Ravn-Boess, Nainita Bhowmick, Takamitsu Hattori, Shohei Koide, Christopher Park, Dimitris Placantonakis

Abstract:

Background: Glioblastoma (GBM) is the most common and deadly primary brain malignancy in adults. Tumor propagation, brain invasion, and resistance to therapy critically depend on GBM stem-like cells (GSCs); however, the mechanisms that regulate GSC self-renewal are incompletely understood. Given the aggressiveness and poor prognosis of GBM, it is imperative to find biomarkers that could also translate into novel drug targets. Along these lines, we have identified a cell surface antigen, CD97 (ADGRE5), an adhesion G protein-coupled receptor (GPCR), that is expressed on GBM cells but is absent from non-neoplastic brain tissue. CD97 has been shown to promote invasiveness, angiogenesis, and migration in several human cancers, but its frequency of expression, its functional role in regulating GBM growth and survival, and its potential as a therapeutic target have not been investigated. Design: We assessed CD97 mRNA and protein expression in patient-derived GBM samples and cell lines using publicly available RNA-sequencing datasets and flow cytometry, respectively. To assess CD97 function, we generated shRNA lentiviral constructs that target a sequence in the CD97 extracellular domain (ECD). A scrambled shRNA (scr) with no predicted targets in the genome was used as a control. We evaluated CD97 shRNA lentivirally transduced GBM cells for Ki67, Annexin V, and DAPI. We also tested CD97 KD cells for their ability to self-renew using clonogenic tumorsphere formation assays. Further, we utilized synthetic Abs (sAbs) generated against the ECD of CD97 to test for potential antitumor effects using patient-derived GBM cell lines. Results: CD97 mRNA was expressed at high levels in all GBM samples available in the TCGA cohort. We found high levels of surface CD97 protein expression in 6/6 patient-derived GBM cell cultures, but not human neural stem cells. Flow cytometry confirmed downregulation of CD97 in CD97 shRNA lentivirally transduced cells.
CD97 KD induced a significant reduction in cell growth in 3 independent GBM cell lines representing mesenchymal and proneural subtypes, which was accompanied by reduced (~20%) Ki67 staining and increased (~30%) apoptosis. Incubation of GBM cells with sAbs (20 µg/ml) against the ECD of CD97 for 3 days induced GSC differentiation, as determined by the expression of GFAP and tubulin. Using three unique GBM patient-derived cultures, we found that CD97 KD attenuated the ability of GBM cells to initiate sphere formation by over 300-fold, consistent with an impairment in GSC self-renewal. Conclusion: Loss of CD97 expression in patient-derived GBM cells markedly decreases proliferation, induces cell death, and reduces tumorsphere formation. sAbs against the ECD of CD97 reduce tumorsphere formation, recapitulating the phenotype of CD97 KD and suggesting that sAbs that inhibit CD97 function exhibit anti-tumor activity. Collectively, these findings indicate that CD97 is necessary for the proliferation and survival of human GBM cells and identify CD97 as a promising therapeutically targetable vulnerability in GBM.

Keywords: adhesion GPCR, CD97, GBM stem cell, glioblastoma

Procedia PDF Downloads 133
746 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data

Authors: Sara Bonetti

Abstract:

The use of quantitative data to support policies and justify investments has become imperative in many fields including the field of education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy that use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed at analyzing these experiences in accessing, using, and producing quantitative data. This study utilized semi-structured interviews to capture the differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department to allow for a broader perspective at systems level. The study followed Núñez’s multilevel model of intersectionality. The key in Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that and their positionality, both with respect to working with data and also with respect to their power within an organization and at the policy table. 
For example, those with a background in child development were aware of how their formal education failed to train them in the skills that are necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants’ position within the organization and their organization’s position with respect to others and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data is collected and available. These differences were reflected in the interviewees’ perceptions of and expectations for the ECE workforce. For example, one of the interviewees pointed out that many ECE professionals happen to use data out of the necessity of the moment. This lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students’ training in working with data. In conclusion, Núñez’s model helped in understanding the different elements that affect ECE professionals’ experiences with quantitative data. In particular, it was clear that these professionals are not being provided with the necessary support and that we are not being intentional in creating data literacy skills for them, despite what is asked of them in their work.

Keywords: data literacy, early childhood professionals, intersectionality, quantitative data

Procedia PDF Downloads 250
745 Moral Rights: Judicial Evidence Insufficiency in the Determination of the Truth and Reasoning in Brazilian Morally Charged Cases

Authors: Rainner Roweder

Abstract:

Theme: The present paper aims to analyze the specificity of judicial evidence linked to the subjects of dignity and personality rights, otherwise known as moral rights, in the determination of the truth and the formation of judicial reasoning in cases concerning these areas. This research is about the way courts in Brazilian domestic law search for truth and handle evidence in cases involving moral rights, which are abundant and important in Brazil. The main object of the paper is to analyze the effectiveness of evidence in the formation of judicial conviction in matters related to morally controverted rights, based on the Brazilian and, as a comparison, other Latin American legal systems. In short, the rights of dignity and personality are moral. However, the evidential legal system expects a rational demonstration of moral rights that generates judicial conviction or persuasion. Morality, in turn, tends to be difficult or impossible to demonstrate in court, generating the problem considered in this paper: the study of the problem of demonstrating morality as proof in court. In this sense, the more closely a right is linked to morality, the more difficult it is to demonstrate in court, expanding the field of judicial discretion and generating legal uncertainty. More specifically, the new personality rights, such as gender and its possibility of alteration, further amplify the problem, being essentially intimate matters that do not fit an objective, rational evidential system as it normally applies to other categories, such as contracts. Therefore, evidencing this legal category in court, with the level of security required by the law, is a herculean task. It becomes virtually impossible to use the same evidentiary system when judging the rights researched here; this generates the need for a new design of the evidential task regarding personality rights, the central effort of the present paper.
Methodology: In the investigation phase, the inductive method was used together with the comparative law method; in the data treatment phase, the inductive method was also used. Comparison of doctrine, legislation, and jurisprudence was the research technique employed. Results: In addition to the peculiar characteristics of personality rights that are not found in other rights, part of them are essentially linked to morality and are not objectively verifiable by design, making it necessary to use specific argumentative theories, with interdisciplinary support, for their secure confirmation. The traditional pragmatic theory of proof, having an obviously objective character, aggravates decisionism and generates legal insecurity when applied to rights linked to morality; it must be reconstructed for morally charged cases, possibly using a “predictive theory” (and predictive facts) through algorithms for data collection and treatment.

Keywords: moral rights, proof, pragmatic proof theory, insufficiency, Brazil

Procedia PDF Downloads 107
744 Recognition by the Voice and Speech Features of the Emotional State of Children by Adults and Automatically

Authors: Elena E. Lyakso, Olga V. Frolova, Yuri N. Matveev, Aleksey S. Grigorev, Alexander S. Nikolaev, Viktor A. Gorodnyi

Abstract:

The study of the children’s emotional sphere depending on age and psychoneurological state is of great importance for the design of educational programs for children and their social adaptation. Atypical development may be accompanied by impairments or specificities of the emotional sphere. To study how the emotional state is reflected in the voice and speech features of children, a perceptual study with adult listeners and automatic speech-based recognition were conducted. Speech of children with typical development (TD), with Down syndrome (DS), and with autism spectrum disorders (ASD) aged 6-12 years was recorded. To obtain emotional speech from children, model situations were created, including a dialogue between the child and the experimenter containing questions that can cause various emotional states in the child, and playing with a standard set of toys. The questions and toys were selected taking into account the child’s age, developmental characteristics, and speech skills. For the perceptual experiment, test sequences containing speech material of 30 children (TD, DS, and ASD) were created. The listeners were 100 adults (age 19.3 ± 2.3 years), tasked with determining the children’s emotional state as “comfort – neutral – discomfort” while listening to the test material. Spectrographic analysis of the speech signals was conducted. For automatic recognition of the emotional state, 6594 speech files containing children’s speech material were prepared. Automatic recognition of the three states, “comfort – neutral – discomfort,” was performed using acoustic features automatically extracted with the Geneva Minimalistic Acoustic Parameter Set (GeMAPS) and the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS). The results showed that the emotional state is determined less accurately from the speech of TD children (comfort – 58% of correct answers, discomfort – 56%).
Listeners better recognized discomfort in children with ASD and DS (78% of answers) than comfort (70% and 67%, respectively, for children with DS and ASD). The neutral state is better recognized by the speech of children with ASD (67%) than by the speech of children with DS (52%) and TD children (54%). According to the automatic recognition data using the acoustic feature set GeMAPSv01b, the accuracy of automatic recognition of emotional states for children with ASD is 0.687; children with DS – 0.725; TD children – 0.641. When using the acoustic feature set eGeMAPSv01b, the accuracy of automatic recognition of emotional states for children with ASD is 0.671; children with DS – 0.717; TD children – 0.631. The use of different models showed similar results, with better recognition of emotional states by the speech of children with DS than by the speech of children with ASD. The state of comfort is automatically determined better by the speech of TD children (precision – 0.546) and children with ASD (0.523), discomfort – children with DS (0.504). The data on the specificities of recognition by adults of the children’s emotional state by their speech may be used in recruitment for working with children with atypical development. Automatic recognition data can be used to create alternative communication systems and automatic human-computer interfaces for social-emotional learning. Acknowledgment: This work was financially supported by the Russian Science Foundation (project 18-18-00063).
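The automatic-recognition task described above can be sketched as a standard three-class classifier over fixed-length acoustic feature vectors, one vector of functionals per utterance (eGeMAPS defines 88 such functionals). The feature vectors below are synthetic stand-ins, since real extraction requires audio and a tool such as openSMILE; the sketch shows only the pipeline shape, not the paper's models or accuracies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

N_FEATURES = 88  # eGeMAPS functional count (assumed per-utterance vector)
labels = ["comfort", "neutral", "discomfort"]

# Synthetic, well-separated class clusters standing in for real
# per-utterance eGeMAPS feature vectors (100 "utterances" per class).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i * 2.0, scale=1.0, size=(100, N_FEATURES))
               for i in range(3)])
y = np.repeat(labels, 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"3-class accuracy on synthetic features: {acc:.2f}")
```

On real child speech the classes overlap far more than in this synthetic setup, which is why the paper reports accuracies in the 0.63-0.73 range rather than near-perfect scores.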

Keywords: autism spectrum disorders, automatic recognition of speech, child’s emotional speech, Down syndrome, perceptual experiment

Procedia PDF Downloads 186