Search results for: Digital Image Correlation (DIC)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8877

537 Paramedic Strength and Flexibility: Findings of a 6-Month Workplace Exercise Randomised Controlled Trial

Authors: Jayden R. Hunter, Alexander J. MacQuarrie, Samantha C. Sheridan, Richard High, Carolyn Waite

Abstract:

Workplace exercise programs have been recommended to improve the musculoskeletal fitness of paramedics with the aim of reducing injury rates, and while they have shown efficacy in other occupations, to the best of our knowledge they have not been delivered and evaluated in Australian paramedics. This study investigated the effectiveness of a 6-month workplace exercise program (MedicFit; MF) to improve paramedic fitness with or without health coach (HC) support. A group of regional Australian paramedics (n=76; 43 male; mean ± SD 36.5 ± 9.1 years; BMI 28.0 ± 5.4 kg/m²) were randomised at the station level to either exercise with remote health coach support (MFHC; n=30), exercise without health coach support (MF; n=23), or no-exercise control (CON; n=23) groups. MFHC and MF participants received a 6-month, low-moderate intensity resistance and flexibility exercise program to be performed on station without direct supervision. Available exercise equipment included dumbbells, resistance bands, Swiss balls, medicine balls, kettlebells, BOSU balls, yoga mats, and foam rollers. MFHC and MF participants were also provided with a comprehensive exercise manual including sample exercise sessions aimed at improving musculoskeletal strength and flexibility, which included exercise prescription (i.e. sets, reps, duration, load). Changes to upper-body (push-ups), lower-body (wall squat) and core (plank hold) strength and flexibility (back scratch and sit-reach tests) after the 6-month intervention were analysed using repeated measures ANOVA to compare changes between groups and over time. Upper-body (+20.6%; p < 0.01; partial eta squared = 0.34 [large effect]) and lower-body (+40.8%; p < 0.05; partial eta squared = 0.08 [moderate effect]) strength increased significantly with no interaction or group effects. Changes to core strength (+1.4%; p=0.17) and both upper-body (+19.5%; p=0.56) and lower-body (+3.3%; p=0.15) flexibility were non-significant with no interaction or group effects observed. While upper- and lower-body strength improved over the course of the intervention, providing a 6-month workplace exercise program with or without health coach support did not confer any greater strength or flexibility benefits than exercise testing alone (CON). Although exercise adherence was not measured, it is possible that participants require additional methods of support, such as face-to-face exercise instruction and guidance and individually tailored exercise programs, to achieve adequate participation and improvements in musculoskeletal fitness. This presents challenges for more remote paramedic stations without regular face-to-face access to suitably qualified exercise professionals, and future research should investigate the effectiveness of other forms of exercise delivery and guidance for these paramedic officers, such as remotely facilitated digital exercise prescription and monitoring.
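
The abstract reports effect sizes as partial eta squared values from the repeated measures ANOVA. A minimal sketch of how partial eta squared relates to an F statistic is given below; the F value and degrees of freedom are hypothetical illustrations, not figures from this trial.

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F statistic and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Hypothetical example only: a time effect of F(1, 50) = 25.6
print(round(partial_eta_squared(25.6, 1, 50), 2))  # ~0.34, a "large" effect by common benchmarks
```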

Keywords: workplace exercise, paramedic health, strength training, flexibility training

Procedia PDF Downloads 139
536 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers

Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal

Abstract:

Background: We aim to develop an integrated device comprising single-probe EEG and CCD-based motion sensors for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) for the assessment of attention during a given structured task (painting three segments of a circle using three different colors, namely red, green and blue), the CCD sensor records the movement pattern of the subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e. whether symptoms are better explained by another condition). Methods: We have used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the head of the subjects (3-5 years old pre-schoolers). During the painting of three segments of a circle using three distinct colors (red, green, and blue), the absolute power of the delta and beta EEG waves from the subjects is found to be correlated with relaxation and attention/cognitive load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., separation of the various colored balls from one table to another. We have used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD. We have also compared our scale with clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we have found a significant correlation between the objective assessment of the ADHD subjects and the clinician’s conventional evaluation. Conclusion: MAHD, the integrated device, is intended to serve as an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
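
A minimal sketch of the kind of band-power calculation the abstract describes (absolute power of delta and beta EEG bands from a single channel), assuming Welch's method; the sampling rate, recording length, and band limits are illustrative assumptions, not the MAHD specification.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Absolute band power of a single-channel EEG trace via Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # 2-second windows
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])          # integrate PSD over the band

# Hypothetical 60 s recording at 256 Hz (random data as a stand-in)
fs = 256
eeg = np.random.randn(60 * fs)
beta = band_power(eeg, fs, 13.0, 30.0)   # attention / cognitive-load proxy
delta = band_power(eeg, fs, 0.5, 4.0)    # relaxation proxy
print(beta, delta)
```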

Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test

Procedia PDF Downloads 99
535 CSR Communication Strategies: Stakeholder and Institutional Theories Perspective

Authors: Stephanie Gracelyn Rahaman, Chew Yin Teng, Manjit Singh Sandhu

Abstract:

Corporate scandals have made stakeholders apprehensive of large companies and have led them to expect greater transparency in CSR matters. However, companies find it challenging to strategically communicate CSR to intended stakeholders and in the process may fall short of maximizing their CSR efforts. Given that stakeholders have the ability to either reward good companies or take legal action against, or boycott, corporate brands that do not act socially responsibly, companies must create a shared understanding of their CSR activities. As a result, communication has become a strategy for many companies to demonstrate CSR engagement and to minimize stakeholder skepticism. The main objective of this research is to examine the types of CSR communication strategies and the predictors that guide CSR communication strategies. Employing Morsing & Schultz’s guide on CSR communication strategies, the study integrates stakeholder and institutional theory to develop a conceptual framework. The conceptual framework hypothesized that stakeholder (instrumental and normative) and institutional (regulatory environment, nature of business, mimetic intention, CSR focus and corporate objectives) dimensions would drive CSR communication strategies. Preliminary findings from semi-structured interviews in Malaysia are consistent with the conceptual model in that stakeholder and institutional expectations guide CSR communication strategies. Findings show that most companies use two-way communication strategies. Companies that identified employees, the public or customers as key stakeholders have started to embrace social media to be in sync with new trends of communication. This is especially true for Generation Y, which is their priority. Some companies creatively use multiple communication channels because they recognize that different stakeholders favor different communication channels. Therefore, it appears that companies use two-way communication strategies to complement the perceived limitation of one-way communication strategies, as some companies prefer a more interactive platform to strategically engage stakeholders in CSR communication. In addition to stakeholders, institutional expectations also play a vital role in influencing CSR communication. Due to industry peer pressure and corporate objectives (attracting international investors and customers), companies may be more driven to excel in social performance. For these reasons companies tend to go beyond the basic mandatory requirements, excel in CSR activities and be known as companies that champion CSR. In conclusion, companies use more two-way than one-way communication, and they use a combination of one- and two-way communication to target different stakeholders, as shaped by stakeholder and institutional dimensions. Finally, in order to find out whether the conceptual framework actually fits the Malaysian context, companies’ responses regarding expected organizational outcomes from communicating CSR were gathered from the interview transcripts. Thereafter, findings are presented to show some of the key organizational outcomes (visibility and brand recognition, portraying a responsible image, attracting prospective employees, positive word-of-mouth, etc.) that companies in Malaysia expect from CSR communication. Based on these findings, the conceptual framework has been refined to show the newly identified organizational outcomes.

Keywords: CSR communication, CSR communication strategies, stakeholder theory, institutional theory, conceptual framework, Malaysia

Procedia PDF Downloads 289
534 Verification of Low-Dose Diagnostic X-Ray as a Tool for Relating Vital Internal Organ Structures to External Body Armour Coverage

Authors: Natalie A. Sterk, Bernard van Vuuren, Petrie Marais, Bongani Mthombeni

Abstract:

Injuries to the internal structures of the thorax and abdomen remain a leading cause of death among soldiers. Body armour is a standard issue piece of military equipment designed to protect the vital organs against ballistic and stab threats. When configured for maximum protection, the excessive weight and size of the armour may limit soldier mobility and increase physical fatigue and discomfort. Providing soldiers with more armour than necessary may, therefore, hinder their ability to react rapidly in life-threatening situations. The capability to determine the optimal trade-off between the amount of essential anatomical coverage and hindrance to soldier performance may significantly enhance the design of armour systems. The current study aimed to develop and pilot a methodology for relating internal anatomical structures to actual armour plate coverage in real time using low-dose diagnostic X-ray scanning. Several pilot scanning sessions were held at the Lodox Systems (Pty) Ltd head office in South Africa. Testing involved using the Lodox eXero-dr to scan dummy trunk rigs at various angles and heights of measurement, as well as human participants wearing correctly fitted body armour while positioned in supine, prone shooting, seated and kneeling shooting postures. Sizing and metrics obtained from the Lodox eXero-dr were then verified against a board with known dimensions. Results indicated that the low-dose diagnostic X-ray has the capability to clearly identify the vital internal structures of the aortic arch, heart, and lungs in relation to the position of the external armour plates. Further testing is still required in order to fully and accurately identify the inferior liver boundary, inferior vena cava, and spleen. The scans produced in the supine, prone, and seated postures provided superior image quality over the kneeling posture. The X-ray source and detector distances from the object must be standardised to control for possible magnification changes and for comparison purposes. To account for this, specific scanning heights and angles were identified to allow for parallel scanning of relevant areas. The low-dose diagnostic X-ray provides a non-invasive, safe, and rapid technique for relating vital internal structures to external structures. This capability can be used for the re-evaluation of the anatomical coverage required for essential protection while optimising armour design and fit for soldier performance.
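
The abstract notes that the source and detector distances must be standardised to control for magnification. A minimal sketch of the usual projection-geometry correction (magnification M = source-to-detector distance / source-to-object distance) is shown below; the distances and measured size are hypothetical, not Lodox eXero-dr specifications.

```python
def true_size(measured_size_mm, source_to_detector_mm, source_to_object_mm):
    """Correct a projected measurement for geometric magnification (M = SID / SOD)."""
    magnification = source_to_detector_mm / source_to_object_mm
    return measured_size_mm / magnification

# Hypothetical geometry: detector 1300 mm from the source, armour plate 1000 mm from the source
print(round(true_size(260.0, 1300.0, 1000.0), 1))  # 200.0 mm corrected plate width
```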

Keywords: body armour, low-dose diagnostic X-ray, scanning, vital organ coverage

Procedia PDF Downloads 123
533 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density

Authors: Jill Round

Abstract:

In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions, and customers. Financial institutions such as banks have to make decisions to satisfy the demands of all the participants by keeping abreast of regulatory change. In recent years, progress has been made regarding frameworks, the development of rules, standards, and processes to manage risks in the banking sector. The increasing focus of regulators and policymakers on risk management, corporate governance, and the organization’s culture is of special interest, as it requires a well-resourced risk controlling function, compliance function, and internal audit function. In the past years, the relevance of these functions that make up the so-called Three Lines of Defense has moved from the backroom to the boardroom. The approach of the model can vary based on various organizational characteristics. Due to the intense regulatory requirements, organizations operating in the financial sector have more mature models. In less regulated industries there is more cloudiness about what tasks are allocated where. All parties strive to achieve their objectives through the effective management of risks and serve the same stakeholders. Today, the Three Lines of Defense model is used throughout the world. The research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector. The answers are believed to help explain the increasing regulatory requirements for the banking sector. As the number of supervisory practices increases, risk management requirements intensify and demand more regulatory compliance at the same time. Structural Equation Modeling (SEM) is applied, making use of surveys conducted in the research field. It aims to describe (i) the theoretical model regarding the applicable linearity relationships, (ii) the causal relationship between multiple predictors (exogenous) and multiple dependent variables (endogenous), (iii) the unobservable (latent) variables, and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of 1) the measurement model and 2) the structural model. There is a detectable correlation regarding the cause-effect relationship between the performed supervisory practices and the increasing scope of regulation. Supervisory practices reinforce regulatory density. In the past, controls were put in place after supervisory practices were conducted or incidents occurred. In further research, it is of interest to examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.
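
For readers unfamiliar with the two sub-models mentioned above, the conventional (LISREL-style) form of a structural equation model is sketched below; the symbols are the standard ones, not estimates or variable definitions from this study.

```latex
% Measurement model: observed indicators x, y load on latent variables \xi, \eta
x = \Lambda_x \xi + \delta, \qquad y = \Lambda_y \eta + \varepsilon
% Structural model: relations among the latent variables, with disturbance \zeta
\eta = B \eta + \Gamma \xi + \zeta
```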

Keywords: risk management, structural equation model, supervisory practice, three lines of defense

Procedia PDF Downloads 224
532 Internet Memes as Meaning-Making Tools within Subcultures: A Case Study of Lolita Fashion

Authors: Victoria Esteves

Abstract:

Online memes have not only impacted different aspects of culture, but they have also left their mark on particular subcultures, where memes have reflected issues and debates surrounding specific spheres of interest. This is the first study to outline how memes can address cultural intersections within the Lolita fashion community, intersections which are much more specific and which fall outside of the broad focus of politics and/or social commentary. This is done by looking at the way online memes are used in this particular subculture as a form of meaning-making and group identity reinforcement, demonstrating not only the adaptability of online memes to specific cultural groups but also how subcultures tailor these digital objects to discuss both community-centered topics and broader societal aspects. As part of an online ethnography, this study focuses on qualitative content analysis by taking a look at some of the meme communication that has permeated Lolita fashion communities. Examples of memes used in this context are picked apart in order to understand this specific layered phenomenon of communication, as well as to gain insights into how memes can operate as visual shorthand for the remix of meaning-making. There are existing parallels between internet culture and cultural behaviors surrounding Lolita fashion: not only is the latter strongly influenced by the former (due to its highly globalized dispersion and lack of physical shops, Lolita fashion is almost entirely reliant on the internet for its existence), but both also emphasize curatorial roles through a careful collaborative process of documenting significant aspects of their culture (e.g., Know Your Meme and Lolibrary). Further similarities appear when looking at ideas of inclusion and exclusion that permeate both cultures, where memes and language are used to both solidify group identity and police those who do not ascribe to these cultural tropes correctly, creating a feedback loop that reinforces subcultural ideals. Memes function as excellent forms of communication within the Lolita community because they reinforce its coded ideas and allow a kind of participation that echoes other online-heavy cultural groups such as fandoms. Furthermore, whilst the international Lolita community was mostly self-contained within its LiveJournal birthplace, it has become increasingly dispersed through an array of different social media groups that have fragmented this subculture significantly. The use of memes is key in maintaining a sense of connection throughout this now fragmentary experience of fashion. Memes are also used in the Lolita fashion community to bridge the gap between Lolita fashion related community issues and wider global topics; these reflect not only an ability to make use of a broader online language to address specific issues of the community (which in turn provides a very community-specific engagement with remix practices) but also memes’ ability to be tailored to accommodate overlapping cultural and political concerns and discussions between subcultures and broader societal groups. Ultimately, online memes provide the necessary elasticity to allow their adaptation and adoption by subcultural groups, who in turn use memes to extend their meaning-making processes.

Keywords: internet culture, Lolita fashion, memes, online community, remix

Procedia PDF Downloads 168
531 Predicting Growth of Eucalyptus Marginata in a Mediterranean Climate Using an Individual-Based Modelling Approach

Authors: S.K. Bhandari, E. Veneklaas, L. McCaw, R. Mazanec, K. Whitford, M. Renton

Abstract:

Eucalyptus marginata, E. diversicolor and Corymbia calophylla form widespread forests in south-west Western Australia (SWWA). These forests have economic and ecological importance, and therefore, tree growth and sustainable management are of high priority. This paper aimed to analyse and model the growth of these species at both stand and individual levels, but this presentation will focus on predicting the growth of E. marginata at the individual tree level. More specifically, the study investigated how well individual E. marginata tree growth could be predicted by considering the diameter and height of the tree at the start of the growth period, and whether this prediction could be improved by also accounting for the competition from neighbouring trees in different ways. The study also investigated how many neighbouring trees, or what neighbourhood distance, needed to be considered when accounting for competition. To achieve this aim, the Pearson correlation coefficient was examined among competition indices (CIs) and between CIs and dbh growth, and the competition index that best predicts the diameter growth of individual trees was selected for E. marginata forest managed under different thinning regimes at Inglehope in SWWA. Furthermore, individual tree growth models were developed using simple linear regression, multiple linear regression, and linear mixed effect modelling approaches, as sketched after this abstract. Individual tree growth models were developed for thinned and unthinned stands separately. The developed models were validated using two approaches. In the first approach, models were validated using a subset of data that was not used in model fitting. In the second approach, the model of one growth period was validated with the data of another growth period. Tree size (diameter and height) was a significant predictor of growth. This prediction was improved when competition was included in the model. The fit statistic (coefficient of determination) of the models ranged from 0.31 to 0.68. The models with spatial competition indices were validated as being more accurate than those with non-spatial indices. The model prediction can be optimized if 10 to 15 competitors (by number) or competitors within ~10 m (by distance) from the base of the subject tree are included in the model, which can reduce the time and cost of collecting information about the competitors. As competition from neighbours was a significant predictor with a negative effect on growth, it is recommended to include neighbourhood competition when predicting growth and to consider thinning treatments to minimize the effect of competition on growth. These modelling approaches are likely to be useful tools for the conservation and sustainable management of E. marginata forests in SWWA. As a next step in optimizing the number and distance of competitors, further studies in larger plots and with a larger number of plots than those used in the present study are recommended.
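
As an illustration of the kind of distance-dependent (spatial) competition index compared in the study, a minimal sketch of the widely used Hegyi (1974) index is given below; the study evaluated several CIs, so this is one example only, and the tree coordinates and diameters are hypothetical.

```python
import math

def hegyi_index(subject, neighbours, radius=10.0):
    """Hegyi-type competition index: sum of (dbh_j / dbh_i) / distance_ij
    over neighbours j within `radius` metres of subject tree i."""
    ci = 0.0
    for n in neighbours:
        dist = math.hypot(n["x"] - subject["x"], n["y"] - subject["y"])
        if 0.0 < dist <= radius:
            ci += (n["dbh"] / subject["dbh"]) / dist
    return ci

# Hypothetical subject tree and two neighbours (coordinates in m, dbh in cm)
subject = {"x": 0.0, "y": 0.0, "dbh": 30.0}
neighbours = [{"x": 3.0, "y": 4.0, "dbh": 45.0}, {"x": 8.0, "y": 0.0, "dbh": 20.0}]
print(round(hegyi_index(subject, neighbours), 3))  # 0.383
```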

Keywords: competition, growth, model, thinning

Procedia PDF Downloads 128
530 The Composition of Biooil during Biomass Pyrolysis at Various Temperatures

Authors: Zoltan Sebestyen, Eszter Barta-Rajnai, Emma Jakab, Zsuzsanna Czegeny

Abstract:

Extraction of the energy content of lignocellulosic biomass is one of the possible pathways to reduce the greenhouse gas emissions derived from the burning of fossil fuels. The application of bioenergy can mitigate a country's dependency on foreign natural gas and petroleum. The diversity of plant materials makes the utilization of raw biomass in power plants difficult. This problem can be overcome by the application of thermochemical techniques. Pyrolysis is the thermal decomposition of the raw materials under an inert atmosphere at high temperatures, which produces pyrolysis gas, biooil and charcoal. The energy content of these products can be exploited by further utilization. The differences in the chemical and physical properties of the raw biomass materials can be reduced by the use of torrefaction. Torrefaction is a promising mild thermal pretreatment method performed at temperatures between 200 and 300 °C in an inert atmosphere. The goal of the pretreatment from a chemical point of view is the removal of water and the acidic groups of hemicelluloses, or the whole hemicellulose fraction, with minor degradation of cellulose and lignin in the biomass. Thus, the stability of biomass against biodegradation increases, and its energy density increases. The volume of the raw materials decreases, so the expenses of transportation and storage are reduced as well. Biooil is the major product during pyrolysis and an important by-product during torrefaction of biomass. The composition of biooil mostly depends on the quality of the raw materials and the applied temperature. In this work, thermoanalytical techniques have been used to study the qualitative and quantitative composition of the pyrolysis and torrefaction oils of a woody (black locust) and two herbaceous samples (rape straw and wheat straw). The biooil contains C5 and C6 anhydrosugar molecules, as well as aromatic compounds originating from hemicellulose, cellulose, and lignin, respectively. In this study, special emphasis was placed on the formation of the lignin monomeric products. The structure of the lignin fraction is different in wood and in herbaceous plants. According to the thermoanalytical studies, the decomposition of lignin starts above 200 °C and ends at about 500 °C. The lignin monomers are present among the components of the torrefaction oil even at relatively low temperatures. We established that the concentration and the composition of the lignin products vary significantly with the applied temperature, indicating that different decomposition mechanisms dominate at low and high temperatures. The evolution of decomposition products as well as the thermal stability of the samples were measured by thermogravimetry/mass spectrometry (TG/MS). The differences in the structure of the lignin products of woody and herbaceous samples were characterized by pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS). As a statistical method, principal component analysis (PCA) has been used to find correlations between the composition of the lignin products of the biooil and the applied temperatures.
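
A minimal sketch of the PCA step described above, assuming a matrix of relative lignin-product abundances per sample; the library (scikit-learn), the numbers, and the sample labels are illustrative assumptions, not the study's data or software.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: rows = samples (feedstock at a given temperature),
# columns = relative areas of four lignin monomer products from Py-GC/MS
X = np.array([
    [12.1, 3.4, 0.8, 5.2],   # e.g. black locust, 250 degC
    [10.5, 4.1, 1.1, 6.0],   # e.g. black locust, 300 degC
    [ 4.2, 7.8, 3.9, 2.1],   # e.g. wheat straw, 250 degC
    [ 3.8, 8.5, 4.4, 1.9],   # e.g. rape straw, 250 degC
])
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores)  # sample coordinates on the first two principal components
```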

Keywords: pyrolysis, torrefaction, biooil, lignin

Procedia PDF Downloads 329
529 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database, two of which use the natural logarithm of CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter, Root Mean Squared Error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known gas geothermometers (previously developed), was statistically evaluated by using an external database to avoid a bias problem. Statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (México). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
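
A minimal sketch of the evaluation metric mentioned above (RMSE between measured and ANN-simulated bottomhole temperatures, plus the linear correlation R); the temperature values are hypothetical, and this is not the geothermometer calibration itself.

```python
import numpy as np

def rmse(measured, simulated):
    """Root Mean Squared Error between measured (BHTM) and simulated (BHTANN) temperatures."""
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    return float(np.sqrt(np.mean((measured - simulated) ** 2)))

# Hypothetical bottomhole temperatures in degrees C
bht_m = [250.0, 280.0, 300.0, 320.0]
bht_ann = [246.0, 285.0, 297.0, 326.0]
print(rmse(bht_m, bht_ann), np.corrcoef(bht_m, bht_ann)[0, 1])  # RMSE and R
```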

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 352
528 Understanding Project Failures in Construction: The Critical Impact of Financial Capacity

Authors: Nnadi Ezekiel Oluwaseun Ejiofor

Abstract:

This research investigates the effects of poor cost estimation, material cost variations, and payment punctuality on the financial health and execution of construction projects in Nigeria. To achieve the objectives of the study, a quantitative research approach was employed, and data was gathered through an online survey of 74 construction industry professionals consisting of quantity surveyors, contractors, and other professionals. The survey gathered input on cost estimation errors, price fluctuations, and payment delays, among other factors. The responses were analyzed using a five-point Likert scale and the Relative Importance Index (RII). The findings demonstrated that errors in cost estimating in the Bill of Quantities (BOQ) have a highly negative impact on the reputation and image of the participants in the projects. The greatest effect was on contractors' likelihood of obtaining future work (mean value = 3.42), followed by quantity surveyors' likelihood of obtaining new commissions (mean value = 3.40). Cost underestimation also exposed participants to risk, most seriously in terms of ease of construction and the effects of funding shortages that could lead to bankruptcy (mean value = 3.78). There was also considerable financial damage as a result of cost underestimation, whereby contractors suffered the worst loss in profit (mean value = 3.88). Every expense comes with its own peculiar risk and uncertainty. Pressure on the cost of materials and every other expense attributed to the building and completion of a structure adds risk to the performance figures of a project. The greatest weight (mean importance score = 4.92) was attributed to issues like market inflation in building materials, while the second greatest weight (mean importance score = 4.76) was due to increased transportation charges. On the other hand, delays in payments arising from client issues such as poor availability of funds (RII = 0.71) and contractual issues such as disagreements on the valuation of work done (RII = 0.72), among other reasons, were also found to lead to project delays and additional costs. The results affirm the importance of proper cost estimation for the health of organizational finances, project risk, and completion within set time limits. It is proposed to improve costing methods, foster better communication with stakeholders, and manage delays through contractual and financial controls. This study enhances the existing literature on construction project management by suggesting ways to deal with adverse cost inaccuracies and with material availability problems caused by payment delays which, if addressed, would greatly improve the economic performance of the construction business.
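
A minimal sketch of the Relative Importance Index referred to above, using its common definition RII = ΣW / (A × N) for five-point Likert responses (W = weight given by each respondent, A = highest weight, N = number of respondents); the ratings shown are hypothetical, not survey data from this study.

```python
def relative_importance_index(responses, max_weight=5):
    """RII = sum(W) / (A * N) for Likert-scale responses (1 = lowest, max_weight = highest)."""
    return sum(responses) / (max_weight * len(responses))

# Hypothetical ratings from 10 respondents for one delay factor
ratings = [5, 4, 4, 3, 5, 4, 3, 4, 2, 5]
print(round(relative_importance_index(ratings), 2))  # 0.78
```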

Keywords: cost estimation, construction project management, material price fluctuations, payment delays, financial impact

Procedia PDF Downloads 8
527 Articulating the Colonial Relation, a Conversation between Afropessimism and Anti-Colonialism

Authors: Thomas Compton

Abstract:

As Decolonialism becomes an important topic in Political Theory, the rupture between the colonized and the colonist relation has lost attention. Focusing on the anti-colonial activist Mahdi Amel, we shall consider his attention to the permanence of the colonial relation and how it preempts Frank Wilderson’s formulation of (white) culturally necessary Anti-Black violence. Both projects draw attention away from empirical accounts of oppression, instead focusing on the structural relation which precipitates them. As Amel says that we should stop thinking of the ‘underdeveloped’ as beyond the colonial relation, Wilderson says we should stop thinking of Black rights as having surpassed the role of the slave. However, Amel moves beyond his idol Althusser’s Structuralism toward a formulation of the colonial relation as a source of domination. Our analysis will take a Lacanian turn in considering how this non-relation was formulated as a relation and how this space of negativity became an ideological opportunity for Colonial domination. Wilderson’s work shall problematise this as we conclude with his criticisms of Structural accounts for their failure to consider how Black social death exists as more than necessity but as a site of white desire. Amel, a Lebanese activist and scholar (re)discovered by Hicham Safieddine, argues colonialism is more than the theft of land; it is instead a privatization of collective property and a form of investment which (re)produces the status of the capitalist in spaces ‘outside’ the market. Although Amel was a true Marxist-Leninist, who exposited the economic determinacy of the Colonial Mode of Production, we are reading this account through Alenka Zupančič’s reformulation of the ‘invisible hand job of the market’. Amel points to the signifier ‘underdeveloped’ as buttressed on a pre-colonial epistemic break, as the Western investor (debt collector) sees in the (post?)colony its narcissistic image. However, the colony can never become a site of class conflict, as the workers are not unified but exist between two countries. In industry, they are paid in Colonial subjectivisation, the promise of market (self)pleasure; at home, they are refugees. They are not, as Wilderson states, in the permanent social death of the slave, but they are less than the white worker. This is formulated as citizen (white), non-citizen (colonized), anti-citizen (Black/slave). Here we may also think of how indentured Indians were used as instruments of colonial violence. Wilderson’s aphorism “there is no analogy to anti-Black violence” lays bare his fundamental opposition between colonial and specifically anti-Black violence. It is not only that the debt collector, landowner, or other owners of production pleasure themselves as if their hand is invisible. The absolute negativity between colony and colonized provides a new frontier for desire, the development of a colonial mode of production: an invention inside the colonial structure that is generative of class substitution. We shall explore how Amel ignores the role of the slave and how Wilderson forecloses the history of African anti-colonialism.

Keywords: afropessimism, fanon, marxism, postcolonialism

Procedia PDF Downloads 154
526 Evaluation of Differential Interaction between Flavanols and Saliva Proteins by Diffusion and Precipitation Assays on Cellulose Membranes

Authors: E. Obreque-Slier, V. Contreras-Cortez, R. López-Solís

Abstract:

Astringency is a drying, roughing, and sometimes puckering sensation that is experienced on the various oral surfaces during or immediately after tasting foods. This sensation has been closely related to the interaction and precipitation between salivary proteins and polyphenols, specifically flavanols or proanthocyanidins. In addition, the type and concentration of proanthocyanidin significantly influence the intensity of the astringency and consequently the protein/proanthocyanidin interaction. However, most studies are based on the interaction between saliva and highly complex polyphenols, without considering the effect of monomeric proanthocyanidins present in different foods. The aim of this study was to evaluate the effect of different monomeric proanthocyanidins on the diffusion and precipitation of salivary proteins. Thus, solutions of catechin, epicatechin, epigallocatechin and gallocatechin (0, 2.0, 4.0, 6.0, 8.0 and 10 mg/mL) were mixed with human saliva (1:1 v/v). After incubation for 5 min at room temperature, 15 µL aliquots of each mix were dotted on a cellulose membrane and allowed to dry spontaneously at room temperature. The membrane was fixed, rinsed and stained for proteins with Coomassie blue. After exhaustive washing in 7% acetic acid, the membrane was rinsed once in distilled water and dried under a heat lamp. Both the diffusion area and stain intensity of the protein spots were semiqualitative estimates of protein-tannin interaction (diffusion test). The rest of each whole saliva-phenol solution mixture from the diffusion assay was centrifuged, and 15-µL aliquots from each of the supernatants were dotted on a cellulose membrane. The membrane was processed for protein staining as indicated above. The blue-stained area of protein distribution corresponding to each of the extract dilution-saliva mixtures was quantified with ImageJ 1.45 software. Each of the assays was performed at least three times. Initially, salivary proteins display a biphasic distribution on cellulose membranes; that is, when aliquots of saliva are placed on absorbing cellulose membranes and free diffusion of saliva is allowed to occur, a non-diffusible protein fraction becomes surrounded by highly diffusible salivary proteins. In effect, once diffusion has ended, a protein-binding dye shows an intense blue-stained, roughly circular area close to the spotting site (non-diffusible fraction, NDF), which becomes surrounded by a weaker blue-stained outer band (diffusible fraction, DF). Likewise, the diffusion test showed that epicatechin caused the complete disappearance of the DF from saliva at 2 mg/mL. Epigallocatechin and gallocatechin caused a similar effect at 4 mg/mL, while catechin generated the same effect at 8 mg/mL. In the precipitation test, the use of epicatechin and gallocatechin generated evident precipitates at the bottom of the Eppendorf tubes. In summary, the flavanol type differentially affects the diffusion and precipitation of salivary proteins, which would affect the sensation of astringency perceived by consumers.

Keywords: astringency, polyphenols, tannins, tannin-protein interaction

Procedia PDF Downloads 200
525 The Effect of Metal-Organic Framework Pore Size to Hydrogen Generation of Ammonia Borane via Nanoconfinement

Authors: Jing-Yang Chung, Chi-Wei Liao, Jing Li, Bor Kae Chang, Cheng-Yu Wang

Abstract:

The chemical hydride ammonia borane (AB, NH₃BH₃) draws attention in hydrogen energy research for its high theoretical gravimetric capacity (19.6 wt%). Nevertheless, the elevated AB decomposition temperatures (Td) and unwanted byproducts are the main hurdles to practical application. It was reported that the byproducts and Td can be reduced with the nanoconfinement technique, in which AB molecules are confined in porous materials, such as porous carbon, zeolite, metal-organic frameworks (MOFs), etc. Although nanoconfinement empirically shows effectiveness in reducing the hydrogen generation temperature of AB, the theoretical mechanism is debatable. Low Td was reported in AB@IRMOF-1 (Zn₄O(BDC)₃, BDC = benzenedicarboxylate), where Zn atoms form closed metal cluster secondary building units (SBU) with no exposed active sites. Other than nanosized hydride, it was also observed that catalyst addition facilitates AB decomposition in the composite of Li-catalyzed carbon CMK-3, MOF JUC-32-Y with exposed Y³⁺, etc. It is believed that nanosized AB is critical for lowering Td, while active sites eliminate byproducts. Nonetheless, some researchers claimed that it is the catalytic sites, not the hydride size, that are the critical factor in reducing Td. That group physically ground AB with ZIF-8 (zeolitic imidazolate framework, Zn(2-methylimidazolate)₂) and found a similarly reduced Td, even though the AB molecules were not ‘confined’ or forming nanoparticles by physical hand grinding. This suggests that the catalytic reaction, not nanoconfinement, promotes AB dehydrogenation. In this research, we explored the possible criteria governing the hydrogen production temperature of nanoconfined AB in MOFs with different pore sizes and active sites. MOFs with metal SBUs such as Zn (IRMOF), Zr (UiO), and Al (MIL-53), accompanied by various organic ligands (BDC and BPDC; BPDC = biphenyldicarboxylate), were modified with AB. Excess MOF was used so that the AB size was constrained in micropores, estimated by revisiting the Horvath-Kawazoe model. AB dissolved in methanol was added to the MOF crystallites at a MOF pore volume to AB ratio of 4:1, and the slurry was dried under vacuum to collect AB@MOF powders. With TPD-MS (temperature programmed desorption with mass spectroscopy), we observed that Td was reduced with smaller MOF pores. For example, it was reduced from 100°C to 64°C when the MOF micropore was ~1 nm, compared with ~90°C at pore sizes up to 5 nm. The behavior of Td as a function of AB crystallite radius obeys thermodynamics when the Gibbs free energy of AB decomposition is zero, and no obvious correlation with metal type was observed. In conclusion, we discovered that the Td of AB is proportional to the reciprocal of the MOF pore size, possibly more strongly than the effect of active sites.
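
The concluding observation that Td is proportional to the reciprocal of the MOF pore size amounts to a linear fit of Td against 1/pore size. A minimal sketch of such a fit is given below; the data points are hypothetical values consistent only with the qualitative trend described, not the measured TPD-MS results.

```python
import numpy as np

# Hypothetical (pore size, decomposition temperature) pairs for illustration only
pore_nm = np.array([1.0, 2.0, 3.5, 5.0])
td_c = np.array([64.0, 80.0, 87.0, 90.0])

slope, intercept = np.polyfit(1.0 / pore_nm, td_c, 1)  # model: Td = intercept + slope * (1 / pore size)
print(slope, intercept)
```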

Keywords: ammonia borane, chemical hydride, metal-organic framework, nanoconfinement

Procedia PDF Downloads 187
522 The Effect of Regulation and Investment in Sustainable Practices on Environmental Performance and Consumer Trust: A Time Series Analysis of the Dominant Companies within the Energy Sector

Authors: Sempiga Olivier, Dominika Latusek-Jurczak

Abstract:

Climate change has allegedly been attributed to a high consumption of fossil fuels, leading to severe environmental problems. The energy sector has been among the most polluting sectors for many decades. Consequently, there is a lack of trust in several energy firms, especially those in fossil fuels and nuclear energy. A robust regulatory framework is needed, and more investment in renewable energy sources is paramount for a better environmental outcome. Given the significant environmental impact of energy production and consumption in the energy sector, sustainable marketing practices have become increasingly important. Although the sector has had the lion's share in polluting the environment, much effort has been made recently to move away from fossil fuels and privilege renewable energy sources. How this shift would help rebuild trust in the energy industry is unclear. For the shift to have lasting effects, it may be essential that regulatory agencies examine how energy firms engage in sustainable investment. There is little empirical evidence on whether adopting regulating marketing practices and investment initiatives can help different organizations reduce their environmental impact and promote sustainable development. Little is known about how and whether the environmental value in firms goes beyond rhetoric, greenwashing and publicity to translate into economic gains and environmental performance. The study investigates how regulatory agencies can help energy firms invest sustainably and take sustainable initiatives even amid the energy crisis caused by the Russia-Ukraine conflict, and how these sustainable practices relate to renewed consumer trust. Using data from Corporate Knights, the study analyses, through time series, the relationship between sustainable regulation, the sustainable practices of energy firms from around the world, consumer trust, and environmental performance over the past 20 years. It examines how their sustainable investment and energy and carbon productivity relate to environmental sustainability and consumer trust. This longitudinal study provides empirical evidence of the interplay between regulation, trust and environmental performance. The research is grounded in institutional trust theory, which emphasizes the role of regulatory frameworks and organizational practices in shaping public perceptions of fairness, transparency, and legitimacy. Results show that organizations in the energy sector, supported by robust regulatory tools, can overcome the negative image of polluters and compete with other companies in the fight against climate change and global warming. However, to do so, energy firms should consider investing more in renewable energy sources and implementing sustainable strategies and practices that go beyond greenwashing to improve their environmental performance, thereby rebuilding consumer trust in the energy sector. The results allow regulatory regimes and organizations to learn why it is crucial for energy firms to invest in renewable energy sources and engage in various sustainable initiatives and practices to contribute to better environmental outcomes and higher levels of trust.

Keywords: consumer trust, energy, environmental performance, regulation, renewable energy sources, sustainable practices

Procedia PDF Downloads 9
523 Application of Response Surface Methodology to Assess the Impact of Aqueous and Particulate Phosphorous on Diazotrophic and Non-Diazotrophic Cyanobacteria Associated with Harmful Algal Blooms

Authors: Elizabeth Crafton, Donald Ott, Teresa Cutright

Abstract:

Harmful algal blooms (HABs), more notably cyanobacteria-dominated HABs, compromise water quality, jeopardize access to drinking water and are a risk to public health and safety. HABs are representative of ecosystem imbalance largely caused by environmental changes, such as eutrophication, that are associated with the globally expanding human population. Cyanobacteria-dominated HABs are anticipated to increase in frequency and magnitude and are predicted to plague a larger geographical area as a result of climate change. The weather pattern is important, as storm-driven pulse inputs of nutrients have been correlated with cyanobacteria-dominated HABs. The mobilization of aqueous and particulate nutrients and the response of the phytoplankton community is an important relationship in this complex phenomenon. This relationship is most apparent in high-impact areas of adequate sunlight, temperatures > 20 °C, excessive nutrients and quiescent water that correspond to ideal growth conditions for HABs. Typically, the impact of particulate phosphorus is dismissed as an insignificant contribution, which is true only for areas that are not considered high-impact. The objective of this study was to assess the impact of a simulated storm-driven pulse input of reactive phosphorus and the response of three different cyanobacteria assemblages (~5,000 cells/mL). The aqueous and particulate sources of phosphorus and changes in the HAB were tracked weekly for 4 weeks. The first cyanobacteria composition consisted of Planktothrix sp., Microcystis sp., Aphanizomenon sp., and Anabaena sp., with 70% of the total population being non-diazotrophic and 30% being diazotrophic. The second comprised Anabaena sp., Planktothrix sp., and Microcystis sp., with 87% diazotrophic and 13% non-diazotrophic. The third composition has yet to be determined, as these experiments are ongoing. Preliminary results suggest that both aqueous and particulate sources are contributors of total reactive phosphorus in high-impact areas. The results further highlight shifts in the cyanobacteria assemblage after the simulated pulse input. In the controls, the reactors dosed with aqueous reactive phosphorus maintained a constant concentration for the duration of the experiment; whereas the reactors that were dosed with aqueous reactive phosphorus and contained soil decreased from 1.73 mg/L to 0.25 mg/L of reactive phosphorus from time zero to 7 days; this was higher than the blank (0.11 mg/L). This suggests binding of aqueous reactive phosphorus to sediment, which is further supported by the positive correlation observed between total reactive phosphorus concentration and turbidity. The experiments are nearly complete, and a full statistical analysis of the results will be completed prior to the conference.

Keywords: Anabaena, cyanobacteria, harmful algal blooms, Microcystis, phosphorous, response surface methodology

Procedia PDF Downloads 167
522 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face different issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally occurring radioactive tracers, but also for the nuclear industry linked to the mining sector. In geological samples, the location and identification of the radioactive-bearing minerals at the thin-section scale remain a major challenge, as the detection limit of the usual elemental microprobe techniques is far higher than the concentration of most of the natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is interesting for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method has been developed thanks to Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy by a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within those areas. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its decay products in geo-materials by coupling with scanning electron microscope characterizations. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
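
For clarity, the quoted energy resolution of 17.2% (FWHM) at 4647 keV corresponds to an absolute peak width of roughly 0.172 × 4647 ≈ 800 keV; a trivial check of that relation is sketched below (the relation only, not data from the detector).

```python
def fwhm_kev(relative_resolution, energy_kev):
    """Absolute FWHM implied by a relative energy resolution (FWHM / E)."""
    return relative_resolution * energy_kev

print(round(fwhm_kev(0.172, 4647.0)))  # ~799 keV at the 4647 keV alpha line
```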

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 151
521 The Rite of Jihadification in ISIS Modified Video Games: Mass Deception and Dialectic of Religious Regression in Technological Progression

Authors: Venus Torabi

Abstract:

ISIS, the terrorist organization, modified two videogames, ARMA III and Grand Theft Auto 5 (2013), as means of online recruitment and ideological propaganda. The urge to study the mechanism at work, whether it has been successful or not, drives (Digital) Humanities experts to explore how codes of terror, Islamic ideology and recruitment strategies are incorporated into the ludic mechanics of videogames. Another aspect of the significance lies in the fact that this is a latent problem that has not been fully addressed in an interdisciplinary framework prior to this study, to the best of the researcher’s knowledge. Therefore, due to the complexity of the subject, the present paper engages with game studies and with philosophical and religious poles to form the methodology of conducting the research. As a contextualized epistemology of such exploitation of videogames, the core argument builds on the notion of “Culture Industry” proposed by Theodor W. Adorno and Max Horkheimer in Dialectic of Enlightenment (2002). This article posits that the ideological underpinnings of ISIS’s cause, corroborated by the action-bound mechanics of the videogames, are in line with adhering to Islamic Eschatology as a furnishing ground and an excuse for exercising terrorism. It is an account of ISIS’s modification of the videogames, a tool of technological progression, to practice online radicalization. Dialectically, this practice is packed up in rhetoric for recognizing a religious myth (the advent of a savior) as a hallmark of regression. The study puts forth that ISIS’s wreaking havoc on the world, both in reality and within action videogames, is negotiating a process of self-assertion in the players of such videogames (by assuming oneself a member of the terrorists) that leads to self-annihilation. It tries to unfold how ludic mod videogames are misused as tools of mass deception towards ethnic cleansing in reality and in line with the distorted Eschatological myth. To conclude, this study posits videogames to be a new avenue of mass deception in the framework of the Culture Industry. Yet, this emerges as a two-edged sword of mass deception in ISIS’s modification of videogames. It shows that ISIS is not only trying to hijack minds through online/ludic recruitment; it potentially deceives Muslim communities, or those prone to radicalization, into believing that its terrorist practices are preparing the world for the advent of a religious savior based on Islamic Eschatology. This is to claim that the harsh actions of the videogames are potentially seeding minds with terrorist propaganda and numbing them to violence. The real world becomes an extension of that harsh virtual environment in a ludic/actual continuum, an extension that contributes to the mass deception mechanism of the terrorists, in a clandestine trend.

Keywords: culture industry, dialectic, ISIS, islamic eschatology, mass deception, video games

Procedia PDF Downloads 137
520 A Comparison between Five Indices of Overweight and Their Association with Myocardial Infarction and Death, 28-Year Follow-Up of 1000 Middle-Aged Swedish Employed Men

Authors: Lennart Dimberg, Lala Joulha Ian

Abstract:

Introduction: Overweight (BMI 25-30) and obesity (BMI 30+) have consistently been associated with cardiovascular (CV) risk and death since the Framingham Heart Study in 1948, and BMI was included in the original Framingham risk score (FRS). Background: Myocardial infarction (MI) poses a serious threat to the patient's life. In addition to BMI, several other indices of overweight have been presented and argued to replace FRS as more relevant measures of CV risk. These indices include waist circumference (WC), waist/hip ratio (WHR), sagittal abdominal diameter (SAD), and sagittal abdominal diameter to height (SADHtR). Specific research question: The research question of this study is to evaluate the interrelationship between the various body measurements, BMI, WC, WHR, SAD, and SADHtR, and to determine which measurement is most strongly associated with MI and death. Methods: In 1993, 1,000 middle-aged Caucasian, randomly selected working men of the Swedish Volvo-Renault cohort were surveyed at a nurse-led health examination with a questionnaire, EKG, laboratory tests, blood pressure, height, weight, waist, and sagittal abdominal diameter measurements. Outcome data on myocardial infarction over 28 years come from Swedeheart (the Swedish national myocardial infarction registry) and the Swedish death registry. The Aalen-Johansen and Kaplan-Meier methods were used to estimate the cumulative incidences of MI and death. Multiple logistic regression analyses were conducted to compare BMI with the other four body measurements. The risk for the various measures of obesity was calculated with outcomes of accumulated first-time myocardial infarction and death as odds ratios (OR) in quartiles. The ORs between the 4th and the 1st quartile of each measure were calculated to estimate the association between the body measurement variables and the probability of cumulative incidence of myocardial infarction (MI) over time. Double-sided P values below 0.05 were considered statistically significant. Unadjusted odds ratios were calculated for the obesity indicators, MI, and death. Adjustments for age, diabetes, SBP, the ratio of total cholesterol/HDL-C, and blue/white collar status were performed. Results: Out of 1,000 people, 959 subjects had full information on the five different body measurements. Of those, 90 participants had a first MI, and 194 persons died. The study showed that there was a high and significant correlation among the five different body measurements, and they were all associated with CVD risk factors. All body measurements were significantly associated with MI, with the highest OR (OR=3.6) seen for SADHtR and WC. After adjustment, all but SADHtR remained significant, with weaker ORs. As for all-cause mortality, WHR (OR=1.7), SAD (OR=1.9), and SADHtR (OR=1.6) were significantly associated, but not WC and BMI. However, after adjustment, only WHR and SAD remained significantly associated with death, but with attenuated ORs.
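
A minimal sketch of the quartile odds-ratio comparison described above, using the standard 2×2 cross-product formula; the event counts are hypothetical and are not the cohort's data.

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """OR comparing the 4th quartile (exposed) with the 1st quartile (unexposed) of a body measure."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical 2x2 counts: first MI events and non-events in the top vs. bottom quartile
print(round(odds_ratio(36, 204, 12, 228), 1))  # ~3.4
```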

Keywords: BMI, death, epidemiology, myocardial infarction, risk factor, sagittal abdominal diameter, sagittal abdominal diameter to height, waist circumference, waist-hip ratio

Procedia PDF Downloads 96
519 Consumer Cognitive Models of Vaccine Attitudes: Behavioral Informed Strategies Promoting Vaccination Policy in Greece

Authors: Halkiopoulos Constantinos, Koutsopoulou Ioanna, Gkintoni Evgenia, Antonopoulou Hera

Abstract:

Immunization appears to be an essential part of health care service in times of pandemics such as COVID-19 and aims to protect not only the health of the population but also the health and sustainability of the economies of the countries affected. It is reported that more than 3.44 billion doses have been administered so far, which corresponds to 45 doses per 100 people. Vaccination programs in various countries have been promoted and accepted differently and have therefore proceeded in different ways and at different speeds, with most countries directing them first towards people with vulnerable chronic or recent health statuses. Large-scale restriction measures or lockdowns, personal protection measures such as masks and gloves, and a decrease in leisure and sports activities were also implemented around the world as part of the health protection strategies against the COVID-19 pandemic. This research presents an analysis of variations in people’s attitudes towards vaccination based on demographic, social, and epidemiological characteristics and health status on the one hand, and on perception of health, health satisfaction, pain, and quality of life on the other. A total of 1500 Greek e-consumers, recruited mainly through social media, voluntarily took part in an online survey. The questionnaires included demographic, social, and medical characteristics of the participants, as well as questions on their willingness to be vaccinated and their opinion on whether there should be a vaccine against COVID-19. Other stressor factors were also recorded, such as the loss of someone close due to COVID-19 or home quarantine after COVID-19 infection. The WHOQOL-BREF and the Global Psychotrauma Screen (GPS) were used in this study with kind permission from the WHO and the International Society for Traumatic Stress Studies. Attitudes towards vaccination varied significantly with age, level of education, health status, and consumer behavior. Health professionals’ attitudes also varied in relation to age, level of education, profession, health status, and consumer needs. Vaccines have so far been the most common technological aid of human civilization in the fight against viruses. The results of this study can be used by health managers, digital marketers of pharmaceutical companies, and other staff involved in vaccination programs, and for designing health policy immunization strategies during pandemics, in order to foster positive attitudes towards vaccination and to vaccinate larger populations within shorter periods after the outbreak of a pandemic. Health staff need to be trained, aided, and supervised to carry out vaccination programs and to be protected through vaccination themselves. Feedback on each country’s vaccination program, setbacks, deficiencies, and delays should be addressed and worked out.

Keywords: consumer behavior, cognitive models, vaccination policy, pandemic, COVID-19, Greece

Procedia PDF Downloads 185
518 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammonical nitrogen under anaerobic conditions with nitrite as an electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an anammox hybrid reactor (AHR), combining the dual advantages of suspended and attached growth media, for biodegradation of ammonical nitrogen in wastewater. The experimental unit consisted of four 5 L AHRs inoculated with mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at an HRT (hydraulic retention time) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/l. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25-3.0 d, with nitrogen loading rates (NLR) increasing from 0.4 to 4.8 kg N/m³·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. The filter media in the AHR contributed an additional 27.2% ammonium removal, in addition to a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for successful operation of high-rate bioreactors. The effluent nitrate concentration, which is one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d⁻¹. Model validation revealed that the Grau second-order model was more precise and predicted the effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R²=0.986) and predicted N2 gas with the least error (0.12±8.49%). SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters of 1.2-1.5 µm. Owing to the enhanced NRE coupled with the meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be the most competitive reactor configuration for dealing with nitrogen-laden wastewater.
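For readers interested in the kinetic evaluation, the sketch below shows one way a first-order substrate-removal constant could be estimated from HRT and effluent-nitrogen data with a non-linear least-squares fit. The exponential form and the data points are illustrative assumptions, not the model formulation or measurements used in the study.

```python
# Minimal sketch: fitting a first-order substrate-removal constant k (1/d) from
# HRT vs effluent-nitrogen data. The functional form S_e = S_0 * exp(-k * HRT)
# and the data below are illustrative assumptions, not the study's actual
# model or measurements.
import numpy as np
from scipy.optimize import curve_fit

def first_order(hrt, k, s0=1200.0):
    """Effluent total N (mg/L) assuming exponential first-order removal."""
    return s0 * np.exp(-k * hrt)

hrt = np.array([0.25, 0.5, 1.0, 2.0, 3.0])          # hydraulic retention time, d
s_eff = np.array([900.0, 500.0, 60.0, 10.0, 5.0])   # hypothetical effluent N, mg/L

(k_fit,), cov = curve_fit(lambda t, k: first_order(t, k), hrt, s_eff, p0=[1.0])
print(f"first-order rate constant k = {k_fit:.2f} 1/d")
```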

Keywords: anammox, filter media, kinetics, nitrogen removal

Procedia PDF Downloads 382
517 Time-Domain Nuclear Magnetic Resonance as a Potential Analytical Tool to Assess Thermisation in Ewe's Milk

Authors: Alessandra Pardu, Elena Curti, Marco Caredda, Alessio Dedola, Margherita Addis, Massimo Pes, Antonio Pirisi, Tonina Roggio, Sergio Uzzau, Roberto Anedda

Abstract:

Some of the artisanal cheese products of European countries certified as PDO (Protected Designation of Origin) are made from raw milk. To recognise potential frauds (e.g. pasteurisation or thermisation of milk intended for raw milk cheese production), the alkaline phosphatase (ALP) assay is currently applied only for pasteurisation, although it is known to have notable limitations for the validation of the ALP enzymatic state in non-bovine milk. Such frauds considerably impact customers and certifying institutions, sometimes resulting in damage to the product image and potential economic losses for cheese producers. Robust, validated, and univocal analytical methods are therefore needed to allow food control and security organisations to recognise a potential fraud. In an attempt to develop a new, reliable method to overcome this issue, Time-Domain Nuclear Magnetic Resonance (TD-NMR) spectroscopy has been applied in the described work. Daily fresh milk was analysed raw (680.00 µL in each 10-mm NMR glass tube) at least in triplicate. Thermally treated samples were also produced by placing each NMR tube of fresh raw milk in water pre-heated at temperatures from 68°C up to 72°C for up to 3 min, with continuous agitation, and quench-cooling to 25°C in a water and ice solution. Raw and thermally treated samples were analysed in terms of 1H T2 transverse relaxation times with a CPMG sequence (recycle delay: 6 s, interpulse spacing: 0.05 ms, 8000 data points), and quasi-continuous distributions of T2 relaxation times were obtained by CONTIN analysis. In line with previous data collected by high-field NMR techniques, a decrease in the spin-spin relaxation constant T2 of the predominant 1H population was detected in heat-treated milk as compared to raw milk. The decrease of the T2 parameter is consistent with changes in chemical exchange and diffusive phenomena, likely associated with changes in milk protein (i.e., whey proteins and casein) arrangement promoted by heat treatment. Furthermore, the experimental data suggest that the molecular alterations are strictly dependent on the specific heat treatment conditions (temperature/time). Such molecular variations in milk, which are likely transferred to cheese during cheesemaking, highlight the possibility of extending the TD-NMR technique directly to cheese in order to develop a method for detecting fraud related to the use of thermally treated milk in PDO raw milk cheese. The results suggest that TD-NMR assays might pave the way for the detailed characterisation of heat treatments of milk.
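The sketch below illustrates, in outline, how a quasi-continuous T2 distribution can be recovered from a CPMG decay with a regularized non-negative least-squares inversion, which is the general idea behind a CONTIN-like analysis. The synthetic decay, T2 grid, and regularization weight are assumptions for demonstration only.

```python
# Minimal sketch: recovering a quasi-continuous T2 distribution from a CPMG echo
# decay by regularized non-negative least squares (a CONTIN-like inversion).
# The synthetic decay below is illustrative, not the study's data.
import numpy as np
from scipy.optimize import nnls

# time axis: 8000 echoes, 0.05 ms spacing (as in the CPMG settings quoted above)
t = np.arange(1, 8001) * 0.05e-3            # s
t2_grid = np.logspace(-4, 0, 100)           # candidate T2 values, 0.1 ms .. 1 s

# synthetic bi-exponential decay standing in for a measured CPMG signal
rng = np.random.default_rng(0)
signal = 0.8 * np.exp(-t / 0.04) + 0.2 * np.exp(-t / 0.3) + rng.normal(0, 1e-3, t.size)

# kernel matrix and Tikhonov regularization (lambda chosen by hand here;
# CONTIN selects the regularization automatically)
K = np.exp(-t[:, None] / t2_grid[None, :])
lam = 1.0
A = np.vstack([K, lam * np.eye(t2_grid.size)])
b = np.concatenate([signal, np.zeros(t2_grid.size)])

amplitudes, _ = nnls(A, b)
print("dominant T2 (s):", t2_grid[np.argmax(amplitudes)])
```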

Keywords: cheese fraud, milk, pasteurisation, TD-NMR

Procedia PDF Downloads 243
516 The Surgical Trainee Perception of the Operating Room Educational Environment

Authors: Neal Rupani

Abstract:

Background: A surgical trainee has limited learning opportunities in the operating room with which to attain an ever-increasing standard of surgical skill, competency, and proficiency. These opportunities continue to decline due to numerous factors, such as the European Working Time Directive and an increasing requirement for service provision. It is therefore imperative to obtain the highest educational value from each educational opportunity. The Operating Room Educational Environment Measure (OREEM), which has yet to be validated on surgical trainees in England, has been developed to identify and evaluate each component of the educational environment, with a view to steering future change in optimising educational events in theatre. Aims: The aims of the study are to assess the reliability of the OREEM within England and to evaluate the surgical trainee’s objective perspective of the current operating room educational environment within one region of England. Methods: Using a quantitative study approach, data were collected over one month from surgical trainees within Health Education Thames Valley (Oxford) using an online questionnaire consisting of demographic data, the OREEM, and a global satisfaction score. Results: 140 surgical trainees were invited to the study, with an online response of 54 participants (response rate = 38.6%). The OREEM was shown to have good internal consistency (α = 0.906, variables = 40) and unidimensionality, along with all four of its subgroups. The mean OREEM score was 79.16%. The areas highlighted for improvement predominantly focused on improving learning opportunities (average subscale score = 72.9%) and conducting pre- and post-operative teaching (average score = 70.4%). Trainee perception was most satisfactory for the level of supervision and workload (average subscale score = 82.87%). No differences were found between genders (U = 191.5, p = 0.535) or hospital types (U = 258.0, p = 0.099), but the learning environment was rated more favourably by senior trainees (U = 223.5, p = 0.017). There was a strong correlation between the OREEM and the global satisfaction score (r = 0.755, p < 0.001). Conclusions: The OREEM was shown to be reliable in measuring the educational environment in the operating room. It can be used to identify potentially modifiable components for improvement and as an audit tool to ensure high standards are being met. The current perception of the educational environment in Health Education Thames Valley is satisfactory, and modifiable internal and external factors, such as reducing service provision requirements, empowering trainees to plan lists, creating a team-working ethic between all personnel, and using tools that maximise learning from each operation, have been identified to improve learning in the future. There is a favourable attitude towards the use of such improvement tools, especially among those currently dissatisfied.
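To make the reliability and group-comparison statistics concrete, the sketch below computes Cronbach's alpha and a Mann-Whitney U test on simulated Likert responses. The data, group split, and scoring are illustrative, not the survey's actual responses.

```python
# Minimal sketch: internal consistency (Cronbach's alpha) and a Mann-Whitney U
# comparison of OREEM scores between two trainee groups. The responses below are
# randomly generated stand-ins for the questionnaire data.
import numpy as np
from scipy.stats import mannwhitneyu

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(54, 40))         # 54 trainees x 40 OREEM items
print("alpha =", round(cronbach_alpha(responses), 3))

junior = responses[:30].sum(axis=1)                    # hypothetical grouping
senior = responses[30:].sum(axis=1)
u, p = mannwhitneyu(junior, senior, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```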

Keywords: education environment, surgery, post-graduate education, OREEM

Procedia PDF Downloads 184
515 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs

Authors: André Augusto Ceballos Melo

Abstract:

This work is designed to facilitate the recognition of congruent prosthetic movements through context-to-motion translations guided by images, verbal prompts, users’ nonverbal communication (such as facial expressions, gestures, and paralinguistics), scene context, and object recognition. Although the approach can also be applied to other tasks, such as walking, it frames prosthetic limbs as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can help to improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can help to improve the naturalness and smoothness of prosthetic limb movements, which can make them more comfortable and easier to use. Second, it can help to improve the accuracy and precision of prosthetic limb movements, which can be particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback. This means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary first to collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time: the model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. By letting hand gestures and body language shape communication and social interaction, it offers users a way to maximize their quality of life, social interaction, and gesture communication.
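A minimal sketch of the collect-train-control loop described above is given below, using a small neural-network regressor as a stand-in for the learned context-to-motion mapping. The sensor dimensions, dataset, and model choice are assumptions for illustration, not the author's implementation.

```python
# Minimal sketch of the described pipeline: learn a mapping from sensory
# observations to prosthetic control signals from a recorded dataset, then use
# it in a (simulated) real-time control step. Data and dimensions are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# recorded demonstrations: 2000 samples of 16 sensor channels -> 6 actuator commands
X_train = rng.normal(size=(2000, 16))
y_train = np.tanh(X_train @ rng.normal(size=(16, 6)))   # stand-in for demonstrated motions

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

def control_step(sensor_reading):
    """One real-time step: sensor observation in, actuator command out."""
    return model.predict(sensor_reading.reshape(1, -1))[0]

print(control_step(rng.normal(size=16)))
```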

Keywords: stable diffusion, neural interface, smart prosthetic, augmenting

Procedia PDF Downloads 101
514 Relationship between Different Heart Rate Control Levels and Risk of Heart Failure Rehospitalization in Patients with Persistent Atrial Fibrillation: A Retrospective Cohort Study

Authors: Yongrong Liu, Xin Tang

Abstract:

Background: Persistent atrial fibrillation is a common arrhythmia closely related to heart failure. Heart rate control is an essential strategy for treating persistent atrial fibrillation. Still, the understanding of the relationship between different heart rate control levels and the risk of heart failure rehospitalization is limited. Objective: To determine the relationship between different levels of heart rate control in patients with persistent atrial fibrillation and the risk of readmission for heart failure. Methods: We conducted a retrospective dual-centre cohort study, collecting data from patients with persistent atrial fibrillation who received outpatient treatment at two tertiary hospitals in central and western China from March 2019 to March 2020. The collected data included age, gender, body mass index (BMI), medical history, and hospitalization frequency due to heart failure. Patients were divided into three groups based on their heart rate control levels: Group I with a resting heart rate of less than 80 beats per minute, Group II with a resting heart rate between 80 and 100 beats per minute, and Group III with a resting heart rate greater than 100 beats per minute. The readmission rates due to heart failure within one year after discharge were statistically analyzed using propensity score matching in a 1:1 ratio. Differences in readmission rates among the groups were compared using one-way ANOVA. The impact of varying levels of heart rate control on the risk of readmission for heart failure was assessed using the Cox proportional hazards model. Binary logistic regression analysis was employed to control for potential confounding factors. Results: We enrolled a total of 1136 patients with persistent atrial fibrillation. The one-way ANOVA showed that there were differences in readmission rates among groups exposed to different levels of heart rate control. The readmission rates due to heart failure for each group were as follows: Group I (n=432): 31 (7.17%); Group II (n=387): 11.11%; Group III (n=317): 90 (28.50%) (F=54.3, P<0.001). After performing 1:1 propensity score matching across the groups, 223 pairs were obtained. Analysis using the Cox proportional hazards model showed that, compared to Group I, the risk of readmission for Group II was 1.372 (95% CI: 1.125-1.682, P<0.001) and for Group III was 2.053 (95% CI: 1.006-5.437, P<0.001). Furthermore, binary logistic regression analysis, including variables such as digoxin, hypertension, smoking, coronary heart disease, and chronic obstructive pulmonary disease as independent variables, revealed that coronary heart disease and COPD also had a significant impact on readmission due to heart failure (p<0.001). Conclusion: The heart rate control level of patients with persistent atrial fibrillation is positively correlated with the risk of heart failure rehospitalization. Reasonable heart rate control may significantly reduce the risk of heart failure rehospitalization.
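The sketch below shows how the Cox proportional-hazards comparison of the three heart-rate groups could be set up with the lifelines package. The toy DataFrame and column names are hypothetical and only illustrate the model structure, not the study's data or its matching procedure.

```python
# Minimal sketch: Cox proportional-hazards model for time to heart-failure
# readmission by heart-rate-control group, with Group I as the reference.
# The DataFrame below is a tiny hypothetical cohort for demonstration only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time": [365, 150, 365, 120, 90, 365, 60, 45],      # follow-up, days
    "readmitted": [0, 1, 0, 1, 1, 0, 1, 1],              # heart-failure readmission
    "group_II": [0, 0, 1, 1, 1, 0, 0, 0],                # indicator vs Group I
    "group_III": [0, 0, 0, 0, 0, 1, 1, 1],               # indicator vs Group I
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="readmitted")
cph.print_summary()   # exp(coef) gives hazard ratios for Group II and III vs Group I
```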

Keywords: heart rate control levels, heart failure rehospitalization, persistent atrial fibrillation, retrospective cohort study

Procedia PDF Downloads 74
513 Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates

Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau

Abstract:

There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between passage of time judgments (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded of a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long durations, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into RATIOs (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more if they were given prospective instructions, but only in the eight-minute task. Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence the ‘feeling of time’ and the ‘estimation of time’ differently, contributing to the existing theory that these two forms of time judgement rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (m = 42.5 minutes) and overestimated the eight-minute task (m = 10.7 minutes). Yet they reported the 58-minute task as passing significantly more slowly on the Likert scale (m = 2.5) compared to the eight-minute task (m = 4.1). In fact, a significant correlation was found between PoTJs and duration estimates (r = .27, p < .001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a ‘slow-feeling’ condition and overestimate a ‘fast-feeling’ condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.
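A minimal sketch of the 2 × 2 × 2 between-subjects ANOVA on duration-estimate ratios is given below using statsmodels. The simulated data and effect sizes are invented for illustration and do not reproduce the experiment's results.

```python
# Minimal sketch: 2x2x2 between-subjects ANOVA on RATIO = estimate / actual
# duration, with paradigm, duration, and demand as factors. Data are simulated.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n = 273
df = pd.DataFrame({
    "paradigm": rng.choice(["prospective", "retrospective"], n),
    "duration": rng.choice(["8min", "58min"], n),
    "demand": rng.choice(["easy", "hard"], n),
})
# made-up effects: overestimation in prospective/8-min, underestimation at 58 min
df["ratio"] = (1.0
               + 0.2 * ((df["paradigm"] == "prospective") & (df["duration"] == "8min"))
               - 0.25 * (df["duration"] == "58min")
               + rng.normal(0, 0.2, n))

model = ols("ratio ~ C(paradigm) * C(duration) * C(demand)", data=df).fit()
print(anova_lm(model, typ=2))
```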

Keywords: duration estimates, long durations, passage of time judgements, task demands

Procedia PDF Downloads 130
512 Nanomechanical Characterization of Healthy and Tumor Lung Tissues at Cell and Extracellular Matrix Level

Authors: Valeria Panzetta, Ida Musella, Sabato Fusco, Paolo Antonio Netti

Abstract:

The study of the biophysics of living cells has drawn attention to the pivotal role of the cytoskeleton in many cell functions, such as mechanics, adhesion, proliferation, migration, differentiation, and neoplastic transformation. In particular, during the complex process of malignant transformation and invasion, the cell cytoskeleton devolves from a rigid and organized structure to a more compliant state, which confers on cancer cells a great ability to migrate and adapt to the extracellular environment. In order to better understand the malignant transformation process from a mechanical point of view, it is necessary to evaluate the direct crosstalk between cells and their surrounding extracellular matrix (ECM) in a context close to in vivo conditions. In this study, human biopsy tissues of lung adenocarcinoma were analyzed in order to define their mechanical phenotype at the cell and ECM level, using the particle tracking microrheology (PTM) technique. Polystyrene beads (500 nm) were introduced into the sample slice. The motion of the beads was obtained by tracking their displacements across cell cytoskeleton and ECM structures, and mean squared displacements (MSDs) were calculated from the bead trajectories. It has already been demonstrated that the amplitude of the MSD is inversely related to the mechanical properties of the intracellular and extracellular microenvironment. For this reason, the MSDs of particles introduced into the cytoplasm and ECM of healthy and tumor tissues were compared. PTM analyses showed that cancerous transformation compromises the mechanical integrity of cells and extracellular matrix. In particular, the MSD amplitudes in cells of adenocarcinoma were greater compared to cells of normal tissues. The increased motion is probably associated with a less structured cytoskeleton and consequently with an increase in the deformability of cells. Further, cancer transformation is also accompanied by extracellular matrix stiffening, as confirmed by the decrease of the MSDs of the matrix in tumor tissue, a process that promotes tumor proliferation and invasiveness by activating typical oncogenic signaling pathways. In addition, a clear correlation between MSDs of cells and tumor grade was found. MSDs increase when tumor grade passes from 2 to 3, indicating that cells undergo a trans-differentiation process during tumor progression. ECM stiffening is not dependent on tumor grade, but tumor stage was found to be strictly correlated with both cell and ECM mechanical properties. In fact, a higher stage is assigned to tumors that have spread to regional lymph nodes and is characterized by an up-regulation of different ECM proteins, such as collagen I fibers. These results indicate that PTM can be used to obtain nanomechanical characterization at different scale levels in an interpretative and diagnostic context.
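The sketch below shows how a time-averaged MSD can be computed from tracked bead trajectories, the core quantity of the PTM analysis above. The random-walk trajectories are simulated stand-ins, not the study's measurements.

```python
# Minimal sketch: mean squared displacement (MSD) of tracked beads from 2D
# trajectories. Lower MSD corresponds to a stiffer microenvironment; higher MSD
# to a more compliant one. Trajectories here are simulated random walks.
import numpy as np

def msd(trajectory, max_lag):
    """Time-averaged MSD for one trajectory of shape (n_frames, 2)."""
    lags = np.arange(1, max_lag + 1)
    out = np.empty(max_lag)
    for i, lag in enumerate(lags):
        disp = trajectory[lag:] - trajectory[:-lag]
        out[i] = np.mean(np.sum(disp**2, axis=1))
    return lags, out

rng = np.random.default_rng(0)
# a "stiff" and a "compliant" environment simulated with different step sizes
stiff = np.cumsum(rng.normal(0, 0.01, size=(500, 2)), axis=0)   # small steps, low MSD
soft = np.cumsum(rng.normal(0, 0.05, size=(500, 2)), axis=0)    # larger steps, high MSD

for name, traj in [("stiff ECM-like", stiff), ("compliant cytoplasm-like", soft)]:
    lags, m = msd(traj, max_lag=50)
    print(name, "MSD at lag 10:", round(m[9], 5))
```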

Keywords: cytoskeleton, extracellular matrix, mechanical properties, particle tracking microrheology, tumor

Procedia PDF Downloads 280
511 Factors Affecting Air Surface Temperature Variations in the Philippines

Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya

Abstract:

Changes in air surface temperature play an important role in the Philippines’ economy, industry, health, and food production. While the increase in global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and determine its major influencing factor(s) in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperatures of 17 synoptic stations covering 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequencies of temperature variability, which show 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations that are attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with smoothed sunspot number and galactic cosmic rays; the results show little to no effect. ENSO teleconnections with temperature, wind patterns, cloud cover, and outgoing longwave radiation during different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of a warm southeasterly (cold northeasterly) air mass over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature. However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a strongly positive trend in heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was carried out to look at the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies. In addition, they are more representative of regional temperature than a substitute for station-observed air temperature.
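As an illustration of the trend analysis, the sketch below implements a basic Mann-Kendall test (without tie correction) and applies it to a synthetic temperature series built around the quoted trend of about 0.0105 °C/year; it is not the processing chain used in the study.

```python
# Minimal sketch: Mann-Kendall trend test on a synthetic station temperature
# series. The series is generated to carry roughly the trend quoted above and
# does not represent the observed data.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the Mann-Kendall S statistic, Z score, and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # no correction for ties here
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

rng = np.random.default_rng(3)
years = np.arange(1960, 2016)
temps = 27.0 + 0.0105 * (years - 1960) + rng.normal(0, 0.3, years.size)

s, z, p = mann_kendall(temps)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.4f}")
```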

Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number

Procedia PDF Downloads 323
510 Dynamic EEG Desynchronization in Response to Vicarious Pain

Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy

Abstract:

The psychological construct of empathy involves understanding another person’s cognitive perspective and experiencing that person’s emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. The study addresses empathy as a nonlinear dynamic process of simulation by which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony, and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; thus, mu rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or simulating action, mu rhythms become suppressed or desynchronize and thus should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses on empathy and psychopathy scales, in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the ‘pain matrix’. Both physical and social pain activate this network, resonating vicarious pain responses during empathic processing. Five single-electrode EEG locations were applied over regions measuring sensorimotor electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Data for each participant’s mu rhythms were analyzed via Fast Fourier Transformation (FFT) and multifractal time series analysis.
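The sketch below illustrates one way mu-rhythm desynchronization can be quantified as the relative drop in 8-13 Hz band power between a baseline epoch and an observation epoch, using a Welch power spectral density at the 200 Hz sampling rate quoted above. The signals are synthetic, and this generic band-power approach is not the authors' exact FFT/multifractal pipeline.

```python
# Minimal sketch: event-related desynchronization of the mu band (8-13 Hz) as the
# relative change in Welch band power between baseline and observation epochs.
# The EEG traces below are synthetic sinusoid-plus-noise stand-ins.
import numpy as np
from scipy.signal import welch

FS = 200  # Hz, as quoted above

def mu_band_power(eeg, fs=FS, band=(8, 13)):
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

rng = np.random.default_rng(7)
t = np.arange(0, 60, 1 / FS)                                              # 60 s epochs
baseline = 10 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 2, t.size)     # strong mu
observation = 3 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 2, t.size)   # suppressed mu

p_base = mu_band_power(baseline)
p_obs = mu_band_power(observation)
print(f"event-related desynchronization: {100 * (p_obs - p_base) / p_base:.1f} %")
```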

Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition

Procedia PDF Downloads 283
509 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
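To make the distillation step concrete, the sketch below shows a generic temperature-scaled knowledge-distillation loss of the kind a client could minimize against teacher logits. It is a standard formulation, not the paper's exact FKD objective or its coded-teacher construction; the temperature and weighting are illustrative choices.

```python
# Minimal sketch: standard knowledge-distillation loss (softened softmax + KL
# divergence, mixed with cross-entropy) that a federated client could apply to
# teacher logits. Batch sizes and class counts below are toy values.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.7):
    """alpha weights the distillation term against plain cross-entropy."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# toy batch: 8 samples, 10 classes
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```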

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 75
508 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory

Authors: Xiaochen Mu

Abstract:

Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. The design of the data property rights system has a hierarchical character, aimed at decoupling raw data from data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This aligns well with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the "bundle of rights" theory, this paper establishes a specific three-level structure of data rights. It analyzes the cases Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN, and Imerman v Tchenquiz.
This paper concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property will help establish the tort of misuse of personal information.

Keywords: data protection, property rights, intellectual property, big data

Procedia PDF Downloads 39