Search results for: applied mathematics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8821

1051 Corporate Governance Disclosures by South African Auditing Firms

Authors: Rozanne Janet Smith

Abstract:

This article examined the corporate governance disclosures of the large and medium-sized auditing firms in South Africa. It is important that auditing firms disclose their practice of good corporate governance to the public, as they serve the public interest. The auditing profession has been criticized due to many corporate scandals in recent years. This has undermined the reputation of the profession, with experts and the public questioning whether auditing firms have corporate governance structures in place and whether they are taking the public interest into consideration. In South Africa there is no corporate governance code specifically for audit firms. Auditing firms are encouraged by IRBA to issue a transparency report in which they disclose corporate governance structures and application, but this is not compulsory in South Africa. Moreover, the information issued in these transparency reports is limited and often focuses only on audit quality, not governance. A literature review found that the UK is one of only a few countries that has a corporate governance code for audit firms. As South Africa initially used the UK Cadbury report to develop the King IV Code, it was fitting to use the UK Audit Firm Governance Code as a benchmark to determine whether audit firms in South Africa are disclosing relevant corporate governance information in their transparency reports and/or integrated reports. This study contributes to the existing body of knowledge by pursuing the following objective: to determine the improvement in the corporate governance disclosures of large and medium-sized auditing firms in South Africa through comparative research. Available data from 2019 were used and compared to the disclosures in the 2023/2024 transparency and/or integrated reports of the large and medium-sized auditing firms in South Africa. To achieve this objective, a constructivist research paradigm was applied. Qualitative secondary information was gathered for the analysis. A content analysis was selected to collect the qualitative data by analyzing the integrated reports and/or transparency reports of large and medium-sized auditing firms with 20 or more partners, to determine what is disclosed about their corporate governance practices. These transparency reports and integrated reports were then read and analyzed in depth and compared to the principles stated in the UK Code. Since there are only nine medium-sized and large auditing firms in South Africa, the researcher was able to conduct the content analysis by reading each report in depth. The following six principles from the UK Code were assessed for disclosure: (1) Leadership, (2) Values, (3) INED, (4) Operations, (5) Reporting, and (6) Dialogue. The results reveal that the auditing firms are not disclosing their corporate governance principles and practices to the necessary extent. Although there has been some improvement, the disclosure is not at the level it should be. There is still a need for a South African audit firm governance code.

Keywords: auditing firms, corporate governance, South Africa, disclosure

Procedia PDF Downloads 23
1050 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications

Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon

Abstract:

The focus in the automotive industry is to reduce human operator and machine interaction, so manufacturing becomes more automated and safer. The aim is to lower part cost and construction time as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent that is already impregnated onto the tape, activated through the application of heat. The stitching method is used as a baseline to compare the new splicing methods to the traditional technique currently in use. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters; these were then tested in tension using a tensile testing machine. Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, and examined the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N compared to the 26 N achieved by a stitching splice and 94 N by the binding method.

Keywords: analysis, automated fibre placement, high speed, splicing

Procedia PDF Downloads 155
1049 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS

Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan

Abstract:

Analysis of stresses plays an important role in the optimization of structures, as prior stress estimation helps in the better design of products. Composites find wide usage in industrial and home applications due to their strength-to-weight ratio. Especially in the aircraft industry, the usage of composites is greater due to their advantages over conventional materials. Composites are mainly made of orthotropic materials having unequal strength in different directions. Composite materials have the drawback of delamination and debonding because the bond materials are weaker than the parent materials, so proper analysis should be done on composite joints before using them in practical conditions. In the present work, a composite plate with an elastic pin is considered for analysis using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with different Boolean operations. The modelled object is meshed with the three-dimensional layered element Solid46 for the composite plate and the solid element Solid45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Due to the symmetry of the problem, only a quarter of the geometry is built, and results are presented for the full model using ANSYS expansion options. The results show the effect of pin diameter on the joint strength: the deflection and load sharing of the pin increase, while other parameters like overall stress, pin stress and contact pressure reduce due to the lesser load on the plate material. The material effect shows that a higher Young's modulus material deflects less, but the other parameters increase. Interference analysis shows an increase of overall stress, pin stress and contact stress along with pin bearing load. This increase should be understood properly when increasing the load carrying capacity of the joint. Generally, every structure is preloaded to raise the compressive stress in the joint and thereby its load carrying capacity, but for composites the stress increase should be properly analysed because of delamination and debonding effects arising from failure of the bond materials. When the results for an isotropic combination are compared with the composite joint, the isotropic joint shows uniformity of the results with lower values for all parameters, mainly due to the applied layer angle combinations. All the results are presented with the necessary pictorial plots.

Keywords: bearing force, frictional force, finite element analysis, ANSYS

Procedia PDF Downloads 334
1048 ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship

Authors: Edward Shizha

Abstract:

Citizenship is not only political; it is also a socio-cultural status that naturalized immigrants desire. However, the outcomes of citizenship desirability are determined by forces outside the individual’s control, based on legislation and laws that are designed at the macro- and exosystemic levels by politicians and policy makers. These laws are then applied to determine the status (permanency or temporariness) of citizenship for immigrants and refugees, but the same laws do not apply to non-immigrant citizens who attain it by birth. While theoretically citizenship has generally been considered an irrevocable legal status, and the highest and most secure legal status one can hold in a state, it is not inviolate for immigrants. While Article 8 of the United Nations Convention on the Reduction of Statelessness provides grounds for revocation of citizenship obtained by immigrants and refugees in host countries, nation-states have their own laws, tied to the convention, that provide grounds for revocation. Ever since the 9/11 attacks in the USA, there has been a rise in conditional citizenship and the state’s withdrawal of citizenship through revocation laws that denaturalize citizens, who end up not merely losing their citizenship but also the right to reside in the country of immigration. Because immigrants can be perceived as a security threat, the securitization of citizenship and legislative changes have been adopted specifically to allow greater discretionary power in stripping people of their citizenship. The paper ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship examines literature on the temporality of naturalized citizenship and questions whether citizenship, for newcomers (immigrants and refugees), is a protected human right or a privilege. The paper argues that citizenship in a host country is a well sought-after status for newcomers. The question is whether their citizenship, if granted, has a permanent or temporary status and whether it is treated in the same way as that of non-immigrant citizens. The paper further argues that, despite citizenship having generally been considered an irrevocable status in most Western countries, in practice, if not in law, citizenship for immigrants and refugees comes with strings attached because of policies and laws that control naturalized citizenship. These laws can be used to denationalize naturalized citizens through revocations of those stigmatized as ‘undesirables’, who are threatened with deportation. Whereas non-immigrant citizens (those who attain citizenship by birth) have an absolute right to their citizenship, this is seldom the case for immigrants. This paper takes a multidisciplinary approach, using Urie Bronfenbrenner’s ecological systems theory (the macrosystem and exosystem) to examine and review literature on the temporality of naturalized citizenship and to question whether citizenship is a protected right or a privilege for immigrants. The paper challenges the human rights violation of citizenship revocation and argues for equality of treatment for all citizens regardless of how they acquired their citizenship. The fragility of naturalized citizenship undermines the basic rights and securities that citizenship status can provide to the person as an inclusive practice in a diverse society.

Keywords: citizenship, citizenship revocation, dual citizenship, human rights, naturalization, naturalized citizenship

Procedia PDF Downloads 75
1047 Fake News and Conspiracy Narratives in the Covid-19 Crisis: An International Comparison

Authors: Caja Thimm

Abstract:

Already well before the Corona pandemic hit the world, ‘fake news‘ was no longer regarded as a harmless twist of the truth but as intentionally composed disinformation, often with the goal of manipulative populist propaganda. During the Corona crisis, conspiracy narratives in particular have become a worldwide phenomenon with dangerous consequences (anti-vaccination myths). The success of this manipulated news needs to be counteracted by trustworthy news, which in Europe particularly includes public broadcasting media and their social media channels. To understand better how the main public broadcasters in Germany, the UK, and France used Instagram strategically, a comparative study was carried out. In our empirical study, a comparative analysis of Instagram during the Corona crisis, we compared the activities of selected formats in order to see how the public broadcasters reached their audiences and how this might, in the longer run, affect journalistic strategies on social media platforms. A first analysis showed that the increase in the use of social media overall was striking. Almost one in two adult online users (48%) obtained information about the virus in social media, and in total 38% of the younger age group (18-24) looked for Covid-19 information on Instagram, so the platform can be regarded as one of the central digital spaces for Corona-related information searches. Quantitative measures showed that 47% of recent posts by the broadcasters were related to Corona, and 7% treated conspiracy myths. For the more detailed content analysis, the following categories of analysis were applied: digital storytelling and instastories; textuality and semantic keys; links to information; stickers; video chat; fact checking; news ticker; service; and infographics and animated tables. In addition to these basic features, we particularly looked for new formats created during the crisis. Journalistic use of social media platforms opens up immediate and creative ways of applying the media logics of the respective platforms, and particularly the BBC and ARD formats proved to be interactive, responsive, and entertaining. Among them were new formats such as a space for user questions and personal uploads, interviews, music, comedy, etc. The fact-checking channel in particular got a lot of attention, as many user questions were focused on the conspiracy theories that dominated the public discourse during many weeks in 2020. In the presentation, we introduce eight particular strategies that show how public broadcasting journalism can adopt digital platforms, use them creatively, and hence help to counteract conspiracy narratives and fake news.

Keywords: fake news, social media, digital journalism, digital methods

Procedia PDF Downloads 156
1046 Flood Hazards and Emergency Response in Negara Brunei Darussalam

Authors: Hj Mohd Sidek bin Hj Mohd Yusof

Abstract:

More than 1.5 billion people around the world are adversely affected by floods. Floods account for about a third of all natural catastrophes, cause more than half of all fatalities and are responsible for a third of overall economic loss around the world. Giving advance warning of impending disasters can reduce or even avoid the number of deaths and the social and economic hardships that are so commonly reported after the event. Integrated catchment management recognizes that it is not practical or viable to provide structural measures that will keep floodwater away from the community and their property. Non-structural measures are therefore required to assist the community to cope when flooding occurs that exceeds the capacity of the structural measures. Non-structural measures may be used to influence the way land is used or buildings are constructed, or to improve the community’s preparedness and response to flooding. The development and implementation of non-structural measures may be guided and encouraged by policy and legislation, or through voluntary action by the community based on knowledge gained from public education programs. A range of non-structural measures can be used for flood hazard mitigation. Land use measures include policies and rules applied by government to regulate the kinds of activities that are carried out in various flood-prone areas, including minimum floor levels and the type of development approved. Voluntary actions can be taken by the authorities and by the community living and working on the flood plain to lessen flooding effects on themselves and their properties, including monitoring land use changes, monitoring and investigating the effects of bush/forest clearing in the catchment and providing relevant flood-related information to the community. Response modification measures may include: a flood warning system, flood education, community awareness and readiness, evacuation arrangements and a recovery plan. A Civil Defense Emergency Management body needs to be established for Brunei Darussalam in order to plan, co-ordinate and undertake flood emergency management. This responsibility may be taken by the Ministry of Home Affairs, Brunei Darussalam, which is already responsible for fire fighting and rescue services. Several pieces of legislation and planning instruments are in place to assist flood management, particularly: flood warning systems, flood education, community awareness and readiness, evacuation arrangements and recovery plans.

Keywords: RTB (Radio Television Brunei), DDMC (District Disaster Management Center), FIR (Flood Incidence Report), PWD (Public Works Department)

Procedia PDF Downloads 256
1045 A Matched Case-Control Study to Assess the Association of Chikungunya Severity among Blood Groups and Other Determinants in Tesseney, Gash Barka Zone, Eritrea

Authors: Ghirmay Teklemicheal, Samsom Mehari, Sara Tesfay

Abstract:

Objectives: A total of 1074 suspected chikungunya cases were reported in Tesseney Province, Gash Barka region, Eritrea, during an outbreak. This study aimed to assess the possible association of chikungunya severity with ABO blood groups and other potential determinants. Methods: A sex-matched and age-matched case-control study was conducted during the outbreak. For each case, one control subject was selected from the mild chikungunya cases, and a second control subject was selected from the neighborhood of the case. The study was conducted from October 15, 2018, to November 15, 2018. Odds ratios (ORs) and conditional (fixed-effect) logistic regression methods were applied, and the data were analyzed and interpreted on this basis. Results: In this outbreak, 137 severe suspected chikungunya cases, 137 mild suspected chikungunya patients, and 137 controls free of chikungunya from the neighborhood of cases were analyzed. Non-O individuals compared to those with the O blood group showed a significant difference, with a p-value of 0.002. A separate comparison between the A and O blood groups was also significant, with a p-value of 0.002. However, there was no significant difference in the severity of chikungunya between the B and O, and AB and O blood groups (p-values of 0.113 and 0.708, respectively). A strong association of chikungunya severity was found with hypertension and diabetes (p-value < 0.0001), whereas there was no association between chikungunya severity and asthma (p-value = 0.695), pregnancy (p-value = 0.881), ventilator use (p-value = 0.181), air conditioner use (p-value = 0.247), not using a latrine versus using a pit latrine (p-value = 0.318), using a septic versus a pit latrine (p-value = 0.567), or using a flush versus a pit latrine (p-value = 0.194). Conclusions: Non-O blood groups were found to be at greater risk of the severe form of chikungunya disease than their counterpart O blood group individuals. By the same token, individuals with chronic disease were more prone to severe forms of the disease than individuals without chronic disease. Prioritization is recommended for patients with chronic diseases and non-O blood groups, since they were found to be susceptible to severe chikungunya disease. Identification of the human cell surface receptor(s) for CHIKV is necessary for further understanding of its pathophysiology in humans. Therefore, molecular and functional studies will be helpful in disclosing the association of blood group antigens and CHIKV infections.
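
As a rough illustration of the kind of association test described above, the sketch below computes an odds ratio with a 95% confidence interval for severe disease in non-O versus O blood groups from a 2x2 table. The counts are hypothetical placeholders, not the study's data, and this unmatched analysis is a simplification: the study itself used conditional logistic regression, which accounts for the matching.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table (NOT the study's data):
# rows = blood group (non-O, O), columns = outcome (severe, mild)
table = np.array([[90, 60],
                  [47, 77]])

odds_ratio, p_value = stats.fisher_exact(table)

# Wald 95% confidence interval for the odds ratio on the log scale
log_or = np.log(odds_ratio)
se = np.sqrt((1.0 / table).sum())
ci_low, ci_high = np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.4f}")
```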

Keywords: Chikungunya, Chikungunya virus, disease outbreaks, case-control studies, Eritrea

Procedia PDF Downloads 164
1044 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico

Authors: Gustavo Cruz-Bello

Abstract:

Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was conducted to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local authorities in risk management to facilitate the process. Two workshops were conducted. In the first, a poster-size printed high-spatial-resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees and fishermen, with ages ranging between 23 and 58 years old. For the first task, participants were asked to locate emblematic places and mark them on the image to familiarize themselves with it. Then, they were asked to locate areas that get flooded and the buildings that they use as refuges, and to list actions that they usually take to reduce vulnerability, as well as to collectively come up with others that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment. A printed version of the map was reviewed by local risk management experts, who validated the feasibility of the proposed actions. For the second workshop, we brought the information back to the community for feedback. Additionally, a survey was applied in one household per block in the community to obtain socioeconomic, prevention and adaptation data. The information generated from the workshops was contrasted, through t and chi-squared tests, with the survey data in order to test the hypothesis that poorer or less educated people are less prepared to face floods (more vulnerable) and live near or among areas with a higher presence of floods. Results showed that a great majority of people in the community are aware of the hazard and are prepared to face it. However, there was not a consistent relationship between regularly flooded areas and people’s average years of education, house services, or house modifications against heavy rains. We can say that the participatory cartography intervention made participants aware of their vulnerability and made them collectively reflect on actions that can reduce disasters produced by floods. They also considered that the final map could be used as a communication and negotiation instrument with NGOs and government authorities. It was not found that poorer and less educated people are located in areas with a higher presence of floods.
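
The two tests named above are standard and easy to reproduce; the sketch below runs them on synthetic stand-ins for the survey data (the variable names and all numbers are assumptions for illustration, not the study's data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical survey data: years of education for households
# inside vs outside regularly flooded areas
edu_flooded = rng.normal(8.5, 3.0, 60)
edu_not_flooded = rng.normal(9.0, 3.0, 80)

# Welch t-test (no equal-variance assumption)
t_stat, t_p = stats.ttest_ind(edu_flooded, edu_not_flooded, equal_var=False)

# Chi-squared test of independence: preparedness (yes/no) vs flooded location
contingency = np.array([[35, 25],   # flooded: prepared / not prepared
                        [55, 25]])  # not flooded: prepared / not prepared
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency)

print(f"t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"chi-squared: chi2 = {chi2:.2f}, dof = {dof}, p = {chi_p:.3f}")
```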

Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability

Procedia PDF Downloads 113
1043 Geographical Information System and Multi-Criteria Based Approach to Locate Suitable Sites for Industries to Minimize Agricultural Land Use Changes in Bangladesh

Authors: Nazia Muhsin, Tofael Ahamed, Ryozo Noguchi, Tomohiro Takigawa

Abstract:

One of the most challenging issues in achieving sustainable development with food security is land use change. The crisis of land for agricultural production mainly arises from the unplanned transformation of agricultural lands to infrastructure development, i.e., urbanization and industrialization. Land use without sustainability assessment can have an impact on food security and environmental protection. Bangladesh, a densely populated country with limited arable land, is now facing challenges in meeting sustainable food security. Agricultural lands are being used for economic growth by establishing industries. The industries are spreading from urban areas to the suburban areas and consuming agricultural lands. To minimize agricultural land losses to unplanned industrialization, compact economic zones should be identified through a scientific approach. Therefore, the purpose of the study was to find suitable sites for industrial growth by land suitability analysis (LSA) using a Geographical Information System (GIS) and multi-criteria analysis (MCA). The goal of the study was to emphasize both agricultural lands and industries for sustainable development in land use. The study also analyzed agricultural land use changes in a suburban area using statistical data on agricultural lands and primary data on the existing industries of the study area. The criteria selected for the LSA were proximity to major roads, proximity to local roads, and distance to rivers, waterbodies, settlements, flood-flow zones, and agricultural lands. The spatial datasets for the criteria were collected from the respective departments of Bangladesh. In addition, the elevation spatial dataset from the SRTM (Shuttle Radar Topography Mission) data source was used. The criteria were further analyzed as factors and constraints in ArcGIS®. Experts’ opinions were applied to weight the criteria according to the analytical hierarchy process (AHP), a multi-criteria technique. The decision rule was set using the ‘weighted overlay’ tool to aggregate the factors and constraints with the weights of the criteria. The LSA found that only 5% of the land was most suitable for industrial sites, with few compact lands for industrial zones. The developed LSA is expected to help land use policy makers and urban developers to ensure the sustainability of land uses and agricultural production.
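
To illustrate the AHP and weighted-overlay steps described above, the sketch below derives criterion weights from a pairwise comparison matrix via the principal eigenvector, checks consistency, and aggregates toy suitability rasters; the matrix values and the three-criterion setup are illustrative assumptions, not the study's actual expert judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria
# (roads, rivers, agricultural land); Saaty's 1-9 scale, reciprocal entries
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Principal eigenvector gives the AHP weights
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights /= weights.sum()

# Consistency ratio (random index RI = 0.58 for n = 3); CR < 0.1 is acceptable
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.58
print("weights:", np.round(weights, 3), "CR:", round(CR, 3))

# Weighted overlay: suitability scores (1-5) per criterion on a toy 2x2 grid
rasters = np.stack([
    np.array([[5, 3], [2, 1]]),   # proximity to roads
    np.array([[4, 4], [3, 2]]),   # distance to rivers
    np.array([[1, 2], [4, 5]]),   # distance to agricultural land
])
constraint = np.array([[1, 1], [1, 0]])  # 0 masks excluded cells

suitability = np.tensordot(weights, rasters, axes=1) * constraint
print(suitability)
```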

Keywords: AHP (analytical hierarchy process), GIS (geographic information system), LSA (land suitability analysis), MCA (multi-criteria analysis)

Procedia PDF Downloads 263
1042 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited considering the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low series manufacturing environment. Consequently, a flexible production system is required, where the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. The first group collects product information such as the product name and number of items. The second group contains material data like material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains setup data like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner’s estimations were based previously. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was also improved by 50%. Furthermore, an investigation of the mentioned parameter groups considering the manufacturing order was also carried out. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week more than 100 real industrial setup events occur and the related data are collected.
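
A minimal sketch of the kind of model described above: a small neural network regressor trained on mixed categorical and numeric setup parameters to predict setup period length. The feature names, network size, and synthetic data are assumptions for illustration; the paper does not publish its architecture or dataset.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(42)
n = 2000

# Synthetic stand-ins for the four parameter groups (product, material,
# tolerance, setup); real data would come from the MES and CAD drawings
df = pd.DataFrame({
    "material_type": rng.choice(["PTFE", "POM", "PE"], n),
    "machine_type": rng.choice(["M1", "M2"], n),
    "n_items": rng.integers(1, 200, n),
    "tightest_tolerance_mm": rng.uniform(0.01, 0.5, n),
    "n_tools": rng.integers(1, 12, n),
})
# Toy target: setup minutes loosely driven by tool count and tolerance
y = 10 + 4 * df["n_tools"] + 20 * (0.5 - df["tightest_tolerance_mm"]) \
    + rng.normal(0, 3, n)

pre = ColumnTransformer([
    ("cat", OneHotEncoder(), ["material_type", "machine_type"]),
    ("num", StandardScaler(), ["n_items", "tightest_tolerance_mm", "n_tools"]),
])
model = Pipeline([("pre", pre),
                  ("mlp", MLPRegressor(hidden_layer_sizes=(32, 16),
                                       max_iter=1000, random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(df, y, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out setups:", round(model.score(X_test, y_test), 3))
```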

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 245
1041 Effect of Laser Ablation OTR Films and High Concentration Carbon Dioxide for Maintaining the Freshness of Strawberry ‘Maehyang’ for Export in Modified Atmosphere Condition

Authors: Hyuk Sung Yoon, In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang

Abstract:

This study was conducted to improve the storability of strawberry 'Maehyang' for export by selecting suitable laser ablation oxygen transmission rate (OTR) films and assessing the effectiveness of high carbon dioxide treatment. The strawberries were grown in a hydroponic system in Gyeongsangnam-do province and packed with different laser ablation OTR films (Daeryung Co., Ltd.) of 1,300 cc, 20,000 cc, 40,000 cc, 80,000 cc, and 100,000 cc•m-2•day•atm. A CO2 injection (30%) treatment used the 20,000 cc•m-2•day•atm OTR film, and a perforated film served as control. Temperature conditions simulated shipping and distribution from Korea to Singapore: storage at 3 ℃ (13 days), 10 ℃ (an hour), and 8 ℃ (7 days), 20 days in total. The fresh weight loss rate remained under 1%, the maximum permissible weight loss, in all treated OTR films except the perforated control film during storage. The carbon dioxide concentration within the packages over the storage period remained below the maximum tolerated CO2 concentration (15%) in the treated OTR films, and in the high-OTR film treatments, from 20,000 cc to 100,000 cc, it was even less than 3%. The 1,300 cc film maintained a suitable carbon dioxide range, above 5% and under 15%, from 5 days after storage until the end of the experiment; the CO2 injection treatment dropped quickly to 15% after 1 day of storage but kept around 15% during the rest of storage. The oxygen concentration was maintained between 10 and 15% in the 1,300 cc and CO2 injection treatments, but stayed at 19 to 21% in the other treatments. The ethylene concentration was much higher in the CO2 injection treatment than in the OTR treatments; among the OTR treatments, 1,300 cc showed the highest ethylene concentration and 20,000 cc the lowest. Firmness was maintained best in the 1,300 cc treatment, with no significant differences among the other OTR treatments. Visual quality was best in the 20,000 cc treatment, which retained marketable quality until 20 days after storage. The 20,000 cc and perforated films performed better than the other treatments in off-odor, while the 1,300 cc and CO2 injection treatments developed strong off-odor that persisted even after 10 minutes. The difference between Hunter 'L' and 'a' values from a chroma meter showed that the 1,300 cc and CO2 injection treatments delayed color development, while the other treatments showed no significant differences. The results indicate that freshness was best maintained with the 20,000 cc•m-2•day•atm film. Although the 1,300 cc and CO2 injection treatments provided an appropriate MA condition, they showed darkening of the strawberry calyx and excessive reduction of coloring due to the high carbon dioxide concentration during storage. While the 1,300 cc and CO2 injection treatments had been considered appropriate for export to Singapore, the results showed otherwise. These results reflect the cultivar characteristics of strawberry 'Maehyang'.

Keywords: carbon dioxide, firmness, shelf-life, visual quality

Procedia PDF Downloads 399
1040 Radiomics: An Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging

Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa

Abstract:

Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), finding 35 benign and 12 malignant. All MR images were acquired at 1.5 T: a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and the extraction of 150 radiomic features (30 features at each of the 5 time points), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: A Naive Bayes algorithm working on the 79 features selected by the TWIST system proved to be the best performing ML system, with a sensitivity of 96%, a specificity of 78% and a global accuracy of 87% (average values of the two training-testing procedures ab-ba). The results showed that, in the subset of 47 non-specific nodules, the algorithm predicted the outcome of 45 nodules that an expert radiologist could not classify. Conclusion: In this pilot study we identified a radiomic approach allowing ML systems to perform well in the diagnosis of non-specific nodules at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist is not able to identify the kind of lesion, and it reduces the need for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.
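
The pipeline below is a minimal sketch of the modelling strategy described above, with scikit-learn's univariate feature selection standing in for the proprietary TWIST evolutionary selector and a Gaussian Naive Bayes classifier evaluated by cross-validation; the synthetic feature matrix and labels are placeholders for the 150 radiomic features and clinical outcomes.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder radiomic features: 75 lesions x 150 features
# (30 features at each of 5 post-contrast time points)
X = rng.normal(size=(75, 150))
y = rng.integers(0, 2, 75)  # 0 = benign, 1 = malignant (synthetic labels)

# Univariate selection stands in for the TWIST evolutionary selector;
# k=79 matches the selected-feature count reported in the abstract
model = make_pipeline(SelectKBest(f_classif, k=79), GaussianNB())

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", scores.mean().round(3))
```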

Keywords: breast, machine learning, MRI, radiomics

Procedia PDF Downloads 267
1039 Self-Assembling Layered Double Hydroxide Nanosheets on β-FeOOH Nanorods for Reducing Fire Hazards of Epoxy Resin

Authors: Wei Wang, Yuan Hu

Abstract:

Epoxy resin (EP), one of the most important thermosetting polymers, is widely applied in various fields due to its desirable properties, such as excellent electrical insulation, low shrinkage, outstanding mechanical stiffness, satisfactory adhesion and solvent resistance. However, like most polymeric materials, EP has the fatal drawbacks of inherent flammability and a high yield of toxic smoke, which restricts its application in fields requiring fire safety. It is therefore still a challenge and an interesting subject to develop new flame retardants which not only remarkably improve flame retardancy but also give the modified resins low toxic gas generation. In recent work, polymer nanocomposites based on nanohybrids that contain two or more kinds of nanofillers have drawn intensive interest, as they can realize performance enhancements. The earlier realization of hybrids of carbon nanotubes (CNTs) and molybdenum disulfide provides a novel route to decorate layered double hydroxide (LDH) nanosheets on the surface of β-FeOOH nanorods; the deposited LDH nanosheets can fill the network and promote the working efficiency of the β-FeOOH nanorods. Moreover, synergistic effects between LDH and β-FeOOH can be anticipated to have potential applications in reducing the fire hazards of EP composites through the combination of condensed-phase and gas-phase mechanisms. As reported, β-FeOOH nanorods can act as a core for preparing hybrid nanostructures in combination with other nanoparticles through electrostatic attraction using a layer-by-layer assembly technique. In this work, LDH nanosheet-wrapped β-FeOOH nanorod (LDH-β-FeOOH) hybrids were synthesized by a facile method, with the purpose of combining the characteristics of one-dimensional (1D) and two-dimensional (2D) materials to improve the fire resistance of epoxy resin. The hybrids showed good dispersion in the EP matrix with no obvious aggregation. Thermogravimetric analysis and cone calorimeter tests confirmed that LDH-β-FeOOH hybrids in the EP matrix at a loading of 3% could obviously improve the fire safety of EP composites. The plausible flame retardancy mechanism was explored by thermogravimetric analysis coupled with infrared spectroscopy (TG-IR) and X-ray photoelectron spectroscopy. The mechanism combines condensed-phase and gas-phase contributions: the nanofillers were transferred to the surface of the matrix during combustion, where they not only shield the EP matrix from external radiation and heat feedback from the fire zone but also efficiently retard the transport of oxygen and flammable pyrolysis products.

Keywords: fire hazards, toxic gases, self-assembly, epoxy

Procedia PDF Downloads 173
1038 The Effects of Geographical and Functional Diversity of Collaborators on Quality of Knowledge Generated

Authors: Ajay Das, Sandip Basu

Abstract:

Introduction: There is increasing recognition that diverse streams of knowledge can often be recombined in novel ways to generate new knowledge. However, knowledge recombination theory has not been applied to examine the effects of collaborator diversity on the quality of knowledge such collaborators produce. This is surprising because one would expect that a collaborative team with certain aspects of diversity should be able to recombine process elements related to knowledge development, which are relatively tacit but also complementary because of the collaborators’ varying backgrounds. Theory and Hypotheses: We propose to examine two aspects of diversity in the environments of collaborative teams to try to capture such potential recombinations of relatively tacit process knowledge. The first aspect of diversity in team members’ environments is geographical. Collaborators with more geographical distance between them (perhaps working in different countries) often have more autonomy in the processes they adopt for knowledge development. In the absence of overt monitoring, such collaborators are likely to adopt differing approaches to knowledge development. The sharing of such varying approaches among collaborators is likely to result in greater quality of the common collaborative pursuit. The second aspect is diversity in the work backgrounds of team members. Such diversity can also increase the potential for knowledge recombination. For example, if one or more members are from a manufacturing center (versus all of them being from a purely R&D center), such members will provide unique perspectives on the implementation of innovative ideas. Again, knowledge that has been evaluated from these diverse perspectives is likely to be of a higher quality. In addition to the above aspects of environmental diversity among team members, we also plan to examine the extent to which individual collaborators are in different environments from the primary innovation center of their employing firms. Proposed Methods: We will test our model on a sample of firms in the semiconductor industry. Our level of analysis will be individual patents generated by these firms and the teams involved in their generation. Information on the manufacturing activities of our sample firms will be obtained from SEMI, a proprietary database of the semiconductor industry, as well as company 10-K reports. Conclusion: We believe that our results will represent a preliminary attempt to understand how various forms of diversity in collaborative teams impact the knowledge development process. Our dependent variable of knowledge quality is important to study since higher values of this variable can drive not only firm performance but also the broader development of regions and societies through spillover impacts on future innovation. The results of this study will, therefore, inform future research and practice in innovation, geographical location, and vertical integration.

Keywords: innovation, manufacturing strategy, knowledge, diversity

Procedia PDF Downloads 352
1037 Glutamine Supplementation and Resistance Training on Anthropometric Indices, Immunoglobulins, and Cortisol Levels

Authors: Alireza Barari, Saeed Shirali, Ahmad Abdi

Abstract:

Introduction: Exercise has contradictory effects on the immune system, and glutamine supplementation may increase the resistance of the immune system in athletes. Glutamine is one of the most recognized immune nutrients: it serves as a fuel source and a substrate in the synthesis of nucleotides and amino acids, and it is also known to be part of the antioxidant defense. Several studies have shown that improving glutamine levels in plasma and tissues can have beneficial effects on the function of immune cells such as lymphocytes and neutrophils. This study aimed to investigate the effects of resistance training, alone and combined with glutamine supplementation, on anthropometric indices and the levels of cortisol and immunoglobulins in untrained young men. Research shows that physical training can increase cytokines in the athlete’s body; glutamine may counteract the negative effects of resistance training on immune function and the stability of the mast cell membrane. Materials and methods: This semi-experimental study was conducted on 30 male non-athletes. They were randomly divided into three groups: control (no exercise), resistance training, and resistance training with glutamine supplementation. Resistance training was applied for 4 weeks, with glutamine supplementation at 0.3 g/kg/day after practice. The resistance training program consisted of nine exercises (leg press, lat pull, chest press, squat, seated row, abdominal crunch, shoulder press, biceps curl and triceps press down) four times per week. Participants performed 3 sets of 10 repetitions at 60–75% 1-RM. Anthropometric indices (weight, body mass index, and body fat percentage), maximal oxygen uptake (VO2max), cortisol and immunoglobulin (IgA, IgG, IgM) levels were evaluated pre- and post-test. Results: Four weeks of resistance training with and without glutamine caused a significant increase in body weight and BMI and a significant decrease (P < 0.001) in BF. VO2max also increased in both the exercise group (P < 0.05) and the exercise with glutamine group (P < 0.001), and in both groups a significant reduction in IgG (P < 0.05) was observed. However, no significant difference was observed in the levels of cortisol, IgA, or IgM in any of the groups, no significant change was observed in any parameter in the control group, and no significant difference was observed between the groups. Discussion: Alterations in hormonal and immunological parameters can be used to assess the effect of overload on the body, whether acute or chronic. The plasma concentration of glutamine has been associated with the functionality of the immune system in individuals submitted to intense physical training. Resistance training has destructive effects on the immune system, and glutamine supplementation could not neutralize the damaging effects of power exercise on the immune system.

Keywords: glutamine, resistance training, immunoglobulins, cortisol

Procedia PDF Downloads 479
1036 Functionalization of Carbon-Coated Iron Nanoparticles with Fluorescent Protein

Authors: A. G. Pershina, P. S. Postnikov, M. E. Trusova, D. O. Burlakova, A. E. Sazonov

Abstract:

The invention of magnetic-fluorescent nanocomposites is a rapidly developing area of research. Their attractiveness lies in the ability to simultaneously manipulate and monitor such nanocomposites by two independent methods based on different physical principles. These nanocomposites are applied to solve various essential scientific and experimental biomedical problems. The aim of this research is to develop a principled approach to the design of nanobiohybrid structures with magnetic and fluorescent properties. The surface of carbon-coated iron nanoparticles (Fe@C) was covalently modified with 4-carboxy benzenediazonium tosylate. Recombinant fluorescent protein TagGFP2 (Evrogen) was obtained in E. coli (Rosetta DE3) by standard laboratory techniques. Immobilization of TagGFP2 on the nanoparticle surface was achieved by carbodiimide activation. The amount of COOH groups on the nanoparticle surface was estimated by elemental analysis (Elementar Vario Macro) and TGA analysis (SDT Q600, TA Instruments). The obtained nanocomposites were analyzed by FTIR spectroscopy (Nicolet Thermo 5700) and fluorescence microscopy (AxioImager M1, Carl Zeiss). The amount of protein immobilized on the modified nanoparticle surface was determined by fluorimetry (Cary Eclipse) and spectrophotometry (Unico 2800) with the help of preliminarily obtained calibration plots. In the FTIR spectra of the modified nanoparticles, the absorption band of the –COOH group around 1700 cm-1 and bands in the region of 450-850 cm-1 caused by bending vibrations of the benzene ring were observed. The calculated quantity of active groups on the surface was 0.1 mmol/g of material. Carbodiimide activation of the COOH groups on the nanoparticle surface led to covalent immobilization of the TagGFP2 fluorescent protein (0.2 nmol/mg). The success of the immobilization was proved by FTIR spectroscopy: protein-characteristic absorption bands in the region of 1500-1600 cm-1 (amide I) were present in the FTIR spectrum of the nanocomposite. Fluorescence microscopy analysis showed that the Fe@C-TagGFP2 nanocomposite possesses fluorescent properties, confirming that the TagGFP2 protein retains its conformation upon immobilization on the nanoparticle surface. Thus, a magnetic-fluorescent nanocomposite was obtained as the result of a unique design solution: the fluorescent protein molecules were fixed to the surface of superparamagnetic carbon-coated iron nanoparticles using original diazonium salts.

Keywords: carbon-coated iron nanoparticles, diazonium salts, fluorescent protein, immobilization

Procedia PDF Downloads 342
1035 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution involves the use of electrical segmentation, which means creating coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on the sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical loss, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that uses a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. Our experiments show when robust electrical segmentation brings a benefit and in which contexts.
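
As a rough sketch of the flattening idea described above (and only that; the paper's penalized K-component model is not public), the code below averages the adjacency matrices of several layers sharing the same buses and runs standard modularity-based community detection on the flattened graph with networkx. The grid data are synthetic.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
n_buses, n_layers = 30, 4

# Each layer: the same buses, different line weights (e.g., different
# generation/consumption situations); synthetic data for illustration
base = rng.random((n_buses, n_buses))
layers = [np.triu(base + 0.1 * rng.random((n_buses, n_buses)), k=1)
          for _ in range(n_layers)]

# Flatten the multiplex graph: average edge weights across layers
flat = sum(layers) / n_layers
flat = flat + flat.T                              # symmetrize
flat[flat < np.quantile(flat, 0.8)] = 0.0         # keep only the strongest lines

G = nx.from_numpy_array(flat)
zones = greedy_modularity_communities(G, weight="weight")
for i, zone in enumerate(zones):
    print(f"zone {i}: buses {sorted(zone)}")
```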

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 79
1034 3D GIS Participatory Mapping and Conflict LADM: Comparative Analysis of Land Policies and Survey Procedures Applied by the Igorots, NCIP, and DENR to Itogon Ancestral Domain Boundaries

Authors: Deniz A. Apostol, Denyl A. Apostol, Oliver T. Macapinlac, George S. Katigbak

Abstract:

Ang lupa ay buhay at ang buhay ay lupa (land is life and life is land). Based on the 2015 census, the Indigenous Peoples (IPs) population in the Philippines is estimated to be 11.3-20.2 million. They hail from various regions and possess distinct cultures, but encounter shared struggles in territorial disputes. Itogon, the largest Benguet municipality, is home to the Ibaloi, Kankanaey, and other Igorot tribes. Despite having three (3) Ancestral Domains (ADs), Itogon is predominantly labeled as timberland or forest. These overlapping land classifications highlight the presence of inconsistencies in national laws and jurisdictions. This study aims to analyze the surveying procedures used by the Igorots, NCIP, and DENR in mapping the Itogon AD boundaries, show land boundary delineation conflicts, propose surveying guidelines, and recommend 3D Participatory Mapping as a geomatics solution for updated AD reference maps. Interpretative Phenomenological Analysis (IPA), Comparative Legal Analysis (CLA), and Map Overlay Analysis (MOA) were utilized to examine the interviews, compare land policies and surveying procedures, and identify differences and overlaps in conflicting land boundaries. In the IPA, the master themes identified were AD Definition (rights, responsibilities, restrictions), AD Overlaps (land classifications, political boundaries, ancestral domains, land laws/policies), and Other Conflicts (with other agencies, misinterpretations, suggestions), as considerations for mapping ADs. The CLA focused on conflicting surveying procedures: AD Definitions, Surveying Equipment, Surveying Methods, Map Projections, Order of Accuracy, Monuments, Survey Parties, and Pre-survey, Survey Proper, and Post-survey procedures. The MOA emphasized the land area percentage of conflicting areas, showcasing the impact of misaligned surveying procedures. The findings are summarized through a Land Administration Domain Model (LADM) conflict view, for AD versus AD and political boundaries. The products of this study are the identification of land conflict factors, recommended survey guidelines, and contested land area computations. These can serve as references for revising survey manuals, updating AD Sustainable Development and Protection Plans, and making amendments to laws.
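
A minimal sketch of the map overlay step described above, assuming geopandas and two hypothetical shapefiles (ancestral domain and political boundary layers, which this study does not publish); it intersects the layers and computes the percentage of each domain that is contested. The file names and the 'ad_name' column are illustrative assumptions.

```python
import geopandas as gpd

# Hypothetical input layers; real data would come from NCIP/DENR surveys
ad = gpd.read_file("itogon_ancestral_domains.shp")
political = gpd.read_file("itogon_political_boundaries.shp")

# Project to a metric CRS before computing areas (UTM zone 51N covers Benguet)
ad = ad.to_crs(epsg=32651)
political = political.to_crs(epsg=32651)

# Intersect the two layers to isolate overlapping (conflicting) polygons
conflict = gpd.overlay(ad, political, how="intersection")

# Percentage of each ancestral domain that is contested
contested = conflict.dissolve(by="ad_name")  # 'ad_name' is an assumed column
ad_areas = ad.set_index("ad_name").geometry.area
pct = 100 * contested.geometry.area / ad_areas
print(pct.round(1))
```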

Keywords: ancestral domain, GIS, indigenous people, land policies, participatory mapping, surveying, survey procedures

Procedia PDF Downloads 93
1033 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization

Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder

Abstract:

In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Therefore, microfluidic devices make an important scientific contribution, e.g. in the field of synthetic chemistry, where the rapid generation of analytical data can be applied to the optimization of chemical reactions. These microfluidic devices enable a fast change of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems, where every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming or too slow. An exception is electrospray ionization mass spectrometry (ESI-MS). The combination of a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach to synthesis optimization on the nanoliter scale with direct ESI-MS detection. We show the development of a droplet-based microfluidic device which enables the modification of different parameters while simultaneously monitoring the effect on the reaction within a single run. Using common soft- and photolithographic techniques, a polydimethylsiloxane (PDMS) microfluidic chip with different functionalities was developed. As an interface for the MS detection, we use a steel capillary for ESI and improve the spray stability with a Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions; this is shown exemplarily for a domino Knoevenagel hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations and solvent compositions were investigated. Due to the high repetition rate of the droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent and the most effective catalyst concentration. The developed system can help to determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we take an important step in the field of combining droplet-based microfluidics with organic reaction screening.

Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening

Procedia PDF Downloads 301
1032 Formulation and In vivo Evaluation of Salmeterol Xinafoate Loaded MDI for Asthma Using Response Surface Methodology

Authors: Paresh Patel, Priya Patel, Vaidehi Sorathiya, Navin Sheth

Abstract:

The aim of the present work was to fabricate a Salmeterol Xinafoate (SX) metered dose inhaler (MDI) for asthma and to evaluate SX-loaded solid lipid nanoparticles (SLNs) for pulmonary delivery. Solid lipid nanoparticles can be used to deliver particles to the lungs via an MDI. A modified solvent emulsification diffusion technique was used to prepare the SX-loaded solid lipid nanoparticles, using Compritol 888 ATO as the lipid, Tween 80 as the surfactant, D-mannitol as the cryoprotectant, and L-leucine to improve aerosolization behaviour. A Box-Behnken design with 17 runs was applied. 3-D response surface plots and contour plots were drawn, and the optimized formulation was selected based on minimum particle size and maximum % entrapment efficiency (% EE). The formulation was also characterized by % yield, in vitro diffusion study, scanning electron microscopy, X-ray diffraction, DSC and FTIR. Particle size and zeta potential were analyzed with a Zetatrac particle size analyzer, and the aerodynamic properties were determined with a cascade impactor. Preconvulsion time was examined for the control and treatment groups and compared with a marketed product group. The MDI was evaluated by leakage test, flammability test, spray test and content per puff. Across the experimental design, particle size and % EE were found to be in the range of 119-337 nm and 62.04-76.77%, respectively, with the solvent emulsification diffusion technique. Morphologically, the particles had a spherical shape and uniform distribution. DSC and FTIR studies showed no interaction between the drug and excipients. The zeta potential indicated good stability of the SLNs. The respirable fraction was found to be 52.78%, indicating that the particles reach the deep parts of the lung such as the alveoli. The animal study showed that the fabricated MDI protects the lungs against histamine-induced bronchospasm in guinea pigs. The MDI showed spherical particles in the spray pattern, 96.34% content per puff, and was non-flammable. SLNs prepared by the solvent emulsification diffusion technique provide a desirable size for deposition into the alveoli. This delivery platform opens up a wide range of treatment applications for pulmonary diseases like asthma via solid lipid nanoparticles.
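
To illustrate the experimental design, the following minimal sketch builds the coded 17-run, 3-factor Box-Behnken matrix (12 edge runs plus 5 centre points) and fits a full quadratic response-surface model by least squares. The factor identities and the response values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of a 3-factor Box-Behnken design (12 edge runs + 5 centre
# points = 17 runs) and a quadratic response-surface fit.
import itertools
import numpy as np

# Build the coded design matrix: two factors at +/-1, the third at 0.
runs = []
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product((-1, 1), repeat=2):
        row = [0, 0, 0]
        row[i], row[j] = a, b
        runs.append(row)
runs += [[0, 0, 0]] * 5           # centre points
X = np.array(runs, dtype=float)   # shape (17, 3)

# Hypothetical measured response, e.g. particle size in nm.
y = np.array([210, 240, 230, 260, 180, 330, 150, 300, 190,
              220, 260, 310, 120, 122, 118, 121, 119], dtype=float)

# Full quadratic model: intercept, linear, interaction and squared terms.
def quad_terms(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

coef, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
print("Fitted coefficients:", np.round(coef, 1))
```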

Keywords: salmeterol xinafoate, solid lipid nanoparticles, Box-Behnken design, solvent emulsification diffusion technique, pulmonary delivery

Procedia PDF Downloads 451
1031 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale

Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize

Abstract:

Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and to identify the significant spectral bands for soil moisture content retrieval at field scale. The study consisted of 60 soil samples from a maize farm, divided into four treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Devices (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. A continuum removal procedure was developed in the R programming language to enhance the absorption features of soil moisture and to precisely understand its spectral behavior at different water content levels. Statistical analysis using partial least squares regression (PLSR) models was performed to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the significant spectral bands for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied in fields such as precision agriculture, hydrology, and environmental monitoring. However, it is important to note that the spectral behavior of soil can be influenced by various factors such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems. The results of this study showed good agreement between measured and predicted values of soil moisture content, with a high R2 and a low root mean square error (RMSE). Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution and precise field-scale soil moisture retrieval models. These models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
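
The pipeline can be sketched as follows: continuum removal via an upper convex hull, followed by a PLSR fit of the continuum-removed spectra against moisture content. The study used R; the Python sketch below, with synthetic stand-in spectra and scikit-learn assumed available, only illustrates the same idea.

```python
# Sketch: continuum removal (upper convex hull) + PLSR against moisture.
# The spectra are synthetic stand-ins for ASD measurements (350-2500 nm).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

wl = np.arange(350, 2501, 1.0)             # 1 nm resolution, as in the study

def continuum_removal(wl, refl):
    """Divide a spectrum by its upper convex hull."""
    hull = []
    for p in zip(wl, refl):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Drop the middle point if it lies on or below the chord.
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    return refl / np.interp(wl, hx, hy)

rng = np.random.default_rng(0)
moisture = rng.uniform(5, 35, 60)          # 60 samples, % water content
# Synthetic reflectance: water absorption deepens the 1450/1900/2200 nm bands.
spectra = np.empty((60, wl.size))
for k, m in enumerate(moisture):
    base = 0.45 + 0.00001 * (wl - 350)
    for c in (1450, 1900, 2200):
        base -= (m / 100) * 0.3 * np.exp(-0.5 * ((wl - c) / 60) ** 2)
    spectra[k] = base + rng.normal(0, 0.002, wl.size)

cr = np.array([continuum_removal(wl, s) for s in spectra])
pls = PLSRegression(n_components=5).fit(cr, moisture)
pred = pls.predict(cr).ravel()
rmse = np.sqrt(np.mean((pred - moisture) ** 2))
print(f"Calibration RMSE: {rmse:.2f} % moisture")
```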

Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy

Procedia PDF Downloads 99
1030 Remote Criminal Proceedings as Implication to Rethink the Principles of Criminal Procedure

Authors: Inga Žukovaitė

Abstract:

This paper presents postdoc research on remote criminal proceedings in court. In a period when most countries have introduced the possibility of remote criminal proceedings into their procedural laws, it is possible not only to identify the weaknesses and strengths of the legal regulation, but also to assess the effectiveness of the instrument and to develop an approach to the process. The example of some countries (for example, Italy) shows, on the one hand, that criminal procedure based on orality and immediacy does not lend itself to easy modifications that pose even a slight threat of devaluing these principles in a society with well-established traditions of this procedure. On the other hand, such strong opposition and criticism make us ask whether we are facing the possibility of rethinking the traditional ways of understanding procedural safeguards in order to preserve their essence: not devaluing their traditional package, but looking for new components to replace or compensate for the so-called "loss" of safeguards. Reflection on technological progress in the field of criminal procedural law indicates the need to rethink, on the basis of fundamental procedural principles, the safeguards that can replace or compensate for those that are in crisis as a result of the intervention of technological progress. Discussions in academic doctrine on the impact of technological interventions on the proceedings as such, or on the limits of such interventions, refer to the principles of criminal procedure as a point of reference. In the context of the intrusion of technology, scholarly debate still addresses the issue of whether the court will gradually become a mere site for the exercise of penal power, with the resultant consequences: the deformation of the procedure itself as a physical ritual. In this context, this work seeks to illustrate the relationship between remote criminal proceedings in court and the principle of immediacy, the concept of which rests on the application of different models of criminal procedure (inquisitorial and adversarial). The aim is to assess the challenges posed to legal regulation by the interaction of technological progress with the principles of criminal procedure. The main hypothesis to be tested is that the adoption of remote proceedings is directly linked to the prevailing model of criminal procedure: the more the principles of the inquisitorial model are applied to the criminal process, the more acceptable a remote criminal trial is; conversely, the more the criminal process is based on an adversarial model, the more a remote criminal process is seen as incompatible with the principle of immediacy. In order to achieve this goal, the following tasks are set: to identify whether the assessment of remote proceedings against the immediacy principle differs between the adversarial and inquisitorial models, and to analyse the main aspects of the regulation of remote criminal proceedings based on the examples of different countries (for example, Lithuania, Italy, etc.).

Keywords: remote criminal proceedings, principle of orality, principle of immediacy, adversarial model, inquisitorial model

Procedia PDF Downloads 68
1029 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems, resulting from accelerated erosion due to human activities, are a serious threat to the sustainable management of watersheds and the ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and refined by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups, comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework to the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, based on the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediment in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
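
A minimal sketch of the GLUE idea is given below, using synthetic tracer signatures rather than the Mehran River data: candidate source proportions are sampled on the simplex, scored against the mixture with an informal likelihood, filtered to a "behavioural" set, and summarised as contribution ranges. The virtual mixture reuses the 20/12/68% contributions reported in the abstract as a check.

```python
# Sketch of GLUE-style source unmixing with synthetic tracer data.
import numpy as np

rng = np.random.default_rng(42)

# Mean tracer signatures: 3 spatial sources (rows) x 4 tracers (cols).
sources = np.array([[12.0, 3.1, 45.0, 0.8],
                    [ 8.5, 5.9, 30.0, 1.6],
                    [15.0, 2.2, 60.0, 0.5]])
# Virtual mixture with known contributions 20/12/68 %.
true_p = np.array([0.20, 0.12, 0.68])
mixture = true_p @ sources

n = 100_000
# Uniform random proportions on the simplex (Dirichlet(1,1,1)).
P = rng.dirichlet(np.ones(3), size=n)
pred = P @ sources
# Informal likelihood: inverse of the relative RMSE against the mixture.
rel_rmse = np.sqrt(np.mean(((pred - mixture) / mixture) ** 2, axis=1))
behavioural = rel_rmse < 0.05              # acceptance threshold (tunable)
Pb, w = P[behavioural], 1.0 / rel_rmse[behavioural]
w /= w.sum()

for i, name in enumerate(["western", "central", "eastern"]):
    lo, hi = np.percentile(Pb[:, i], [5, 95])
    mean = np.sum(w * Pb[:, i])
    print(f"{name:8s}: {100*lo:.0f}-{100*hi:.0f}% "
          f"(weighted mean {100*mean:.0f}%)")
```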

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 74
1028 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures and, usually, the prediction can be obtained with much more accuracy for a single component than for assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analysis in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the nonlinear SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first is a two-DOF spring-mass-damper system, and the second example considers a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
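
The updating idea can be sketched for the two-DOF example. The authors' implementation is in MATLAB, so the Python sketch below is only illustrative, under stated assumptions: a cubic spring at one DOF, a describing-function equivalent stiffness, and a rank-one (Sherman-Morrison) update of the linear receptance iterated to convergence at each frequency.

```python
# Python sketch of a nonlinear FRF via an 'updated' linear FRF: 2-DOF
# spring-mass-damper with a localized cubic spring at DOF 2. Parameter
# values are illustrative.
import numpy as np

m1 = m2 = 1.0; k1 = k2 = 1.0e4; c1 = c2 = 2.0
k3 = 5.0e7                       # cubic stiffness coefficient at DOF 2 (N/m^3)
F = np.array([1.0, 0.0])         # harmonic force at DOF 1

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
C = np.array([[c1 + c2, -c2], [-c2, c2]])

freqs = np.linspace(5, 40, 500) * 2 * np.pi   # rad/s
amp_lin, amp_nl = [], []
for w in freqs:
    H = np.linalg.inv(K + 1j * w * C - w**2 * M)   # linear receptance matrix
    x = H @ F
    amp_lin.append(abs(x[1]))
    # Fixed-point iteration on the amplitude-dependent equivalent stiffness.
    for _ in range(100):
        keq = 0.75 * k3 * abs(x[1])**2             # describing function of x^3
        # Rank-one update of H for the stiffness modification keq at DOF 2.
        Hnl = H - np.outer(H[:, 1], H[1, :]) * keq / (1.0 + keq * H[1, 1])
        x_new = Hnl @ F
        if abs(abs(x_new[1]) - abs(x[1])) < 1e-12:
            x = x_new
            break
        x = 0.5 * x + 0.5 * x_new                  # relaxation for stability
    amp_nl.append(abs(x[1]))

print("Peak linear amplitude:    %.3e m" % max(amp_lin))
print("Peak nonlinear amplitude: %.3e m" % max(amp_nl))
```

Because the linear receptance H is computed once per frequency and only corrected by a rank-one term, the full model never has to be re-solved for each nonlinear iteration, which mirrors the computational advantage claimed for the method.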

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 266
1027 Analysing Competitive Advantage of IoT and Data Analytics in Smart City Context

Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue

Abstract:

The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people's behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives in the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing new services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights through established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today's marketplace. As there are many IoT solutions available today, the amount of data is tremendous. The challenge for companies is to understand which solutions to focus on, how to prioritise them, and which data can differentiate them from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company's competitive advantage through smart city solutions. The results of the research provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges these factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts that define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.

Keywords: data analytics, smart cities, competitive advantage, internet of things

Procedia PDF Downloads 133
1026 Affects Associations Analysis in Emergency Situations

Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko

Abstract:

Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations invisible at first glance is a source of new knowledge which can subsequently be used for prediction. We used this data mining technique (an automatic and objective method) to learn about interesting associations between affects in a corpus of emergency phone calls. We also made an attempt to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy), (2) valence (negative, neutral, or positive), and (3) intensity (low, typical, alternating, high). Additional information that provides clues to the speaker's emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words) and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item is a single piece of previously annotated information). To generate the rules in the form of an implication X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used, as it avoids performing needless computations. Then, two basic measures (support and confidence) and several additional symmetric and asymmetric objective measures (e.g. Laplace, conviction, interest factor, cosine, correlation coefficient) were calculated for each rule. Each applied interestingness measure revealed different rules, and we selected the top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions. There are, though, strong rules including neutral or even positive emotions. Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our corpus of emergency phone calls. The acquired knowledge can be used for prediction, to fill in the emotional profile of a new caller. Furthermore, analysis of the possible context related to a rule may provide a clue to the situation a caller is in.
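
The Apriori step can be illustrated with a toy sketch over a hypothetical mini-corpus of annotated calls; the items and transactions below are invented, not taken from the actual corpus. Support, confidence and one additional measure (lift) are computed for each extracted rule.

```python
# Toy Apriori over a hypothetical mini-corpus; each call is a set of
# affect/behaviour labels (an "item" is one annotation).
from itertools import combinations

calls = [
    {"sadness", "anxiety", "slow_speech"},
    {"sadness", "weariness", "stress", "frustration", "anger"},
    {"anger", "fast_speech", "chaotic"},
    {"compassion", "sadness"},
    {"sadness", "anxiety"},
    {"calm", "normal_speech"},
]
n = len(calls)
min_sup = 2 / n

def support(itemset):
    return sum(itemset <= c for c in calls) / n

# Apriori: grow frequent itemsets level by level, discarding infrequent ones.
items = {frozenset([i]) for c in calls for i in c}
frequent, level = [], {s for s in items if support(s) >= min_sup}
while level:
    frequent += level
    candidates = {a | b for a in level for b in level
                  if len(a | b) == len(a) + 1}
    level = {s for s in candidates if support(s) >= min_sup}

# Derive rules lhs -> rhs from each frequent itemset of size >= 2.
for s in (f for f in frequent if len(f) > 1):
    for r in range(1, len(s)):
        for lhs in map(frozenset, combinations(s, r)):
            rhs = s - lhs
            conf = support(s) / support(lhs)
            lift = conf / support(rhs)
            if conf >= 0.8:
                print(f"{set(lhs)} -> {set(rhs)}  "
                      f"sup={support(s):.2f} conf={conf:.2f} lift={lift:.2f}")
```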

Keywords: data mining, emergency phone calls, emotional profiles, rules

Procedia PDF Downloads 408
1025 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications in the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed highly productive and efficient methods for the investigation of cognitive and communicative phenomena, of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, as it reflects his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a well-defined scientific method for researching particular types of language phenomena: cognitive methods are applied to the exploration of the cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 254
1024 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission

Authors: Tingwei Shu, Dong Zhou, Chengjun Guo

Abstract:

Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR and restricted bandwidth. With the development of deep learning, semantic communication has further matured and is gradually being applied in fields such as the Internet of Things, Unmanned Aerial Vehicle cluster communication, and remote sensing scenarios. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitting end, we need to extract the semantic information of remote sensing images, but there are some problems. A traditional semantic communication system based on Convolutional Neural Networks (CNNs) cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first perform pre-processing operations on the remote sensing images to improve their resolution, in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic coder to extract and transmit the semantic information of remote sensing images. The Vision-Transformer structure can better handle training on huge data volumes and extract better image semantic features, and it adopts a multi-layer self-attention mechanism to better capture the correlations between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear, so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a semantic communication system based on CNNs and with image coding methods such as BPG and JPEG, to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
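
The pre-processing stage can be sketched as follows, assuming the PyWavelets and SciPy libraries are available; the random input array is a stand-in for a remote sensing tile, and a single-level Haar DWT stands in for whatever wavelet the authors used.

```python
# Sketch: 2-D DWT decomposition, bicubic upsampling of the low-frequency
# band and bilinear upsampling of the high-frequency bands, then inverse
# DWT to obtain a higher-resolution image.
import numpy as np
import pywt
from scipy.ndimage import zoom

img = np.random.default_rng(0).random((128, 128))

# Single-level 2-D DWT: approximation (low-freq) + 3 detail (high-freq) bands.
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")

cA_up = zoom(cA, 2, order=3)                 # bicubic on low frequencies
cH_up, cV_up, cD_up = (zoom(c, 2, order=1)   # bilinear on high frequencies
                       for c in (cH, cV, cD))

upscaled = pywt.idwt2((cA_up, (cH_up, cV_up, cD_up)), "haar")
print(img.shape, "->", upscaled.shape)       # (128, 128) -> (256, 256)
```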

Keywords: semantic communication, transformer, wavelet transform, data processing

Procedia PDF Downloads 78
1023 In vitro Effects of Porcine Follicular Fluid Proteins on Cell Culture Growth in Luteal Phase Porcine Oviductal Epithelial Cells

Authors: Mayuva Youngsabanant, Chanikarn Srinark, Supanyika Sengsai, Soratorn Kerdkriangkrai, Nongnuch Gumlungpat, Mayuree Pumipaiboon

Abstract:

The follicular fluid proteins of healthy medium-size follicles (4-6 mm in diameter) and large-size follicles (7-8 mm in diameter) of Large White pig ovaries were collected using a sterile technique. They were used to test the effect on the in vitro growth of primary cultures of porcine oviductal epithelial cells (pOEC). Porcine oviductal epithelial cells of the luteal phase were cultured in M199 supplemented with 10% fetal calf serum, 2.2 mg/mL NaHCO₃, 0.25 mM pyruvate, and 15 µg/mL and 50 µg/mL gentamycin sulfate, in a highly humidified atmosphere of 5% CO₂ in 95% air at 37°C for 96 h before testing. The effect of pFF from the two follicle sizes (at concentrations of 2, 4, 20, 40, 200, 400, 500, and 600 µg protein) in the culture medium was observed after 24 h using the MTT assay. Results were analyzed with a one-way ANOVA in SPSS. Moreover, pOEC were also studied for their morphological characteristics in long-term culture. The long-term study revealed that at 4 weeks of culture, 70-80% of pOEC showed healthy epithelial-like morphology, while 30% showed an elongated, fibroblast-like morphology. The MTT assay revealed an increase in the percentage of viability of pOEC in the two follicular-fluid-treated groups. The two treated concentration groups were higher than the control group (p < 0.05) but not higher than the positive control group. Interestingly, at 200 µg protein, the two follicular fluid treatment groups reached the highest cell viability, which was higher than the positive control and significantly different from the control group (p < 0.05). These cells developed an elongated, fibroblast-like shape, longer than the cells in the control and positive control groups. This report implies that pFF of the medium follicle size at 200 µg protein, and of the large follicle size at 200 and 500 µg protein, could be the optimal concentrations for use as a culture medium supplement to promote cell growth and development in place of the growth hormone from fetal calf serum. It could be applied in cell biotechnology research. Acknowledgements: The project was funded by a grant from the Silpakorn University Research and Development Institute (SURDI) and the Faculty of Science, Silpakorn University, Thailand.
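
For illustration, the one-way ANOVA comparison can be sketched as below (the authors used SPSS); the viability values are hypothetical placeholders for MTT-derived % viability, not the reported measurements.

```python
# Hedged sketch of a one-way ANOVA across control and pFF-treated groups.
from scipy import stats

control = [100, 98, 102, 99, 101]
pff_200ug = [118, 122, 120, 117, 124]   # hypothetical 200 ug protein group
pff_500ug = [112, 115, 110, 114, 113]   # hypothetical 500 ug protein group

f_stat, p_value = stats.f_oneway(control, pff_200ug, pff_500ug)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group mean differs significantly (p < 0.05).")
```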

Keywords: in vitro, porcine follicular fluid protein (pFF), porcine oviductal epithelial cells (pOEC), MTT

Procedia PDF Downloads 145
1022 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s, until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately' or with allowances for some variability rather than 'exactly'. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods, obtaining markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
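
As an illustration of a Monte-Carlo PF calculation, the sketch below samples cohesion and friction angle for a simple dry infinite-slope factor-of-safety model and counts the fraction of realizations with FS below 1. All parameter distributions are illustrative and are not taken from the case study.

```python
# Monte-Carlo probability-of-failure sketch for a dry infinite slope:
# FS = c / (gamma * z * sin(b) * cos(b)) + tan(phi) / tan(b).
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

beta = np.radians(45.0)                      # slope angle (deterministic)
z, gamma = 20.0, 26.0                        # depth (m), unit weight (kN/m^3)
c = rng.normal(100.0, 20.0, n)               # cohesion (kPa), probabilistic
phi = np.radians(rng.normal(38.0, 3.0, n))   # friction angle, probabilistic

fs = c / (gamma * z * np.sin(beta) * np.cos(beta)) + np.tan(phi) / np.tan(beta)
pf = np.mean(fs < 1.0)
print(f"Mean FS = {fs.mean():.2f}, PF = {100 * pf:.1f}%")
```

The same loop with a Latin-Hypercube sampler in place of the plain random draws would reproduce the alternative sampling strategy mentioned above; for a fixed sample count it typically stabilises the PF estimate.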

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 208