Search results for: CPU core
412 Rethinking Confucianism and Democracy
Authors: He Li
Abstract:
Around the mid-1980s, Confucianism was reintroduced into China from Taiwan and Hong Kong as a result of China’s policies of reform and openness. Since then, the revival of neo-Confucianism in mainland China has accelerated and become a crucial component of the public intellectual sphere. The term xinrujia or xinruxue, loosely translated as “neo-Confucianism,” is increasingly understood as an intellectual and cultural phenomenon of the last four decades. Confucian scholarship is in the process of restoration. This paper examines the Chinese intellectual discourse on Confucianism and democracy and places it in comparative and theoretical perspective. With China’s rise and the surge of populism in the West, particularly in the US, the leading political values of Confucianism could increasingly shape both China and the world at large. This state of affairs points to the need for more systematic efforts to assess the discourse on neo-Confucianism and its implications for China’s transformation. A number of scholars in the camp of neo-Confucianism maintain that some elements of Confucianism are not only compatible with democratic values and institutions but actually promote liberal democracy. They refer to this as Confucian democracy. By contrast, others either view Confucianism as a roadblock to democracy or envision that a convergence of democracy with Confucian values could result in a new hybrid system. The paper traces the complex interplay between Confucianism and democracy. It explores ideological differences between neo-Confucianism and liberal democracy and ascertains whether certain features of neo-Confucianism possess an affinity for an authoritarian political system. In addition to printed materials such as books and journal articles, a selection of articles from the website entitled Confucianism in China will be analyzed. This website was selected because it is the leading website run by Chinese scholars focusing on neo-Confucianism; another reason is its accessibility and availability. In the past few years, quite a few websites, left or right, were shut down by the authorities, but this website remains open. This paper explores the core components, dynamics, and implications of neo-Confucianism. My paper is divided into three parts. The first discusses the origins of neo-Confucianism. The second section reviews the intellectual discourse among Chinese scholars on Confucian democracy. The third explores the implications of the Chinese intellectual discourse on neo-Confucianism. Recently, liberal democracy has come into greater conflict with the official ideology. This paper, which is based on my extensive interviews in China prior to the pandemic and analysis of primary sources in Chinese, will lay the foundation for a chapter on neo-Confucianism and democracy in my next book-length manuscript, tentatively entitled Chinese Intellectual Discourse on Democracy.
Keywords: China, confucius, confucianism, neo-confucianism, democracy
Procedia PDF Downloads 81
411 Digital Immunity System for Healthcare Data Security
Authors: Nihar Bheda
Abstract:
Protecting digital assets such as networks, systems, and data from advanced cyber threats is the aim of Digital Immunity Systems (DIS), which are a subset of cybersecurity. With features like continuous monitoring, coordinated reactions, and long-term adaptation, DIS seeks to mimic biological immunity. This minimizes downtime by automatically identifying and eliminating threats. Traditional security measures, such as firewalls and antivirus software, are insufficient for enterprises, such as healthcare providers, given the rapid evolution of cyber threats. The number of medical record breaches that have occurred in recent years is proof that attackers are finding healthcare data to be an increasingly valuable target. However, obstacles to enhancing security include outdated systems, financial limitations, and a lack of knowledge. DIS is an advancement in cyber defenses designed specifically for healthcare settings. Protection akin to an "immune system" is produced by core capabilities such as anomaly detection, access controls, and policy enforcement. Coordination of responses across IT infrastructure to contain attacks is made possible by automation and orchestration. Massive amounts of data are analyzed by AI and machine learning to find new threats. After an incident, self-healing enables services to resume quickly. The implementation of DIS is consistent with the healthcare industry's urgent requirement for resilient data security in light of evolving risks and strict guidelines. With resilient systems, it can help organizations lower business risk, minimize the effects of breaches, and preserve patient care continuity. DIS will be essential for protecting a variety of environments, including cloud computing and the Internet of medical devices, as healthcare providers quickly adopt new technologies. DIS lowers traditional security overhead for IT departments and offers automated protection, even though it requires an initial investment. In the near future, DIS may prove to be essential for small clinics, blood banks, imaging centers, large hospitals, and other healthcare organizations. Cyber resilience can become attainable for the whole healthcare ecosystem with customized DIS implementations.
Keywords: digital immunity system, cybersecurity, healthcare data, emerging technology
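To make the anomaly-detection capability described above concrete, the following is a minimal sketch of one way such a check could operate on record-access counts drawn from an audit log. The robust median/MAD score, the threshold, and the user names are hypothetical illustrations, not details taken from the study.

```python
# A minimal, hypothetical sketch of anomaly detection on audit-log access counts.
# The robust median/MAD score and the threshold of 5 are illustrative choices only.
from statistics import median

def flag_anomalous_access(counts_per_user, threshold=5.0):
    """Flag users whose daily record-access count deviates strongly from the rest."""
    values = list(counts_per_user.values())
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0   # robust estimate of spread
    return [user for user, n in counts_per_user.items()
            if abs(n - med) / mad > threshold]

# Hypothetical daily counts aggregated from an electronic health record audit log.
daily_counts = {"nurse_a": 42, "nurse_b": 51, "clerk_c": 47, "admin_d": 480}
print(flag_anomalous_access(daily_counts))   # ['admin_d'] -> hand off to response layer
```

In a full DIS, a flag like this would feed the orchestration layer that isolates the account and triggers the coordinated response the abstract describes.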
Procedia PDF Downloads 67
410 Development of Electrochemical Biosensor Based on Dendrimer-Magnetic Nanoparticles for Detection of Alpha-Fetoprotein
Authors: Priyal Chikhaliwala, Sudeshna Chandra
Abstract:
Liver cancer is one of the most common malignant tumors and carries a poor prognosis, largely because it does not exhibit any symptoms in the early stage of the disease. An increased serum level of AFP is clinically considered a diagnostic marker for liver malignancy. The present diagnostic modalities include various types of immunoassays, radiological studies, and biopsy. However, these tests suffer from slow response times, require significant sample volumes, achieve limited sensitivity, and ultimately become expensive and burdensome to patients. Considering all these aspects, an electrochemical biosensor based on dendrimer-magnetic nanoparticles (MNPs) was designed. Dendrimers are novel nano-sized, three-dimensional molecules with monodispersed structures. Poly-amidoamine (PAMAM) dendrimers with eight –NH₂ groups, using ethylenediamine as a core molecule, were synthesized by the Michael addition reaction. Dendrimers provide the added advantage of not only stabilizing Fe₃O₄ NPs but also of performing multiple electron redox events and binding multiple biological ligands to their dendritic end-surface. Fe₃O₄ NPs, owing to their superparamagnetic behavior, can be exploited for the magneto-separation process. Fe₃O₄ NPs were stabilized with the PAMAM dendrimer by an in situ co-precipitation method. The surface coating was examined by FT-IR, XRD, VSM, and TGA analysis. Electrochemical behavior and kinetics were evaluated using CV, which revealed that the dendrimer-Fe₃O₄ NPs can be regarded as electrochemically active materials. The electrochemical immunosensor was designed by immobilizing anti-AFP onto the dendrimer-MNPs through a glutaraldehyde conjugation reaction. The bioconjugates were then incubated with the AFP antigen. The immunosensor was characterized electrochemically, indicating successful immuno-binding events. The binding events were further studied using magnetic particle imaging (MPI), a novel imaging modality in which Fe₃O₄ NPs are used as tracer molecules with positive contrast. Multicolor MPI was able to clearly localize the AFP antigen and antibody and their binding. The results demonstrate immense potential in terms of biosensing and enabling MPI of AFP in clinical diagnosis.
Keywords: alpha-fetoprotein, dendrimers, electrochemical biosensors, magnetic nanoparticles
Procedia PDF Downloads 136
409 An Optimal Hybrid EMS System for a Hyperloop Prototype Vehicle
Authors: J. F. Gonzalez-Rojo, Federico Lluesma-Rodriguez, Temoatzin Gonzalez
Abstract:
Hyperloop, a new mode of transport, is gaining significance. It consists of a ground-based transport system that includes a levitation system, which avoids rolling friction forces, and that is enclosed in a tube whose inner atmosphere is controlled to lower the aerodynamic drag forces. Hyperloop is thus proposed as a solution to the current limitations of ground transportation. The rolling and aerodynamic problems that limit the top speed of traditional high-speed rail, or even maglev systems, are overcome with a hyperloop solution. Zeleros is one of the companies developing technology for hyperloop applications worldwide. It is working on a concept that reduces the infrastructure cost and minimizes the power consumption as well as the losses associated with magnetic drag forces. For this purpose, Zeleros proposes a Hybrid ElectroMagnetic Suspension (EMS) for its prototype. In the present manuscript, an active and optimal electromagnetic suspension levitation method based on nearly zero-power-consumption individual modules is presented. This system consists of several hybrid permanent magnet-coil levitation units that can be arranged along the vehicle. The proposed unit manages to redirect the magnetic field along a defined direction, forming a magnetic circuit and minimizing the losses due to field dispersion. This is achieved using an electrical steel core. Each module can stabilize the gap distance using the coil current and either linear or non-linear control methods. The ratio between weight and levitation force for each unit is 1/10. In addition, the quotient between the lifted weight and the power consumption at the target gap distance is 1/3 [kg/W]. One degree of freedom (DoF), along the gap direction, is controlled by a single unit. However, when several units are present, a 5 DoF control (2 translational and 3 rotational) can be achieved, leading to full attitude control of the vehicle. The proposed system has been successfully tested in a laboratory test bench, reaching TRL-4, and is currently under TRL-5 development when the association of modules to control 5 DoF is considered.
Keywords: active optimal control, electromagnetic levitation, HEMS, high-speed transport, hyperloop
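As an illustration of the gap-stabilization step mentioned above (one module controlling the single gap-direction DoF with a linear law), here is a minimal sketch of a discrete PD current correction. The set-point, gains, and sampling rate are hypothetical placeholders; the abstract does not disclose the actual controller.

```python
# Hypothetical sketch of a linear (PD) gap controller for one levitation module.
# Set-point, gains and sampling interval are illustrative assumptions.
TARGET_GAP_MM = 10.0     # assumed levitation gap set-point
KP = 40.0                # assumed proportional gain [A/mm]
KD = 0.05                # assumed derivative gain [A*s/mm]

def coil_current_correction(gap_mm, prev_error_mm, dt_s):
    """PD law: a positive output raises coil current to increase the attraction force."""
    error = gap_mm - TARGET_GAP_MM               # positive when the gap is too large
    d_error = (error - prev_error_mm) / dt_s
    return KP * error + KD * d_error, error

# Demo on a synthetic 1 kHz gap trace (values invented for illustration).
gap_trace = [10.8, 10.6, 10.3, 10.1, 9.95]
prev_err, dt = gap_trace[0] - TARGET_GAP_MM, 1e-3
for gap in gap_trace:
    delta_i, prev_err = coil_current_correction(gap, prev_err, dt)
    print(f"gap = {gap:5.2f} mm -> coil current correction {delta_i:+6.1f} A")
```

A real controller would also account for the non-linear force-current-gap relationship of the hybrid magnet, which is why the abstract mentions non-linear methods as an alternative.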
Procedia PDF Downloads 146
408 Greek Teachers' Understandings of Typical Language Development and of Language Difficulties in Primary School Children and Their Approaches to Language Teaching
Authors: Konstantina Georgali
Abstract:
The present study explores Greek teachers’ understandings of typical language development and of language difficulties. Its core aim was to highlight that teachers need a thorough understanding of educational linguistics, that is, of how language figures in education. They should also be aware of how language should be taught so as to promote language development for all students while at the same time supporting the needs of children with language difficulties in an inclusive ethos. The study thus argued that language can be a dynamic learning mechanism in the minds of all children and a powerful teaching tool in the hands of teachers, and provided current research evidence to show that the structural and morphological particularities of native languages (in this case, of the Greek language) can be used by teachers to enhance children’s understanding of language and simultaneously improve oral language skills for children with typical language development and for those with language difficulties. The research was based on a Sequential Exploratory Mixed Methods Design deployed in three consecutive and integrative phases. The first phase involved 18 exploratory interviews with teachers. Its findings informed the second phase, a questionnaire survey with 119 respondents. Contradictory questionnaire results were further investigated in a third phase employing a formal testing procedure with 60 children attending Y1, Y2 and Y3 of primary school (a research group of 30 language-impaired children and a comparison group of 30 children with typical language development, both identified by their class teachers). Results showed both strengths and weaknesses in teachers’ awareness of educational linguistics and of language difficulties. They also provided a different perspective on children’s language needs and on language teaching approaches that reflected current advances and conceptualizations of language problems, and opened a new window on how best these can be met in an inclusive ethos. However, teachers barely used teaching approaches that could capitalize on the particularities of the Greek language to improve language skills for all students in class. Although they seemed to realize the importance of oral language skills and their knowledge base on language-related issues was adequate, their practices indicated that they did not see language as a dynamic teaching and learning mechanism that can promote children’s language development and, in tandem, improve academic attainment. Important educational implications arose, along with clear indications that the findings can be generalized beyond the Greek educational context.
Keywords: educational linguistics, inclusive ethos, language difficulties, typical language development
Procedia PDF Downloads 382
407 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, the embedded system-on-a-chip (SoC) has come to contain a coarse-granularity multi-core CPU (central processing unit) and a mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupling these three processors. The CPU and GPU offload part of the computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator because development approaches supporting the collaboration of the heterogeneous processors face challenges. A systematic approach is therefore needed that offers write-once-run-anywhere portability and high execution performance for the modules mapped to the various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed to abstract the cooperation of the heterogeneous processors; it supports task partition, communication and synchronization. At its first run, the intermediate language, represented by a data-flow diagram, can generate the executable code of the target processor or can be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping of the two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system achieves performance similar to the pure FPGA and comparable energy efficiency with less than 35% of the resources.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
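To give a feel for the servant-execution-flow idea sketched in the abstract (tasks in a data-flow graph bound to a target unit through instantiation parameters and dispatched by a servant), here is a minimal, hypothetical Python sketch; the class and method names are illustrative and are not the authors' actual intermediate language or API.

```python
# Hypothetical sketch: a data-flow graph whose tasks carry a target binding
# (FPGA, CPU or GPU) and a servant that walks the graph and dispatches each stage.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Task:
    name: str
    target: str                      # "FPGA", "CPU" or "GPU" (instantiation parameter)
    run: Callable[..., object]
    deps: List[str] = field(default_factory=list)

class Servant:
    def __init__(self, tasks: Dict[str, Task]):
        self.tasks = tasks

    def execute(self, source):
        results, pending = {}, list(self.tasks)
        while pending:                               # naive dependency-ordered dispatch
            for name in list(pending):
                task = self.tasks[name]
                if all(d in results for d in task.deps):
                    inputs = [results[d] for d in task.deps] or [source]
                    results[name] = task.run(*inputs)    # synchronise, then offload
                    pending.remove(name)
        return results

# Toy pipeline: acquisition on FPGA, contrast stretching on GPU, rendering on CPU.
graph = {
    "acquire": Task("acquire", "FPGA", lambda raw: [v * 0.5 for v in raw]),
    "stretch": Task("stretch", "GPU",  lambda x: [min(255, v * 2) for v in x], ["acquire"]),
    "render":  Task("render",  "CPU",  lambda x: f"{len(x)} samples displayed", ["stretch"]),
}
print(Servant(graph).execute([10, 20, 30])["render"])
```

In the real framework each `run` body would be generated code for the FPGA, CPU or GPU rather than a Python lambda; the point of the sketch is the separation between the graph, the target binding, and the dispatch loop.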
Procedia PDF Downloads 118
406 Using Action Based Research to Examine the Effects of Co-Teaching on Middle School and High School Student Achievement in Math and Language Arts
Authors: Kathleen L. Seifert
Abstract:
Students with special needs are expected to achieve the same academic standards as their general education peers, yet many students with special needs are pulled out of general content instruction. Because of this, many students with special needs are denied content knowledge from a content expert and instead receive content instruction in a more restrictive setting. Collaborative teaching, where a general education and a special education teacher work alongside each other in the same classroom, has become increasingly popular as a means to meet the diverse needs of students in America’s public schools. The idea behind co-teaching is noble: to ensure students with special needs receive content area instruction from a content expert while also receiving the necessary supports to be successful. However, in spite of this noble effort, the effects of co-teaching are not always positive. The reasons why have produced several hypotheses, one of which has to do with a lack of proper training and implementation of effective evidence-based co-teaching practices. In order to examine the effects of co-teacher training, eleven teaching pairs from a small midwestern school district in the United States participated in a study. The purpose of the study was to examine the effects of co-teacher training on middle and high school student achievement in Math and Language Arts. A local university instructor provided teachers with training in co-teaching via a three-day workshop. In addition, co-teaching pairs were given the opportunity for direct observation and feedback using the Co-teaching Core Competencies Observation Checklist throughout the academic year. Data are being collected on both the students enrolled in the co-taught classes and on the teachers themselves. Student data compared achievement on standardized assessments and classroom performance across three domains: 1. general education students compared to students with special needs in co-taught classrooms; 2. students with special needs in classrooms with and without co-teaching; 3. students in classrooms where teachers were given observation and feedback compared to teachers who refused the observation and feedback. Teacher data compared the perceptions of the co-teaching initiative between teacher pairs who received direct observation and feedback and those who did not. The findings from the study will be shared with the school district and used for program improvement.
Keywords: collaborative teaching, collaboration, co-teaching, professional development
Procedia PDF Downloads 119
405 Development of a Real-Time Simulink Based Robotic System to Study Force Feedback Mechanism during Instrument-Object Interaction
Authors: Jaydip M. Desai, Antonio Valdevit, Arthur Ritter
Abstract:
Robotic surgery is used to enhance minimally invasive surgical procedures. It provides a greater degree of freedom for surgical tools but lacks a haptic feedback system to provide the sense of touch to the surgeon. Surgical robots work on master-slave operation, where the user is the master and the robotic arms are the slaves. Current surgical robots provide precise control of the surgical tools but rely heavily on visual feedback, which sometimes causes damage to inner organs. The goal of this research was to design and develop a real-time Simulink-based robotic system to study the force feedback mechanism during instrument-object interaction. The setup includes three Velmex XSlide assemblies (XYZ stage) for three-dimensional movement, an end-effector assembly for forceps, an electronic circuit for four strain gages, two Novint Falcon 3D gaming controllers, a microcontroller board with linear actuators, and the MATLAB and Simulink toolboxes. The strain gages were calibrated using an Imada Digital Force Gauge device and tested with a hard-core wire to measure instrument-object interaction in the range of 0-35 N. The designed Simulink model successfully acquires 3D coordinates from the two Novint Falcon controllers and transfers the coordinates to the XYZ stage and forceps. The Simulink model also reads the strain gage signals in real time through the 10-bit analog-to-digital converter of a microcontroller assembly, converts voltage into force, and feeds the output signals back to the Novint Falcon controller for the force feedback mechanism. The experimental setup allows the user to change the forward kinematics algorithms to achieve the best desired movement of the XYZ stage and forceps. This project combines haptic technology with a surgical robot to provide the sense of touch to the user controlling the forceps through a machine-computer interface.
Keywords: surgical robot, haptic feedback, MATLAB, strain gage, simulink
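The voltage-to-force step described above can be illustrated with a short sketch: a raw 10-bit ADC count is scaled to a voltage and then mapped to newtons through a linear calibration. The reference voltage and calibration constants below are hypothetical; in the study they would come from the Imada force-gauge calibration over the 0-35 N range.

```python
# Hypothetical sketch of the strain-gage signal path: 10-bit ADC count -> voltage -> force.
ADC_BITS = 10
V_REF = 5.0                  # assumed ADC reference voltage [V]
CAL_SLOPE_N_PER_V = 7.2      # assumed calibration slope [N/V]
CAL_OFFSET_N = -1.0          # assumed calibration offset [N]

def adc_to_force(raw_count: int) -> float:
    """Map a raw ADC count (0..1023) to force in newtons, clipped at zero."""
    voltage = raw_count / (2**ADC_BITS - 1) * V_REF
    return max(0.0, CAL_SLOPE_N_PER_V * voltage + CAL_OFFSET_N)

for count in (0, 256, 512, 1023):
    print(f"ADC count {count:4d} -> {adc_to_force(count):5.2f} N")
```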
Procedia PDF Downloads 534
404 Self-Inflating Soft Tissue Expander Outcome for Alveolar Ridge Augmentation: A Randomized Controlled Clinical and Histological Study
Authors: Alaa T. Ali, Nevine H. Kheir El Din, Ehab S. Abdelhamid, Ahmed E. Amr
Abstract:
Objective: Severe alveolar bone resorption is usually associated with a deficient amount of soft tissue. Soft tissue expansion is introduced to provide an adequate amount of soft tissue over the grafted area. This study aimed to assess the efficacy of sub-periosteal self-inflating osmotic tissue expanders used as preparatory surgery before horizontal alveolar ridge augmentation using an autogenous onlay block bone graft. Methods: A prospective randomized controlled clinical trial was performed. Sixteen partially edentulous patients requiring horizontal bone augmentation in the anterior maxilla were randomly assigned to horizontal ridge augmentation with autogenous bone block grafts harvested from the mandibular symphysis. For the test group, soft tissue expanders were placed sub-periosteally before horizontal ridge augmentation. Impressions were taken before and after soft tissue expansion (STE), and the cast models were optically scanned and superimposed for volumetric analysis. Horizontal ridge augmentation was carried out after STE completion. For the control group, a periosteal releasing incision was performed during the bone augmentation procedures. Implants were placed in both groups at re-entry surgery after a six-month period, and a core biopsy was taken. Histomorphometric assessment of the newly formed bone surface area, mature collagen area fraction, osteoblast count, and blood vessel count was performed. The change in alveolar ridge width was evaluated with a bone caliper and CBCT. Results: The soft tissue expander successfully provided a surplus amount of soft tissue in 5 out of 8 patients in the test group. Complications during the expansion period were perforation through the oral mucosa in two patients and infection in one patient. The mean soft tissue volume gain was 393.9 ± 322 mm after 6 months. The mean horizontal bone gains for the test and control groups were 3.14 mm and 3.69 mm, respectively. Conclusion: STE with a sub-periosteal approach is an applicable method to gain additional soft tissue and to reduce bone block graft exposure and wound dehiscence.
Keywords: soft tissue expander, ridge augmentation, block graft, symphysis bone block
Procedia PDF Downloads 125
403 Polymeric Sustained Biodegradable Patch Formulation for Wound Healing
Authors: Abhay Asthana, Gyati Shilakari Asthana
Abstract:
It is patient compliance and stability, in combination with controlled drug delivery and biocompatibility, that form the core features in the present research and development of a sustained biodegradable patch formulation intended for wound healing. The aim was to impart sustained degradation, a sterile formulation, significant folding endurance, elasticity, biodegradability, bio-acceptability and strength. The optimized formulation was developed using components including the polymers hydroxypropyl methyl cellulose, ethyl cellulose and gelatin, citric acid-PEG-citric acid (CPEGC) triblock dendrimers, and the active curcumin. The polymeric mixture was dissolved in geometric order in a suitable medium with continuous stirring under ambient conditions. With continued stirring, curcumin was added with the aid of DCM and methanol in an optimized ratio to obtain a homogeneous dispersion. The dispersion was sonicated at the optimum frequency for a given time and later cast to form a patch. All steps were carried out under strict aseptic conditions. The formulations obtained in the acceptable working range were selected based on thickness, uniformity of drug content, smooth texture, flexibility and brittleness. The patch kept on stability using butter paper in a sterile pack displayed a folding endurance in the range of 20 to 23 folds, without any evidence of cracking, for the optimized formulation at room temperature (RT) (24 ± 2°C). The patch displayed acceptable parameters after stability studies conducted under refrigerated conditions (8 ± 0.2°C) and at RT (24 ± 2°C) up to 90 days. Further, no significant changes were observed in critical parameters such as elasticity, biodegradability, drug release and drug content during the stability study conducted at RT (24 ± 2°C) for 45 and 90 days. The drug content was in the range of 95 to 102%, the moisture content did not exceed 19.2%, and the patch passed the content uniformity test. The percentage cumulative drug release was found to be 80% in 12 h and matched the biodegradation rate, with a correlation factor R² > 0.9. The biodegradable patch-based formulation developed shows promising results in terms of stability and release profiles.
Keywords: sustained biodegradation, wound healing, polymers, stability
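As a brief illustration of how a cumulative-release profile of this kind can be fitted and its correlation factor computed, the sketch below fits invented data points (roughly 80% release at 12 h) to a first-order release model and reports R². Both the data and the choice of model are assumptions for illustration, not the study's measurements.

```python
# Hypothetical sketch: fit a cumulative-release profile and report R^2.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 1, 2, 4, 6, 8, 10, 12], dtype=float)        # hours (invented)
q = np.array([0, 18, 31, 50, 63, 71, 76, 80], dtype=float)   # % cumulative release (invented)

def first_order(t, q_inf, k):
    """First-order release model: Q(t) = Q_inf * (1 - exp(-k t))."""
    return q_inf * (1.0 - np.exp(-k * t))

(q_inf, k), _ = curve_fit(first_order, t, q, p0=(85.0, 0.2))
residuals = q - first_order(t, q_inf, k)
r_squared = 1.0 - np.sum(residuals**2) / np.sum((q - q.mean())**2)
print(f"Q_inf = {q_inf:.1f} %, k = {k:.3f} 1/h, R^2 = {r_squared:.3f}")
```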
Procedia PDF Downloads 332
402 Isolate-Specific Variations among Clinical Isolates of Brucella Identified by Whole-Genome Sequencing, Bioinformatics and Comparative Genomics
Authors: Abu S. Mustafa, Mohammad W. Khan, Faraz Shaheed Khan, Nazima Habibi
Abstract:
Brucellosis is a zoonotic disease of worldwide prevalence. There are at least four species and several strains of Brucella that cause human disease. Brucella genomes have very limited variation across strains, which hinders strain identification using classical molecular techniques, including PCR and 16S rDNA sequencing. The aim of this study was to perform whole-genome sequencing of clinical isolates of Brucella and to carry out bioinformatics and comparative genomics analyses to determine whether genetic differences exist across isolates of a single Brucella species and strain. Draft sequence data were generated from 15 clinical isolates of Brucella melitensis (biovar 2, strain 63/9) using the MiSeq next-generation sequencing platform. The generated reads were used for further assembly and analysis. All analyses were performed on a bioinformatics workstation (8-core i7 processor, 8 GB RAM, Bio-Linux operating system). FastQC was used to determine the quality of reads, and low-quality reads were trimmed or eliminated using fastx_trimmer. Assembly was done using the Velvet and ABySS software. The ordering of assembled contigs was performed with Mauve. The online server RAST was employed to annotate the contig assemblies. Annotated genomes were compared using the Mauve and ACT tools. The QC score for the DNA sequence data generated by MiSeq was higher than 30 for 80% of reads, with more than 100x coverage, which suggested that the data could be utilized for further analysis. However, when analyzed by FastQC, the read quality of four samples was not good enough to create a complete genome draft, so the remaining 11 samples were used for further analysis. The comparative genome analyses showed that, despite sharing the same gene sets, single nucleotide polymorphisms and insertions/deletions existed across the different genomes, providing a variable extent of diversity to these bacteria. In conclusion, next-generation sequencing, bioinformatics, and comparative genome analysis can be utilized to find variations (point mutations, insertions and deletions) across different genomes of Brucella within a single strain. This information could be useful in surveillance and epidemiological studies. The work was supported by Kuwait University Research Sector grants MI04/15 and SRUL02/13.
Keywords: brucella, bioinformatics, comparative genomics, whole genome sequencing
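To illustrate the read-filtering step mentioned above (the study used FastQC and fastx_trimmer for this), here is a minimal stand-in that simply drops FASTQ reads whose mean Phred quality falls below Q30. The file names and the cut-off are assumptions for illustration, not the study's actual commands.

```python
# Hypothetical sketch of quality filtering of FASTQ reads before assembly.
def mean_phred(quality_line: str, offset: int = 33) -> float:
    """Average Phred score of one quality string (Sanger/Illumina 1.8+ offset assumed)."""
    scores = [ord(c) - offset for c in quality_line.strip()]
    return sum(scores) / len(scores)

def filter_fastq(path_in: str, path_out: str, min_q: float = 30.0) -> int:
    """Copy reads with mean quality >= min_q to path_out; return how many were kept."""
    kept = 0
    with open(path_in) as fin, open(path_out, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]   # header, sequence, '+', quality
            if not record[0]:
                break
            if mean_phred(record[3]) >= min_q:
                fout.writelines(record)
                kept += 1
    return kept

# Usage (hypothetical file names):
# kept = filter_fastq("isolate01_R1.fastq", "isolate01_R1.q30.fastq")
# print(f"{kept} reads retained for Velvet/ABySS assembly")
```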
Procedia PDF Downloads 383
401 Atmospheres, Ghosts and Shells to Reform our Memorial Cultures
Authors: Tomas Macsotay
Abstract:
If monument removal and monument effacement call to mind a Nietzschean proposal for a vitalist disregard of conventional morality, it remains the case that it is often only by a willingness to go “beyond good and evil” in inherited monument politics that truthful, albeit unexpected, aspects of our co-existence with monuments can finally rise into fuller consciousness. A series of urgent questions press themselves upon us in the panorama created by the affirmative idea that we can, as a community, make crucial decisions with regard to monumental preservation or discontinuation. Memorials are not the core concern for decolonial and racial dignity movements like Black Lives Matter (BLM), which have repeatedly shown that they regard these actions as a welcome, albeit complementary, part of a reckoning with a past of racial violence and injustice, slavery, and colonial subaltern existence. As such, the iconoclastic issue of “rights and prohibitions of images” only tangentially touches on a cultural movement that seems rather to question dominant ideas of history, pertinence, and the long life of class, gender, and racial conflict through ossified memorial cultures. In the recent monument insurrection, we face a rare case of a new negotiation of the rights of existence for this particular tract of material culture. This engenders a debate on how and why we accord rights to objects in the public dominion, and indeed on how such rights impinge upon the rights of subjects who inhabit the public sphere. Incidentally, the possibility of taking away from monuments such imagined or adjoined rights has made it possible to tease open a sphere of emotionality that could not be expressed in patrimonial thinking: the reality of atmospheres as settings, often dependent on pseudo-objects and half-conscious situations, that situate individuals involuntarily in a pathic aesthetics. In this way, the unique moment we now witness, full of the possibility of going “beyond good and evil” of monument preservation, starts to look more like a moment of involuntary awakening: an awakening to the encrypted gaze of the monument and to the enigma that the same monument or memorial site can carry day-to-day habits of life for some bystanders, while racialized and disenfranchised communities experience discomfort and an erosion of subjective life in the same sites.
Keywords: monument, memorial, atmosphere, racial justice, decolonialism
Procedia PDF Downloads 80
400 The Conundrum of Marital Rape in Malawi: The Past, the Present and the Future
Authors: Esther Gumboh
Abstract:
While the definition of rape has evolved over the years and now differs from one jurisdiction to another, at the heart of the offence remains the absence of consent on the part of the victim. In simple terms, rape consists in non-consensual sexual intercourse. Therefore, the core issue is whether the accused acted with the consent of the victim. Once it is established that the act was consensual, a conviction of rape cannot be secured. Traditionally, rape within marriage was impossible because it was understood that a woman gave irrevocable consent to sex with her husband throughout the duration of the marriage. This position has since changed in most jurisdictions. Indeed, Malawian law now recognises the offence of marital rape. This is a victory for women’s rights and gender equality. Curiously, however, the definition of marital rape endorsed differs from the standard understanding of rape as non-consensual sex. Instead, the law has introduced the concept of unreasonableness of the refusal to engage in sex as a defence for an accused. This is an alarming position that undermines the protection sought to be derived from the criminalisation of rape within marriage. Moreover, in the Malawian context, where rape remains an offence only men can commit against women, the current legal framework for marital rape perpetuates the societal misnomer that a married woman gives once-off consent to sexual intercourse by virtue of marriage. This takes us back to the old common law position which many countries have moved away from. The present definition of marital rape under Malawian law also sits at odds with the nature of rape as applicable to all other instances of non-consensual sexual intercourse. Consequently, the law fails to protect married women from unwanted sexual relations at the hands of their husbands. This paper critically examines the criminalisation of marital rape in Malawi. It commences with a historical account of the conceptualisation of rape and then looks at judgments that rejected the validity of marital rape. The discussion then moves to the debates that preceded the criminalisation of marital rape in Malawi and how the Law Commission reasoned to finally make a recommendation in its favour. Against this background, the paper analyses the legal framework for marital rape and what this means for the elements of the offence and the defences that may be raised by an accused. In the final analysis, this contribution recommends that the definition of marital rape be amended. Better still, the law should simply state that the fact of marriage is not a defence to a charge of rape, or, in other words, that there is no marital rape exemption. This would automatically mean that husbands are subjected to the same criminal law principles as their unmarried counterparts when it comes to non-consensual sexual intercourse with their wives.
Keywords: criminal law, gender, Malawi, marital rape, rape, sexual intercourse
Procedia PDF Downloads 354
399 Study on Spatial Structure and Evolvement Process of Traditional Villages’ Courtyard Based on Clannism
Abstract:
The origination and development of Chinese traditional villages have a strong link with clan society. Thousands of traditional villages are constituted by one big family sharing the same surname, and the villages’ basic social relationships are built on family kinship. Clan power controls the spatial structure of family courtyards and influences their evolvement process. Compared with other countries, research from the perspective of clannism is a distinctive and widely applicable way to recognize the spatial features of Chinese traditional villages. This paper takes traditional villages in eastern Zhejiang province as examples, especially a single-clan village named Zoumatang. By combining rural sociology with architecture, it clarifies the coupling relationship between clan structure and village space and reveals the spatial composition and evolvement logic of family courtyards. Clan society pays much attention to patrilineal kinship and genealogy. In eastern Zhejiang province, a clan is usually divided into three levels: clan, branches and families. Its structural relationship resembles a pyramid, which results in a ‘center-margin’ structure when projected onto village space. Due to the cultural tradition of ancestor worship, family courtyard space exhibits a similar ‘center-margin’ structure. The ancestor hall and the family temple are, respectively, the spatial cores of the village and the courtyard. Other parts of the courtyard also show an order of superiority and inferiority: elders and men come first. However, along with the disintegration of clan society, family courtyards gradually show a trend toward fragmentation. Their spatial structure becomes more and more flexible and their scale becomes smaller and smaller. Living conditions, rather than ancestor worship, turn out to be the primary consideration. As a result, there are different courtyard historical prototypes in different historical periods. To some extent, the present conservation of Chinese traditional villages ignores the impact of clan society. This paper discovers the social significance of the courtyard’s spatial texture and rebuilds the connection between society and space. It is expected to encourage the conservation of Chinese traditional villages to pay more attention to authenticity, which is defined in the historical process, and integrity, which is built on the basis of social meaning.
Keywords: China, clanism, courtyard, evolvement process, spatial structure, traditional village
Procedia PDF Downloads 320
398 Experiments to Study the Vapor Bubble Dynamics in Nucleate Pool Boiling
Authors: Parul Goel, Jyeshtharaj B. Joshi, Arun K. Nayak
Abstract:
Nucleate boiling is characterized by the nucleation, growth and departure of the tiny individual vapor bubbles that originate in cavities or imperfections present in the heating surface. It finds a wide range of applications, e.g. in heat exchangers or steam generators, core cooling in power reactors or rockets, and cooling of electronic circuits, owing to its highly efficient transfer of large heat fluxes over small temperature differences. Hence, it is important to be able to predict the rate of heat transfer and the safety-limit heat flux (the critical heat flux; heat fluxes higher than this can damage the heating surface) applicable to any given system. A large number of experimental and analytical works exist in the literature, based on the idea that knowledge of the bubble dynamics on the microscopic scale can lead to an understanding of the full picture of boiling heat transfer. However, the existing data in the literature are scattered over various sets of conditions and are often in disagreement with each other. The correlations obtained from such data are also limited to the range of conditions they were established for, and no single correlation is applicable over a wide range of parameters. More recently, a number of researchers have been trying to remove empiricism from heat transfer models to arrive at more phenomenological models using extensive numerical simulations; these models require state-of-the-art experimental data for a wide range of conditions, first for input and later for their validation. With this idea in mind, experiments with sub-cooled and saturated demineralized water have been carried out under atmospheric pressure to study the bubble dynamics (growth rate, departure size and frequency) in nucleate pool boiling. A number of heating elements have been used to study the dependence of vapor bubble dynamics on the heater surface finish and heater geometry, along with experimental conditions such as the degree of sub-cooling, superheat and the heat flux. An attempt has been made to compare the data obtained with the existing data and correlations in the literature to generate an exhaustive database for pool boiling conditions.
Keywords: experiment, boiling, bubbles, bubble dynamics, pool boiling
Procedia PDF Downloads 302
397 Ultrastructural Characterization of Lipid Droplets of Rat Hepatocytes after Whole Body 60-Cobalt Gamma Radiation
Authors: Ivna Mororó, Lise P. Labéjof, Stephanie Ribeiro, Kely Almeida
Abstract:
Lipid droplets (LDs) are normally present in greater or lesser number in the cytoplasm of almost all eukaryotic and some prokaryotic cells. They are independent organelles composed of a lipid ester core and a surface phospholipid monolayer. As a lipid storage form, they provide an available source of energy for the cell. Recently it was demonstrated that they play an important role in many other cellular processes. Among the many unresolved questions about them, it is not even known how LDs are formed, how lipids are recruited to LDs and how they interact with other organelles. Excess fat in the organism is pathological and often associated with the development of genetic, hormonal or behavioral diseases. The formation and accumulation of lipid droplets in the cytoplasm can be increased by exogenous physical or chemical agents. It is well known that ionizing radiation affects lipid metabolism, resulting in increased lipogenesis in cells, but the details of this process are unknown. To better understand the mode of formation of LDs in liver cells, we investigated their ultrastructural morphology after irradiation. For that, Wistar rats were exposed to whole-body gamma radiation from 60-cobalt at various single doses. Samples of the livers were processed for analysis under a conventional transmission electron microscope. We found that, compared to controls, morphological changes in liver cells were evident at the higher doses of radiation used. A great number of lipid droplets of different sizes and homogeneous content was detected, and some of them merged with each other. In some cells, diffuse LDs were observed that were not limited by a monolayer of phospholipids. This finding suggests that the phospholipid monolayer of the LDs was disrupted by exposure to ionizing radiation, which promotes lipid peroxidation of endomembranes. The absence of the phospholipid monolayer may thus prevent the realization of several cellular activities, as follows: - lipid exocytosis, which requires the merging of the LD membrane with the plasma membrane; - the interaction of LDs with other membrane-bound organelles such as the endoplasmic reticulum (ER), the Golgi and mitochondria; and - lipolysis of the lipid esters contained in the LDs, which requires enzymes located in membrane-bound organelles such as the ER. All these impediments can contribute to lipid accumulation in the cytoplasm and the development of diseases such as liver steatosis, cirrhosis and cancer.
Keywords: radiobiology, hepatocytes, lipid metabolism, transmission electron microscopy
Procedia PDF Downloads 314
396 Modern Wars: States Responsibility
Authors: Lakshmi Chebolu
Abstract:
'War’, the word itself, is so resonant that it grips entire societies. Since the beginning of mankind, the world has witnessed constant struggles. However, along with the growth of communities, relations on the one hand, and disputes on the other, have increased enormously. When states cannot or will not settle their disputes or differences by means of peaceful agreements, weapons are suddenly made to speak. This does not mean states can engage in war whenever they desire. At the international level, there was vast development of the law of war in the 20th century. Whether a war is internal or international, in all situations belligerent actors should follow the principles of warfare. With the advent of technology, the shape of war has changed, and it violates fundamental principles without observing basic norms. Conversely, states’ attitudes towards international relationships are also undermined to some extent, as state parties prioritize political or individual interests rather than the communal interest. In spite of the persistent development of communities, many people are still innocent victims of modern wars, which take a toll of many lives, liberties and properties and remain a major obstacle to nations’ development. Recent events in Afghanistan are a living example for the world’s nations. We know that the principles of international law cannot be implemented very strictly against perpetrators due to lacunae in the international legal system. However, the rules of war are universal in nature. The Geneva Conventions of 1949, which are the core element of IHL, have been ratified by all 196 states; in fact, very few international treaties have received this much support from nations. States’ approaches towards modern international law place a heavy burden on state practice in the implementation of the law. Although the United Nations Security Council possesses certain powers under the ‘Pacific Settlement of Disputes’ (Chapter VI) of the United Nations Charter to settle disputes in a peaceful manner, this practice has been overlooked for many years due to political interests, favor, etc. Despite international consensus on the prohibition of war and the protection of fundamental freedoms and human dignity, the law has often been misused by states. The recent tendencies raise questions about states’ willingness to implement the law. In view of the existing practices of nations, this paper aims to elevate the legal obligations of the international community to save succeeding generations from the scourge of modern war practices.
Keywords: modern wars, weapons, prohibition and suspension of war activities, states’ obligations
Procedia PDF Downloads 81
395 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations
Authors: Milena Nanova, Radul Shishkov, Damyan Damov, Martin Georgiev
Abstract:
This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can produce residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored to urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper places emphasis on the algorithmic implementation of logical constraints and the intricacies of residential architecture, exploring the potential of generative design to create visually engaging and contextually harmonious structures. This exploration also contains an analysis of how these designs align with legal building parameters, showcasing the potential for creative solutions within the confines of urban building regulations. Concurrently, our methodology integrates functional, economic, and environmental factors. We investigate how generative design can be utilized to optimize building performance with respect to these factors, aiming to achieve a symbiotic relationship between the built environment and its natural surroundings. Through a blend of theoretical research and practical case studies, this research highlights the multifaceted capabilities of generative design and demonstrates practical applications of our framework. Our findings illustrate the rich possibilities that arise from an algorithmic design approach in the context of a vibrant urban landscape. This study contributes an alternative perspective on residential architecture, suggesting that the future of urban development lies in embracing the complex interplay between computational design innovation, regulatory adherence, and environmental responsibility.
Keywords: generative design, computational design, parametric design, algorithmic modeling
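One common pattern behind frameworks of this kind is a generate-then-filter loop: candidate massings are enumerated, non-compliant ones are discarded, and the rest are ranked by an objective. The sketch below illustrates that pattern only; all numeric limits (plot size, maximum height, floor-area ratio) are hypothetical and are not the regulations or the objective used by the authors.

```python
# Hypothetical generate-then-filter sketch for a residential massing study.
from itertools import product

PLOT_AREA = 600.0      # m^2, assumed plot
MAX_HEIGHT = 21.0      # m, assumed zoning limit
MAX_FAR = 2.0          # assumed floor-area ratio limit
FLOOR_HEIGHT = 3.0     # m, assumed storey height

def legal(width, depth, floors):
    """Check a candidate against the assumed legal envelope."""
    footprint = width * depth
    far = footprint * floors / PLOT_AREA
    height = floors * FLOOR_HEIGHT
    return footprint <= 0.6 * PLOT_AREA and far <= MAX_FAR and height <= MAX_HEIGHT

candidates = [
    (w, d, f)
    for w, d, f in product(range(10, 26, 2), range(8, 21, 2), range(2, 8))
    if legal(w, d, f)
]
# Rank by gross floor area as a stand-in for the real multi-criteria objective.
best = max(candidates, key=lambda c: c[0] * c[1] * c[2])
print(f"{len(candidates)} compliant options; best massing (w, d, floors) = {best}")
```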
Procedia PDF Downloads 65
394 Nanoparticles-Protein Hybrid-Based Magnetic Liposome
Authors: Amlan Kumar Das, Avinash Marwal, Vikram Pareek
Abstract:
Liposomes play an important role in medical and pharmaceutical science as, for example, nanoscale drug carriers. Liposomes are vesicles of varying size consisting of a spherical lipid bilayer and an aqueous inner compartment that are generated in vitro. Magnet-driven liposomes are used for the targeted delivery of drugs to organs and tissues [1]; these liposome preparations contain encapsulated drug components and finely dispersed magnetic particles. Liposomes are useful in terms of biocompatibility, biodegradability, and low toxicity, and their biodistribution can be controlled by changing their size, lipid composition, and physical characteristics [2]. Furthermore, liposomes can entrap both hydrophobic and hydrophilic drugs and are able to continuously release the entrapped substrate, making them useful drug carriers. Magnetic liposomes (MLs) are phospholipid vesicles that encapsulate magnetic or paramagnetic nanoparticles. They are applied as contrast agents for magnetic resonance imaging (MRI) [3]. The biological synthesis of nanoparticles using plant extracts plays an important role in the field of nanotechnology [4]. A green-synthesized magnetite nanoparticles-protein hybrid has been produced by treating iron(III)/iron(II) chloride with the leaf extract of Dhatura inoxia. The phytochemicals present in the leaf extract act as reducing as well as stabilizing agents, preventing agglomeration; they include flavonoids, phenolic compounds, cardiac glycosides, proteins and sugars. The magnetite nanoparticles-protein hybrid has been trapped inside the aqueous core of liposomes prepared by the reversed-phase evaporation (REV) method using oleic and linoleic acid, and these vesicles have been shown to be driven under a magnetic field, confirming the formation of magnetic liposomes (MLs). Chemical characterization of the stealth magnetic liposome has been performed by breaking the liposome and releasing the magnetic nanoparticles. The presence of iron has been confirmed by colour complex formation with KSCN and by a UV-Vis study using a Cary 60 spectrophotometer (Agilent). This magnet-driven liposome based on a nanoparticles-protein hybrid can serve as a smart vesicle for targeted drug delivery.
Keywords: nanoparticles-protein hybrid, magnetic liposome, medical, pharmaceutical science
Procedia PDF Downloads 248
393 Discovering Event Outliers for Drug as Commercial Products
Authors: Arunas Burinskas, Aurelija Burinskiene
Abstract:
On average, ten percent of drugs (commercial products) are not available in pharmacies due to shortage. A shortage event unbalances sales and requires a recovery period that is too long. A further critical issue is that pharmacies do not record potential sales transactions during shortage and recovery periods. The authors therefore suggest estimating outliers during shortage and recovery periods. To shorten the recovery period, the authors suggest using a prediction of average sales per sales day, which helps to protect the data from being skewed downwards or upwards. The authors use an outlier visualization method across different drugs and apply the Grubbs test for significance evaluation. The researched sample is 100 drugs in a one-month time frame. The authors detected that products with high demand variability had outliers. Among the analyzed drugs, which are commercial products, i) high demand variability drugs have a one-week shortage period, and the probability of facing a shortage is equal to 69.23%; ii) mid demand variability drugs have a three-day shortage period, and the likelihood of falling into deficit is equal to 34.62%. To avoid shortage events and minimize the recovery period, real data must be set up. Even though there are some outlier detection methods for drug data cleaning, they have not been used to minimize the recovery period once a shortage has occurred. The authors use the Grubbs' test, a real-life data-cleaning method, for outlier adjustment. In this paper, the outlier adjustment method is applied with a confidence level of 99%. In practice, the Grubbs' test has been used to detect outliers for cancer drugs and has reported positive results. The Grubbs' test is applied to detect outliers which exceed the boundaries of a normal distribution. The result is a probability that indicates the core data of actual sales. The outlier test represents the difference between the sample mean and the most extreme data point relative to the standard deviation, and it detects one outlier at a time, with different probabilities, from a data set with an assumed normal distribution. Based on approximation data, the authors constructed a framework for scaling potential sales and estimating outliers with the Grubbs' test method. The suggested framework is applicable during shortage events and recovery periods. The proposed framework has practical value and could be used to minimize the recovery period required after the occurrence of a shortage event.
Keywords: drugs, Grubbs' test, outlier, shortage event
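As an illustration of the test the abstract relies on, the sketch below runs a two-sided Grubbs' test at the 99% confidence level on an invented daily-sales series for one drug; the sales values are placeholders, not the study's data.

```python
# Sketch of a two-sided Grubbs' test (alpha = 0.01) on one drug's daily sales.
import numpy as np
from scipy import stats

def grubbs_outlier(values, alpha=0.01):
    """Return (index, value) of a single Grubbs outlier, or None if none is found."""
    x = np.asarray(values, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))
    g = abs(x[idx] - mean) / sd
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return (idx, x[idx]) if g > g_crit else None

daily_sales = [12, 14, 11, 13, 15, 12, 58, 13, 14, 12]   # hypothetical recovery spike
print(grubbs_outlier(daily_sales))   # -> (6, 58.0): adjust before estimating demand
```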
Procedia PDF Downloads 134
392 Research on Spatial Pattern and Spatial Structure of Human Settlement from the View of Spatial Anthropology – A Case Study of the Settlement in Sizhai Village, City of Zhuji, Zhejiang Province, China
Authors: Ni Zhenyu
Abstract:
A human settlement is defined as the social activities, social relationships and lifestyles generated within a certain territory; it is also a relatively independent territorial living space and domain composed of ordinary people. Along with the advancement of technology and the development of society, the idea presented in traditional research that human settlements are substantial organic wholes with strong autonomy is more often challenged nowadays. The spatial form of a human settlement is one of its most outstanding external expressions, with its own subjectivity and autonomy; nevertheless, the projections of social and economic activities onto certain territories are even more significant. What exactly is the relationship between human beings and the spatial form of the settlements they live in? A question worth thinking over has been raised: whether a new view, a spatial anthropological one, can be constructed to review and respond to the spatial form of human settlements based on the research theories and methods of cultural anthropology within the profession of architecture. This article interprets how the typical spatial form of human settlements in the basin area of Bac Giang Province is formed under the collective impact of the local social order, land-use conditions, topographic features, and social contracts. The settlement in Sizhai Village, City of Zhuji, Zhejiang Province is chosen as a particular case for the research. The spatial form of human settlements is interpreted as a modeled whole shaped jointly by the dominant economy, social patterns, key symbolic marks and core values, etc. The spatial form of human settlements, being a structured existence, is a materialized, behavioral, and social space; it can be considered a place where human beings realize their behaviors and a path on which the continuity of their behaviors is kept, and also, for social practice, a territory where the current social structure and social relationships are maintained, strengthened and rebuilt. This article aims to break the boundary of the understanding that the spatial form of human settlements is pure physical space and, furthermore, endeavors to highlight the autonomous status of human beings, focusing on their relationships with certain territories, their interpersonal relationships, man-earth relationships and the state of existence of human beings, elaborating the deeper connotation behind the spatial form of human settlements.
Keywords: spatial anthropology, human settlement, spatial pattern, spatial structure
Procedia PDF Downloads 411
391 In Silico Analysis of Deleterious nsSNPs (Missense) of Dihydrolipoamide Branched-Chain Transacylase E2 Gene Associated with Maple Syrup Urine Disease Type II
Authors: Zainab S. Ahmed, Mohammed S. Ali, Nadia A. Elshiekh, Sami Adam Ibrahim, Ghada M. El-Tayeb, Ahmed H. Elsadig, Rihab A. Omer, Sofia B. Mohamed
Abstract:
Maple syrup urine disease (MSUD) is an autosomal recessive disease that causes a deficiency in the enzyme branched-chain alpha-keto acid (BCKA) dehydrogenase. The development of the disease has been associated with SNPs in the DBT gene. Despite that, the computational analysis of SNPs in coding and noncoding regions and their functional impact at the protein level remains largely unexplored. Hence, in this study, we carried out a comprehensive in silico analysis of missense SNPs predicted to have a harmful influence on DBT structure and function. Eight different in silico prediction algorithms, SIFT, PROVEAN, MutPred, SNP&GO, PhD-SNP, PANTHER, I-Mutant 2.0 and MUpro, were used for screening nsSNPs in DBT. Additionally, to understand the effect of mutations on the strength of the interactions that hold the protein together, the ELASPIC server was used. Finally, the 3D structure of DBT was modeled using the Mutation3D and Chimera servers, respectively. Our results showed that a total of 15 nsSNPs confirmed by four tools (R301C, R376H, W84R, S268F, W84C, F276C, H452R, R178H, I355T, V191G, M444T, T174A, I200T, R113H, and R178C) were found damaging and can lead to a shift in DBT gene structure. Moreover, we found 7 nsSNPs located in the 2-oxoacid_dh catalytic domain, 5 nsSNPs in the E_3 binding domain and 3 nsSNPs in the biotin domain, so these nsSNPs may alter the putative structure of DBT's domains. Furthermore, we found that all these nsSNPs lie on core residues of the protein and have the ability to change its stability. Additionally, we found that W84R, S268F, and M444T are highly significant and affect leucine, isoleucine, and valine, which reduces or disrupts the function of the E2 subunit of the BCKD complex, which the DBT gene encodes. In conclusion, based on our extensive in silico analysis, we report 15 nsSNPs that have a possible association with protein deterioration and disease-causing ability. These candidate SNPs can aid future studies on maple syrup urine disease type II at the genetic level.
Keywords: DBT gene, ELASPIC, in silico analysis, UCSF Chimera
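The consensus step described above (retaining only variants called damaging by at least four tools) can be illustrated with a small tally; the per-variant tool calls below are invented placeholders, not the study's actual outputs, and A100V is a purely hypothetical variant used to show a rejected case.

```python
# Hypothetical sketch of the consensus filter: keep nsSNPs flagged by >= 4 tools.
MIN_TOOLS = 4

predictions = {   # variant -> tools that called it damaging (invented calls)
    "R301C": {"SIFT", "PROVEAN", "PhD-SNP", "PANTHER", "MutPred"},
    "W84R":  {"SIFT", "PROVEAN", "SNP&GO", "I-Mutant 2.0"},
    "A100V": {"SIFT", "PANTHER"},          # hypothetical variant, fails the cut-off
}

confirmed = {v: tools for v, tools in predictions.items() if len(tools) >= MIN_TOOLS}
for variant, tools in confirmed.items():
    print(f"{variant}: damaging in {len(tools)} tools -> kept for structural analysis")
```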
Procedia PDF Downloads 201390 Arterial Line Use for Acute Type 2 Respiratory Failure
Authors: C. Scurr, J. Jeans, S. Srivastava
Abstract:
Introduction: Acute type two respiratory failure (T2RF) has become a common presentation over the last two decades, primarily due to an increase in the prevalence of chronic lung disease. Acute exacerbations can be managed either medically or in combination with non-invasive ventilation (NIV), which should be monitored with regular arterial blood gas (ABG) samples. Arterial lines allow more frequent arterial blood sampling with less patient discomfort. We present the experience of a teaching hospital emergency department (ED) and level 2 medical high-dependency unit (HDU) that together form the pathway for management of acute type 2 respiratory failure. Methods: Patients acutely presenting to Charing Cross Hospital, London, with T2RF requiring NIV over 14 months (2011 to 2012) were identified from clinical coding. Retrospective data collection included demographics, co-morbidities, the number and timing of blood gases, whether arterial lines were used, and who inserted them. Analysis was undertaken using Microsoft Excel. Results: Coding identified 107 possible patients. 69 sets of notes were available, of which 41 patients required NIV for type 2 respiratory failure. 53.6% of patients had an arterial line inserted. Patients with arterial lines had an average of 22.4 ABGs in total, compared with 8.2 for those without. The two groups had a similar average time to normalization of pH (23.7 hours with an arterial line vs 25.6 hours without) and no statistically significant difference in mortality. Arterial lines were inserted by foundation year doctors, core trainees, and medical registrars, as well as the ICU registrar; 63% were performed by the medical registrar rather than ICU, ED, or a junior doctor. This is reflected in the average time to arterial line insertion of 462 minutes. The average number of ABGs taken before an arterial line was inserted was 2 (range 0–6). The average number of gases taken when no arterial line was ever used was 7.79 (range 2–34) – on average, four times as many arterial punctures per patient. Discussion: Arterial line use was associated with more frequent arterial blood sampling during each inpatient admission. Additionally, patients with an arterial line undergo fewer individual arterial punctures in total, which is likely more comfortable for the patient. Arterial lines are normally sited by medical registrars; however, this usually occurs after some delay. ED clinicians could improve patient comfort and monitoring, thus allowing faster titration of NIV, if arterial lines were regularly inserted in the ED. We recommend that ED doctors insert arterial lines when indicated in order to improve the patient experience and facilitate medical management.Keywords: non invasive ventilation, arterial blood gas, acute type, arterial line
Procedia PDF Downloads 428389 The Preceptorship Experience and Clinical Competence of Final Year Nursing Students
Authors: Susan Ka Yee Chow
Abstract:
Effective clinical preceptorship affects students’ competence and fosters their growth in applying theoretical knowledge and skills in clinical settings. Any difference between the expected and actual learning experience will reduce nursing students’ interest in clinical practice and have a negative consequence for their clinical performance. This cross-sectional study attempts to compare the differences between the preferred and actual preceptorship experiences of final year nursing students, and to examine the relationship between the actual preceptorship experience and the perceived clinical competence of students in a tertiary institution. Participants of the study were final year bachelor of nursing students at a self-financing tertiary institution in Hong Kong. The instrument used to measure the effectiveness of clinical preceptorship was developed by the participating institution. The scale consisted of five items rated on a 5-point Likert scale, covering goal development, critical thinking, learning objectives, asking questions, and providing feedback to students. The “Clinical Competence Questionnaire” by Liou & Cheng (2014) was used to examine students’ perceived clinical competence. The scale consisted of 47 items categorized into four domains, namely nursing professional behaviours; skill competence: general performance; skill competence: core nursing skills; and skill competence: advanced nursing skills. There were 193 questionnaires returned, with a response rate of 89%. The paired t-test was used to compare the differences between the preferred and actual preceptorship experiences of students. The results showed significant differences (p<0.001) for all five questions, with the mean preferred scores higher than the actual scores. The largest mean differences were observed for the items on goal acceptance and on giving feedback to students. The Pearson correlation coefficient was used to examine the relationship between actual experience and perceived competence. The results showed moderate correlations between nursing professional behaviours and both asking questions and providing feedback. Providing useful feedback to students showed moderate correlations with all domains of the Clinical Competence Questionnaire (r=0.269–0.345). It is concluded that nursing students do not have a positive perception of their clinical preceptorship; their perceptions differ significantly from their expectations. If students were given more opportunities to ask questions in a pedagogical atmosphere, their perceived clinical competence and learning outcomes could be improved.Keywords: clinical preceptor, clinical competence, clinical practicum, nursing students
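The following sketch illustrates the two analyses named in the abstract, a paired t-test between preferred and actual ratings and a Pearson correlation, using SciPy; the scores below are made up, since the study's raw data are not reproduced here.

```python
# Minimal sketch of the reported analyses on hypothetical data: a paired t-test
# between preferred and actual preceptorship ratings, and a Pearson correlation
# between the "feedback" item and a clinical-competence domain score.
import numpy as np
from scipy import stats

preferred = np.array([4.8, 4.6, 4.9, 4.7, 4.5])   # hypothetical item means
actual    = np.array([3.9, 3.7, 4.1, 3.6, 3.4])

t_stat, p_paired = stats.ttest_rel(preferred, actual)
print(f"paired t-test: t={t_stat:.2f}, p={p_paired:.4f}")

feedback_scores   = np.array([3.2, 4.1, 2.8, 3.9, 4.4, 3.5])
competence_domain = np.array([3.0, 3.8, 2.9, 3.6, 4.2, 3.3])
r, p_corr = stats.pearsonr(feedback_scores, competence_domain)
print(f"Pearson r={r:.3f}, p={p_corr:.4f}")
```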
Procedia PDF Downloads 127388 Logistics and Supply Chain Management Using Smart Contracts on Blockchain
Authors: Armen Grigoryan, Milena Arakelyan
Abstract:
The idea of smart logistics is still quite a complicated one. It can be used to market products to a large number of customers or to acquire raw materials of the highest quality at the lowest cost in geographically dispersed areas. The use of smart contracts in logistics and supply chain management has the potential to revolutionize the way that goods are tracked, transported, and managed. Smart contracts are computer programs written in one of the blockchain programming languages (Solidity, Rust, Vyper) that are capable of self-execution once predetermined conditions are met. They can be used to automate and streamline many of the traditional manual processes currently used in logistics and supply chain management, including the tracking and movement of goods, the management of inventory, and the facilitation of payments and settlements between different parties in the supply chain. Logistics is currently a core business area concerned with transporting products between parties. Still, the problem in this sector is that its scale may lead to delays and defaults in the delivery of goods, as well as other issues. Moreover, large distributors require a large number of workers to meet all the needs of their stores. All this may contribute to long delays in order processing and increases the likelihood of losing orders. In an attempt to solve this problem, companies have automated their procedures, contributing to a significant increase in the number of businesses and distributors in the logistics sector. Hence, blockchain technology and smart-contract-based legal agreements seem to be suitable concepts for redesigning and optimizing collaborative business processes and supply chains. The main purpose of this paper is to examine the scope of blockchain technology and smart contracts in the field of logistics and supply chain management. This study discusses the research question of how, and to what extent, smart contracts and blockchain technology can facilitate and improve the implementation of collaborative business structures for sustainable entrepreneurial activities in smart supply chains. The intention is to provide a comprehensive overview of the existing research on the use of smart contracts in logistics and supply chain management and to identify any gaps or limitations in the current knowledge on this topic. This review aims to summarize and evaluate the key findings and themes that emerge from the research, as well as to suggest potential directions for future research on the use of smart contracts in logistics and supply chain management.Keywords: smart contracts, smart logistics, smart supply chain management, blockchain and smart contracts in logistics, smart contracts for controlling supply chain management
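To make the notion of self-executing contract logic concrete, the sketch below models a simple delivery-triggered payment release in Python for readability; a production smart contract would be written in an on-chain language such as Solidity or Vyper, and all names and values here are illustrative assumptions rather than anything proposed in the paper.

```python
# Conceptual sketch of the escrow-style logic a supply-chain smart contract might
# encode: payment is released automatically once delivery is confirmed.
from dataclasses import dataclass, field

@dataclass
class ShipmentContract:
    buyer: str
    seller: str
    price: int                      # escrowed amount
    delivered: bool = False
    paid: bool = False
    events: list = field(default_factory=list)

    def confirm_delivery(self, oracle_signature_valid: bool) -> None:
        """Record delivery once a trusted source (e.g. an IoT scan) confirms it."""
        if oracle_signature_valid:
            self.delivered = True
            self.events.append("DeliveryConfirmed")
            self._settle()          # self-executes when the condition is met

    def _settle(self) -> None:
        if self.delivered and not self.paid:
            self.paid = True
            self.events.append(f"Released {self.price} to {self.seller}")

contract = ShipmentContract(buyer="RetailerA", seller="SupplierB", price=1000)
contract.confirm_delivery(oracle_signature_valid=True)
print(contract.events)   # ['DeliveryConfirmed', 'Released 1000 to SupplierB']
```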
Procedia PDF Downloads 95387 Understanding the Coping Experience of Mothers with Childhood Trauma Histories: A Qualitative Study
Authors: Chan Yan Nok
Abstract:
The present study is a qualitative study of the coping experiences of six Hong Kong Chinese mothers with childhood trauma histories, told from their first-person perspective. Expanding the perspective beyond the dominant discourse of the “intergenerational transmission of trauma”, this study explores the experiences and meanings of childhood trauma embedded in their narratives through thematic analysis and narrative analysis. The interviewees painted a nuanced picture of their process of coping and trauma resolution: first, acknowledgement; second, feeling safe and starting to tell the story of the trauma; third, feeling the feelings and expressing emotions; fourth, clarifying and coping with the impacts of the trauma; fifth, integration and transformation; and sixth, using their new understanding of the experience to live a better life. There was no “end” to the process of trauma resolution; instead, it is an ongoing process with a positive healing trajectory. Analysis of the mothers’ stories revealed recurrent themes of continuous self-reflective awareness in the process of coping with trauma. Rather than being necessarily negative and detrimental, childhood trauma could highlight the meanings of being a mother and reveal opportunities for continuous personal growth and self-enhancement. Utilizing the sense of inadequacy as a core driver of the trauma recovery process, while developing a heightened awareness of the unfinished business embedded in their “automatic patterns” of behaviors, emotions, and thoughts, can help these mothers become more flexible in formulating new ways of facing future predicaments. Future social work and parent education practices should help mothers deal with unresolved trauma, make sense of the impacts of childhood trauma, and discover the growth embedded in past traumatic experiences. Mothers should be facilitated in “acknowledging the reality of the trauma”, including understanding the complicated emotions arising from the traumatic experiences and voicing their struggles. In addition, helping these mothers become aware of short-term and long-term trauma impacts (i.e., secondary responses to the trauma) and explore effective coping strategies for “overcoming secondary responses to the trauma” is crucial for their future positive adjustment and transformation. Through affirming their coping abilities and the lessons learnt from past experiences, mothers can reduce feelings of shame and powerlessness and enhance their parental capacity.Keywords: childhood trauma, coping, mothers, self-awareness, self-reflection, trauma resolution
Procedia PDF Downloads 165386 Control Performance Simulation and Analysis for Microgravity Vibration Isolation System Onboard Chinese Space Station
Authors: Wei Liu, Shuquan Wang, Yang Gao
Abstract:
The Microgravity Science Experiment Rack (MSER) will be onboard the TianHe (TH) spacecraft, planned to be launched in 2018. TH is one module of the Chinese Space Station. The Microgravity Vibration Isolation System (MVIS), which is MSER’s core part, is used to isolate disturbance from TH and provide a high level of microgravity for the science experiment payload. MVIS is a two-stage vibration isolation system consisting of a Follow Unit (FU) and an Experiment Support Unit (ESU). The FU is linked to MSER by umbilical cables, and the ESU is suspended within the FU without physical connection. The FU’s position and attitude relative to TH are measured by a binocular vision measuring system, and its acceleration and angular velocity are measured by accelerometers and gyroscopes. Air-jet thrusters are used to generate the force and moment that control the FU’s motion. The measurement module on the ESU contains a set of Position-Sensitive Detectors (PSDs) sensing the ESU’s position and attitude relative to the FU, and accelerometers and gyroscopes sensing the ESU’s acceleration and angular velocity. Electro-magnetic actuators are used to control the ESU’s motion. Firstly, the linearized equations of the FU’s motion relative to TH and the ESU’s motion relative to the FU are derived, laying the foundation for control system design and simulation analysis. Subsequently, two control schemes are proposed: in one, the ESU tracks the FU and the FU tracks TH, shortened as E-F-T; in the other, the FU tracks the ESU and the ESU tracks TH, shortened as F-E-T. In addition, the motion spaces are constrained to within ±15 mm and ±2° between the FU and ESU, and within ±300 mm between the FU and TH or between the ESU and TH. A Proportional-Integral-Derivative (PID) controller is designed to control the FU’s position and attitude. The ESU’s controller includes an acceleration feedback loop and a relative position feedback loop: a Proportional-Integral (PI) controller in the acceleration feedback loop reduces the ESU’s acceleration level, and a PID controller in the relative position feedback loop is used to avoid collision. Finally, simulations of E-F-T and F-E-T are performed considering various uncertainties, disturbances, and motion space constraints. The simulation results of E-F-T showed a control performance of 0 to -20 dB for vibration frequencies from 0.01 to 0.1 Hz, with vibration attenuated by 40 dB per decade above 0.1 Hz. The simulation results of F-E-T showed that vibration was attenuated by 20 dB per decade from 0.01 Hz.Keywords: microgravity science experiment rack, microgravity vibration isolation system, PID control, vibration isolation performance
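The relative-position loops described above rely on PID feedback; the minimal Python sketch below shows a generic discrete PID controller driving a toy one-degree-of-freedom offset back toward zero. The gains, time step, and plant are arbitrary placeholders, not MVIS parameters.

```python
# Generic discrete PID loop, sketched for illustration only.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy 1-DOF example: drive a relative displacement (mm) back toward zero.
pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.01)
x, v = 5.0, 0.0                      # initial offset of 5 mm, at rest
for _ in range(1000):
    force = pid.update(setpoint=0.0, measurement=x)
    v += force * 0.01                # unit mass, dt = 0.01 s
    x += v * 0.01
print(f"final offset: {x:.3f} mm")
```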
Procedia PDF Downloads 160385 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
This work simulates the voltage drop and resistance during the explosion of copper wires of 25, 40, and 100 µm diameter, surrounded by nitrogen at 1 bar and exposed to a 150 A current, before plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished once the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the Net Emission Coefficient (NEC) and combined with heat conductivity through the PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and the NEC radiation. In the first stage, the initial voltage drop over the copper wire and the current and temperature distributions at the time of expansion are derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can therefore be treated with 1D simulations. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core; the current is carried by the vaporized wire material before it is dispersed into the nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated. The electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen and copper vapor. The simulations show that both radiation and heat conductivity should be considered for an adequate description of the wire resistance, and that gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the exploding shocks, which may have ionized the nitrogen.Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves
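The breakdown estimate described above rests on integrating Townsend growth coefficients along electric field lines and comparing the result with a streamer criterion. The sketch below illustrates that check with a placeholder field profile and a simple fitted ionization coefficient rather than BOLSIG⁺ data; all numbers are illustrative assumptions.

```python
# Illustrative streamer-criterion check: integrate an effective Townsend
# coefficient alpha_eff along a field line and compare with a critical value.
import numpy as np

K_CRIT = 18.0                        # commonly used streamer constant (ln of critical avalanche size)

def alpha_eff(E_per_N):
    """Hypothetical effective ionization coefficient [1/m] vs reduced field [Td]."""
    return 1e5 * np.exp(-200.0 / np.maximum(E_per_N, 1e-6))

def streamer_integral(E_per_N_profile, ds):
    """Trapezoidal integral of alpha_eff along a field line sampled at spacing ds [m]."""
    a = alpha_eff(E_per_N_profile)
    return float(np.sum(0.5 * (a[1:] + a[:-1])) * ds)

# Example: a 2 mm field line with the reduced field decaying away from the wire surface.
s = np.linspace(0.0, 2e-3, 200)
E_per_N = 600.0 * np.exp(-s / 5e-4)           # Td, placeholder profile
k = streamer_integral(E_per_N, ds=s[1] - s[0])
print(f"integral = {k:.1f} -> " + ("streamer expected" if k >= K_CRIT else "no streamer"))
```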
Procedia PDF Downloads 88384 Impact of Electric Vehicles on Energy Consumption and Environment
Authors: Amela Ajanovic, Reinhard Haas
Abstract:
Electric vehicles (EVs) are considered an important means of coping with current environmental problems in transport. However, their high capital costs and limited driving ranges constitute major barriers to broader market penetration. The core objective of this paper is to investigate the future market prospects of various types of EVs from an economic and ecological point of view. Our approach is based on calculating the total cost of ownership of EVs in comparison to conventional cars, and on a life-cycle approach to assess environmental benignity. The most crucial parameters in this context are the kilometres driven per year, the depreciation time of the car, and the interest rate. The analysis of future prospects is based on technological learning regarding the investment costs of batteries. The major results are as follows. The main disadvantages of battery electric vehicles (BEVs) are high capital costs, mainly due to the battery, and a low driving range in comparison to conventional vehicles. These problems could be reduced with plug-in hybrids (PHEVs) and range extenders (REXs). These technologies have lower CO₂ emissions over the whole energy supply chain than conventional vehicles, but unlike BEVs they are not zero-emission vehicles at the point of use. The number of kilometres driven has a higher impact on total mobility costs than the learning rate. Hence, the use of EVs as taxis and in car-sharing leads to the best economic performance. The most popular EVs are currently full hybrid EVs; they have only slightly higher costs and similar operating ranges to conventional vehicles. But since they depend on fossil fuels, they can only be seen as an energy efficiency measure. However, they can serve as a bridging technology as long as BEVs and fuel cell vehicles do not gain high popularity, and together with PHEVs and REXs they contribute to faster technological learning and reductions in battery costs. Regarding the promotion of EVs, the best results could be reached with a combination of monetary and non-monetary incentives, as in Norway, for example. The major conclusion is that to harvest the full environmental benefits of EVs, a very important aspect is the introduction of CO₂-based fuel taxes. This should ensure that the electricity for EVs is generated from renewable energy sources; otherwise, total CO₂ emissions are likely to be higher than those of conventional cars.Keywords: costs, mobility, policy, sustainability,
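The total-cost-of-ownership comparison hinges on annualizing the purchase price over the depreciation time at the given interest rate and spreading it across the kilometres driven per year. The sketch below shows that calculation with illustrative input values, not the paper's data.

```python
# Hedged sketch of a total-cost-of-ownership comparison. Prices, interest rate,
# depreciation time, and per-km energy costs are illustrative placeholders.

def annualised_capital_cost(price: float, interest: float, years: int) -> float:
    """Capital recovery factor applied to the purchase price."""
    crf = interest * (1 + interest) ** years / ((1 + interest) ** years - 1)
    return price * crf

def total_cost_per_km(price, interest, years, km_per_year,
                      energy_cost_per_km, other_opex_per_year):
    capital = annualised_capital_cost(price, interest, years)
    return (capital + other_opex_per_year) / km_per_year + energy_cost_per_km

bev  = total_cost_per_km(price=40000, interest=0.05, years=10,
                         km_per_year=15000, energy_cost_per_km=0.04,
                         other_opex_per_year=600)
icev = total_cost_per_km(price=28000, interest=0.05, years=10,
                         km_per_year=15000, energy_cost_per_km=0.10,
                         other_opex_per_year=900)
print(f"BEV: {bev:.3f} EUR/km   ICEV: {icev:.3f} EUR/km")
```

Note how a higher annual mileage dilutes the capital-cost penalty of the BEV, which is why the abstract highlights taxi and car-sharing use as the most economic applications.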
Procedia PDF Downloads 225383 A Unified Approach for Digital Forensics Analysis
Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles
Abstract:
Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the digital footprints that result, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT) devices, databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tooling. Increasingly, it also becomes essential to compare and correlate evidence across data sources, and to do so efficiently and effectively, enabling an investigator to answer high-level questions of the data in a timely manner without having to trawl through it and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to work through cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used. For example, in a child abduction, an investigation team might have evidence from a range of sources, including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool
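The cross-source correlation U-FAT envisages can be pictured as normalizing records from heterogeneous sources into a common event schema and merging them into a single timeline. The Python sketch below illustrates this idea only; the schema, field names, and records are hypothetical and not part of the proposed tool.

```python
# Illustrative cross-source correlation: map source-specific records onto a
# minimal common schema and merge them into one time-ordered timeline.
from datetime import datetime, timezone

def normalise(source: str, raw: dict) -> dict:
    """Map a source-specific record onto a minimal common event schema."""
    return {
        "source": source,
        "timestamp": datetime.fromisoformat(raw["time"]).astimezone(timezone.utc),
        "entity": raw.get("imei") or raw.get("plate") or raw.get("account"),
        "detail": raw.get("event", ""),
    }

events = [
    normalise("mobile", {"time": "2023-05-01T14:02:11+00:00",
                         "imei": "356938035643809", "event": "call placed"}),
    normalise("cctv", {"time": "2023-05-01T14:05:40+00:00",
                       "plate": "AB12 CDE", "event": "vehicle at junction 4"}),
    normalise("cell_tower", {"time": "2023-05-01T14:06:02+00:00",
                             "imei": "356938035643809", "event": "attach to cell 0x5A2"}),
]

timeline = sorted(events, key=lambda e: e["timestamp"])
for e in timeline:
    print(e["timestamp"].isoformat(), e["source"], e["entity"], e["detail"])
```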
Procedia PDF Downloads 196