Search results for: leading digit rule
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3743

3263 The Duty of Sea Carrier to Transship the Cargo in Case of Vessel Breakdown

Authors: Mojtaba Eshraghi Arani

Abstract:

Having concluded the contract for carriage of cargo with the shipper (through a bill of lading or charterparty), the carrier must transport the cargo from the loading port to the port of discharge and deliver it to the consignee. Unless otherwise agreed in the contract, the carrier must avoid any deviation, transfer of cargo to another vessel, or unreasonable stoppage of carriage in transit. However, the vessel might break down in transit for any reason and become unable to continue its voyage to the port of discharge. This is a frequent incident in the carriage of goods by sea and leads to significant disputes between the carrier/owner and the shipper/charterer (hereinafter called the “cargo interests”). It is a generally accepted rule that in such an event, the carrier/owner must repair the vessel, after which it will continue its voyage to the destination port. A dispute will arise where repair of the vessel cannot be completed within a short or reasonable time. There are two options for the contract parties in such a case: first, the carrier/owner is entitled to repair the vessel with the cargo onboard or discharged in the port of refuge, and the cargo interests must wait until the breakdown is rectified, however long that takes; second, the carrier/owner is responsible for chartering another vessel and transferring the entirety of the cargo to the substitute vessel. In fact, the main question revolves around the duty of the carrier/owner to transfer the cargo to another vessel. Such an operation, called “trans-shipment” or “transhipment” (in the oil industry it is usually called “ship-to-ship” or “STS”), needs to be done carefully and with due diligence. The transshipment operation differs for each cargo, as each requires its own suitable equipment for transfer to another vessel, so the operation is often costly. Moreover, there is a considerable risk of collision between the two vessels, particularly for bulk carriers. Bulk cargo is also exposed to shortage and partial loss in the process of transshipment, especially during bad weather. For tankers carrying oil and petrochemical products, transshipment is most probably followed by sea pollution. On the grounds of the above consequences, owners are afraid of being held responsible for such an operation and are reluctant to perform it. The main argument they raise in the relevant disputes is that no regulation has placed such a duty on their shoulders, so any such operation must be done under the auspices of the cargo interests, and all costs must be reimbursed by them. Unfortunately, not only the international conventions, including the Hague Rules, Hague-Visby Rules, Hamburg Rules, and Rotterdam Rules, but also most domestic laws are silent in this regard. The doctrine has yet to analyse the issue, and no legal research was found on the point. A qualitative method, interpreting the collected data, is used in this paper; the source of the data is the analysis of regulations and cases. It is argued in this article that the paramount rule in maritime law is “the accomplishment of the voyage” by the carrier/owner, in view of which, if the voyage can only be finished by transshipment, the carrier/owner will be responsible for carrying out this operation. The duty of the carrier/owner to exercise “due diligence” strengthens this reasoning. Any and all costs and expenses will also be on the account of the owner/carrier, unless the incident is attributable to a cause arising from the cargo interests’ negligence.

Keywords: cargo, STS, transshipment, vessel, voyage

Procedia PDF Downloads 105
3262 Outcome of Comparison between Partial Thickness Skin Graft Harvesting from Scalp and Lower Limb for Scalp Defect: A Clinical Trial Study

Authors: Mahdi Eskandarlou, Mehrdad Taghipour

Abstract:

Background: Partial-thickness skin graft is the cornerstone of scalp defect repair. Routine donor sites include the abdomen, thighs, and buttocks. Given the potential side effects of harvesting from these sites and the potential advantages of harvesting from the scalp (broad surface, rapid healing, and better cosmetic results), this study compares the outcomes of graft harvesting from the scalp and the lower limb. Methods: This clinical trial was conducted among 40 partial-thickness graft candidates (20 in the case group and 20 in the control group) with scalp defects presenting to the plastic surgery clinic at Besat Hospital between 2018 and 2019. Sampling was done by simple randomization using a random digit table, and data were gathered using a designated checklist. The donor site was the scalp in the case group and the lower limb in the control group. The resultant data were analyzed using the chi-squared test and t-test in SPSS version 21 (SPSS Statistics for Windows, Version 21.0, Armonk, NY: IBM Corp). Results: Of the 40 patients participating in this study, 28 (70%) were male and 12 (30%) were female, with a mean age of 63.62 ± 09.73 years. Hypertension and diabetes mellitus were the most common comorbidities, with basal cell carcinoma (BCC) and trauma being the most common etiologies for the defects. There was a statistically significant difference between the two groups regarding the etiology of the defect (P=0.02). The most common anatomic location of the defect was temporal in the case group and parietal in the control group. Most of the defects were deep to the galea. The mean diameter of the defect was 24.28 ± 45.37 mm across all patients. The difference in defect diameter between the two groups was statistically significant, while no such difference was seen in graft diameter. Graft 'take' was completely successful in both groups according to evaluations. The level of postoperative pain was lower in the case group than in the control group on the VAS scale, and satisfaction was higher per the Likert scale. Conclusion: The scalp can safely be used as a donor site for skin grafts for scalp defects, with better results and lower complication rates compared to other donor sites.

Keywords: donor site, leg, partial-thickness graft, scalp

Procedia PDF Downloads 137
3261 Cost-Effective, Accuracy Preserving Scalar Characterization for mmWave Transceivers

Authors: Mohammad Salah Abdullatif, Salam Hajjar, Paul Khanna

Abstract:

The development of instrument-grade mmWave transceivers comes with many challenges. A general rule of thumb is that the performance of the instrument must exceed that of the unit under test in terms of accuracy and stability. Calibration and characterization of mmWave transceivers are important pillars of testing commercial products. Using a Vector Network Analyzer (VNA) with a mixer option has proven to be a high-performance approach to calibrating mmWave transceivers; however, this approach comes at a high cost. In this work, a reduced-cost method to calibrate mmWave transceivers is proposed. A comparison between the proposed method and the VNA technology is provided, significant challenges are discussed, and an approach to meet the requirements is proposed.

Keywords: mmWave transceiver, scalar characterization, coupler connection, magic tee connection, calibration, VNA, vector network analyzer

Procedia PDF Downloads 97
3260 The Incidence of Concussion across Popular American Youth Sports: A Retrospective Review

Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin H. McCleery

Abstract:

Introduction: A leading cause of emergency room visits among youth in the United States is sports-related traumatic brain injury. Mild traumatic brain injuries (mTBIs), also called concussions, are caused by linear and/or angular acceleration experienced at the head and represent an increasing societal burden. Because the brain is still developing in youth, there is a great risk of long-term neuropsychological deficits following a concussion. Accordingly, the purpose of this paper is to investigate incidence rates of concussion across gender for the five most common youth sports in the United States: basketball, track and field, soccer, baseball (boys) or softball (girls), and football (boys) or volleyball (girls). Methods: A PubMed search was performed combining four search themes. The first theme identified the outcomes (concussion, brain injuries, mild traumatic brain injury, etc.); the second identified the sport (American football, soccer, basketball, softball, volleyball, track and field, etc.); the third identified the population (adolescents, children, youth, boys, girls); and the last identified the study design (prevalence, frequency, incidence, prospective). Ultimately, 473 studies were surveyed, with 15 fulfilling the criteria: a prospective study presenting original data on the incidence of concussion in the relevant youth sport. The following data were extracted from the selected studies: population age, total study population, total athletic exposures (AE), and incidence rate per 1000 athletic exposures (IR/1000). Two one-way ANOVAs and a Tukey's post hoc test were conducted using SPSS. Results: Across the 15 selected studies, the incidence of concussion per 1000 AEs in the considered sports ranged from 0.014 (girls' track and field) to 0.780 (boys' football). Average IR/1000 across all sports was 0.483 for boys and 0.268 for girls; this difference was statistically significant (p=0.013). Tukey's post hoc test showed that football had a significantly higher IR/1000 than boys' basketball (p=0.022), soccer (p=0.033), and track and field (p=0.026). No statistical difference was found in concussion incidence between girls' sports. Removing football lowered the IR/1000 for boys to a level not statistically different from girls (p=0.101). Discussion: Football was the only sport showing a statistically significant difference in concussion incidence rate relative to other sports within gender. Males were overall 1.8 times more likely to be concussed than females when football was included, whereas concussion was more likely for females when football was excluded. While the significantly higher rate of concussion in football is not surprising given the nature and rules of the sport, it is concerning that research has shown a higher incidence of concussion in practices than in games. Interestingly, the findings indicate that girls' sports are more concussive overall once football is removed, which appears to counter the common notion that boys' sports are more physically taxing and dangerous. Future research should focus on understanding the concussive mechanisms of injury in each sport to enable effective rule changes.
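The IR/1000 metric extracted from each study is straightforward to compute; a minimal sketch follows, where the numbers in the example are illustrative placeholders, not values from any of the 15 reviewed studies:

```python
def incidence_rate_per_1000(concussions, athletic_exposures):
    """Incidence rate per 1000 athletic exposures (IR/1000),
    the metric extracted from each selected study."""
    return 1000.0 * concussions / athletic_exposures

# Illustrative figures only -- not values from the reviewed studies:
print(incidence_rate_per_1000(39, 50000))  # 0.78 concussions per 1000 AEs
```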

Keywords: gender, football, soccer, traumatic brain injury

Procedia PDF Downloads 133
3259 The End Justifies the Means: Using Programmed Mastery Drill to Teach Spoken English to Spanish Youngsters, without Relying on Homework

Authors: Robert Pocklington

Abstract:

Most current language courses expect students to be ‘vocational’, sacrificing their free time in order to learn. However, pupils with a full-time job, or bringing up children, hardly have a spare moment. Others just need the language as a tool or a qualification, as if it were book-keeping or a driving license. Then there are children in unstructured families whose stressful life makes private study almost impossible. And the countless parents whose evenings and weekends have become a nightmare, trying to get the children to do their homework. There are many arguments against homework being a necessity (rather than an optional extra for more ambitious or dedicated students), making a clear case for teaching methods which facilitate full learning of the key content within the classroom. A methodology which could be described as Programmed Mastery Learning has been used at Fluency Language Academy (Spain) since 1992, to teach English to over 4000 pupils yearly, with a staff of around 100 teachers, barely requiring homework. The course is structured according to the tenets of Programmed Learning: small manageable teaching steps, immediate feedback, and constant successful activity. For the Mastery component (not stopping until everyone has learned), the memorisation and practice are entrusted to flashcard-based drilling in the classroom, leading all students to progress together and develop a permanently growing knowledge base. Vocabulary and expressions are memorised using flashcards as stimuli, obliging the brain to constantly recover words from the long-term memory and converting them into reflex knowledge, before they are deployed in sentence building. The use of grammar rules is practised with ‘cue’ flashcards: the brain refers consciously to the grammar rule each time it produces a phrase until it comes easily. This automation of lexicon and correct grammar use greatly facilitates all other language and conversational activities. 
The full B2 course consists of 48 units, each of which takes a class an average of 17.5 hours to complete, allowing the vast majority of students to reach B2 level in 840 class hours; this is corroborated by an 85% pass rate in the Cambridge University B2 exam (First Certificate). In the past, studying for qualifications was just one of many options open to young people. Nowadays, youngsters need to stay at school and obtain qualifications in order to get any kind of job. Many students in our classes have little intrinsic interest in what they are studying; they just need the certificate. In these circumstances, and with increasing government pressure to minimise failure, teachers can no longer think 'If they don't study, and fail, it's their problem'. It is now becoming the teacher's problem. Teachers are ever more in need of methods which make their pupils successful learners; this means assuring learning in the classroom. Furthermore, homework is arguably the main divider between successful middle-class schoolchildren and failing working-class children who drop out: if everything important is learned at school, the latter will have a much better chance, favouring inclusiveness in the language classroom.

Keywords: flashcard drilling, fluency method, mastery learning, programmed learning, teaching English as a foreign language

Procedia PDF Downloads 98
3258 The Doctrine of Military Necessity under Customary International Law: A Breach of International Humanitarian Law

Authors: Uche A. Nnawulezi

Abstract:

This paper examines military necessity, an essential and complex standard of international humanitarian law. Military necessity is an unpredictable phenomenon; its unpredictability likewise originates from the fact that it is one of the most fundamental, yet most misjudged and distorted, standards of the international law of armed conflict. The rule has been censured as essentially wrong in light of its non-compliance with the principles of international humanitarian law in the recent past. The author notes in this study that military necessity runs counter to humanitarian exigencies. This has generated debate among researchers, leading them to propose that, for international law to carry greater weight, it is indispensable that the procedures and substance of custom be illuminated and made accessible to all those who may utilize it or be influenced by it. A significant number of analysts have, however, attributed particular weaknesses to this doctrine. This study relied on both primary and secondary sources of data. The recommendations made in this paper, if fully adopted, will go a long way toward guaranteeing a better application of the principles of international humanitarian law.

Keywords: military necessity, international law, international humanitarian law, customary law

Procedia PDF Downloads 201
3257 A Parallel Poromechanics Finite Element Method (FEM) Model for Reservoir Analyses

Authors: Henrique C. C. Andrade, Ana Beatriz C. G. Silva, Fernando Luiz B. Ribeiro, Samir Maghous, Jose Claudio F. Telles, Eduardo M. R. Fairbairn

Abstract:

The present paper aims at developing a parallel computational model for numerical simulation of poromechanics analyses of heterogeneous reservoirs. In the context of macroscopic poroelastoplasticity, the hydromechanical coupling between the skeleton deformation and the fluid pressure is addressed by means of two constitutive equations. The first state equation relates the stress to skeleton strain and pore pressure, while the second state equation relates the Lagrangian porosity change to skeleton volume strain and pore pressure. A specific algorithm for local plastic integration using a tangent operator is devised. A modified Cam-clay type yield surface with associated plastic flow rule is adopted to account for both contractive and dilative behavior.
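The two state equations described above correspond, in classical Biot-type poroelasticity, to the following standard form (a generic sketch given here for orientation; the paper's exact notation and sign conventions may differ):

```latex
% First state equation: stress from skeleton strain and pore pressure
\boldsymbol{\sigma} = \mathbb{C} : \boldsymbol{\varepsilon} - b\, p\, \mathbf{1}
% Second state equation: Lagrangian porosity change from volume strain and pore pressure
\phi - \phi_0 = b\, \varepsilon_v + \frac{p}{N}
```

where \(\mathbb{C}\) is the drained stiffness tensor, \(b\) the Biot coefficient, \(N\) the Biot modulus, \(p\) the pore pressure, and \(\varepsilon_v\) the skeleton volume strain.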

Keywords: finite element method, poromechanics, poroplasticity, reservoir analysis

Procedia PDF Downloads 376
3256 Adversarial Attacks and Defenses on Deep Neural Networks

Authors: Jonathan Sohn

Abstract:

Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these aim to alter the results of a DNN by modifying its inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate input images into different categories; an adversarial attack slightly alters an image to move it over a decision boundary, causing the DNN to misclassify it. The FGSM attack obtains the gradient of the loss with respect to the image and updates the image once, based on the sign of the gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also the targeted attack, which is designed to make the model classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training: instead of training the neural network only with clean examples, we explicitly let it learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we use FGSM training as a defense, the classification accuracy improves greatly, from 39.50% to 92.31% under FGSM attacks and from 34.01% to 75.63% under PGD attacks. To further improve the classification accuracy under adversarial attacks, we can use the stronger PGD training method, which improves accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks; PGD attacks and defenses are overall significantly more effective than their FGSM counterparts.
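The two attacks can be sketched in a few lines. The following is a minimal, self-contained illustration on a single-output logistic model with a hand-derived gradient, standing in for a DNN; it is not the MNIST setup used in the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, eps):
    """One-step FGSM: move x by eps in the direction of the sign of the
    loss gradient, to push the model's prediction away from label y."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    # For logistic cross-entropy, d(loss)/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

def pgd(x, y, w, eps, alpha, steps):
    """PGD: repeated small FGSM steps of size alpha, each followed by a
    projection back into the L-infinity eps-ball around the original x."""
    x_adv = list(x)
    for _ in range(steps):
        x_adv = fgsm(x_adv, y, w, alpha)
        x_adv = [min(max(xa, xi - eps), xi + eps)
                 for xa, xi in zip(x_adv, x)]
    return x_adv
```

Adversarial (FGSM or PGD) training would then generate such perturbed examples on the fly and include them in each training batch alongside, or in place of, the clean examples.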

Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning

Procedia PDF Downloads 181
3255 Enteropathogenic Viruses Associated with Acute Gastroenteritis among Under 5-Years Children in Africa: A Systematic Review and Meta-Analysis

Authors: Cornelius Arome Omatola, Ropo Ebenezer Ogunsakin, Anyebe Bernard Onoja, Martin-Luther Oseni Okolo, Joseph Abraham-Oyiguh, Kehinde Charles Mofolorunso, Phoebe Queen Akoh, Omebije Patience Adejo, Joshua Idakwo, Therisa Ojomideju Okeme, Danjuma Muhammed, David Moses Adaji, Sunday Ocholi Samson, Ruth Aminu, Monday Eneojo Akor

Abstract:

Gastroenteritis viruses are the leading etiologic agents of diarrhea in children worldwide. We present data from thirty-three (33) eligible studies published between 2003 and 2023 from the African countries bearing the brunt of virus-associated diarrheal mortality. Random-effects meta-analysis with proportion, subgroup, and meta-regression analyses was employed. Overall, rotavirus, with an estimated pooled prevalence of 31.0% (95% CI 24.0–39.0), predominated in all primary care visits and hospitalizations, followed by norovirus, adenovirus, sapovirus, astrovirus, and aichivirus, with pooled prevalences estimated at 15.0% (95% CI 12.0–20.0), 10% (95% CI 6–15), 4.0% (95% CI 2.0–6.0), 4% (95% CI 3–6), and 2.3% (95% CI 1–3), respectively. The predominant rotavirus genotype was G1P[8] (38%), followed by G3P[8] (11.7%), G9P[8] (8.7%), and G2P[4] (7.1%), although unusual genotypes were also observed, including G3P[6] (2.7%), G8P[6] (1.7%), G1P[6] (1.5%), G10P[8] (0.9%), G8P[4] (0.5%), and G4P[8] (0.4%). Genogroup II norovirus predominated over genogroup I-associated infections (84.6%, 613/725 vs 14.9%, 108/725), with GII.4 (79.3%) being the most prevalent circulating genotype. In conclusion, this review shows that rotavirus remains the leading driver of viral diarrhea requiring health care visits and hospitalization among children under five years of age in Africa. Improved rotavirus vaccination in the region, and surveillance to determine the residual burden of rotavirus and the evolving trends of other enteric viruses, are needed for effective control and management of cases.

Keywords: enteric viruses, rotavirus, norovirus, adenovirus, astrovirus, gastroenteritis

Procedia PDF Downloads 72
3254 Redefining Problems and Challenges of Natural Resource Management in Indonesia

Authors: Amalia Zuhra

Abstract:

Indonesia is very rich in natural resources, and their management is a challenge for the country. Improper management will exhaust the natural resources, and future generations will not be able to enjoy this natural wealth. Good rule of law and proper implementation determine the success of a country's natural resource management. This paper examines the need to redefine the problems and challenges in the management of natural resources in Indonesia in the context of law. The purpose of this article is to survey the latest issues and challenges in natural resource management and to redefine the legal provisions related to environmental management and human rights protection, so that the management of natural resources now and in the future will be more sustainable. The paper finds that sustainable management of natural resources is absolutely essential, and that environmental protection and human rights must be elaborated more deeply so that natural resources can be managed to the fullest without harming either people or the environment.

Keywords: international environmental law, human rights law, natural resource management, sustainable development

Procedia PDF Downloads 257
3253 Dissolved Gas Analysis Based Regression Rules from Trained ANN for Transformer Fault Diagnosis

Authors: Deepika Bhalla, Raj Kumar Bansal, Hari Om Gupta

Abstract:

Dissolved Gas Analysis (DGA) has been widely used for fault diagnosis in a transformer. Artificial neural networks (ANN) have high accuracy but are regarded as black boxes that are difficult to interpret. For many problems it is desired to extract knowledge from trained neural networks (NN) so that the user can gain a better understanding of the solution arrived by the NN. This paper applies a pedagogical approach for rule extraction from function approximating neural networks (REFANN) with application to incipient fault diagnosis using the concentrations of the dissolved gases within the transformer oil, as the input to the NN. The input space is split into subregions and for each subregion there is a linear equation that is used to predict the type of fault developing within a transformer. The experiments on real data indicate that the approach used can extract simple and useful rules and give fault predictions that match the actual fault and are at times also better than those predicted by the IEC method.
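The form of the extracted rules can be illustrated as follows; the subregion bounds, weights, and fault labels below are hypothetical placeholders for illustration only, not values produced by the trained network in the paper:

```python
def predict_fault(h2, c2h2, rules):
    """Apply REFANN-style regression rules: the input space is split into
    subregions (here, intervals of the H2 concentration in ppm), and each
    subregion carries a linear equation over the gas concentrations whose
    output decides the diagnosis."""
    for (lo, hi), (w_h2, w_c2h2), bias, fault in rules:
        if lo <= h2 < hi:
            score = w_h2 * h2 + w_c2h2 * c2h2 + bias
            return fault if score > 0 else "normal"
    return "out of range"

# Hypothetical rule set -- bounds, weights, and labels are illustrative:
RULES = [
    ((0, 100), (0.01, 0.10), -0.5, "partial discharge"),
    ((100, 10000), (0.002, 0.30), -1.0, "arcing"),
]
```

In practice, each rule's interval and coefficients would come from the pedagogical extraction step, and the rule set would span all gases used as inputs to the network rather than two.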

Keywords: artificial neural networks, dissolved gas analysis, rules extraction, transformer

Procedia PDF Downloads 523
3252 Development Process and Design Methods for Shared Spaces in Europe

Authors: Kazuyasu Yoshino, Keita Yamaguchi, Toshihiko Nishimura, Masashi Kawasaki

Abstract:

Shared Space, a planning and design concept that allows pedestrians and vehicles to coexist in the street space, has been advocated and developed according to the traffic conditions of each country in Europe. In German- and French-speaking countries especially, the "Meeting Zone", a traffic rule combining speed regulation (20 km/h) with pedestrian priority, is often applied when designing shared spaces at intersections, squares, and streets in city centers. In this study, the process of establishment and development of the Meeting Zone in Switzerland, France, and Austria was organized chronologically based on the descriptions in the major discourse and guidelines of each country. The characteristics of the spatial design were then extracted by analyzing representative applications of the Meeting Zone. Finally, the relationships between the different national approaches to designing Meeting Zones and their traffic regulations were discussed.

Keywords: shared space, traffic calming, meeting zone, street design

Procedia PDF Downloads 73
3251 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. 
The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
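The abstract does not specify which ensemble technique was used to combine the three symptom classifiers; one common baseline is a simple majority vote, sketched here with placeholder disease labels standing in for real classifier outputs:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class labels from several classifiers (e.g. random forest,
    k-NN, and SVM) by majority vote; ties resolve to the label seen first."""
    return Counter(predictions).most_common(1)[0][0]

# Placeholder labels from three hypothetical symptom classifiers:
print(majority_vote(["pneumonia", "pneumonia", "bronchitis"]))  # pneumonia
```

More elaborate schemes (weighted voting, stacking) follow the same pattern but weight each classifier by its validation performance.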

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 128
3250 Simulation of Bird Strike on Airplane Wings by Using SPH Methodology

Authors: Tuğçe Kiper Elibol, İbrahim Uslan, Mehmet Ali Guler, Murat Buyuk, Uğur Yolum

Abstract:

According to an FAA report, 142,603 bird strikes were reported over a period of 24 years, between 1990 and 2013. Bird strikes on aerospace structures not only threaten flight safety but also cause financial loss and endanger lives. The statistics show that most bird strikes happen at the nose and the leading edges of the wings; a substantial number of birds are also ingested by jet engines, causing damage to blades and the engine body. Crash-proof designs are required to overcome the possibility of catastrophic failure of the airplane. Using computational methods for bird strike analysis during the product development phase has considerable importance in terms of cost saving. Clearly, using simulation techniques to reduce the number of reference tests can dramatically affect the total cost of an aircraft, since for bird strikes full-scale tests are often required. Therefore, validated numerical models are needed that can replace preliminary tests and accelerate the design cycle. In this study, to verify the simulation parameters for a bird strike analysis, several numerical options are studied for an impact case against a primitive structure. A representative bird model is then generated with the verified parameters and collided against the leading edge of a training aircraft wing, with each structural member of the wing explicitly modeled. A nonlinear explicit dynamics finite element code, LS-DYNA, was used for the bird impact simulations, and SPH methodology was used to model the behavior of the bird. The dynamic behavior of the wing superstructure was observed and will be used for further design optimization.

Keywords: bird impact, bird strike, finite element modeling, smoothed particle hydrodynamics

Procedia PDF Downloads 312
3249 New Approach for Load Modeling

Authors: Slim Chokri

Abstract:

Load forecasting is one of the central functions in power system operations. Electricity cannot be stored, which means that an electric utility must estimate future demand in order to manage production and purchasing in an economically reasonable way. A majority of the recently reported approaches are based on neural networks. The attraction of these methods lies in the assumption that neural networks are able to learn the properties of the load. However, the development of these methods is not finished, and the lack of comparative results on different model variations is a problem. This paper presents a new approach to predicting the Tunisian daily peak load. The proposed method employs a computational intelligence scheme based on a fuzzy neural network (FNN) and support vector regression (SVR). The experimental results obtained indicate that the proposed FNN-SVR technique gives significantly better prediction accuracy than several classical techniques.
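At the core of the SVR component is the ε-insensitive loss, which ignores prediction errors smaller than ε. Purely as an illustration (the paper's FNN-SVR scheme is more elaborate, and practical work would use a library implementation), a minimal linear SVR trained in the primal by subgradient descent might look like this:

```python
def linear_svr_fit(X, y, C=1.0, eps=0.1, lr=0.01, epochs=500):
    """Linear support vector regression via subgradient descent on the
    primal objective (regularization scaled so C trades off flatness
    against data fit). Returns weights w and bias b."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = pred - yi
            # subgradient of the eps-insensitive loss: zero inside the tube
            if err > eps:
                g = 1.0
            elif err < -eps:
                g = -1.0
            else:
                g = 0.0
            for j in range(n_feat):
                w[j] -= lr * (w[j] / (C * len(X)) + g * xi[j])
            b -= lr * g
    return w, b
```

In a forecasting setting, the inputs X would be lagged load values and calendar features for each day, and y the observed daily peak load.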

Keywords: neural network, load forecasting, fuzzy inference, machine learning, fuzzy modeling and rule extraction, support vector regression

Procedia PDF Downloads 423
3248 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their consequent constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes generate a population of candidate solutions to a design problem through an evolutionary stochastic search driven by both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in applying an evolutionary process as a design tool is the simulation's ability to maintain variation among design solutions in the population while simultaneously increasing fitness. This is most commonly known as the ‘golden rule’ of balancing exploration and exploitation over time; the difficulty of achieving this balance lies in the tendency for either variation or optimization to be favored as the simulation progresses.
In such cases, the generated population of candidate solutions has either optimized very early in the simulation or has maintained such high levels of variation that an optimal set cannot be discerned, thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the ‘golden rule’ by incorporating a mathematical fitness criterion for the development of an urban tissue comprising the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation indicates that the majority of the population is clustered around the mean, and thus variation within the population is limited, while a higher standard deviation indicates greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which utilizing the standard deviation as a fitness criterion can be advantageous in generating fitter individuals in a more efficient timeframe when compared to conventional simulations that incorporate only architectural and environmental parameters.
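As a rough illustration of the generative use described above, the population standard deviation of a design metric can be folded into the selection score so that diversity is rewarded alongside raw objective performance. The snippet below is a sketch under assumed conventions, not the authors' implementation:

```python
import math

def population_std(values):
    """Population standard deviation of a design metric across candidates."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def diversity_aware_fitness(objective_scores, weight=0.5):
    """Score each candidate by its own objective plus a shared bonus
    proportional to the population's standard deviation, discouraging
    premature convergence toward a single design."""
    bonus = weight * population_std(objective_scores)
    return [s + bonus for s in objective_scores]
```

A converged population (all scores equal) receives no diversity bonus, while a spread-out population does, which is one simple way to keep the exploration/exploitation balance explicit in the fitness function.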

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 123
3247 Adopting a New Policy in Maritime Law for Protecting Ship Mortgagees Against Maritime Liens

Authors: Mojtaba Eshraghi Arani

Abstract:

Ship financing is a vital element in the development of the shipping industry because, while the ship constitutes the owner's main asset, she is also considered a reliable security from the financiers' viewpoint. However, it is most probable that a financier who has accepted a ship as security will face many creditors who are privileged and rank before him in collecting, out of the ship, the money that they are owed. In fact, according to the current rule of maritime law, established by the “Convention Internationale pour l’Unification de Certaines Règles Relatives aux Privilèges et Hypothèques Maritimes, Brussels, 10 April 1926”, mortgages, hypotheques, and other charges on vessels rank after several secured claims referred to as “maritime liens”. These maritime liens form an exhaustive list of claims, including “expenses incurred in the common interest of the creditors to preserve the vessel or to procure its sale and the distribution of the proceeds of sale”, “tonnage dues, light or harbour dues, and other public taxes and charges of the same character”, “claims arising out of the contract of engagement of the master, crew and other persons hired on board”, “remuneration for assistance and salvage”, “the contribution of the vessel in general average”, “indemnities for collision or other damage caused to works forming part of harbours, docks, etc.”, “indemnities for personal injury to passengers or crew or for loss of or damage to cargo”, and “claims resulting from contracts entered into or acts done by the master”. The same rule survived, with only minor changes in the categories of maritime liens, in the successor conventions of 1967 and 1993. The status quo in maritime law has always been considered a major obstacle to the development of the shipping market and has inevitably led to increases in interest rates and other related costs of ship financing.
It seems that national and international policy makers have yet to change their minds, being worried about deviating from old maritime traditions. However, it is crystal clear that the continuation of the status quo will harm, to a great extent, the shipowners and, consequently, international merchants as a whole. It is argued in this article that the raison d'être for many categories of maritime liens has ceased to exist, in view of which the international community should recognize only a minimum category of maritime liens, namely those created in the common interest of all creditors; to this effect, only the two categories of “compensation due for the salvage of ship” and “extraordinary expenses indispensable for the preservation of the ship” should be declared as taking priority over the mortgagee's rights, in analogy with the Geneva Convention on the International Recognition of Rights in Aircraft (1948). A qualitative method based on the interpretation of the collected data has been used in this manuscript. The source of the data is the analysis of international conventions and domestic laws.

Keywords: ship finance, mortgage, maritime liens, brussels convention, geneva convention 1948

Procedia PDF Downloads 58
3246 Nanoparticles Activated Inflammasome Lead to Airway Hyperresponsiveness and Inflammation in a Mouse Model of Asthma

Authors: Pureun-Haneul Lee, Byeong-Gon Kim, Sun-Hye Lee, An-Soo Jang

Abstract:

Background: Nanoparticles may pose adverse health effects due to particulate matter inhalation. Nanoparticle exposure induces cell and tissue damage, causing local and systemic inflammatory responses. The inflammasome is a major regulator of inflammation through its activation of pro-caspase-1, which cleaves pro-interleukin-1β (IL-1β) into its mature form and may signal acute and chronic immune responses to nanoparticles. Objective: The aim of the study was to identify whether nanoparticles exaggerate the inflammasome pathway, leading to airway inflammation and hyperresponsiveness in an allergic mouse model of asthma. Methods: Mice were treated with saline (sham), OVA-sensitized and challenged (OVA), or titanium dioxide (TiO2) nanoparticles. Lung interleukin-1 beta (IL-1β), interleukin-18 (IL-18), NACHT, LRR and PYD domains-containing protein 3 (NLRP3), and caspase-1 levels were assessed by Western blot. Caspase-1 was checked by immunohistochemical staining. Reactive oxygen species were measured via the markers 8-isoprostane and carbonyl by ELISA. Results: Airway inflammation and hyperresponsiveness increased in OVA-sensitized/challenged mice, and these responses were exaggerated by TiO2 nanoparticle exposure. TiO2 nanoparticle treatment increased IL-1β and IL-18 protein expression in OVA-sensitized/challenged mice. TiO2 nanoparticles augmented the expression of NLRP3 and caspase-1, leading to the formation of active caspase-1 in the lung. Lung caspase-1 expression was increased in OVA-sensitized/challenged mice, and this response was exaggerated by TiO2 nanoparticle exposure. Reactive oxygen species were increased in OVA-sensitized/challenged mice and in OVA-sensitized/challenged plus TiO2-exposed mice. Conclusion: Our data demonstrate that the inflammasome pathway is activated in asthmatic lungs following nanoparticle exposure, suggesting that targeting the inflammasome may help control nanoparticle-induced airway inflammation and responsiveness.

Keywords: bronchial asthma, inflammation, inflammasome, nanoparticles

Procedia PDF Downloads 360
3245 Coumestrol Induced Apoptosis in Breast Cancer MCF-7 Cells via Redox Cycling of Copper and ROS Generation: Implications of Copper Chelation Strategy in Cancer Treatment

Authors: Atif Zafar Khan, Swarnendra Singh, Imrana Naseem

Abstract:

Breast cancer is one of the most frequent malignancies in women worldwide and a leading cause of cancer-related deaths among women. Therefore, there is a need to identify new chemotherapeutic strategies for cancer treatment. Unlike normal cells, cancer cells contain elevated copper levels, which play an integral role in angiogenesis. Copper is an important metal ion associated with chromatin DNA, particularly with guanine. Thus, targeting copper via copper-specific chelators in cancer cells can serve as an effective anticancer strategy. Keeping these facts in view, we evaluated the anticancer activity and copper-dependent cytotoxic effect of coumestrol (a phytoestrogen in soybean products) in breast cancer MCF-7 cells. Coumestrol inhibited proliferation and induced apoptosis in MCF-7 cells, which was prevented by the copper chelator neocuproine and by ROS scavengers. Coumestrol treatment induced ROS generation coupled to DNA fragmentation, up-regulation of p53/p21, cell cycle arrest at the G1/S phase, mitochondrial membrane depolarization, and caspase 9/3 activation. All these effects were suppressed by ROS scavengers and neocuproine. These results suggest that coumestrol targets elevated copper for redox cycling to generate ROS, leading to DNA fragmentation. DNA damage leads to p53 up-regulation, which directs cell cycle arrest at the G1/S phase and promotes caspase-dependent apoptosis of MCF-7 cells. In conclusion, coumestrol induces pro-oxidant cell death by chelating cellular copper to produce copper-coumestrol complexes that engage in redox cycling in breast cancer cells. Thus, targeting elevated copper levels might be a potential therapeutic strategy for selective cytotoxic action against malignant cells.

Keywords: apoptosis, breast cancer, copper chelation, coumestrol, reactive oxygen species, redox cycling

Procedia PDF Downloads 235
3244 The Impact of the Use of Some Multiple Intelligence-Based Teaching Strategies on Developing Moral Intelligence and Inferential Jurisprudential Thinking among Secondary School Female Students in Saudi Arabia

Authors: Sameerah A. Al-Hariri Al-Zahrani

Abstract:

The current study aims to identify the impact of the use of some multiple intelligence-based teaching strategies on developing moral intelligence and inferential jurisprudential thinking among secondary school female students. The study has endeavored to answer the following question: What is the impact of the use of some multiple intelligence-based teaching strategies on developing inferential jurisprudential thinking and moral intelligence among first-year secondary school female students? Within the frame of this main research question, the study seeks to answer the following sub-questions: (i) What are the inferential jurisprudential thinking skills among first-year secondary school female students? (ii) What are the components of moral intelligence among first-year secondary school female students? (iii) What is the impact of the use of some multiple intelligence-based teaching strategies (such as the strategies of analyzing values, modeling, Socratic discussion, collaborative learning, peer collaboration, collective stories, building emotional moments, role play, one-minute observation) on moral intelligence among first-year secondary school female students? (iv) What is the impact of the use of some multiple intelligence-based teaching strategies (such as the strategies of analyzing values, modeling, Socratic discussion, collaborative learning, peer collaboration, collective stories, building emotional moments, role play, one-minute observation) on developing the capacity for inferential jurisprudential thinking of juristic rules among first-year secondary school female students? The study has used the descriptive-analytical methodology in surveying, analyzing, and reviewing the literature of previous studies in order to benefit from them in building the tools of the study and the materials of experimental treatment.
The study has also used the experimental method to study the impact of the independent variable (multiple intelligence strategies) on the two dependent variables (moral intelligence and inferential jurisprudential thinking) in first-year secondary school female students’ learning. The sample of the study is made up of 70 female students who were divided into two groups: an experimental group of 35 students taught through multiple intelligence strategies, and a control group of the other 35 students taught conventionally. The two tools of the study (the inferential jurisprudential thinking test and the moral intelligence scale) were administered to the two groups as a pre-test. The researcher taught the experimental group and then administered the two tools of the study. After the eight-week experiment was over, the study showed the following results: (i) the existence of statistically significant differences (at the 0.05 level) between the mean of the control group and that of the experimental group on the inferential jurisprudential thinking test (recognition of the evidence for a jurisprudential rule, recognition of the motive for a jurisprudential rule, jurisprudential inferencing, analogical jurisprudence) in favor of the experimental group; (ii) the existence of statistically significant differences (at the 0.05 level) between the mean of the control group and that of the experimental group on the components of the moral intelligence scale (sympathy, conscience, moral wisdom, tolerance, justice, respect) in favor of the experimental group. The study has thus demonstrated the impact of the use of some multiple intelligence-based teaching strategies on developing moral intelligence and inferential jurisprudential thinking.

Keywords: moral intelligence, teaching, inferential jurisprudential thinking, secondary school

Procedia PDF Downloads 151
3243 Resume Ranking Using Custom Word2vec and Rule-Based Natural Language Processing Techniques

Authors: Subodh Chandra Shakya, Rajendra Sapkota, Aakash Tamang, Shushant Pudasaini, Sujan Adhikari, Sajjan Adhikari

Abstract:

Many efforts have been made to measure the semantic similarity between text corpora, and various techniques have evolved to measure the similarity of two documents. One state-of-the-art technique in the field of Natural Language Processing (NLP) is the word-to-vector (word2vec) model, which converts words into word embeddings and measures the similarity between the resulting vectors. We found this to be quite useful for the task of resume ranking. This research paper therefore implements the word2vec model, along with other Natural Language Processing techniques, to rank resumes against a particular job description so as to automate the hiring process. The paper presents the system and the findings made during the process of building it.
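The ranking step reduces to comparing document vectors, commonly the average of a document's word vectors, by cosine similarity. The sketch below uses hand-made stand-in embeddings purely for illustration; in practice the vectors would come from the trained custom word2vec model, and the token lists from the rule-based extraction step:

```python
import math

# Toy embeddings standing in for trained word2vec vectors (assumption:
# real vectors would be learned from a corpus of resumes and job posts).
EMBEDDINGS = {
    "python":  [0.9, 0.1, 0.0],
    "java":    [0.8, 0.2, 0.1],
    "nlp":     [0.1, 0.9, 0.2],
    "cooking": [0.0, 0.1, 0.9],
}

def doc_vector(tokens):
    """Average the word vectors of the known tokens in a document."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    return [sum(component) / len(vecs) for component in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_resumes(job_tokens, resumes):
    """Return resume names ordered by similarity to the job description."""
    jv = doc_vector(job_tokens)
    scored = [(cosine(jv, doc_vector(toks)), name) for name, toks in resumes]
    return [name for _, name in sorted(scored, reverse=True)]
```

For instance, a job description tokenized as `["python", "nlp"]` would rank a Python-and-Java resume above one about cooking.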

Keywords: chunking, document similarity, information extraction, natural language processing, word2vec, word embedding

Procedia PDF Downloads 141
3242 COVID-19 Genomic Analysis and Complete Evaluation

Authors: Narin Salehiyan, Ramin Ghasemi Shayan

Abstract:

In order to investigate coronavirus RNA replication, transcription, recombination, protein processing and transport, virion assembly, the identification of coronavirus-specific cell receptors, and polymerase processing, this chapter covers the manipulation of coronavirus clones and complementary DNAs (cDNAs) of defective-interfering (DI) RNAs. The coronavirus genome is a nonsegmented, single-stranded, positive-sense RNA. Its size, ranging from 27 to 32 kb, is significantly greater than that of other RNA viruses. The gene encoding the large surface glycoprotein is up to 4.4 kb, encoding an imposing trimeric, highly glycosylated protein. This protein projects some 20 nm above the virion envelope, giving the virus the appearance, with a little imagination, of a crown or coronet. Coronavirus research has contributed to the understanding of many aspects of molecular biology in general, such as the mechanism of RNA synthesis, translational control, and protein transport and processing. It remains a treasure trove capable of generating unexpected insights.

Keywords: covid-19, corona, virus, genome, genetic

Procedia PDF Downloads 60
3241 Proposed Algorithms to Assess Concussion Potential in Rear-End Motor Vehicle Collisions: A Meta-Analysis

Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin McCleery

Abstract:

Introduction: Mild traumatic brain injuries, also referred to as concussions, represent an increasing burden to society. Due to limited objective diagnostic measures, concussions are diagnosed by assessing subjective symptoms, often leading to disputes over their presence. Common biomechanical measures associated with concussion are high linear and/or angular acceleration of the head. With regard to linear acceleration, approximately 80 g has previously been shown to equate to a 50% probability of concussion. Motor vehicle collisions (MVCs) are a leading cause of concussion due to the high head accelerations experienced. The change in velocity (delta-V) of a vehicle in an MVC is an established metric for impact severity. As acceleration is the rate of change of delta-V with respect to time, the purpose of this paper is to determine the relation between delta-V (and occupant parameters) and linear head acceleration. Methods: A meta-analysis was conducted of manuscripts collected using the following keywords: head acceleration, concussion, brain injury, head kinematics, delta-V, change in velocity, motor vehicle collision, and rear-end. Ultimately, 280 studies were surveyed, 14 of which fulfilled the inclusion criteria as studies investigating the human response to impacts and reporting both head acceleration and the delta-V of the occupant’s vehicle. Statistical analysis was conducted with SPSS and R. A best-fit line analysis allowed for an initial understanding of the relation between head acceleration and delta-V. To further investigate the effect of occupant parameters on head acceleration, a quadratic model and a full linear mixed model were developed. Results: From the 14 selected studies, 139 crashes were analyzed, with head accelerations and delta-V values ranging from 0.6 to 17.2 g and 1.3 to 11.1 km/h, respectively. Initial analysis indicated that the best line of fit (Model 1) was defined as Head Acceleration = 0.465
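The best-fit line of Model 1 is an ordinary least-squares regression of head acceleration on delta-V. A closed-form sketch of such a fit, run here on synthetic numbers (the study's actual data and coefficients are not reproduced):

```python
def least_squares_fit(xs, ys):
    """Ordinary least-squares fit of y = a*x + b, via the closed-form
    slope a = S_xy / S_xx and intercept b = mean(y) - a * mean(x)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    s_xx = sum((x - mean_x) ** 2 for x in xs)
    s_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = s_xy / s_xx
    b = mean_y - a * mean_x
    return a, b
```

Here xs would be the per-crash delta-V values and ys the measured head accelerations; occupant parameters (seat position, stature, etc.) enter only in the mixed-model extension.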

Keywords: acceleration, brain injury, change in velocity, Delta-V, TBI

Procedia PDF Downloads 215
3240 Thermal Expansion Coefficient and Young’s Modulus of Silica-Reinforced Epoxy Composite

Authors: Hyu Sang Jo, Gyo Woo Lee

Abstract:

In this study, the thermal stability of micrometer-sized silica particle-reinforced epoxy composites was evaluated through measurement of the thermal expansion coefficient and Young’s modulus of the specimens. From the baseline specimen to those containing 50 wt% silica filler, the thermal expansion coefficients gradually decreased by up to 20%, while the Young’s moduli increased by up to 41%. The experimental results were compared with filler-volume-based simple empirical relations. The measured thermal expansion coefficients correspond with those of Thomas’s model, which is modified from the rule of mixtures. However, the measured Young’s moduli tend to increase slightly more than predicted, and the differences in the increments of the moduli between the experimental and numerical model data are quite large.
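The filler-volume-based empirical relations mentioned above build on the classical rule of mixtures. The Voigt and Reuss bounds below are a generic sketch of that idea (Thomas's modified model adds correction terms not shown here):

```python
def rule_of_mixtures_voigt(prop_matrix, prop_filler, v_filler):
    """Upper-bound (Voigt, equal-strain) estimate of a composite property
    from the constituent properties and the filler volume fraction."""
    return prop_matrix * (1.0 - v_filler) + prop_filler * v_filler

def rule_of_mixtures_reuss(prop_matrix, prop_filler, v_filler):
    """Lower-bound (Reuss, equal-stress) estimate of the same property."""
    return 1.0 / ((1.0 - v_filler) / prop_matrix + v_filler / prop_filler)
```

For example, with an epoxy modulus of 3 GPa, a silica modulus of 70 GPa, and a 30% filler volume fraction, the Voigt bound gives 23.1 GPa; measured moduli of particulate composites typically fall between the two bounds, which is why modified models are needed.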

Keywords: thermal stability, silica-reinforced, epoxy composite, coefficient of thermal expansion, empirical model

Procedia PDF Downloads 284
3239 Vibration Control of a Functionally Graded Carbon Nanotube-Reinforced Composites Beam Resting on Elastic Foundation

Authors: Gholamhosein Khosravi, Mohammad Azadi, Hamidreza Ghezavati

Abstract:

In this paper, the vibration of a nonlinear composite beam is analyzed, and an active controller is then used to control the vibrations of the system. The beam rests on a Winkler-Pasternak elastic foundation and is reinforced by single-walled carbon nanotubes. Using the rule of mixtures, the material properties of the functionally graded carbon nanotube-reinforced composite (FG-CNTRC) are determined. The beam is cantilevered, and its free end is under a follower force. Piezoelectric layers are attached to both sides of the beam as sensors and actuators to control vibrations. The governing equations of the FG-CNTRC beam are derived based on Euler-Bernoulli beam theory using the Lagrange-Rayleigh-Ritz method. The simulation results are presented, and the effects of some parameters on the stability of the beam are analyzed.

Keywords: carbon nanotubes, vibration control, piezoelectric layers, elastic foundation

Procedia PDF Downloads 257
3238 Optimizing Microgrid Operations: A Framework of Adaptive Model Predictive Control

Authors: Ruben Lopez-Rodriguez

Abstract:

In a microgrid, diverse energy sources (both renewable and non-renewable) are combined with energy storage units to form a localized power system. Microgrids function as independent entities, capable of meeting the energy needs of specific areas or communities. This paper introduces a Model Predictive Control (MPC) approach tailored for grid-connected microgrids, aiming to optimize their operation. The formulation employs Mixed-Integer Programming (MIP) to find optimal trajectories. This entails the fulfillment of continuous and binary constraints while accounting for switching between various operating conditions, such as storage unit charge/discharge, import/export from/to the main grid, and asset connection/disconnection. To validate the proposed approach, a microgrid case study is conducted, and the simulation results are compared with those obtained using a rule-based strategy.
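The rule-based strategy used as a comparison baseline can be as simple as a fixed priority order: serve load from renewables first, then the battery, then the grid. A single-step sketch under assumed sign conventions (positive grid value means import, negative means export; none of this is the paper's actual controller):

```python
def rule_based_dispatch(load, solar, soc, capacity, max_rate):
    """One time step of a simple priority-order dispatch rule.

    load, solar: power demand and renewable generation this step (kWh)
    soc, capacity: battery state of charge and capacity (kWh)
    max_rate: battery charge/discharge limit per step (kWh)
    Returns (new_soc, grid): grid > 0 is import, grid < 0 is export.
    """
    net = load - solar                       # residual demand after solar
    if net > 0:                              # shortfall: discharge battery
        discharge = min(net, soc, max_rate)
        soc -= discharge
        net -= discharge
    else:                                    # surplus: charge battery
        charge = min(-net, capacity - soc, max_rate)
        soc += charge
        net += charge
    return soc, net                          # remainder goes to/from grid
```

Unlike MPC, such a rule reacts only to the current step; the MIP formulation instead optimizes the same charge/discharge and import/export decisions jointly over a prediction horizon, which is where the performance gap in the case study comes from.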

Keywords: microgrids, mixed logical dynamical systems, mixed-integer optimization, model predictive control

Procedia PDF Downloads 34
3237 Systems and Procedures in Indonesian Administrative Law

Authors: Andhika Danesjvara

Abstract:

Governance of the Republic of Indonesia should be based on the principles of sovereignty and the rule of law. Based on these principles, all forms of decisions and/or actions of the government administration should be based on the sovereignty of the people and the law. Decisions and/or actions concerning citizens should be based on the provisions of the legislation and the general principles of good governance. Control of these decisions and/or actions is a part of administrative review as well as judicial control. This control is part of the administrative justice system, which is intended for people affected by decisions or administrative actions, and it is the duty and authority of the government or an independent administrative court. Therefore, the systems and procedures for the implementation of the tasks of governance and development must be regulated by law. Systems and procedures of governance are a subject studied in administrative law; therefore, the research also includes a review of the principles of law in administrative law. Administrative law procedure is important for the government in making decisions; the question is whether these procedures are part of the justice system itself.

Keywords: administrative court, administrative justice, administrative law, administrative procedures

Procedia PDF Downloads 267
3236 Effect of Cryogenic Pre-stretching on the Room Temperature Tensile Behavior of AZ61 Magnesium Alloy and Dominant Grain Growth Mechanisms During Subsequent Annealing

Authors: Umer Masood Chaudry, Hafiz Muhammad Rehan Tariq, Chung-soo Kim, Tea-sung Jun

Abstract:

This study explored the influence of pre-stretching temperature on the microstructural characteristics and deformation behavior of AZ61 magnesium alloy and its implications for grain growth during subsequent annealing. AZ61 alloy was stretched to 5% plastic strain along the rolling (RD) and transverse (TD) directions at room temperature (RT) and cryogenic temperature (-150 °C, CT), followed by annealing at 320 °C for 1 h, to investigate the twinning and dislocation evolution and their consequent effect on the flow stress, plastic strain, and strain hardening rate. Compared to the RT-stretched samples, a significant improvement in yield stress and strain hardening rate and a moderate reduction in elongation to failure were witnessed for the CT-stretched samples along both RD and TD. The subsequent EBSD analysis revealed an increased fraction of fine {10-12} twins and the nucleation of multiple {10-12} twin variants, caused by higher local stress concentration at the grain boundaries in the CT-stretched samples, as manifested by the kernel average misorientation. This higher twin fraction and twin-twin interaction imposed strengthening by restricting the mean free path of dislocations, leading to higher flow stress and a higher strain hardening rate. During annealing of the RT/CT-stretched samples, the residual strain energy and twin boundaries decreased due to static recovery, leading to a coarse-grained, twin-free microstructure. Strain-induced boundary migration (SIBM) was found to be the predominant mechanism governing grain growth during annealing, via the movement of high-angle grain boundaries.

Keywords: magnesium, twinning, twinning variant selection, EBSD, cryogenic deformation

Procedia PDF Downloads 58
3235 Analysis and Rule Extraction of Coronary Artery Disease Data Using Data Mining

Authors: Rezaei Hachesu Peyman, Oliyaee Azadeh, Salahzadeh Zahra, Alizadeh Somayyeh, Safaei Naser

Abstract:

Coronary Artery Disease (CAD) is a major cause of disability in adults and a main cause of death in developed countries. In this study, data mining techniques including decision trees, artificial neural networks (ANNs), and support vector machines (SVMs) were used to analyze CAD data. Data from 4948 patients who had suffered from heart disease were included in the analysis. CAD is the target variable, and 24 inputs or predictor variables are used for the classification. The performance of these techniques is compared in terms of sensitivity, specificity, and accuracy. The most significant factor influencing CAD is chest pain. Elderly males (age > 53) have a high probability of being diagnosed with CAD. The SVM algorithm is the most useful for distinguishing CAD patients from non-CAD ones. The application of data mining techniques in analyzing coronary artery disease is a good method for investigating the existing relationships between variables.
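The three comparison metrics follow directly from the confusion matrix counts of each classifier. A small helper, as a sketch of the evaluation step described above:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion matrix counts.

    tp/fn: CAD patients correctly/incorrectly classified
    tn/fp: non-CAD patients correctly/incorrectly classified
    """
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

For example, a classifier with 80 true positives, 20 false negatives, 90 true negatives, and 10 false positives has sensitivity 0.80, specificity 0.90, and accuracy 0.85; comparing these three numbers across the decision tree, ANN, and SVM models is the basis of the ranking reported here.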

Keywords: classification, coronary artery disease, data mining, knowledge discovery, rule extraction

Procedia PDF Downloads 643
3234 Idea of International Criminal Justice in the Function of Prosecution International Crimes

Authors: Vanda Božić, Željko Nikač

Abstract:

Wars and armed conflicts have often resulted in violations of international humanitarian law and, frequently, in the commission of the most serious international crimes, such as war crimes, crimes against humanity, aggression, and genocide. However, only in the XX century was the idea articulated of establishing a body of international criminal justice in order to prosecute these crimes and their perpetrators. The first steps in this field were made by establishing the international military tribunals for war crimes at Nuremberg and Tokyo, and by the formation of the ad hoc tribunals for the former Yugoslavia and Rwanda. Finally, the International Criminal Court was established in Rome in 1998 with the aim of achieving justice and giving satisfaction to the victims of crimes and their families. The aim of the paper is to provide a historical and comparative analysis of the institutions of international criminal justice, examining the extent to which these institutions de lege lata fulfill the goals of individual criminal responsibility and justice. Furthermore, the authors suggest de lege ferenda that the permanent International Criminal Court, in addition to prospective cases, also take over the current ICTY and ICTR cases.

Keywords: international crimes, international criminal justice, prosecution of crimes, ad hoc tribunal, the international criminal court

Procedia PDF Downloads 261